| link (string, 41–45 chars) | date (string, 9 chars) | paper (dict) | reviews (list, 1–6 items) | version (int64, 1–5) | main (string, 38–42 chars) |
|---|---|---|---|---|---|
https://f1000research.com/articles/4-136/v1 | 01 Jun 15 |
{
"type": "Research Article",
"title": "The developmental transcriptome of contrasting Arctic charr (Salvelinus alpinus) morphs",
"authors": [
"Johannes Gudbrandsson",
"Ehsan P. Ahi",
"Sigridur R. Franzdottir",
"Kalina H. Kapralova",
"Bjarni K. Kristjansson",
"S. Sophie Steinhaeuser",
"Valerie H. Maier",
"Isak M. Johannesson",
"Sigurdur S. Snorrason",
"Zophonias O. Jonsson",
"Arnar Palsson"
],
"abstract": "Species and populations with parallel evolution of specific traits can help illuminate how predictable adaptations and divergence are at the molecular and developmental level. Following the last glacial period, dwarfism and specialized bottom feeding morphology evolved rapidly in several landlocked Arctic charr Salvelinus alpinus populations in Iceland. To study the genetic divergence between small benthic morphs and limnetic morphs, we conducted RNA-sequencing of charr embryos at four stages in early development. We studied two stocks with contrasting morphologies: the small benthic (SB) charr from Lake Thingvallavatn and Holar aquaculture (AC) charr. The data reveal significant differences in expression of several biological pathways during charr development. There was also an expression difference between SB- and AC-charr in genes involved in energy metabolism and blood coagulation. We confirmed differing expression of five genes in whole embryos with qPCR, including lysozyme and natterin-like, which was previously identified as a fish-toxin of a lectin family that may be a putative immunopeptide. We also verified differential expression of seven genes in the developing head that associated consistently with benthic vs. limnetic morphology (studied in four morphs). Comparison of single nucleotide polymorphism (SNP) frequencies reveals extensive genetic differentiation between the SB and AC-charr (~1300 SNPs with more than 50% frequency difference). Curiously, three derived alleles in the otherwise conserved 12s and 16s mitochondrial ribosomal RNA genes are found in benthic charr. The data implicate multiple genes and molecular pathways in the divergence of small benthic charr and/or the response of aquaculture charr to domestication. Functional, genetic and population genetic studies on more freshwater and anadromous populations are needed to confirm the specific loci and mutations relating to specific ecological traits in Arctic charr.",
"keywords": [
"Salmonids",
"Aquaculture",
"ecomorphs",
"Polymorphism",
"parallel evolution",
"immunology",
"craniofacial divergence",
"mtDNA"
],
"content": "Introduction\n\nHistorical contingencies and chance shape organisms during evolution1,2, but convergence in phenotype and molecular systems indicates that evolution is to some extent predictable3,4. Identification of genes and variants that influence evolved differences is not a trivial task5. Ideal systems to study the role of chance and necessity in ecological evolution would be related species or populations with readily observable phenotypic variation, living in a tractable ecological setting, and most crucially showing parallel evolution of specific traits within/among species/populations. The finches of the Galapagos islands and the cichlids in the African great lakes are exciting multi-species systems in this respect6,7. The threespine stickleback has also emerged as a model “single species” system8. The diversity in the feeding specializations of fish provides great opportunities for studying adaptation and divergence at the developmental and genetic level.\n\nTranscriptomic methods have been used to address evolutionary and ecological questions in fish. For example, microarrays were used to compare gene expression in anadromous and resident populations of brown trout (Salmo trutta), revealing that life history was a better predictor of gene expression in the liver than relatedness9. The newer technique, RNA-sequencing (RNA-seq), has been applied to species such as the Mexican cavefish (Astyanax mexicanus), cod (Gadus morhua), brook charr (Salvelinus fontinalis) and Atlantic salmon (Salmo salar)10–15, addressing questions concerning evolution, molecular genetics, development and aquaculture. RNA-seq was used to study salinity tolerance in Arctic charr, linking expression and quantitative trait loci16. Microarray studies of adult lake whitefish (Coregonus clupeaformis) pointed to parallel expression differences between benthic and limnetic forms17. Filteau et al. 
(2013)18 found that a set of coexpressed genes differentiated the two whitefish morphotypes, implicating Bone morphogenesis protein (BMP) signaling in the development of ecological differences in trophic morphology. One approach to identify pathways related to function or morphological differences is to study gene expression during development19,20.\n\nSome northern freshwater fish species exhibit frequent parallelism in trophic structures and life history, and in several cases they are found as distinct resource morphs8,21–25. One of these species, Arctic charr (Salvelinus alpinus), is well suited for studying the developmental underpinnings of trophic divergence and parallel evolution. Local adaptation has been extensively studied in the salmonid family, to which Arctic charr belongs26. The family is estimated to be between 63.2 and 58.1 million years old27,28. A whole genome duplication event occurred before the radiation of the salmonid family29–32, which has provided time for divergence of ohnologous genes (paralogous genes originated by a whole genome duplication event). Furthermore, recent estimates from the rainbow trout (Oncorhynchus mykiss) genome suggest that ohnologous genes were lost at a rate of about 170 genes per million years, and that by utilizing multiple data sources the genome assembly problem of this family can be solved32. De novo assembly of genomes and transcriptomes is complicated if many paralogs are present, such as in salmonids. Furthermore, for data with short reads, mapping to a related reference genome/transcriptome is recommended over de novo assembly33.\n\nFollowing the end of the last glacial period, about 10,000 years ago, Arctic charr colonized northern freshwater systems34. It can be found as anadromous or lake/stream residents and exhibits a high level of within-species polymorphism23,34. Resource polymorphism in charr correlates with ecological attributes35–37. 
For instance, small charr with benthic morphology are found in multiple lava-spring and pond habitats in Iceland38, and a comparative study of Icelandic lakes37 found that lakes with greater limnetic habitat, fewer nutrients, and greater potential for zooplankton consumption appeared to promote resource polymorphism. Some of the larger lakes contain two or more distinct morphs, typically limnetic and benthic forms. Multiple lines of evidence show that these differences stem from both environmental and genetic causes39–43. The best-studied example of sympatric charr are the four morphs in Lake Thingvallavatn44; two have a benthic morphotype, a large benthivorous (LB-charr) and a small benthivorous (SB-charr), and two morphs are limnetic, a large piscivorous morph (PI-charr) and a small planktivorous morph (PL-charr)45. Both PL and PI-charr operate in open water and feed on free-swimming prey, PL on planktonic crustaceans and PI on small fish.\n\nSeveral population genetics studies using allozymes and mtDNA revealed no differences among charr populations46–48, while studies on microsatellite markers and nuclear genes revealed both subtle49–51 and strong genetic differences among morphs52. Importantly, Kapralova et al. (2011)51 concluded that small benthic morphs have evolved repeatedly in Iceland and that gene flow has been reduced between the PL and SB morphs in Lake Thingvallavatn since its formation approximately 10,000 years ago53. We also discovered genetic separation in immunological genes (MHCIIα and cath2) between morphs in Iceland and within the lake52, consistent with ecologically driven evolution of immune functions. 
Recently, qPCR analyses showed that expression of mTOR pathway components in skeletal muscle correlates with the SB-charr form in Iceland54, but it is unknown whether there is genetic differentiation in those genes or their upstream regulators.\n\nBecause individual genes have distinct histories55,56, genome wide methods are needed to identify genes and mutations that associate with divergence. Icelandic aquaculture charr (AC) was founded with fish from the north of Iceland, and has been bred at Holar University College since 199057. The Holar AC-charr has responded to artificial selection in growth and performance characteristics, and is now the dominant charr breed in aquaculture in Iceland. While clearly a derived form, it has retained a general limnetic craniofacial morphotype (Figure 1). In this study we compare SB-charr from Lake Thingvallavatn and AC-charr because i) SB-charr represents an extensively studied and derived form of charr that has been separated from anadromous fish for approximately 10,000 years, ii) abundant AC material was available, and iii) we wanted an extreme contrast, since for budget reasons we could only sequence eight samples at the time. This rather extreme contrast is justified, as the data and other studies58,59 building on these data illustrate (see Discussion).\n\nAdult individuals of the two morphs: the Holar aquaculture charr above and the small benthic charr from Lake Thingvallavatn below. Differences in size, coloration and head morphology are apparent.\n\nThe aims of this project are threefold. First, to find genes and pathways related to the development of phenotypic differences between benthic and limnetic Arctic charr morphs. Second, to screen for signals of genetic differentiation that may relate to divergence of benthic and limnetic charr. Third, to verify a subset of the expression and genetic signals in benthic and limnetic morphs. 
We conducted RNA-sequencing of developing offspring of two contrasting Arctic charr morphs, a small benthic charr from Lake Thingvallavatn and an Icelandic aquaculture charr conforming to a limnetic morphotype. This identified candidate genetic changes and differential expression of developmental genes that may affect jaw and craniofacial traits which separate benthic and limnetic morphotypes in charr.\n\n\nMethods\n\nWe set up crosses and reared embryos in the laboratory as previously described58. Embryos from four charr morphs were studied: an aquaculture charr (AC-charr) from the Holar breeding program57 and three natural morphs from Lake Thingvallavatn: small benthivorous (SB), large benthivorous (LB) and small planktivorous (PL) charr60. Samples of the first two, AC and SB-charr, which exhibit contrasting adult size and morphology (Figure 1), were collected in 2009 and the material was sent for developmental transcriptome sequencing. The latter two were sampled in 2010 and used for qPCR and SNP studies of selected genes. Briefly, in September 2009 we obtained material from spawning AC-charr from the Holar breeding program57 and from spawning SB-charr collected via gill netting in Olafsdrattur in Lake Thingvallavatn. Similarly, in the 2010 spawning season SB-, LB- and PL-charr were collected from Lake Thingvallavatn. Fishing permissions were obtained from the Thingvellir National Park Commission and the owner of the Mjóanes farm. For each parent group, eggs from several females were pooled and fertilized using milt from several males from the same group. Embryos were reared at ~5°C under constant water flow and in complete darkness at the Holar University College experimental facilities in Verid, Saudárkrókur. The water temperature was recorded twice daily and the average was used to estimate the relative age of the embryos in tau-somite units (τs)61. Embryos and juveniles were sampled at designated time points, placed in RNAlater (Ambion) and frozen at -20°C. 
Post-hatching juveniles were reared at the same temperature on standard aquaculture food. For the investigation of different tissues, adult aquaculture charr (AC) from Hólar (fish size 20–25 cm) were used. Six randomly selected individuals were killed (by cutting through the spinal cord) and dissected, and samples were taken from the skin, heart, liver, gills, spleen, intestine and kidney of each fish. The samples were placed in RNAlater (Ambion) and stored at -20°C. DNA for population genetic analyses was taken from our previous study52.\n\nFishing in Lake Thingvallavatn was conducted with permissions obtained both from the owner of the land in Mjóanes and from the Thingvellir National Park commission. Ethics committee approval is not needed for regular or scientific fishing in Iceland (the Icelandic law on Animal protection, Law 15/1994, last updated with Law 157/2012). Sampling was performed by Holar University College Aquaculture Research Station (HUC-ARC) personnel. HUC-ARC has an operational license according to Icelandic law on aquaculture (Law 71/2008), which includes clauses of best practices for animal care and experiments.\n\nEmbryos of AC- and SB-charr sampled in 2009 were used for transcriptome sequencing. For this we focused on the period covering development of the pharyngeal arches and morphogenesis of the head: at 141, 163, 200 and 433 τs (post fertilization). For each combination of morph and timepoint we pooled RNA from approximately six individuals. RNA extraction and the following steps were performed as described earlier58,62. The embryos were dechorionated and homogenized with a disposable Pellet Pestle Cordless Motor tissue grinder (Kimble Kontes, Vineland, NJ, USA) and RNA was extracted into two size-fractions using the Ambion mirVana kit (Life Technologies, Carlsbad, CA, USA). The high molecular weight fraction was further used for mRNA-seq and RNA quality was analysed using an Agilent 2100 Bioanalyzer (Agilent Technologies, Santa Clara, CA, USA). 
First and second strand cDNA synthesis, fragmentation, adapter ligation and amplification were performed using the mRNA-Seq 8-Sample Prep Kit (Illumina, San Diego, CA, USA) according to the manufacturer’s instructions. Sequencing was performed at DeCode genetics (Reykjavík, Iceland) using SOLEXA GAII technology (Illumina, San Diego, CA, USA). The sequencing reads were deposited in the NCBI SRA archive under BioProject identifier PRJNA239766 and with accession numbers: SRX761559, SRX761571, SRX761575, SRX761577, SRX761451, SRX761461, SRX761490 and SRX761501.\n\nThe embryos sampled in 2010 were used for qPCR expression analyses. RNA was extracted from six whole embryos, in two replicates (two repetitions × three fish) (AC and SB sampled at 161 and 200 τs). For the extraction of RNA from the heads of AC, SB, LB and PL, 12 embryos (two repetitions × six fish) at 178, 200 and 216 τs were used. Embryos were dechorionated and decapitated in front of the pectoral fin. RNA extraction and cDNA preparation were performed as described previously58. Similarly, RNA was extracted from a small piece (approximately 2 mm²) of skin, heart, liver, gill, spleen, intestine and kidney from six adult AC-charr.\n\nAs no S. alpinus genome is available and de novo assembly of the 36 bp reads yielded an excessive number of short contigs, we chose to assess expression and genetic variation by mapping the reads to 59336 S. salar expressed sequence tag (EST) contigs from SalmonDB [63, downloaded 22 March 2012] and the Arctic charr mitochondrial genome [55, NC_000861].\n\nTo estimate expression, reads were aligned with RSEM version 1.1.18 with default parameters. RSEM distributes reads that map to multiple locations to the most likely contig, using expectation maximization64. The read counts for contigs with the same annotation were pooled because some genes were represented by more than one contig, and because, due to whole genome duplication, salmonids have closely related paralogous genes30,32. 
Thus the expression tests are done at the gene or paralog-group level, instead of the contig level. In the remainder of the paper, the term ’gene’ will have this broader meaning; some genes are represented by one contig and others by two or more (indicated in all relevant tables). This brought the number of genes considered down to 16851. Lastly, genes with fewer than 800 mapped reads in the entire dataset were excluded from the analyses, yielding a total of 10496 genes.\n\nA generalized linear model (GLM) with morph and developmental time as explanatory variables was used to find genes with different expression levels between the two charr morphotypes (groups), using the edgeR-package in R65.\n\nY = Morph + Time + Error\n\nWe could not test for an interaction, as biological replicates were unavailable. To obtain further insight into the expression profiles of differentially expressed genes, we performed clustering analyses on log-transformed cpm-values (counts per million; cpm-function in edgeR). The values for each gene were scaled by mean and standard deviation, and the Euclidean distance used for the hclust-function in R66 with the default settings. We used the hypergeometric test in goseq67 to test for gene ontology enrichment. Since we pooled the read-counts from different contigs, we could unfortunately not take gene length into account in those tests.\n\nWe previously identified suitable reference genes to study Arctic charr development58. Here we examined the expression of several genes in whole charr embryos, embryonic heads and adult tissues. Primers were designed using the Primer3 tool68 and checked for self-annealing and heterodimers according to the MIQE guidelines69 (S1 Table). Primers for genes with several paralogs were designed for regions conserved among paralogs. For natterin, primers for the different paralogs were designed to match regions differing in sequence. Relative expression was calculated using the 2−∆∆Ct method70. 
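The 2−∆∆Ct calculation referenced above is simple to state in code; here is a minimal sketch (the Ct values, and the use of a plain function rather than the authors' actual analysis code, are illustrative assumptions):

```python
from statistics import mean

def rel_expr_ddct(ct_target, ref_cts, ct_target_cal, ref_cts_cal):
    """Relative expression by the 2^-ddCt method.
    Ct is a log2-scale quantity, so the arithmetic mean of reference-gene
    Cts corresponds to the geometric mean of their expression levels."""
    d_ct = ct_target - mean(ref_cts)               # normalize sample of interest
    d_ct_cal = ct_target_cal - mean(ref_cts_cal)   # normalize calibrator sample
    return 2 ** -(d_ct - d_ct_cal)                 # fold change vs. calibrator

# e.g. a target 1 Ct above its references, where the calibrator sits 3 Ct
# above its references, comes out 4-fold higher than the calibrator
rel_expr_ddct(20.0, [18.0, 19.0, 20.0], 22.0, [18.0, 19.0, 20.0])
```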
For the calculation of relative expression of genes in whole embryos, the geometric mean expression of three reference genes, β-Actin, elongation factor 1α and Ubiquitin-conjugating enzyme E2 L3, was used for normalization. For visual comparisons among samples, the normalized expression was presented as relative to the expression in AC at 161 τs (calibration sample). For the embryonic head samples, IF5A1 and ACTB were used as reference genes and a biological replicate of AC at 178 τs as the calibrator sample. Standard errors of relative expression were calculated from the standard errors (SE) of the ∆CT-values with the formulas 2−(∆∆Ct+SE) = minimum fold expression and 2−(∆∆Ct−SE) = maximum fold expression. The statistical analysis was performed on the ∆CT-values with a two-way ANOVA, using the GLM function in R.\n\nY = Morph + Time + M × T + Error\n\nNormal distribution of residuals was confirmed for all data. For the study of expression in the embryonic head, we followed a significant morph effect in the ANOVA with Tukey’s post-hoc honest significant difference test on relative expression ratios (∆CTs).\n\nFor analysis of genetic variation we mapped the reads to the salmon contigs, this time using the Burrows-Wheeler Aligner (BWA)71 with a seed length of 25, allowing two mismatches. We re-mapped the reads because BWA allows short indels (RSEM does not), and disregarding indels leads to many false SNPs close to them. To extract candidate polymorphic sites from the Arctic charr transcriptome we ran VarScan272 with a minimum coverage of 50 reads and a minimum minor allele frequency of 0.1 on reads mapped to each S. salar contig, for all of the 8 timepoint and morph combinations. This was done separately for reads that mapped uniquely to one contig only (UNI) and reads that mapped to two or more contigs (REP). These SNP-candidates were further processed in R66. 
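The error-propagation rule above, 2−(∆∆Ct+SE) for the minimum and 2−(∆∆Ct−SE) for the maximum fold expression, can be written directly; a small sketch with hypothetical values:

```python
def fold_bounds(ddct, se):
    """Minimum fold, point estimate and maximum fold expression from a
    ddCt value and the SE of the underlying dCt values. The bounds are
    asymmetric on the fold scale because the exponent is linear in Ct."""
    return 2 ** -(ddct + se), 2 ** -ddct, 2 ** -(ddct - se)

# a ddCt of -2 with SE 1 gives a 4-fold point estimate bounded by 2- and 8-fold
fold_bounds(-2.0, 1.0)
```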
SNP-candidates at 90% frequency or higher in all samples were disregarded, as they reflect differences between Arctic charr and S. salar and are not the focus of this study. SNP-candidates with poor coverage in specific samples - i.e. coverage of five or fewer reads in three or four samples of each morph - were removed. As the SNP analysis was done on individual contigs, differences among paralogs appear in the data. However, since each sample is a pool of a few individuals, it is very unlikely that true SNPs occur at the same frequency in all samples. This property was used to remove variants that are most likely due to expressed paralogs. Using Fisher exact tests to evaluate differences between samples, only SNPs that differed significantly between samples (p < 0.05, with no multiple testing correction) were chosen for further examination. As equal cDNA input from the individuals in a sample cannot be assumed, due to expression differences among them and stochastic processes in sample preparation, read numbers were summed over the four samples from each morph for the comparison between morphs. A conservative approach was taken to look for differences between morphs. We focused on SNP-candidates that showed differences in frequency between morphs, without adjusting for multiple testing (Fisher exact test, p < 5%). We extracted the most interesting candidates by filtering on the frequency difference between the morphs (delta). SNP-candidates with the highest frequency difference (delta > 95%) were manually processed and redundant candidates removed. A similar approach was used to mine for polymorphisms in Arctic charr mtDNA (NC_000861), using S. salar mtDNA as the outgroup (NC_001960.1).\n\nWe wrote a python script to predict the impact of SNPs within the mRNA sequences. 
Polymorphisms were categorized according to their location (3’UTR, coding, 5’UTR), and those within the coding region into synonymous or non-synonymous.\n\nWe chose 12 candidate SNPs for verification (see below). The candidates were verified using a similar approach as previously52; first, we conducted genomic comparisons of the salmon genome, ESTs and short contigs from the preliminary assembly of the Arctic charr transcriptome. This allowed us to infer the placement of the putative polymorphism in the locus, and to design paralog-specific primers for PCR (amplicons of less than 1 kb) for verification of the 12 candidate SNPs (S2 Table). An MJ Tetrad machine was used for PCR; the program was 5 min at 95°C, followed by 35 cycles of 30 s at 52°C, 1 min at 72°C and 30 s at 95°C, ending with a hold at 12°C. Each individual was genotyped by first amplifying the region of interest using PCR, followed by ExoSAP (Affymetrix) treatment, direct sequencing (BigDye) and finally a run on an Applied Biosystems 3500xL Genetic Analyzer (Hitachi). Raw data were base-called using the Sequencing Analysis Software v5.4 with KBTMBasecaller v1.41 (Applied Biosystems). Ab1 files were run through Phred and Phrap and imported into Consed for visual editing of ambiguous bases and putative polymorphisms, and for trimming primers. The FASTA files were aligned with ClustalW online [73, http://www.ebi.ac.uk/Tools/msa/clustalw2/] and manually inspected in Genedoc74. All sequences were deposited in GenBank as popsets under the accession numbers KP019972-KP020026.\n\nComparative genomics showed that several verified SNPs affected evolutionarily constrained parts of the mitochondrial genome. Two approaches were used: performing a BLAST search on salmon ESTs (May 2013) and retrieving multiZ alignments of vertebrates from the UCSC genome browser (in September 2013). This yielded several hundred sequences from related fish and other vertebrates. 
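The impact-prediction script described above is not reproduced in the paper; a minimal sketch of the categorization it describes might look like this (the function name, coordinates and sequences are illustrative assumptions, using the standard genetic code):

```python
# Standard genetic code, codons ordered T, C, A, G at each position
BASES = "TCAG"
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {b1 + b2 + b3: AMINO[16 * i + 4 * j + k]
               for i, b1 in enumerate(BASES)
               for j, b2 in enumerate(BASES)
               for k, b3 in enumerate(BASES)}

def classify_snp(mrna, cds_start, cds_end, pos, alt):
    """Categorize a SNP (0-based mRNA position, alternate allele) by region;
    within the CDS, call it synonymous or non-synonymous by translating the
    affected codon with and without the substitution."""
    if pos < cds_start:
        return "5'UTR"
    if pos >= cds_end:
        return "3'UTR"
    offset = pos - cds_start
    start = cds_start + 3 * (offset // 3)          # first base of the codon
    ref_codon = mrna[start:start + 3]
    alt_codon = ref_codon[:offset % 3] + alt + ref_codon[offset % 3 + 1:]
    return ("synonymous" if CODON_TABLE[ref_codon] == CODON_TABLE[alt_codon]
            else "non-synonymous")
```

For example, on a toy mRNA "AAA" + "ATGGCTTGA" + "CCC" with the CDS at positions 3 to 12, a substitution in the third position of the GCT (Ala) codon is synonymous, while one in the ATG (Met) codon is not.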
The list was reduced to 20 sequences for visualization, by keeping members of the major taxa but removing more closely related sequences, aligned with ClustalW and manually adjusted in Genedoc. The species and genome versions used are: Human (Homo sapiens, hg19), Lamprey (Petromyzon marinus, petMar1), Fugu (Takifugu rubripes, fr2), Medaka (Oryzias latipes, oryLat2), Stickleback (Gasterosteus aculeatus, gasAcu1), Tetraodon (Tetraodon nigroviridis, tetNig2), Zebrafish (Danio rerio, danRer6). We also downloaded from NCBI the sequence of whole or partial mtDNA from several fish species: Brown trout (Salmo trutta, JQ390057 and AF148843), Broad whitefish (Coregonus nasus, JQ390058), Legless searsid (Platytroctes apus, AP004107), Pacific menhaden (Ethmidium maculatum, AP011602), Icefish (Salanx ariakensis, AP006231 and HM151535), Chain pickerel (Esox niger, AP013046) and Western Pacific roughy (Hoplostethus japonicus, AP002938). The three mitochondrial variants (numbered by the S. alpinus mtDNA - NC_000861) are: m1829G>A (CCACGTTGTGAAACCAAC[G/A]TCCGAAGGTGGATTTAGCAGT), m3211T>C (CGTGCAGAAGCGGGCATAAG[T/C]ACATAAGACGAGAAGACCCT) and m3411C>T (CTCTAAGCACCAGAATTT[C/T]TGACCAAAAATGATCCGGC).\n\n\nResults\n\nEach sample yielded good quality data, with sequencing depth from 49 to 58 million reads (average: 55 million). To quantify the expression levels, the reads were aligned to a salmon EST-assembly63. Around 20% of the reads mapped uniquely to the EST data (S3 Table). A further 30% mapped to two or more contigs, probably representing paralogous genes, recent duplications or repeat-like elements within transcribed regions. A substantial fraction of the RNA-sequencing reads did not map to the contigs from S. salar. Analyses of those reads require an Arctic charr genome sequence or a transcriptome assembly from longer and paired-end reads.\n\nFor the expression analysis, ESTs were collapsed into 16851 “genes” or paralog groups (see the Methods for the broader meaning of gene). 
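The variant notation above (mtDNA position, reference>alternate allele, with the two alleles bracketed inside a flanking-context string) can be parsed mechanically; a small illustrative sketch (the function name is an assumption, not from the paper):

```python
import re

def parse_mt_variant(name, context):
    """Split e.g. 'm1829G>A' into (position, ref, alt) and check that the
    bracketed [ref/alt] in the flanking-context string matches the name."""
    m = re.fullmatch(r"m(\d+)([ACGT])>([ACGT])", name)
    pos, ref, alt = int(m.group(1)), m.group(2), m.group(3)
    c = re.fullmatch(r"[ACGT]+\[([ACGT])/([ACGT])\][ACGT]+", context)
    assert (c.group(1), c.group(2)) == (ref, alt), "context/name mismatch"
    return pos, ref, alt
```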
We only considered genes (a total of 10496) with 800 or more mapped reads, and tested for differential expression using the edgeR-package65. We detected considerable changes in the transcriptome during Arctic charr development (Figure 2a). The expression of 1603 and 2459 genes differed significantly between developmental timepoints at the 1% and 5% levels of false discovery rate (FDR), respectively (S1 file). The difference was most pronounced between pre-hatching (timepoints: 141, 163, 200 τs) and post-hatching embryos (timepoint 433 τs), as more than 70% of the genes with FDR below 1% had higher expression in the latter (Figure 2a). According to Gene Ontology analyses, six separate GO categories are significant (below 10% FDR). The most drastic changes were seen in processes related to glycolysis (GO:0006096, FDR = 0.0009), where the expression of 19 out of 25 genes changed during this developmental period. The other five classes that were differentially expressed during charr development are: ion transport (GO:0006811, FDR = 0.027), blood coagulation (GO:0007596, FDR = 0.03), DNA repair (GO:0006281, FDR = 0.08) and two immune related categories (GO:0019882, FDR = 0.08; GO:0006955, FDR = 0.09). Those results probably reflect developmental changes and/or differences in the environment of embryos before and after hatching.\n\nTwo morphs (SB and AC) are represented, at four timepoints. (A) The 1603 genes with expression differences among time points, here clustered into four groups. (B) The 71 genes differentially expressed between morphs, clustered into 4 groups for each of the two morphs. High expression is indicated by blue and low expression by beige.\n\nWe were especially interested in genes showing expression differences between the two Arctic charr morphs, as they might implicate pathways involved in the ecological divergence among charr populations. 
In the data, 296 genes were differentially expressed (FDR < 5%) between the morphs (141 higher in SB and 152 higher in AC, S1 file). Among genes with higher expression in SB-charr, two biological GO categories were enriched: blood coagulation (GO:0007596, p = 0.001) and proteolysis (GO:0006508, p = 0.002). Recall that expression of blood coagulation factors also differed between developmental stages (see above). In AC-charr, genes in three categories had higher expression: respiratory electron transport chain (GO:0022904, p = 0.0006), ATP synthesis coupled electron transport (GO:0042773, p = 0.002) and neurotransmitter transport (GO:0006836, p = 0.009). The first two GO categories both relate to energy generation in mitochondria and could reflect higher expression of genes with mitochondrial functions in AC-charr.\n\nAt a more stringent FDR (1%), 31 genes were more highly expressed in SB and 40 genes in AC-charr (Figure 2b, Table 1 and Table 2). These genes have diverse functional annotations. The genes with higher expression in each morph were clustered into 4 groups, which aggregated genes of similar function. For instance, SB cluster 3 has three immune related genes: Complement factor D (9), H-2 class I histocompatibility antigen L-D alpha chain (2) and Sushi domain-containing protein 2 (4), and one gene with unknown function (Table 1). Note, however, that immune genes were not significantly enriched in the GO comparison of morphs.\n\nlogFC – log Fold Change\n\nlogCPM – log Counts Per Million\n\nFDR – False Discovery Rate\n\nThe cluster numbering corresponds to Figure 1.\n\nFor column header explanation, see the footer of Table 1.\n\nThe results suggest that mitochondrial function and blood coagulation genes are differentially expressed between the morphs, but due to the few samples used in the RNA-sequencing, qPCR verification was needed.\n\nThe data highlight genes likely to differ in expression between embryos of SB and AC-charr. 
Of the nine genes subjected to qPCR analyses of whole embryos, five were confirmed to be differentially expressed between AC and SB at 161 or 200 τs (Figure 3, S4 Table and S2 file). Three genes, Nattl, Alkaline phosphatase (Alp) and Lysozyme (Lyz), had significantly higher expression in SB. The other two, Keratin-associated protein 4–3 (Krtap4–3) and Poly(ADP-ribose) polymerase 6 (Parp6), had higher expression in AC embryos (Figure 3, S4 Table). No morph and time interaction was detected for any of the genes.\n\nRelative expression of 9 genes (A–I) analysed by qPCR in the small benthic (SB) charr from Lake Thingvallavatn and aquaculture (AC) charr at two different developmental timepoints (161 and 200 τs). Five genes were differentially expressed between the two morphs (Alp, Krtap4-3, Lyz, Nattl, Parp6), while four further genes did not show significant expression differences between morphs (Cgat2, Cox6B1, Ndub6, Ubl5); see Table S3. Error bars represent the standard deviation calculated from two biological replicates.\n\nAs some of the genes are represented by different contigs or even paralogs, we set out to disentangle the expression of one paralog group in detail. The qPCR primers used above matched conserved gene regions and thus estimate the combined expression of several paralogs. We chose to measure the expression of three different natterin paralogs (nattl1, 2 and 3), in part because this understudied gene was first characterized as a toxin produced by a tropical fish75,76. We studied nattl expression at several developmental stages in AC-, SB- and PL-charr as well as in selected tissues of adult AC-charr. The expression level of the three paralogs differed between morphs and timepoints (Figure 4 and S5 Table). Overall, nattl2 had the highest expression in all morphs. 
Nattl1 had higher expression in embryos of PL-charr than in AC- and SB-charr, while nattl2 and nattl3 were more expressed in SB-embryos.\n\nThe expression is graphed for different morphs (SB, AC and PL) at four developmental timepoints (161, 200, 256 and 315 τs), relative to AC-charr at timepoint 161. A) General Nattl expression during charr development. B–D) Expression of Nattl paralogs 1–3. An ANOVA summarizing the variation among morphs is given in Table S4.\n\nIn order to evaluate the hypothesis that nattl genes have immune-related functions we studied their expression in adult tissues (in AC-charr). The nattl expression was highest in the gills, followed by expression in kidney, skin and spleen. Low expression levels were detected in liver, intestine and heart (S1 Figure and S5 Table). The three nattl paralogs followed different patterns, whilst each of them showed significant expression differences among tissues. Nattl1 was mainly expressed in spleen and kidney, while nattl2 showed significantly higher expression in skin, liver and gills. Similarly, the relative expression of nattl3 was highest in the gills and skin. This indicates that the three nattl paralogs are expressed in a tissue-specific manner, and also differently during the development of the three charr morphs studied here.\n\nTo study the craniofacial divergence between sympatric Arctic charr morphs we used qPCR to study 8 genes with expression differences in the RNA-seq data (all higher in SB). We focused on genes with known craniofacial expression in zebrafish development77 and compared two benthic (SB, LB) and two limnetic charr (AC, PL). We analyzed heads at three time-points (178, 200 and 218 τs), as this period overlaps with early stages of craniofacial skeletal formation in Arctic charr78,79. The qPCR confirmed the higher expression of seven out of these eight genes in the head of benthic charr compared to limnetic charr (Figure 5, S2 Figure and S3 file). 
These seven genes are Claudin 4 (Cldn4), Adseverin (Scin), Junction plakoglobin (Jup), Lipolysis stimulated lipoprotein receptor (Lsr), Major vault protein (Mvp), Transforming growth factor beta receptor II (Tgfbr2) and Vitamin D receptor a (Vdra). The eighth gene, Retinoic acid receptor gamma-A (Rarg), gave a small but significant response in the head, but the effect was reversed, i.e. the expression was higher in AC. The expression difference of the seven genes was, in almost all cases, consistent over the three time-points studied (see S2 Figure).

Relative expression ratios, calculated from the qPCR data, were subjected to an ANOVA to test the expression differences amongst four charr groups and three close time points (τs). The underlined gene names reflect significant differences between SB and AC-charr. A post hoc Tukey's test (HSD) was performed to determine the effects of morph, time and morph-time interaction (M X T). White boxes represent low expression, while black boxes represent high expression. The shading represents significantly different expression between the samples (p < 0.05, NS = not significant).

The RNA-seq data also revealed segregating variants with large frequency differences between charr morphs. To uncover candidate SNPs we mapped the reads to all of the S. salar EST-contigs. Filtering on coverage yielded 165,790 candidate SNPs (Table 3); of those, 66,569 came from reads that mapped uniquely and 57,009 candidate SNPs from reads that mapped to more than one contig, with limited overlap between the lists. If the expression of paralogous genes is stable, differences among paralogs appear as SNPs at similar frequency in all samples. By requiring variant frequency differences (p < 0.05, uncorrected) between samples we reduced the list of candidates by two thirds, yielding over 20,000 candidate SNPs.
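The two-step filter described above (a per-site test of frequency difference between samples, then a minimum frequency-difference threshold) can be sketched as follows. The specific test used by the authors is not stated beyond "p < 0.05, uncorrected"; a Pearson chi-square on the 2x2 allele-count table is one plausible stand-in, and all function names and counts here are illustrative.

```python
import math

def chi2_2x2(alt_a, ref_a, alt_b, ref_b):
    """Pearson chi-square (df = 1) for a 2x2 allele-count table,
    with p-value from the closed-form df-1 survival function."""
    n = alt_a + ref_a + alt_b + ref_b
    num = n * (alt_a * ref_b - alt_b * ref_a) ** 2
    den = ((alt_a + ref_a) * (alt_b + ref_b)
           * (alt_a + alt_b) * (ref_a + ref_b))
    if den == 0:
        return 0.0, 1.0
    stat = num / den
    p = math.erfc(math.sqrt(stat / 2.0))  # chi-square sf, 1 df
    return stat, p

def keep_candidate(alt_ac, ref_ac, alt_sb, ref_sb,
                   alpha=0.05, min_diff=0.5):
    """Keep a SNP if morph frequencies differ (uncorrected p < alpha)
    and the absolute variant-frequency difference is >= min_diff."""
    _, p = chi2_2x2(alt_ac, ref_ac, alt_sb, ref_sb)
    f_ac = alt_ac / (alt_ac + ref_ac)
    f_sb = alt_sb / (alt_sb + ref_sb)
    return p < alpha and abs(f_ac - f_sb) >= min_diff

# Strongly differentiated site (0.9 vs 0.1) passes; a balanced site fails.
print(keep_candidate(18, 2, 2, 18))    # True
print(keep_candidate(10, 10, 10, 10))  # False
```

Note that paralog-driven artefacts (apparent "SNPs" at similar frequency in every sample) fail the first condition, which is the rationale given in the text for requiring a frequency difference at all.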
Note that, as cDNA from charr families was sequenced (not a population sample), estimates of SNP frequencies are imprecise. To err on the side of caution, we chose SNP candidates with a 50% or higher frequency difference between morphs for further study. The candidate SNPs were also summarized by the frequency of the derived allele, in reference to the S. salar sequence. This gave 672 and 872 SNPs at higher frequency in AC-charr and SB-charr, respectively. The uniquely and multiply mapped reads revealed approximately similar numbers of candidate SNPs. Gene ontology analysis showed that for derived SNPs in SB there was an excess of variants in genes related to translation, both as a broad category and in specific subgroups (S6 Table). There was also enrichment of SNPs in genes related to DNA-mediated transposition, DNA integration, DNA replication and oxidation-reduction processes. No GO categories were enriched for high-frequency derived SNPs in AC. Furthermore, functional effects of the candidate SNPs (UTR, synonymous and non-synonymous) were predicted. The distribution among those categories did not differ between variants detected by uniquely or repeatedly mapped reads, χ2[3] = 2.59, p = 0.46 (S7 Table).

For Delta > 0.95 we show the number of SNP-candidates before the redundant ones were removed.

SNP-candidates were found by mapping to S. salar ESTs, from UNIquely or REPeatedly mapped RNA-reads. Delta: difference in allele frequency between morphs, categorized by which morph had the higher derived allele frequency.

A total of 60 candidate SNPs are nearly fixed in one morph, with a frequency difference between morphs above 95% (after manual inspection of contigs and SNP positions, three candidates were removed since they represented the same SNP). Of these “fixed” SNPs, 46 came from uniquely mapped reads and 14 from reads that mapped more than once (Table 4 and Table 5). For the SNPs from uniquely mapped reads, 17 are fixed in AC-charr and 29 in SB-charr.
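The comparison of functional-category distributions above is a standard chi-square test of homogeneity. The sketch below implements the statistic for an r x c count table together with closed-form chi-square p-values for small degrees of freedom; the underlying counts (S7 Table) are not reproduced here, but the reported statistic χ2[3] = 2.59 does correspond to p ≈ 0.46.

```python
import math

def chi2_sf(x, df):
    """Survival function of the chi-square distribution; closed forms
    for df = 1, 2, 3 are enough for the small tables used here."""
    if df == 1:
        return math.erfc(math.sqrt(x / 2.0))
    if df == 2:
        return math.exp(-x / 2.0)
    if df == 3:
        return (math.erfc(math.sqrt(x / 2.0))
                + math.sqrt(2.0 * x / math.pi) * math.exp(-x / 2.0))
    raise ValueError("df must be 1, 2 or 3")

def chi2_homogeneity(table):
    """Pearson chi-square for an r x c count table.
    Returns (statistic, degrees of freedom, p-value)."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    stat = sum((obs - rows[i] * cols[j] / total) ** 2
               / (rows[i] * cols[j] / total)
               for i, r in enumerate(table)
               for j, obs in enumerate(r))
    df = (len(rows) - 1) * (len(cols) - 1)
    return stat, df, chi2_sf(stat, df)

# The reported statistic chi2[3] = 2.59 gives p ~ 0.46, as in the text.
print(round(chi2_sf(2.59, 3), 2))  # 0.46
```

For larger degrees of freedom one would use a library routine (e.g. a regularized incomplete gamma function) instead of these closed forms.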
The few genes with two or more polymorphic sites were: Keratin type II cytoskeletal 3 (KRT3), Cysteine sulfinic acid decarboxylase (CSAD) and DNA-directed RNA polymerase I subunit RPA12 (RPA12), with 5, 5 and 2 SNPs, respectively. KRT3 and CSAD had significant differentiation in both SB and AC. Similarly, 14 SNPs with large differentiation between morphs were predicted from reads that mapped to two or more contigs (Table 5). Of these, we found two variants in the mitochondrial 60S ribosomal protein L36 (RPL36) and variants in four other mitochondrial genes (28S ribosomal protein S18a, mitochondrial (MRPS18A), Apoptosis-inducing factor 1, mitochondrial (AIFM1), Isocitrate dehydrogenase [NADP], mitochondrial (acIDH1) and Protein S100-A1 (S100A1)), all at higher frequency in AC-charr. PCR and Sanger sequencing were used to confirm SNPs in DNA2-like helicase (DNA2), a gene with nuclear and mitochondrial functions, and in two other genes, Uroporphyrinogen decarboxylase (UROD) and Mid1-interacting protein 1-like (MID1IP1) (S2 Table). The candidate variant in Eukaryotic translation initiation factor 4 gamma 2 (EIF4G2) was not substantiated by the PCR/sequencing.

a Those genes are distinct paralogs

a Those genes are distinct paralogs

Considering the enrichment of differentially expressed genes related to mitochondrial energy metabolism (above), and the high-frequency candidate SNPs in several genes with mitochondrial function in AC-charr, we decided to study the mitochondrial transcriptome further. The charr studied here reflect metabolic extremes: the aquaculture charr was bred for growth, while the small benthic morph is thought to have experienced natural selection for slow metabolism and retarded growth45,80. Although mRNA preparation protocols were used for generating cDNA for the RNA-sequencing, a substantial number of reads came from non-polyadenylated sequences.
By mapping the reads to the mtDNA sequence of Arctic charr we could estimate expression and infer polymorphism, both in genes and in intergenic regions. There was a clear difference in sequencing coverage, with more than twice as many reads mapped from the AC- compared to the SB-charr (mean fold difference 2.27, Wilcoxon test, p < 0.0004). Note that, as only two types of fish are compared, it is impossible to determine the polarity of this expression divergence.

The mapped RNA-reads were used to identify polymorphism and divergence in the entire mitochondrial chromosome. The polymorphisms were found by mapping to mtDNA from a Canadian S. alpinus55, but ancestral vs. derived status was inferred by comparison to S. salar mtDNA. Bioinformatics revealed 82 candidate sites, including 35 that represent divergence between Icelandic and Canadian charr. A total of 20 candidate SNPs had a high (more than 50%) frequency difference between SB and AC-charr (Figure 6). There was no bias in the distribution of derived SNPs: 11 on the AC branch and 9 in SB. Note that the frequency distribution is highly irregular, as we sequenced embryos of related individuals (see Materials and Methods), not a population sample. The divergence between Iceland and Canada is particularly low in the 12s and 16s ribosomal RNA genes. Curiously, two SNPs in those genes differed strongly in frequency between morphs (Figure 6). To confirm and better estimate the frequency of variants in the ribosomal genes, we PCR-amplified and sequenced two ~550 bp regions in the rRNA genes, comparing three morphs (PL, LB and SB) from Lake Thingvallavatn (Figure 7A, C and E, S2 Table). The 12s polymorphism (m1829G>A) differed significantly between the morphs (χ2[2] = 8.6, p = 0.014), and was at highest frequency in the SB (0% in PL, 12.5% in LB and 75% in SB). Similarly, m3411C>T in the 16s was enriched in SB (62.5%) but found at lower frequency in PL (0%) and LB (12.5%); it also differed significantly between morphs (χ2[2] = 9.33, p = 0.009).
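The ancestral vs. derived polarization used above (comparison to S. salar mtDNA) amounts to a simple outgroup comparison per site. This is an illustrative reconstruction of that logic, not the authors' actual pipeline; the function name and example bases are hypothetical.

```python
def polarize(charr_alleles, salar_base):
    """Classify a biallelic charr site against the S. salar outgroup.

    The charr allele matching the outgroup base is taken as ancestral
    and the other as derived. Returns (ancestral, derived), or None
    when the outgroup base matches neither allele, in which case
    polarity cannot be assigned by this criterion.
    """
    a, b = charr_alleles
    if salar_base == a:
        return a, b
    if salar_base == b:
        return b, a
    return None

# e.g. for a G/A site where salmon carries G, A is the derived allele.
print(polarize(("G", "A"), "G"))  # ('G', 'A')
print(polarize(("C", "T"), "G"))  # None (outgroup uninformative)
```

A caveat the text itself implies: this assumes no parallel substitution on the outgroup lineage, which is reasonable for slowly evolving genes like the mitochondrial rRNAs.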
The Sanger sequencing also revealed three other polymorphisms in the amplified regions, not seen in the transcriptome. Among those, m3211T>C in the 16s gene was at 75% frequency in LB, but not found in the other morphs (χ2[2] = 19.76, p < 0.0001).

The frequency differences between morphs of candidate SNPs, estimated from the RNA-sequencing, are graphed along the mtDNA chromosome. The SNPs indicate whether the derived allele is at higher frequency in SB (black dots) or AC (open circles). Sites of divergence between the Icelandic stocks and the Canadian reference sequence are indicated by triangles. The two black boxes represent the 12s (left) and 16s (right) rRNA genes, and gray boxes the 14 coding sequences.

In order to gauge the potential functionality of those variants, we aligned the rRNA genes from nearly one hundred fish species and several other vertebrates. The positions affected by m1829G>A and m3211T>C, in the 12s and 16s rRNAs, are not well conserved in fishes or vertebrates (Figure 7B and Figure 7D). However, m3411C>T in the 16s rRNA alters a position that is nearly invariant in 100 fish genomes (Figure 7F). The only exception is Pacific menhaden, which curiously also has a T in this position. This region could not be aligned properly in other vertebrates. Thus m3411C>T alters a conserved position, but probably not very drastically, as the introduced allele is tolerated in another fish species.

Aligned are several fish genomes, with lamprey or humans as outgroups, reflecting a 38 bp window around each of the 3 positions (*). A, C, E) Frequency of each of those variants in three Arctic charr populations from Lake Thingvallavatn (PL, LB and SB). A total of 8 individuals were genotyped from each morph, see Methods. B) Alignment of variant m1829G>A in the 12s rRNA gene in fishes, using humans as an outgroup.
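Scoring how conserved a variant position is across such a multiple alignment (e.g. the ~100 fish 16s sequences checked for m3411C>T) reduces to measuring column conservation. A minimal sketch with a toy alignment; the sequences and function name are illustrative only.

```python
from collections import Counter

def column_conservation(alignment, pos):
    """Fraction of aligned sequences sharing the most common base at
    a column (gaps excluded). 1.0 means the position is invariant."""
    column = [seq[pos] for seq in alignment if seq[pos] != "-"]
    if not column:
        return 0.0
    (_, count), = Counter(column).most_common(1)
    return count / len(column)

# Toy alignment: column 2 is invariant, column 0 varies in one sequence
# (analogous to the single menhaden exception at the m3411 position).
aln = ["CACGT", "TACGT", "CACGT", "CACGT"]
print(column_conservation(aln, 2))  # 1.0
print(column_conservation(aln, 0))  # 0.75
```

A position scoring near 1.0 across distantly related species, like the m3411 site, is the kind of column where a derived allele is most likely to be functionally relevant.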
D) Similar alignment of a 16s variant, m3211T>C, and F) alignment of variant m3411C>T in the 16s rRNA gene.

Discussion

We are interested in how predictable evolution is at the molecular level and whether certain principles influence the rewiring of developmental and regulatory systems during evolution81. One way to study this is to identify genetic and developmental effects on key traits in species or populations that exhibit parallel evolution. The objective of this study was to get a handle on the genetic and molecular systems that associate with benthic morphology in charr, mainly by focusing on the small benthic morph in Lake Thingvallavatn, Iceland. To this end we performed a transcriptome analysis contrasting the development of embryos from SB-charr and aquaculture charr.

As no reference genome is available for Arctic charr, we mapped reads to S. salar EST-contigs63 in order to estimate expression and identify candidate genetic polymorphisms. As many of the contigs are short or have overlapping annotations, we collapsed genes into paralogous groups when appropriate for the expression analysis. The main advantage of this approach was the reduction of the number of statistical tests (and hence an increase in statistical power). The downside is that paralog-specific expression patterns are masked, as our qPCR results for the natterin-like gene family show (Figure 3 and S1 Figure). Recent rainbow trout data show that most paralogs from the latest whole-genome duplication event retain the same expression pattern32, indicating that this scenario is probably uncommon; hence it is of considerable interest when two paralogs show distinct expression patterns82. In their analysis of the Arctic charr gill transcriptome, Norman et al. (2014)16 also used Illumina sequencing technology to evaluate expression. Their reads were longer (2x100 bp) than in this study (36 bp), enabling them to assemble contigs.
They did not consider the paralogs in their approach and merged contigs based on sequence identity. Thus the complexity of the Arctic charr transcriptome remains a mystery that advances in sequencing technology, assembly algorithms and genome sequencing of this species could help reveal.

Our data reflect the differential deployment of several gene classes during Arctic charr development. Studies in salmonids and other fish have demonstrated large changes in expression during early development, including coordinated changes in many cellular and developmental systems12,19,83–85. Several blood coagulation factor genes showed significant changes during charr development, and were also more highly expressed in the SB-charr. This might reflect differences in the rate of development of blood composition, or tissue composition, in the two morphs. Our main interest is in the derived and repeatedly evolved small benthic charr. For this study we chose AC-charr as a point of reference for several reasons: i) it has limnetic-like head morphology, ii) availability, and iii) we wanted a strong contrast in this first survey of charr developmental diversity. The AC-charr proved useful, as the data presented here have already revealed several developmental genes and regulators with differential expression between benthic and limnetic charr58,59. Furthermore, we previously found a tight correlation of RNA-seq expression and qPCR estimates in this very same transcriptome58.
Furthermore, we have used the same morphs (AC and SB) and samples in a comparison of the developmental miRNA transcriptome, which revealed that the expression of several miRNAs correlates with morph differences62.

The genetic separation in two immunity genes among sympatric morphs in Lake Thingvallavatn52 prompted us to examine further the expression of Lyz and nattl, which were differentially expressed between morphs.

Both genes are expected to be involved in immune defenses and had higher expression in SB. The substrate of lysozyme86 is the bacterial cell wall peptidoglycan, and it acts directly on Gram-positive bacteria87. Lysozyme also promotes the degradation of the outer membrane and therefore also acts indirectly on Gram-negative bacteria88. Another gene that caught our attention was natterin-like. Natterins were first discovered in the venom gland of the tropical toxic fish species Thalassophryne nattereri75,76, and are found by sequence similarity in e.g. zebrafish, Atlantic salmon and, here, in Arctic charr. The predicted Natterin proteins contain a mannose-binding lectin-like domain (Jacalin-domain) and a pore-forming toxin-like domain, and can cause edema and pain due to kininogenase activity75. Mannose-binding lectins are pathogen recognition proteins (functionally akin to antibodies) and are therefore important for the acute phase response of fish89,90. Our data suggest an immune-related function of nattl genes in charr, as the highest expression was found in skin and kidney. This needs to be verified. It is possible that the higher expression of those two genes in SB-charr reflects preparation of juveniles for bottom-dwelling habitats, which may be rich in bacteria and challenging for immune systems.

In this study we collapsed contigs into gene or paralog groups for the transcriptome analyses.
The disadvantage of this approach is that differential expression in one paralog can be masked by related genes that do not differ between groups or have contrasting expression patterns. We examined this by studying the expression of three paralogs of the natterin-like genes in different morphs during Arctic charr development, and among tissues of adult AC-charr. The data show that the three nattl genes are expressed differentially between the morphs; thus it is not divergence in the expression of a single paralog that explains the general nattl expression disparity in the transcriptome. Certainly, other scenarios could apply to other genes in the transcriptome.

A study of the skulls of post-hatching charr embryos and juveniles from Lake Thingvallavatn showed that some elements of the developing head ossified earlier in SB-charr than in PL-charr91. Our new data also demonstrate differences in craniofacial elements between AC- and SB-charr, along a limnetic vs. benthic axis79. Based on those differences between benthic and limnetic charr, we further investigated genes with roles in craniofacial development that were differentially expressed in the transcriptome. Guided by this transcriptome, we had already found two extra-cellular matrix (ECM) remodeling genes, Mmp2 and Sparc, and a conserved co-expression module of genes with known roles in craniofacial morphogenesis, to have higher expression in the developing heads of benthic Arctic charr morphs than in limnetic morphs58,59. Bioinformatic and qPCR analyses suggest the co-expression module may be affected by the quantity of the transcription factor ETS2. These studies and the current data confirm the utility of the contrasting developmental transcriptomes for identifying candidate genes with differential expression during head development, as 7 out of 8 candidates were confirmed by qPCR.
These genes had consistently higher expression in the developing head of the two benthic morphs (SB and LB), and lower in the more limnetic fish (AC and PL). This is striking, as three of the morphs studied (SB, LB and PL) are closely related and live in sympatry in Lake Thingvallavatn52.

We focused on a few targets of the Tgf-β and Ahr signaling pathways because of their roles in craniofacial morphogenesis and their transcriptional connections92–94. Adseverin (Scin) was one of the top differentially expressed genes (Table 1) and has roles in rearrangements of the actin cytoskeleton, chondrocyte differentiation and skeletal formation95,96. Also, in the transcriptome Lsr, Cldn4 and Tgfbr2 had higher expression in SB-charr, and we show that the higher expression of those genes associated with the benthic morphotype. Lsr is a molecular component of tri-cellular tight junctions97 and has been shown to be suppressed upon Tgf-β1 stimulation98 in a human cell line. Similarly, Cldn4, a tight junction protein with an unknown role during embryonic morphogenesis, is a target of the Tgf-β and Ahr signaling pathways99,100. Finally, the expression of Tgfbr2, encoding a receptor of Tgf-β, was slightly but significantly higher in the head of benthic morphs. Previous studies suggest a crucial role of Tgfbr2 in craniofacial morphogenesis101.

We also confirmed differential expression of other genes, including two with higher expression in SB-charr. Mvp is the predominant component of cytoplasmic ribonucleoprotein structures called vaults102, which are highly conserved across eukaryotes. The vaults have been something of an enigma, but are implicated in several processes, from signal transmission to immune response103. The Jup gene also showed higher expression in SB-charr. Finally, higher expression of Vdra, encoding the vitamin D receptor A, was found in the heads of benthic forms.
The receptor regulates mineral homeostasis, osteoblast differentiation and bone metabolism104.

To summarize, the results show that RNA-sequencing of aquaculture charr with limnetic craniofacial morphology and of small benthic charr can be used to reveal differential expression of genes that associate with limnetic vs. benthic divergence in craniofacial elements in sympatric charr morphs. It would be interesting to know whether the expression of these genes associates with benthic morphology in independently evolved charr populations, or even in other species with similar trophic diversity.

By comparing AC and SB-charr, which represents a small benthic resource morph that has evolved repeatedly in Icelandic stream and pond habitats51, we hoped to implicate genes and pathways involved in adaptation to these special habitats. The data point to differences between SB and AC-charr in systems related to energy metabolism, as may be expected considering their contrasting life histories. First, there is 2X higher expression of respiratory electron transport chain components in AC compared to SB-charr, and 100% more mitochondria-derived reads are found in the AC-charr samples. Note that the direction of divergence is unknown, i.e. whether expression was up in AC or down in SB. Second, many derived candidate SNPs in genes related to mitochondrial function were at high frequency on the AC branch, for instance in S100A1, which has been implicated in mitochondrial regulation in cardiac tissue in humans105, though its expression is probably not exclusive to this tissue. Third, while the mitochondrial ribosomal genes generally evolve slowly, we do see derived variants at high frequency in the SB and large benthic charr in Lake Thingvallavatn. Specifically, m3411C>T in SB affects a position that is highly conserved among fish, and could affect the function of the 16s rRNA. Earlier studies of mitochondrial markers in S.
alpinus did not find large signals of divergence within Iceland47,49,52, probably because they studied other genes. In summary, the results suggest divergence in mitochondrial function, due to the domestication of the aquaculture charr and/or possibly reflecting adaptation of the small benthic charr in Lake Thingvallavatn.

The mitochondrion is more than a powerhouse; it integrates metabolism, the cell cycle and apoptosis106. The number of mitochondria and their functions are known to correlate with environmental attributes. For instance, in Antarctic fishes under extreme cold, higher numbers of mitochondria are found in muscle and heart cells107. Our data suggest an expression difference between morphs that could reflect differences in the total number of mitochondria, the number of mtDNA copies per mitochondrion or cell, or a difference in RNA expression from the mtDNA, possibly due to evolution of mtDNA related to diet and/or temperature108. Further work is needed to map out the expression differences of mitochondria-related genes in more SB and anadromous charr morphs (representing the ancestral state). The mtDNA signals could also be investigated in populations along ecological clines (e.g. temperature) or with respect to life history109.

Conclusions

The data presented here set the stage for future investigations of the molecular and genetic systems involved in the development and divergence of the highly polymorphic and rapidly evolving Arctic charr. The results suggest that genetic and expression changes in multiple systems relate to divergence among populations. The data reveal differential expression of two immunological genes between morphs, and of several craniofacial developmental genes that may help sculpt benthic vs. limnetic heads. The genetic data suggest, among other things, differentiation in the charr mtDNA between morphs.
Our broad interest is in how natural selection tweaks genetic regulatory systems, for instance via genetic changes in regulatory sequences or in post-transcriptional modifiers relating to adaptations. Genetic changes affecting gene expression can be raw material for adaptation, but could also rise in frequency due to reverberations in regulatory cascades81. Our specific aim was to cast light on the developmental and population genetics of the unique small benthic charr, typically found in cold springs and small pond habitats in Iceland, particularly those with lava substratum36,51. The availability of charr populations at different stages of divergence sets the stage for future genomic studies of the roles of genes, environment and plasticity in shaping this polymorphic species.

Data availability

F1000Research: Dataset 1. Parameters and multiple-testing corrected p-values for expression analysis, 10.5256/f1000research.6402.d48005110

F1000Research: Dataset 2. qPCR data for tests of expression in charr developing embryos and adult tissues, 10.5256/f1000research.6402.d48006111

F1000Research: Dataset 3. qPCR data for tests of expression in charr developing embryo heads, 10.5256/f1000research.6402.d48007112
"appendix": "Author contributions\n\n\n\nConceived and designed the study: JG, AP, ZOJ, SSS, SRF, VHM, EPA.\n\nSampling, crosses and rearing: SSS, BKK, ZOJ, KHK, VHM, AP.\n\nRNA extraction and RNA sequencing: SRF.\n\nAnalyses of RNA sequencing data: JG, AP.\n\nqPCR work: EPA, SSS2, VHM.\n\nSNP analyses: JG, AP.\n\nSNP confirmation: IMJ, KHK, AP.\n\nComparative genomic analysis: AP.\n\nWriting: AP, JG, EPA, VHM, SSS.\n\nAnalyses: JG, AP, EPA, SSS2.\n\nGathered the data: ZOJ, SRF, EPA, IAJ, KHK, SSS2.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis project was supported by The Icelandic Center for Research (grant number: 100204011) to SSS, AP, ZOJ and BKK, The University of Iceland Research/Doctoral Fund to JG and KHK and University of Iceland research fund to AP, SSS and ZOJ.\n\nI confirm that the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe would like to thank Baldur Kristjansson for help with genomic alignments. We are very grateful to Droplaug N. Magnusdottir, Gudbjorg Th. Orlygsdottir, Steinunn Snorradottir and Olafur Th. Magnusson at deCODE Genetics for help with the Illumina sequencing.\n\n\nSupporting Information\n\nRelative expression of Natterin (A) & Natterin paralogs 1–3 (B–D) within different tissues (skin, heart, liver, gill, spleen, intestine & kidney) of adult aquaculture charr (RT-qPCR); expression plotted for different tissues, relative to heart tissue (lowest expression levels).\n\nRelative expression of 12 candidate genes with characterized craniofacial expression during zebrafish development (ZFIN website) in the head of SB, LB, PL and AC at three time points in development. In the transcriptome data all of the genes had shown higher expression in SB at 200 τs. The expression is normalized to the geometric means of two craniofacial reference genes (ACTB and IF5A1). 
Expression is relative to a replicate of the AC morph at 200 τs, set to one. Error bars represent standard deviation calculated from two biological replicates; each biological replicate contains a homogenate of six heads.

Primer sequences, melting temperatures and primary data.

Paralogs: indicates whether the PCR and sequencing yielded mixed products, indicative of paralogous genes.

Sequence: name of the GenBank sequence or EST-contig used as reference for mapped reads.

Genome contig: name of the salmon genome (ICSASG_v1) contig with the best sequence match to the respective EST-contig.

Ref: reference variant.

Var: the derived variant.

Freq_AC and Freq_SB: frequency of variant reads as a fraction of the total number of reads mapped in Aquaculture (AC) or Small benthic (SB) charr.

FreqP: the frequency of the variant in genotyping by PCR and direct sequencing, as a fraction of the total number of chromosomes sequenced.

Numbers of reads aligning to the salmon reference for each sample.

Expression of nine genes was analyzed in whole SB and AC-charr embryos, at two developmental timepoints (161 and 200 day degrees).

Significance: NS p > 0.05; * p < 0.05; ** p < 0.01.

FDR RNA-seq: indicates significance of Morph and Time effects in the transcriptome data.

ANOVA for relative expression levels of Natterin-like and Natterin-like paralogs 1–3 in Arctic charr whole embryos (among SB, AC and PL morphs) and in tissues from adult AC-charr.

Significance: NS p > 0.05; * p < 0.05; ** p < 0.01.

From RNA-reads that mapped to one (Uni) or more (Rep) S. salar ESTs.

The candidate SNP frequencies differ by more than 50% between SB and AC-charr, summarized by which morph had the higher frequency of the derived allele.

References

Gould SJ: Ontogeny and Phylogeny. Harvard University Press, 1977.

Jacob F: Evolution and tinkering. Science. 1977; 196(4295): 1161–1166.
Stern DL, Orgogozo V: The loci of evolution: how predictable is genetic evolution? Evolution. 2008; 62(9): 2155–77.

Stern DL, Orgogozo V: Is genetic evolution predictable? Science. 2009; 323(5915): 746–751.

Rockman MV: The QTN program and the alleles that matter for evolution: all that’s gold does not glitter. Evolution. 2012; 66(1): 1–17.

Abzhanov A, Protas M, Grant BR, et al.: Bmp4 and morphological variation of beaks in Darwin’s finches. Science. 2004; 305(5689): 1462–1465.

Albertson RC, Kocher TD: Genetic and developmental basis of cichlid trophic diversity. Heredity (Edinb). 2006; 97(3): 211–221.

Cresko WA, McGuigan KL, Phillips PC, et al.: Studies of threespine stickleback developmental evolution: progress and promise. Genetica. 2007; 129(1): 105–126.

Giger T, Excoffier L, Day PJ, et al.: Life history shapes gene expression in salmonids. Curr Biol. 2006; 16(8): R281–2.

Gross JB, Furterer A, Carlson BM, et al.: An integrated transcriptome-wide analysis of cave and surface dwelling Astyanax mexicanus. PLoS One. 2013; 8(2): e55659.

Lanes CF, Bizuayehu TT, de Oliveira Fernandes JM, et al.: Transcriptome of Atlantic Cod (Gadus morhua L.) early embryos from farmed and wild broodstocks. Mar Biotechnol (NY). 2013; 15(6): 677–694.

Bougas B, Audet C, Bernatchez L: The influence of parental effects on transcriptomic landscape during early development in brook charr (Salvelinus fontinalis, Mitchill). Heredity (Edinb). 2013; 110(5): 484–491.
Micallef G, Bickerdike R, Reiff C, et al.: Exploring the transcriptome of Atlantic salmon (Salmo salar) skin, a major defense organ. Mar Biotechnol (NY). 2012; 14(5): 559–569.

Wang S, Furmanek T, Kryvi H, et al.: Transcriptome sequencing of Atlantic salmon (Salmo salar L.) notochord prior to development of the vertebrae provides clues to regulation of positional fate, chordoblast lineage and mineralisation. BMC Genomics. 2014; 15: 141.

Qian X, Ba Y, Zhuang Q, et al.: RNA-Seq technology and its application in fish transcriptomics. OMICS. 2014; 18(2): 98–110.

Norman JD, Ferguson MM, Danzmann RG: Transcriptomics of salinity tolerance capacity in Arctic charr (Salvelinus alpinus): a comparison of gene expression profiles between divergent QTL genotypes. Physiol Genomics. 2014; 46(4): 123–37.

Derome N, Duchesne P, Bernatchez L: Parallelism in gene transcription among sympatric lake whitefish (Coregonus clupeaformis Mitchill) ecotypes. Mol Ecol. 2006; 15(5): 1239–49.

Filteau M, Pavey SA, St-Cyr J, et al.: Gene coexpression networks reveal key drivers of phenotypic divergence in lake whitefish. Mol Biol Evol. 2013; 30(6): 1384–96.

Domazet-Lošo T, Tautz D: A phylogenetically based transcriptome age index mirrors ontogenetic divergence patterns. Nature. 2010; 468(7325): 815–8.

Bozinovic G, Sit TL, Hinton DE, et al.: Gene expression throughout a vertebrate’s embryogenesis. BMC Genomics. 2011; 12: 132.

Skúlason S, Smith TB: Resource polymorphisms in vertebrates. Trends Ecol Evol. 1995; 10(9): 366–370.
Snorrason SS, Skúlason S: Adaptive Speciation in Northern Freshwater Fishes. In: Dieckmann U, Doebeli M, Metz JAJ, Tautz D, editors, Adaptive Speciation. Cambridge University Press, Cambridge, chapter 10, 2004; 210–228.

Klemetsen A: The Charr Problem Revisited: Exceptional Phenotypic Plasticity Promotes Ecological Speciation in Postglacial Lakes. Freshwater Rev. 2010; 3(1): 49–74.

Bernatchez L, Renaut S, Whiteley AR, et al.: On the origin of species: insights from the ecological genomics of lake whitefish. Philos Trans R Soc Lond B Biol Sci. 2010; 365(1547): 1783–800.

Merilä J: Nine-spined stickleback (Pungitius pungitius): an emerging model for evolutionary biology research. Ann N Y Acad Sci. 2013; 1289: 18–35.

Fraser DJ, Weir LK, Bernatchez L, et al.: Extent and scale of local adaptation in salmonid fishes: review and meta-analysis. Heredity (Edinb). 2011; 106(3): 404–20.

Allendorf FW, Thorgaard GH: Tetraploidy and the Evolution of Salmonid Fishes. In: Turner BJ, editor, Evolutionary Genetics of Fishes. Springer US, 1984; 1–53.

Crête-Lafrenière A, Weir LK, Bernatchez L: Framing the Salmonidae family phylogenetic portrait: a more complete picture from increased taxon sampling. PLoS One. 2012; 7(10): e46662.

Davidson WS, Koop BF, Jones SJ, et al.: Sequencing the genome of the Atlantic salmon (Salmo salar). Genome Biol. 2010; 11(9): 403.

Moghadam HK, Ferguson MM, Danzmann RG: Whole genome duplication: Challenges and considerations associated with sequence orthology assignment in Salmoninae. J Fish Biol. 2011; 79(3): 561–574.
\n\nMacqueen DJ, Johnston IA: A well-constrained estimate for the timing of the salmonid whole genome duplication reveals major decoupling from species diversification. Proc Biol Sci. 2014; 281(1778): 20132881.\n\nBerthelot C, Brunet F, Chalopin D, et al.: The rainbow trout genome provides novel insights into evolution after whole-genome duplication in vertebrates. Nat Commun. 2014; 5: 3657.\n\nVijay N, Poelstra JW, Künstner A, et al.: Challenges and strategies in transcriptome assembly and differential gene expression quantification. A comprehensive in silico assessment of RNA-seq experiments. Mol Ecol. 2013; 22(3): 620–34.\n\nNoakes DLG: Charr truth: Sympatric differentiation in Salvelinus species. Environ Biol Fishes. 2008; 83(1): 7–15.\n\nAdams CE, Fraser D, Wilson AJ, et al.: Patterns of phenotypic and genetic variability show hidden diversity in Scottish Arctic charr. Ecol Freshwater Fish. 2007; 16(1): 78–86.\n\nKristjánsson BK, Malmquist HJ, Ingimarsson F, et al.: Relationships between lake ecology and morphological characters in Icelandic Arctic charr, Salvelinus alpinus. Biol J Linn Soc. 2011; 103(4): 761–771.\n\nWoods PJ, Skúlason S, Snorrason SS, et al.: Intraspecific diversity in Arctic charr, Salvelinus alpinus, in Iceland: II. Which environmental factors influence resource polymorphism in lakes? Evolutionary Ecol Res. 2012; 14(8): 993–1013.\n\nKristjánsson BK, Skúlason S, Snorrason SS, et al.: Fine-scale parallel patterns in diversity of small benthic Arctic charr (Salvelinus alpinus) in relation to the ecology of lava/groundwater habitats. Ecol Evol. 2012; 2(6): 1099–112.
\n\nSkúlason S, Noakes DL, Snorrason SS: Ontogeny of trophic morphology in four sympatric morphs of Arctic charr Salvelinus alpinus in Thingvallavatn, Iceland. Biol J Linn Soc. 1989; 38(3): 281–301.\n\nSkúlason S, Snorrason SS, Ota D, et al.: Genetically based differences in foraging behaviour among sympatric morphs of Arctic charr (Pisces: Salmonidae). Animal Behaviour. 1993; 45(6): 1179–1192.\n\nSkúlason S, Snorrason SS, Noakes DLG, et al.: Genetic basis of life history variations among sympatric morphs of Arctic char Salvelinus alpinus. Can J Fish Aquat Sci. 1996; 53(8): 1807–1813.\n\nParsons KJ, Skúlason S, Ferguson M: Morphological variation over ontogeny and environments in resource polymorphic Arctic charr (Salvelinus alpinus). Evol Dev. 2010; 12(3): 246–257.\n\nParsons KJ, Sheets HD, Skúlason S, et al.: Phenotypic plasticity, heterochrony and ontogenetic repatterning during juvenile development of divergent Arctic charr (Salvelinus alpinus). J Evol Biol. 2011; 24(8): 1640–1652.\n\nSandlund TO, Gunnarsson K, Jónasson PM, et al.: The Arctic charr Salvelinus alpinus in Thingvallavatn. Oikos. 1992; 64(1/2): 305–351.\n\nSnorrason SS, Skúlason S, Jonsson B, et al.: Trophic specialization in Arctic charr Salvelinus alpinus (Pisces; Salmonidae): morphological divergence and ontogenetic niche shifts. Biol J Linn Soc. 1994; 52(1): 1–18.\n\nMagnusson KP, Ferguson MM: Genetic analysis of four sympatric morphs of Arctic charr, Salvelinus alpinus, from Thingvallavatn, Iceland. Environ Biol Fishes. 1987; 20(1): 67–73.\n\nDanzmann RG, Ferguson MM, Skulason S, et al.: Mitochondrial DNA diversity among four sympatric morphs of Arctic charr, Salvelinus alpinus L., from Thingvallavatn, Iceland. J Fish Biol. 
1991; 39(5): 649–659.\n\nPálsson S, Árnason E: Sequence variation for cytochrome b genes of three salmonid species from Iceland. Aquaculture. 1994; 128(1–2): 29–39.\n\nVolpe JP, Ferguson MM: Molecular genetic examination of the polymorphic Arctic charr Salvelinus alpinus of Thingvallavatn, Iceland. Mol Ecol. 1996; 5(6): 763–72.\n\nWilson AJ, Gíslason D, Skúlason S, et al.: Population genetic structure of Arctic charr, Salvelinus alpinus from northwest Europe on large and small spatial scales. Mol Ecol. 2004; 13(5): 1129–42.\n\nKapralova KH, Morrissey MB, Kristjánsson BK, et al.: Evolution of adaptive diversity and genetic connectivity in Arctic charr (Salvelinus alpinus) in Iceland. Heredity (Edinb). 2011; 106(3): 472–487.\n\nKapralova KH, Gudbrandsson J, Reynisdottir S, et al.: Differentiation at the MHCIIα and Cath2 loci in sympatric Salvelinus alpinus resource morphs in Lake Thingvallavatn. PLoS One. 2013; 8(7): e69402.\n\nSaemundsson K: Geology of the Thingvallavatn Area. Oikos. 1992; 64(1/2): 40–68.\n\nMacqueen DJ, Kristjánsson BK, Paxton CG, et al.: The parallel evolution of dwarfism in Arctic charr is accompanied by adaptive divergence in mTOR-pathway gene expression. Mol Ecol. 2011; 20(15): 3167–84.\n\nDoiron S, Bernatchez L, Blier PU: A comparative mitogenomic analysis of the potential adaptive value of Arctic charr mtDNA introgression in brook charr populations (Salvelinus fontinalis Mitchill). Mol Biol Evol. 2002; 19(11): 1902–9.\n\nMiller W, Schuster SC, Welch AJ, et al.: Polar and brown bear genomes reveal ancient admixture and demographic footprints of past climate change. Proc Natl Acad Sci U S A. 2012; 109(36): E2382–90.
\n\nSvavarsson E: Árangur í kynbótum á bleikju og næstu skref [reference in Icelandic]. In Fræðaþing landbúnaðarsins 4. 2007; 121–125.\n\nAhi EP, Gudbrandsson J, Kapralova KH, et al.: Validation of Reference Genes for Expression Studies during Craniofacial Development in Arctic Charr. PLoS One. 2013; 8(6): e66389.\n\nAhi EP, Kapralova KH, Pálsson A, et al.: Transcriptional dynamics of a conserved gene expression network associated with craniofacial divergence in Arctic charr. EvoDevo. 2014; 5(1): 40.\n\nSnorrason SS, Skúlason S, Sandlund OT, et al.: Shape polymorphism in Arctic charr, Salvelinus alpinus. Physiol Ecol Japan. 1989; 1: 393–404.\n\nGorodilov YN: Description of the early ontogeny of the Atlantic salmon, Salmo salar, with a novel system of interval (state) identification. Environ Biol Fishes. 1996; 47(2): 109–127.\n\nKapralova KH, Franzdóttir SR, Jónsson H, et al.: Patterns of MiRNA expression in Arctic Charr development. PLoS One. 2014; 9(8): e106084.\n\nDi Génova A, Aravena A, Zapata L, et al.: SalmonDB: a bioinformatics resource for Salmo salar and Oncorhynchus mykiss. Database (Oxford). 2011; 2011: bar050.\n\nLi B, Dewey CN: RSEM: accurate transcript quantification from RNA-Seq data with or without a reference genome. BMC Bioinformatics. 2011; 12(1): 323.\n\nRobinson MD, McCarthy DJ, Smyth GK: edgeR: a Bioconductor package for differential expression analysis of digital gene expression data. Bioinformatics. 2010; 26(1): 139–40.\n\nR Core Team. R: A Language and Environment for Statistical Computing. 
2014.\n\nYoung MD, Wakefield MJ, Smyth GK, et al.: Gene ontology analysis for RNA-seq: accounting for selection bias. Genome Biol. 2010; 11(2): R14.\n\nUntergasser A, Cutcutache I, Koressaar T, et al.: Primer3–new capabilities and interfaces. Nucleic Acids Res. 2012; 40(15): e115.\n\nBustin SA, Benes V, Garson JA, et al.: The MIQE guidelines: Minimum Information for publication of Quantitative real-time PCR Experiments. Clin Chem. 2009; 55(4): 611–22.\n\nLivak KJ, Schmittgen TD: Analysis of relative gene expression data using real-time quantitative PCR and the 2-ΔΔCT Method. Methods. 2001; 25(4): 402–8.\n\nLi H, Durbin R: Fast and accurate short read alignment with Burrows-Wheeler transform. Bioinformatics. 2009; 25(14): 1754–60.\n\nKoboldt DC, Zhang Q, Larson DE, et al.: VarScan 2: somatic mutation and copy number alteration discovery in cancer by exome sequencing. Genome Res. 2012; 22(3): 568–76.\n\nLarkin MA, Blackshields G, Brown NP, et al.: Clustal W and Clustal X version 2.0. Bioinformatics. 2007; 23(21): 2947–2948.\n\nNicholas KB, Nicholas HB, Deerfield DW: GeneDoc: Analysis and Visualization of Genetic Variation. EMBNEW.NEWS. 1997; 4(14).\n\nMagalhães GS, Lopes-Ferreira M, Junqueira-de Azevedo ILM, et al.: Natterins, a new class of proteins with kininogenase activity characterized from Thalassophryne nattereri fish venom. Biochimie. 2005; 87(8): 687–99.\n\nMagalhães GS, Junqueira-de-Azevedo IL, Lopes-Ferreira M, et al.: Transcriptome analysis of expressed sequence tags from the venom glands of the fish Thalassophryne nattereri. Biochimie. 2006; 88(6): 693–9.
\n\nSprague J, Bayraktaroglu L, Clements D, et al.: The Zebrafish Information Network: the zebrafish model organism database. Nucleic Acids Res. 2006; 34(Database issue): D581–D585.\n\nEiriksson GM, Skulason S, Snorrason SS: Heterochrony in skeletal development and body size in progeny of two morphs of Arctic charr from Thingvallavatn, Iceland. J Fish Biol. 1999; 55(sA): 175–185.\n\nKapralova KH, Jónsson ZO, Pálsson A, et al.: Bones in motion: Ontogeny of craniofacial development in sympatric Arctic charr morphs. Developmental Dynamics, under revision. 2015.\n\nJonsson B, Skúlason S, Snorrason SS, et al.: Life History Variation of Polymorphic Arctic Charr (Salvelinus alpinus) in Thingvallavatn, Iceland. Can J Fish Aquat Sci. 1988; 45(9): 1537–1547.\n\nPalsson A, Wesolowska N, Reynisdóttir S, et al.: Naturally occurring deletions of hunchback binding sites in the even-skipped stripe 3+7 enhancer. PLoS One. 2014; 9(5): e91924.\n\nZou C, Lehti-Shiu MD, Thomashow M, et al.: Evolution of stress-regulated gene expression in duplicate genes of Arabidopsis thaliana. PLoS Genet. 2009; 5(7): e1000581.\n\nJantzen SG, Sanderson DS, von Schalburg KR, et al.: A 44K microarray dataset of the changing transcriptome in developing Atlantic salmon (Salmo salar L.). BMC Res Notes. 2011; 4: 88.\n\nDrivenes Ø, Taranger GL, Edvardsen RB: Gene expression profiling of Atlantic cod (Gadus morhua) embryogenesis using microarray. Mar Biotechnol (NY). 2012; 14(2): 167–76.\n\nPiasecka B, Lichocki P, Moretti S, et al.: The hourglass and the early conservation models--co-existing patterns of developmental constraints in vertebrates. PLoS Genet. 2013; 9(4): e1003476.
\n\nFleming A: On a Remarkable Bacteriolytic Element Found in Tissues and Secretions. Proceedings of the Royal Society of London. Series B. 1922; 93(653): 306–317.\n\nChipman DM, Sharon N: Mechanism of lysozyme action. Science. 1969; 165(3892): 454–465.\n\nSubramanian S, MacKinnon SL, Ross NW: A comparative study on innate immune parameters in the epidermal mucus of various fish species. Comp Biochem Physiol B Biochem Mol Biol. 2007; 148(3): 256–63.\n\nMagnadóttir B: Innate immunity of fish (overview). Fish Shellfish Immunol. 2006; 20(2): 137–51.\n\nMagnadottir B: Immunological control of fish diseases. Mar Biotechnol (NY). 2010; 12(4): 361–79.\n\nEiriksson GM: Heterochrony in bone development and growth in two morphs of Arctic charr (Salvelinus alpinus) from Thingvallavatn, Iceland. Master’s thesis, University of Iceland, 1999.\n\nChai Y, Ito Y, Han J: TGF-β signaling and its functional significance in regulating the fate of cranial neural crest cells. Crit Rev Oral Biol Med. 2003; 14(2): 78–88.\n\nPuga A, Tomlinson CR, Xia Y: Ah receptor signals cross-talk with multiple developmental pathways. Biochem Pharmacol. 2005; 69(2): 199–207.\n\nGoodale BC, La Du JK, Bisson WH, et al.: AHR2 mutant reveals functional diversity of aryl hydrocarbon receptors in zebrafish. PLoS One. 2012; 7(1): e29346.\n\nNurminsky D, Magee C, Faverman L, et al.: Regulation of chondrocyte differentiation by actin-severing protein adseverin. Dev Biol. 2007; 302(2): 427–37.
\n\nVieira FA, Thorne MA, Stueber K, et al.: Comparative analysis of a teleost skeleton transcriptome provides insight into its regulation. Gen Comp Endocrinol. 2013; 191: 45–58.\n\nFuruse M, Oda Y, Higashi T, et al.: Lipolysis-stimulated lipoprotein receptor: a novel membrane protein of tricellular tight junctions. Ann N Y Acad Sci. 2012; 1257: 54–8.\n\nJazag A, Ijichi H, Kanai F, et al.: Smad4 silencing in pancreatic cancer cell lines using stable RNA interference and gene expression profiles induced by transforming growth factor-β. Oncogene. 2005; 24(4): 662–71.\n\nPlanchart A, Mattingly CJ: 2,3,7,8-Tetrachlorodibenzo-p-dioxin upregulates FoxQ1b in zebrafish jaw primordium. Chem Res Toxicol. 2010; 23(3): 480–7.\n\nHering NA, Andres S, Fromm A, et al.: Transforming growth factor-β, a whey protein component, strengthens the intestinal barrier by upregulating claudin-4 in HT-29/B6 cells. J Nutr. 2011; 141(5): 783–9.\n\nIto Y, Yeo JY, Chytil A, et al.: Conditional inactivation of Tgfbr2 in cranial neural crest causes cleft palate and calvaria defects. Development. 2003; 130(21): 5269–80.\n\nKedersha NL, Miquel MC, Bittner D, et al.: Vaults. II. Ribonucleoprotein structures are highly conserved among higher and lower eukaryotes. J Cell Biol. 1990; 110(4): 895–901.\n\nBerger W, Steiner E, Grusch M, et al.: Vaults and the major vault protein: novel roles in signal pathway regulation and immunity. Cell Mol Life Sci. 2009; 66(1): 43–61.\n\nvan Driel M, Pols HA, van Leeuwen JP: Osteoblast differentiation and control by vitamin D and vitamin D metabolites. Curr Pharm Des. 
2004; 10(21): 2535–55.\n\nVölkers M, Rohde D, Goodman C, et al.: S100A1: a regulator of striated muscle sarcoplasmic reticulum Ca2+ handling, sarcomeric, and mitochondrial function. J Biomed Biotechnol. 2010; 2010: 178614.\n\nMcBride HM, Neuspiel M, Wasiak S: Mitochondria: more than just a powerhouse. Curr Biol. 2006; 16(14): R551–60.\n\nO’Brien KM, Mueller IA: The unique mitochondrial form and function of Antarctic channichthyid icefishes. Integr Comp Biol. 2010; 50(6): 993–1008.\n\nBallard JWO, Pichaud N: Mitochondrial DNA: more than an evolutionary bystander. Funct Ecol. 2014; 28(1): 218–231.\n\nTeacher AG, André C, Merilä J, et al.: Whole mitochondrial genome scan for population structure and selection in the Atlantic herring. BMC Evol Biol. 2012; 12: 248.\n\nGudbrandsson J, Ahi E, Franzdottir S, et al.: Dataset 1 in: The developmental transcriptome of contrasting Arctic charr (Salvelinus alpinus) morphs. F1000Research. 2015.\n\nGudbrandsson J, Ahi E, Franzdottir S, et al.: Dataset 2 in: The developmental transcriptome of contrasting Arctic charr (Salvelinus alpinus) morphs. F1000Research. 2015.\n\nGudbrandsson J, Ahi E, Franzdottir S, et al.: Dataset 3 in: The developmental transcriptome of contrasting Arctic charr (Salvelinus alpinus) morphs. F1000Research. 2015."
}
|
[
{
"id": "8970",
"date": "07 Jul 2015",
"name": "Daniel Macqueen",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Review of Gudbrandsson et al. “The developmental transcriptome of contrasting Arctic charr (Salvelinus alpinus) morphs”. The work is founded on the solid premise that rapidly evolving phenotypes in nature can be underpinned by changes at the transcriptome level. The model system here is Arctic charr populations that have evolved (since the last ice age) major differences in phenotypes along the ‘benthic’ - ‘limnetic’ axis, with strong differences in head morphology linked to feeding specializations. The work provides an extensive analysis of transcriptome and genetic differences between different morphs and populations. It is interesting, generally well-written and has merit on many levels. It is also rather hard going, since so much ground is covered on diverse areas. The study also comes with a large number of caveats, of which the authors are undoubtedly aware. Overall though, I am supportive of this work, as it represents one of the most detailed analyses of molecular mechanisms linked to rapid phenotypic evolution in Arctic charr. I see it as a great start point for future work and a source of several new findings and hypotheses. I suggest that the paper be indexed in F1000 Research as long as its caveats are transparent and the authors address my comments. I list below a number of suggestions that may help the authors improve the work, or that at least highlight study limitations for the benefit of interested readers. 
I also provide a number of minor comments and suggestions, which should help improve the manuscript more incrementally.\n\nMain comments & caveats\n\nRNAseq study design. I sympathize with the fact that the authors are trying to publish Illumina data that was generated in 2009, since (obviously) the technology has moved on greatly in the last 6 years, while its costs have been reduced dramatically. Adding to this is the fact that the authors are using a particularly complex transcriptome in terms of high content of similar paralogues (and expressed transposable elements), without a reference sequence for mapping in their species. I accept the authors’ argument that it is more sensible to map against a closely related species with the sequence data rather than to try and create a de novo assembly from 36bp reads. I also believe it is sensible to pool read counts for putative paralogous contigs in this study, since the short read length ablates any ability to separate paralogous differences in expression (yet does not preclude the generation of useful hypotheses about putative gene expression differences among morphs).\n\nHowever, I do question whether the use of Atlantic salmon EST contigs is the best approach here. Firstly, reference assemblies for both Atlantic salmon and rainbow trout are now available, which distinguish paralogous variation. More importantly, using these reference genome data would provide certainty that reads are being mapped to exons from single genes, whereas many of the ESTs will provide a fragmented representation of exon sequences, presumably relying on annotation to piece them back into ‘genes’ post hoc. In addition, paired 100bp Illumina reads are available at high coverage for Arctic charr (e.g. Norman et al. 2014), which could also be used to generate a specific reference transcriptome to map against in this study, although this might be underrepresented in terms of developmental genes as it is a gill study. 
Overall, I do wonder how much more information might have been gleaned from this dataset with a different mapping strategy?\n\nWith all the above said, I understand that the authors have built up a large study based around the original mapping to the salmon ESTs and that it would not be routine for them to repeat the study using better reference data. Furthermore, the approach used has definitely led to the generation of several valid hypotheses concerning the nature of gene expression and genetic differences among charr morphs, which have been followed up using independent approaches.\n\nMethods “Biological Replication in RNAseq” – a general comment: obviously the design of the study is not optimal because biological variation within developmental stages is not considered in the statistics. Thus, the approach lacks power to detect differences when morph variation is restricted to different developmental stages. I wanted to explain my opinion (for the record) that the study design is nonetheless useful for identifying constitutive differences between morphs. This is especially true because gene expression variability is likely to be relatively low in embryonic stages (compared to a similar study design in adults at least). Further, the pooling of individuals will have helped to at least recapture some biological variation at different stages. Thus, as mentioned above, I see the authors’ use of RNAseq as a hypothesis-generating approach, which has been quite fruitful in identifying putative differences between different morphs.\n\nMethods “QPCR study design”. The authors adhere to the MIQE guidelines, but do not always follow the best approaches. Most pertinently, the authors use the 2−∆∆Ct method (assuming PCR efficiency of 2.0) despite having gone to the effort of gaining and reporting efficiencies for each assay, which can be as low as 1.72 for some genes. 
The effect of failing to incorporate differences in efficiency is well established and this is likely to have affected the authors’ results. The authors should consider incorporating the effect of differences in efficiency into their analyses. This is likely to have some impact on the study conclusions in my opinion.\n\nMethods “Polymorphisms in charr transcriptome”. While this is not exactly my area of expertise, I struggled to understand the methods behind filtering paralogous variants from SNPs in the data. The authors state “As the SNP analysis was done on individual contigs, differences among paralogs appear in the data. However, since each sample is a pool of few individuals, it is very unlikely that we have the same frequency of true SNPs in the samples. This property was used to remove variants that are most likely due to expressed paralogs”. Can the authors please try to re-explain this in even simpler terms to help me get it? I don’t see how this description leads to a robust identification of paralogous variation. Is there an underlying assumption of equal expression among paralogues? If so, this is likely to be routinely invalidated.\n\nMethods “Verification of candidate SNPs”. While it is good that the authors have attempted to verify SNPs identified from their RNAseq data, I don’t believe the data is particularly well incorporated in the results section. It needs to be stated up front the extent to which the SNPs predicted from the RNAseq were independently verified. Also, the methods for this section can be improved, especially “we conducted genomic comparisons of the Salmon genome, ESTs and short contigs from the preliminary assembly of the Arctic charr transcriptome”. None of this information is elaborated on – what is the preliminary assembly of the Arctic charr transcriptome? Which version of the salmon genome was used and how? 
Moreover, it would be useful to actually explain in the methods that the genotyping was done on a small number of SB, PL and PI morphs, rather than relying on the reader to extract all the required information from Table S2. I guess overall, the way this section is incorporated into the manuscript needs some thought in terms of improving the reader’s experience. I struggled after reading it several times and am still not sure I have all the information I need.\n\nResults. “Analyses of those reads require an Arctic charr genome sequence or transcriptome assembly from longer and paired end reads.” As mentioned already, the latter is available to generate an Arctic charr transcriptome assembly to map against.\n\nResults; Figure 3 and 4. The authors found that around half the genes studied were not differentially expressed among morphs by qPCR. Obviously this is quite a large number, but on closer inspection, I noticed that Ndub6, Ubl5 and parp6 were not even differentially expressed according to RNAseq. Thus, I am confused at the selection of genes from the RNAseq analysis for verification by qPCR. The authors should explain this selection more transparently and provide clearer indices of the correlation between RNAseq and qPCR results and associated discussion.\n\nMinor comments, typos and suggested changes\n\nAbstract: “Species and populations with parallel evolution of specific traits can help illuminate how predictable adaptations and divergence are at the molecular and developmental level.” Grammatically – this reads better: “….. can help illuminate the predictability of adaptations and divergence at the molecular and developmental level”\n\nIntroduction: “Examples of such a species complex are the finches of the Galapagos islands, cichlids in the African great lakes are exciting multi-species systems in this respect”. 
Grammatically – reads better: “Examples of such species complexes are provided by finches of the Galapagos islands, while cichlids of the African great lakes also provide an exciting multi-species system in the same respect”\n\nIntroduction: “Some northern freshwater fish species exhibit frequent parallelism in trophic structures and life history and in several cases are they found as distinct resource morphs” change to “…. are found as distinct resource morphs”\n\nIntroduction: “in the development of ecological differences in tropic morphology” change to “… trophic morphology”.\n\nIntroduction: “The family is estimated to be between 63.2 and 58.1 million years old”. This information is not correct – it is correct to state that the age of the salmonid crown (based on the cited paper; different estimates exist in the literature, e.g. Macqueen and Johnston, 2014; Campbell et al. 2013) is estimated at 63.2 and 58.1 million years old, but the family dates back much further – to the origin of the WGD event in fact, which occurred more like 88-103 Ma (Macqueen and Johnston, 2014; Berthelot et al. 2014). Thus, the last common ancestor to extant salmonid species is what the authors are actually referring to in this sentence.\n\nIntroduction: “Furthermore, for data with short reads, mapping to a related reference genome/transcriptome is recommended over de novo assembly”. While this sentence is technically correct in the context of the work cited, I feel it is being used slightly out of context. For a start, what comprises a ‘short read’ is undefined. 36bp is short, but it is possible to get a solid reference transcriptome using 2*100bp, assuming the appropriate diversity of transcripts is represented and suitable depth is attained.\n\nIntroduction: “nuclear genes, reveled both subtle” change to “nuclear genes, revealed both subtle”\n\nMinor comment – AC, PL, LB and SB were already defined in introduction. 
Methods: “Fishing in Lake Thingvallavatn was with permissions” changed to “Fishing in Lake Thingvallavatn was done with permissions”.\n\nMethods: “of differently expressed genes, we preformed clustering analyses” change to “…we performed clustering analyses”\n\nResults: “The most drastic changes were seen in processes related to glycolysis (GO:0006096, FDR = 0.0009), were the expression of 19 out of 25 genes” change to “…. where the expression”.\n\nFigure 7. What does the charr_WT vs. charr_M signify in the alignment data?\n\nDiscussion “We are interested in how predictable evolution is a the molecular level and if there certain principles influence the rewiring of developmental and regulatory systems during evolution” consider changing to “We are interested in the predictability of evolution at the molecular level, especially whether there exist principles that influence the rewiring of developmental and regulatory systems”.\n\nDiscussion. “Recent rainbow trout data shows most paralogs from the latest whole genome duplication event retain the same expression pattern32 indicating that this scenario is probably uncommon; hence it is of considerable interest when two paralogs show distinct expression patterns”. I do not agree that it is of considerable interest when two paralogs show distinct expression patterns – I could list tens of examples for salmonids.\n\nConclusions “The results suggest genetic and expression changes in multiple systems relate to divergence among populations.” Change to “… associated with divergence among populations.”",
"responses": [
{
"c_id": "1901",
"date": "25 Apr 2016",
"name": "Arnar Palsson",
"role": "Author Response",
"response": "Main comments & caveats\n\nRNAseq study design. I sympathize with the fact that the authors are trying to publish Illumina data that was generated in 2009, since (obviously) the technology has moved on greatly in the last 6 years, while its costs have been reduced dramatically. Adding to this is the fact that the authors are using a particularly complex transcriptome in terms of high content of similar paralogues (and expressed transposable elements), without a reference sequence for mapping in their species. I accept the authors’ argument that it is more sensible to map against a closely related species with the sequence data rather than to try and create a de novo assembly from 36bp reads. I also believe it is sensible to pool read counts for putative paralogous contigs in this study, since the short read length ablates any ability to separate paralogous differences in expression (yet does not preclude the generation of useful hypotheses about putative gene expression differences among morphs).\n\nHowever, I do question whether the use of Atlantic salmon EST contigs is the best approach here. Firstly, reference assemblies for both Atlantic salmon and rainbow trout are now available, which distinguish paralogous variation. More importantly, using these reference genome data would provide certainty that reads are being mapped to exons from single genes, whereas many of the ESTs will provide a fragmented representation of exon sequences, presumably relying on annotation to piece them back into ‘genes’ post hoc. In addition, paired 100bp Illumina reads are available at high coverage for Arctic charr (e.g. Norman et al. 2014), which could also be used to generate a specific reference transcriptome to map against in this study, although this might be underrepresented in terms of developmental genes as it is a gill study. 
Overall, I do wonder how much more information might have been gleaned from this dataset with a different mapping strategy? With all the above said, I understand that the authors have built up a large study based around the original mapping to the salmon ESTs and that it would not be routine for them to repeat the study using better reference data. Furthermore, the approach used has definitely led to the generation of several valid hypotheses concerning the nature of gene expression and genetic differences among charr morphs, which have been followed up using independent approaches.Reply: We thank the reviewer for the excellent diagnosis and suggestions. The paper describes the (in our humble opinion) most sensible summary of the data, as the writing of the paper started 2 years ago. We also mapped to the O.mykiss cDNA collection and got similar results, but opted for reporting on the salmon data to avoid further extending an already long manuscript. We are currently analyzing DE and SNPs on a new assembly (100 bp PE reads - 48 samples - 3 morphs - development), and may include a remapping of this dataset in that. Methods “Biological Replication in RNAseq” – a general comment: obviously the design of the study is not optimal because biological variation within developmental stages is not considered in the statistics. Thus, the approach lacks power to detect differences when morph variation is restricted to different developmental stages. I wanted to explain my opinion (for the record) that the study design is nonetheless useful for identifying constitutive differences between morphs. This is especially true because gene expression variability is likely to be relatively low in embryonic stages (compared to a similar study design in adults at least). Further, the pooling of individuals will have helped to at least recapture some biological variation at different stages. 
Thus, as mentioned above, I see the authors’ use of RNAseq as a hypothesis-generating approach, which has been quite fruitful in identifying putative differences between different morphs.Reply: We appreciate the reviewer’s careful analysis of the study and approach. We tried to emphasize the “hypothesis-generation” aspect during the rewrite. Methods “QPCR study design”. The authors adhere to the MIQE guidelines, but do not always follow the best approaches. Most pertinently, the authors use the 2−∆∆Ct method (assuming PCR efficiency of 2.0) despite having gone to the effort of gaining and reporting efficiencies for each assay, which can be as low as 1.72 for some genes. The effects of failing to incorporate differences in efficiency are well established and this is likely to have affected the authors’ results. The authors should consider incorporating the effect of differences in efficiency into their analyses. This is likely to have some impact on the study conclusions in my opinion.Reply: Great point. qPCR primer efficiencies above 1.90 can safely be assumed to be 2, as the effect is negligible. Since we used LinReg software for efficiencies, not the traditional method, it takes into account the efficiencies for each test for a given primer pair and discards those that have divergent or lower efficiencies. However, the Natterin-like paralogues were below the cut-off. The statistical analyses were done on deltaCt values, prior to the efficiency-based transformation used for visualization. We now report the graphs of their expression adjusting for the lower efficiency, and state in the results “Note however, the efficiency of the primers for the nattl genes ranged from 1.72 to 1.77, which suggests this data should be interpreted with caution.” Methods “Polymorphisms in charr transcriptome”. While this is not exactly my area of expertise, I struggled to understand the methods behind filtering paralogous variants from SNPs in the data. 
The authors state “As the SNP analysis was done on individual contigs, differences among paralogs appear in the data. However, since each sample is a pool of few individuals, it is very unlikely that we have the same frequency of true SNPs in the samples. This property was used to remove variants that are most likely due to expressed paralogs”. Can the authors please try to re-explain this in even simpler terms to help me get it? I don’t see how this description leads to a robust identification of paralogous variation. Is there an underlying assumption of equal expression among paralogues? If so, this is likely to be routinely invalidated.Reply: We acknowledge this part is a hard read. We rewrote this part of the methods. Here is another summary. Reads from regions that are very similar in paralogous genes can map to both of them. Because we also consider reads that map to many contigs, some of the candidate variants will reflect sequence differences between paralogs, not polymorphism in either paralog. Next we deploy a population genetic argument: since we are sequencing RNA from 6 chromosomes in each sample, it is very unlikely that a TRUE SNP will be at the same frequency in all of the 8 samples. But variants - that are due to differences between paralogs - are likely to be similar in frequency because they are unaffected by the population sampling. This filter is designed to toss those out. To emphasize: the objective is not to find differences between paralogs, but rather to enrich for true SNPs. This method will toss out many sites separating paralogous genes (but not all, because some paralogous genes are differentially expressed between morphs or time points). Methods “Verification of candidate SNPs”. While it is good that the authors have attempted to verify SNPs identified from their RNAseq data, I don’t believe the data is particularly well incorporated in the results section. 
It needs to be stated up front the extent to which the SNPs predicted from the RNAseq were independently verified. Also, the methods for this section can be improved, especially “we conducted genomic comparisons of the Salmon genome, ESTs and short contigs from the preliminary assembly of the Arctic charr transcriptome”. None of this information is elaborated on – what is the preliminary assembly of the Arctic charr transcriptome? Which version of the salmon genome was used and how? Moreover, it would be useful to actually explain in the methods that the genotyping was done on a small number of SB, PL and PI morphs, rather than relying on the reader to extract all the required information from Table S2. I guess overall, the way this section is incorporated into the manuscript needs some thought in terms of improving the reader’s experience. I struggled after reading it several times and am still not sure I have all the information I need.Reply: We fixed the methods section to accommodate both reviewers, who brought up similar points. We highlight the sampling (8 individuals of 3 morphs), and extend the description of the genomic comparisons. We also extend the discussion of those results. Results. “Analyses of those reads require an Arctic charr genome sequence or transcriptome assembly from longer and paired end reads.” As mentioned already, the latter is available to generate an Arctic charr transcriptome assembly to map against.Reply: Unfortunately the great Norman et al. 2014 data (http://www.ncbi.nlm.nih.gov/pubmed/24368751) came to our attention after we had done these analyses and started working on our new data (see above). Thus we opted not to redo the whole analysis for this manuscript, but to focus on the verification - and of course working on a new assembly using longer reads. Results; Figure 3 and 4. The authors found that around half the genes studied were not differentially expressed among morphs by qPCR. 
Obviously this is quite a large number, but on closer inspection, I noticed that Ndub6, Ubl5 and parp6 were not even differentially expressed according to RNAseq. Thus, I am confused by the selection of genes from the RNAseq analysis for verification by qPCR. The authors should explain this selection more transparently and provide clearer indices of the correlation between RNAseq and qPCR results and associated discussion.Reply: This reflects the history of the project, and the difference between the preliminary and final analyses. We decided to report on all the data - but explain better in the manuscript the classification of genes tested with qPCR, at 1%, 5% and 10% FDR. In summary, some of the genes tested were above 5% and one even just above 10% FDR. Some of those were not corroborated by qPCR. The number of genes is insufficient to do a statistical comparison of the verification rate at the different FDR levels. A table (new Table 3) - supported with a few sentences in the results - hopefully clarifies this. Minor comments, typos and suggested changes Abstract: “Species and populations with parallel evolution of specific traits can help illuminate how predictable adaptations and divergence are at the molecular and developmental level.” Grammatically – this reads better: “….. can help illuminate the predictability of adaptations and divergence at the molecular and developmental level”Reply: Thanks - fixed. Introduction: “Examples of such a species complex are the finches of the Galapagos islands, cichlids in the African great lakes are exciting multi-species systems in this respect”. Grammatically – reads better: “Examples of such species complexes are provided by finches of the Galapagos islands, while cichlids of the African great lakes also provide an exciting multi-species system in the same respect” Reply: Thanks - fixed. 
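To make the qPCR efficiency point raised above concrete, here is a minimal sketch of how assay efficiency changes a fold-change estimate; the Ct values are illustrative, not the study's data, and the function name is ours:

```python
# Sketch of efficiency-corrected relative quantification (Pfaffl-style ratio),
# contrasted with the classical 2^-ddCt shortcut that assumes efficiency = 2.0.
# All Ct values below are illustrative, not taken from the study.

def fold_change(e_target, e_ref, dct_target, dct_ref):
    """Pfaffl ratio: (E_target ** dCt_target) / (E_ref ** dCt_ref),
    where dCt = Ct(control) - Ct(sample) for each assay."""
    return (e_target ** dct_target) / (e_ref ** dct_ref)

# Hypothetical Ct values: target gene amplifies 2 cycles earlier in one morph,
# while the reference gene is unchanged between morphs.
dct_target = 20.0 - 18.0   # control Ct minus sample Ct
dct_ref = 15.0 - 15.0      # reference gene unchanged

# Assuming perfect efficiency (2.0) -- the 2^-ddCt shortcut:
fc_assumed = fold_change(2.0, 2.0, dct_target, dct_ref)    # 4.0

# Using a measured efficiency of 1.72, the low end reported for the nattl assays:
fc_measured = fold_change(1.72, 2.0, dct_target, dct_ref)  # ~2.96

print(fc_assumed, fc_measured)
```

With a 2-cycle difference, assuming E = 2.0 gives a 4.0-fold change while the measured E = 1.72 gives roughly 2.96-fold, which is the size of bias the reviewer warns about.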
Introduction: “Some northern freshwater fish species exhibit frequent parallelism in trophic structures and life history and in several cases are they found as distinct resource morphs” change to “…. are found as distinct resource morphs”Reply: Thanks - fixed. Introduction: “in the development of ecological differences in tropic morphology” change to “… trophic morphology”. Reply: Thanks - fixed. Introduction: “The family is estimated to be between 63.2 and 58.1 million years old”. This information is not correct – it is correct to state that the age of the salmonid crown (based on the cited paper; different estimates exist in the literature, e.g. Macqueen and Johnston, 2014; Campbell et al. 2013) is estimated at 63.2 and 58.1 million years old, but the family dates back much further – to the origin of the WGD event in fact, which occurred more like 88-103 Ma (Macqueen and Johnston, 2014; Berthelot et al. 2014). Thus, the last common ancestor to extant salmonid species is what the authors are actually referring to in this sentence.Reply: Thanks for pointing this out. We changed the text to “local adaptation has been extensively studied in the salmonid family, to which Arctic charr belongs {Fraser2011}. The family is estimated to be between 88-103 million years old {Macqueen2014,Berthelot2014c}. A whole genome duplication event occurred before the radiation of the salmonid family {Davidson2010,Moghadam2011,Macqueen2014,Berthelot2014c} which has provided time for divergence of ohnologous genes (paralogous genes originated by whole genome duplication event). ” Introduction: “Furthermore, for data with short reads, mapping to a related reference genome/transcriptome is recommended over de novo assembly”. While this sentence is technically correct in the context of the work cited, I feel it is being used slightly out of context. For a start, what comprises a ‘short read’ is undefined. 
36bp is short, but it is possible to get a solid reference transcriptome using 2*100bp, assuming the appropriate diversity of transcripts is represented and suitable depth is attained.Reply: Great point, we opted for keeping the point (at this place in the ms) but changing the wording to: In this study we opted to map the reads (36 bp) to a related reference genome/transcriptome {Vijay2013a}, instead of conducting de novo assembly. Introduction: “nuclear genes, reveled both subtle” change to “nuclear genes, revealed both subtle”Reply: Thanks, fixed. Minor comment – AC, PL, LB and SB were already defined in introduction.Reply: Thanks, removed this. Methods: “Fishing in Lake Thingvallavatn was with permissions” changed to “Fishing in Lake Thingvallavatn was done with permissions”.Reply: Amended. Methods: “of differently expressed genes, we preformed clustering analyses” change to “…we performed clustering analyses”Reply: Thanks, fixed. Results: “The most drastic changes were seen in processes related to glycolysis (GO:0006096, FDR = 0.0009), were the expression of 19 out of 25 genes” change to “…. where the expression”.Reply: Thanks, fixed. Figure 7. What does the charr_WT vs. charr_M signify in the alignment data?Reply: Designates the two alleles, the legend now makes this explicit. Discussion “We are interested in how predictable evolution is a the molecular level and if there certain principles influence the rewiring of developmental and regulatory systems during evolution” consider changing to “We are interested in the predictability of evolution at the molecular level, especially whether there exist principles that influence the rewiring of developmental and regulatory systems”.Reply: Thanks, excellent suggestion, included. Discussion. 
“Recent rainbow trout data shows most paralogs from the latest whole genome duplication event retain the same expression pattern32 indicating that this scenario is probably uncommon; hence it is of considerable interest when two paralogs show distinct expression patterns”. I do not agree that it is of considerable interest when two paralogs show distinct expression patterns – I could list tens of examples for salmonids.Reply: Good point, we have revisited this interpretation (see also point by rev. 1). Conclusions “The results suggest genetic and expression changes in multiple systems relate to divergence among populations.” Change to “… associated with divergence among populations.”Reply: Thanks, fixed."
}
]
},
{
"id": "9419",
"date": "09 Jul 2015",
"name": "Anne Dalziel",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nIn this paper “The developmental transcriptome of contrasting Arctic charr (Salvelinus alpinus) morphs” Gudbrandsson et al. have tested for differential gene expression at multiple developmental time-points among a number of Arctic charr morpho-types from Lake Thingvallavatn (3 wild morphs, 1 studied with RNA-seq and qPCR, the others with qPCR only) and Holar aquaculture (1 domesticated morph, RNA-seq and qPCR). They have also studied multiple tissues/body regions for a subset of the differentially expressed genes found with RNA-seq. The goal of the paper was to find candidate genes that may underlie variation in morphology, with a focus on craniofacial morphology related to benthic vs. limnetic feeding. In general, I think this goal was met and this paper contributes to our understanding of the mechanisms contributing to morphological evolution in a non-genetic model organism. The authors provide an extensive, multi-time point comparison of two morphologically divergent groups of charr reared in a common environment (reducing the influence of phenotypic plasticity) and have collected a tremendous amount of data. This information will help them to home in on the genetic loci contributing to phenotypic evolution in this very interesting system, and on the effects of domestication. However, there are a number of major issues that do need to be more clearly addressed in the manuscript prior to final publication. I have outlined these comments below. 
Major CommentsIntroduction:Requires some reorganization, clarification of what phenotypes have evolved in parallel among morphs, and how the authors separate the effects of domestication (SB vs. AC) from benthic/limnetic evolution (SB/LB vs. PL/AC).a) At present, the introduction focuses upon the utility of instances of parallel evolution to help us determine how repeatable evolutionary change may be. This is definitely true, and the repeated evolution of the dwarf, benthic morph (SB; the focus of the introduction/abstract/discussion) in many lakes strongly argues that this phenotype has evolved via natural selection. However, it is not clear to me if true ‘parallelism’ is seen among the SB (small benthic) and LB (large benthivorous) vs. AC (Holar aquaculture) and PL (small planktivorous) morphs because not enough information is provided for me to assess this. To support the argument for parallelism the specific traits that have evolved in parallel among morphs must be displayed and the evolutionary history of these morphs should be clarified (e.g. in paragraph 6 and Figure 1). As well, any related non-parallelism in traits should also be discussed (i.e. how are the domesticated AC and wild PL different?). At present Figure 1 only shows the AC and SB morphs, and does not point out the specific traits they are interested in. This is critical background information for readers who are not familiar with this system.b) The comparison of AC (domestic, limnetic-like head) vs. LB (wild, benthic like head) looks at two confounded variables: domestication and the benthic/limnetic morphology. This should be clearly stated in the introduction, and the use of the additional morphs (PL, LB) in disentangling domestication vs. benthic/limnetic evolution should be noted.c) The use of the AC morph is still a bit unclear to me. 
The argument for point ‘ii) of the availability of abundant AC material’ could be expanded by providing more information on the ‘limnetic’ like features of this morph and why it is an appropriate comparison to a benthic morph, the genetic divergence from the lake Thingvallavatn fish, and also the selection regime it has experienced (selection for limnetic features? What other traits vary with domestication?).d) Paragraph 2 – Much of this paragraph, including discussing the ability to measure gene expression and relate to phenotype in fishes, is unnecessary as fish are no different from other vertebrates in this respect. Instead, the final sentence “One approach to identify pathways related to function or morphological differences is to study gene expression during development” should become the ‘topic sentence’ and expanded upon to explain why gene expression studies are especially relevant ways to link genotype to phenotype in evo-devo studies.e) Better highlight the strengths – The authors have done a wonderful job of assessing multiple developmental time points and rearing fish in a common garden environment. However, they do not highlight these strengths. Some small notes on the importance of controlling for phenotypic plasticity in these traits (which are known to be quite plastic) to better study genetic differentiation would be a nice addition. Methods:a) Page 4 paragraph 1 - Clarify the number of fish used to make the crosses (this will help us determine the likelihood of selecting a full or half-sib for sequencing/qPCR).b) I should note that I am not an expert in the analysis of RNA-seq data, but luckily the first reviewer has done an excellent job of commenting upon these aspects of the project. I fully agree with their comments and suggestions. I would also like to see more information on the methods used to pool samples and how RNA-seq data was normalized among samples, developmental times and morphs. 
I will also note that the authors often use S.salar for comparisons, not O.mykiss, which is a closer relative to S.alpinus. The reasons for this approach should be discussed.c) I am also not trained as a population geneticist. However, from my experience studying paralogous genes in salmonids, and with respect to the authors’ own findings for the Nattl paralogs (Fig 4), I do not think it is prudent to “assume that the expression of paralogous genes is stable… ” in the methods (page 12). In fact, Berthelot et al. (2014) find the opposite (see my comments for the discussion).d) The authors should use their genetic information to test if the fish chosen are siblings with each other (full or half-sibs). This may have important implications for the population genetic analyses.e) Page 5 - It is not appropriate to change the meaning of the word ‘gene’. I think it is much clearer to use the term ‘paralog group’ or ‘gene family’ when referring to the fact that the authors do not study single genes, but instead groups of paralogs.f) Selection of genes for qPCR – the methods by which genes for the qPCR studies (Fig 3) were selected should be clearly noted. From my reading, it seems that most of these genes do not significantly vary among SB and AC at the 1% FDR level (Tables 1 and 2; only Natterin?). Thus, I am assuming these genes are only significant at the 5% FDR level (S1 file) – why focus upon these and not those significant at 1%? As well, it would be good to include information on why different genes were selected for Figure 3 (qPCR validation of whole fish) and Figure 4 (candidate genes-qPCR validation in just the head). Finally, the abbreviations used for qPCR validation should also be listed in Table 1 for easy comparisons among figures/tables. Results & Figures: a) Include an experimental design figure - At present, it is difficult to keep track of all of the morphotypes, tissues, and developmental time points used without referring to the methods. 
Thus, an experimental design figure summarizing the samples used (morphotype, population, sample size, developmental time point), how they were pooled and which techniques were used to measure gene expression on each sample (RNA-seq and/or qPCR) is needed.b) Include the LB and PL morphs in Figure 1 and clarify traits of interest – The legend states that “differences in size, coloration and head morphology are apparent”, but it would be better to specifically point out the differences they are referring to. F1000 is for a general audience, and this would help non-ichthyologists better understand what ecologically-important traits the authors are interested in (e.g. those related to benthic/limnetic feeding). In addition, the two other morphs used in the qPCR studies should also be displayed (large benthivorous and small planktivorous) to facilitate phenotypic comparisons and assess parallelism in benthic/limnetic feeding and/or the effects of domestication on AC.c) Figure 5 - this is actually a table, not a figure (?) and is a bit confusing. I think it is much easier to interpret Figure S2 (displaying the data as in Fig 3 and 4), and that Fig 5 and S2 should be switched. It would be great to show significant differences in mRNA content in this, and all other figures, by including symbols. Also, full gene names should be listed in all figure legends. Discussion:The discussion focuses on the SB morph (page 17 – “The objective of this study were to get a handle on genetic and molecular systems that associate with benthic morphology in charr by mainly focusing on the small benthic morph in Lake Thingvallavatn, Iceland”), while the introduction discusses parallel evolution (indicating that the comparisons should be among many morphs). These are two different topics i) mRNA content differences among benthic vs. limnetic morphs changing in parallel or ii) linking mRNA content to phenotype in SB (benthic, wild) vs. AC (limnetic head, domesticated) morphs. 
In particular, the role of domestication vs. wild fish divergence needs to be addressed. At present these two topics/questions are mixed in the introduction/discussion and should be addressed separately.a) Paragraph on Immune Defences - Is immunity also expected to evolve in parallel in all benthic morphs? Is this predicted to be unique to SB vs. AC? Whatever the case, the parallelism (or not) in these genes should also be discussed, and whether this relates more to domestication in AC or differences between limnetic vs. benthic fish. Much of the functional discussion can also be cut.b) Page 18 – The information about genes found to be differentially expressed among morphs in your prior work should also be in the introduction, as it is background work that explains why you took this transcriptomic approach. This can also be used to explain why you focused in on particular qPCR genes.c) A discussion of domestication related differences vs. benthic/limnetic differences should be included. I think the data from head gene expression is very interesting (Figs 5, S2) and really speaks to this question.d) In general, the role of stochastic evolutionary processes, and not just selection (artificial and natural) should be noted. For example, if the AC charr were simply taken from a stock with a different mtDNA haplotype then these differences in the mtDNA genome might not be adaptive, just random. If the AC fish has much higher mtDNA expression might this be simply a domestication issue and not indicative of selection in SB as stated?\n\nFinally, you find that not all mitochondrial transcripts (which are transcribed as a polycistronic transcript) are found at similar levels (Table 1) – what does this tell you about differential degradation/post-transcriptional processes?e) There is no discussion about the “Analyses of polymorphism in Arctic charr transcriptome” (Table 3, 4, 5), except for the mtDNA. 
Minor CommentsIntroduction: a) Paragraph 3 – “Furthermore, recent estimates from the rainbow trout….by utilizing multiple data sources the genome assembly problem of this family can be solved”. I am not sure how this statement is relevant to this particular study. This and the following statement seem more appropriate for the methods/discussion to me. b) The morphs being discussed should be clarified throughout the paper. For example, the authors often state “among morphs/among charr populations” but it is not clear which of the many morphs they are referring to (e.g. Paragraph 5, first sentence on allozymes and mtDNA and later sentence on MCHIIa – do you mean all 4 morphs of specific 2-way comparisons? Are some morphs more differentiated than others?) Methods: a) The authors should note why they did not use the PI (large piscivorous) morph in any qPCR studies (in the methods or discussion) as this would be a nice morph to use in their tests for parallelism.b) Page 5 (last paragraph) – the methods used to remove particular variants needs to be clarified. In particular, why the assumptions used to remove variants are valid by referencing past studies. Figures & Results: a) Figure 2. The key for Figure 2 should include a specific heading for morph and time-point with the abbreviations restated [e.g. Timepoint: 141 dpf, Morph: Small Benthic (SB)]. b) Figure 6 – would be helpful to label the protein coding genes in this figure as well as the 12s and 16s RNAs. c) Figure 7 – It is not clear to me which variant is present in which morph. Adding the nucleotide to the x-axis (i.e. frequency of m1829G for B) would make this figure easier to quickly interpret. The “A.charr_WT” and “A.charr_M” should also be defined in the legend and it would be more appropriate to use scientific names for all species. Discussion: a) Discussion of reference 32 – The discussion of reference 32 is not put into the proper context. Figure 6 of this paper (Berthelot et al. 
2014) shows that there are many genes that have no correlation among expression patterns and/or differences in expression levels (1573, 1248, and 1895=4716 paralog pairs), and that together these represent more than the 1,407 correlated/similar expression level paralogs. This section of the discussion needs to be modified. b) The Norman et al. (2014) paper should be mentioned earlier – if this is available, why was it not used for their analyses? As well, the last sentence in this paragraph can be cut as it is evident. c) Page 18 – “Our new data also demonstrate differences in craniofacial elements between AC- and SB-charr, along a limnetic vs. benthic axis79”. Are you referring to ref 79 or data from this study? If you are referring to 79, clarify and note what you found. This occurs a few times in the discussion. General grammatical errors: There are a number of grammatical errors throughout this paper (e.g. “31 genes were higher expressed in SB and 40 genes higher in AC-charr”; “that may help sculpture benthic vs. limnetic heads” pg 19).",
"responses": [
{
"c_id": "1902",
"date": "25 Apr 2016",
"name": "Arnar Palsson",
"role": "Author Response",
"response": "Major Comments Introduction:Requires some reorganization, clarification of what phenotypes have evolved in parallel among morphs, and how the authors separate the effects of domestication (SB vs. AC) from benthic/limnetic evolution (SB/LB vs. PL/AC).a) At present, the introduction focuses upon the utility of instances of parallel evolution to help us determine how repeatable evolutionary change may be. This is definitely true, and the repeated evolution of the dwarf, benthic morph (SB; the focus of the introduction/abstract/discussion) in many lakes strongly argues that this phenotype has evolved via natural selection. However, it is not clear to me if true ‘parallelism’ is seen among the SB (small benthic) and LB (large benthivorous) vs. AC (Holar aquaculture) and PL (small planktivorous) morphs because not enough information is provided for me to assess this. To support the argument for parallelism the specific traits that have evolved in parallel among morphs must be displayed and the evolutionary history of these morphs should be clarified (e.g. in paragraph 6 and Figure 1). As well, any related non-parallelism in traits should also be discussed (i.e. how are the domesticated AC and wild PL different?). At present Figure 1 only shows the AC and SB morphs, and does not point out the specific traits they are interested in. This is critical background information for readers who are not familiar with this system.Reply: These are excellent suggestions. At the end of the intro we stress the difference between the aims of our research program (study the genetics of parallel evolution) and the aims of this study (get a handle on differences between sympatric morphs, with the AC as possible outgroup). The morphs studied here do not represent parallel evolution of benthic phenotypes (SB and LB are both from the same lake and appear to be closely related - Kapralova et al 2011). Analysis of that question requires further studies. 
This data can implicate genes that separate PL/AC and SB/LB and may be studied in such follow up analyses of more populations. We have updated figure 1 as advised - including the 4 morphs studied, expanded on the legend and also provide an overview of research approach (part B).b) The comparison of AC (domestic, limnetic-like head) vs. LB (wild, benthic like head) looks at two confounded variables: domestication and the benthic/limnetic morphology. This should be clearly stated in the introduction, and the use of the additional morphs (PL, LB) in disentangling domestication vs. benthic/limnetic evolution should be noted.c) The use of the AC morph is still a bit unclear to me. The argument for point ‘ii) of the availability of abundant AC material’ could be expanded by providing more information on the ‘limnetic’ like features of this morph and why it is an appropriate comparison to a benthic morph, the genetic divergence from the lake Thingvallavatn fish, and also the selection regime it has experienced (selection for limnetic features? What other traits vary with domestication?).Reply (b and c): The reviewer is correct, AC and SB are separated by multiple traits, and the data probably reveal signals associating with most of them. Unfortunately the AC charr is not well characterized phenotypically, thus we cannot address the question of other traits. We focus mainly on the head and jaw morphology, as these attributes distinguish benthic and limnetic morphs. The revised intro elaborates on the choice of AC, and how the follow up work on the morphs from Lake Thingvallavatn can help us sort this out. This point is also picked up in the discussion.d) Paragraph 2 – Much of this paragraph, including discussing the ability to measure gene expression and relate to phenotype in fishes, is unnecessary as fish are no different from other vertebrates in this respect. 
Instead, the final sentence “One approach to identify pathways related to function or morphological differences is to study gene expression during development” should become the ‘topic sentence’ and be expanded upon to explain why gene expression studies are especially relevant ways to link genotype to phenotype in evo-devo studies. Reply: We restructured and shortened this paragraph around this topic sentence - and gave more room for the previous RNAseq study on Arctic charr. e) Better highlight the strengths – The authors have done a wonderful job of assessing multiple developmental time points and rearing fish in a common garden environment. However, they do not highlight these strengths. Some small notes on the importance of controlling for phenotypic plasticity in these traits (which are known to be quite plastic) to better study genetic differentiation would be a nice addition. Reply: Great advice, we tried to integrate this into the last paragraph of the intro. Methods: a) Page 4 paragraph 1 - Clarify the number of fish used to make the crosses (this will help us determine the likelihood of selecting a full or half-sib for sequencing/qPCR). Reply: We did bulk crosses, joining eggs from 5-10 females in a can and sperm from 3-5 males (SB, PL, LB), and a single-parent cross for AC. Each sample included RNA pooled from 3 embryos, so there is a chance that full sibs were sequenced, but it is unlikely. The embryos/samples for qPCR are from similar pools. This is now described better in the methods. b) I should note that I am not an expert in the analysis of RNA-seq data, but luckily the first reviewer has done an excellent job of commenting upon these aspects of the project. I fully agree with their comments and suggestions. I would also like to see more information on the methods used to pool samples and how RNA-seq data was normalized among samples, developmental times and morphs.
I will also note that the authors often use S. salar for comparisons, not O. mykiss, which is a closer relative to S. alpinus. The reasons for this approach should be discussed. Reply: The RNA was isolated from individual embryos, quantified and then united (in equal concentrations) prior to cDNA synthesis. The read counts per gene are normalized per million reads in each sample, and not normalized with other variables. c) I am also not trained as a population geneticist. However, from my experience studying paralogous genes in salmonids, and with respect to the author’s own findings for the Nattl paralogs (Fig 4), I do not think it is prudent to “assume that the expression of paralogous genes is stable… ” in the methods (page 12). In fact, Berthelot et al. (2014) find the opposite (see my comments for the discussion). Reply: Excellent suggestion. We corrected our misunderstanding, added this fact into the intro and discussion, and reinterpreted our data in this light. d) The authors should use their genetic information to test if the fish chosen are siblings with each other (full or half-sibs). This may have important implications for the population genetic analyses. Reply: The fish chosen for pop-gen work are a random sample from spawning grounds - assumed not to be sibling groups. Our earlier study (Kapralova 2011) showed no family structure in charr collected this way from the lake. e) Page 5 - It is not appropriate to change the meaning of the word ‘gene’. I think it is much clearer to use the term ‘paralog group’ or ‘gene family’ when referring to the fact that the authors do not study single genes, but instead groups of paralogs. Reply: Excellent suggestion. We amended this, and use ‘paralog group’ throughout. f) Selection of genes for qPCR – the methods by which genes for the qPCR studies (Fig 3) were selected should be clearly noted. From my reading, it seems that most of these genes do not significantly vary among SB and AC at the 1% FDR level (Tables 1 and 2; only Natterin?).
Thus, I am assuming these genes are only significant at the 5% FDR level (S1 file) – why focus upon these and not those significant at 1%? As well, it would be good to include information on why different genes were selected for Figure 3 (qPCR validation of whole fish) and Figure 4 (candidate genes-qPCR validation in just the head). Finally, the abbreviations used for qPCR validation should also be listed in Table 1 for easy comparisons among figures/tables. Reply: Very important point. We deliberately studied some genes with less statistical support (FDR between 5% and 10%), to gauge the differences in the genes with less support and in particular to have a bigger pool of candidates that may relate to the specific developmental process (like head and jaw formation). Of course we cannot assert that all the genes with the strongest DE signal in the transcriptome are true positives, but the data can be used for hypothesis generation. We also amended Table 1 and the figure legends accordingly. Results & Figures: a) Include an experimental design figure - At present, it is difficult to keep track of all of the morphotypes, tissues, and developmental time points used without referring to the methods. Thus, an experimental design figure summarizing the samples used (morphotype, population, sample size, developmental time point), how they were pooled and which techniques were used to measure gene expression on each sample (RNA-seq and/or qPCR) is needed. b) Include the LB and PL morphs in Figure 1 and clarify traits of interest – The legend states that “differences in size, coloration and head morphology are apparent”, but it would be better to specifically point out the differences they are referring to. F1000 is for a general audience, and this would help non-ichthyologists better understand what ecologically-important traits the authors are interested in (e.g. those related to benthic/limnetic feeding).
In addition, the two other morphs used in the qPCR studies should also be displayed (large benthivorous and small planktivorous) to facilitate phenotypic comparisons and assess parallelism in benthic/limnetic feeding and/or the effects of domestication on AC. Reply: (a and b) Excellent suggestions. Figure 1 now has all 4 morphs, and a schematic describing the workflow and samples. c) Figure 5 - this is actually a table not a figure (?) and is a bit confusing. I think it is much easier to interpret Figure S2 (displaying the data as in Fig 3 and 4), and that Fig 5 and S2 should be switched. It would be great to show significant differences in mRNA content in this, and all other figures, by including symbols. Also, full gene names should be listed in all figure legends. Reply: We acknowledge that this graph is not the simplest, but would like to keep it over Figure S2. Our reasoning is that this graph illustrates the sharp differences between the limnetic (AC-PL) and benthic (SB-LB), which are the main result in this section. But we will of course switch them, or possibly join both in a single figure, if the reviewer insists or the editors recommend it. Discussion: The discussion focuses on the SB morph (page 17 – “The objective of this study were to get a handle on genetic and molecular systems that associate with benthic morphology in charr by mainly focusing on the small benthic morph in Lake Thingvallavatn, Iceland”), while the introduction discusses parallel evolution (indicating that the comparisons should be among many morphs). These are two different topics: i) mRNA content differences among benthic vs. limnetic morphs changing in parallel, or ii) linking mRNA content to phenotype in SB (benthic, wild) vs. AC (limnetic head, domesticated) morphs. In particular, the role of domestication vs. wild fish divergence needs to be addressed.
At present these two topics/questions are mixed in the introduction/discussion and should be addressed separately. Reply: We tried to separate these two aims more clearly in the revised discussion. The strategy was to use the AC vs SB contrast for hypothesis generation, as the first aim is central to our program. We have now added sentences on domestication in two parts of the discussion. a) Paragraph on Immune Defenses - Is immunity also expected to evolve in parallel in all benthic morphs? Is this predicted to be unique to SB vs. AC? Whatever the case, the parallelism (or not) in these genes should also be discussed, and whether this relates more to domestication in AC or differences between limnetic vs. benthic fish. Much of the functional discussion can also be cut. Reply: Good question, we assume it to be so, but that may be wrong. We moved the discussion towards this question and away from functional description. b) Page 18 – The information about genes found to be differentially expressed among morphs in your prior work should also be in the introduction, as it is background work that explains why you took this transcriptomic approach. This can also be used to explain why you focused in on particular qPCR genes. Reply: We added a sentence in the intro about the published papers that this transcriptome made possible. In those papers we focused on genes with putative craniofacial effects, though the focus in this study was broader. c) A discussion of domestication-related differences vs. benthic/limnetic differences should be included. I think the data from head gene expression is very interesting (Figs 5, S2) and really speaks to this question. d) In general, the role of stochastic evolutionary processes, and not just selection (artificial and natural), should be noted. For example, if the AC charr were simply taken from a stock with a different mtDNA haplotype, then these differences in the mtDNA genome might not be adaptive, just random.
If the AC fish has much higher mtDNA expression, might this be simply a domestication issue and not indicative of selection in SB as stated? Finally, you find that not all mitochondrial transcripts (which are transcribed as a polycistronic transcript) are found at similar levels (Table 1) – what does this tell you about differential degradation/post-transcriptional processes? e) There is no discussion about the “Analyses of polymorphism in Arctic charr transcriptome” (Table 3, 4, 5), except for the mtDNA. Reply: (c, d, e) Excellent suggestions. We added a few sentences to the final discussion section on domesticated charr vs. benthic/limnetic divergence. Unfortunately we do not have quantitative data on the phenotypes (head shape, and jaw) of the AC charr and acknowledge that we categorize it as limnetic based on general features. We gladly added a sentence citing neutral forces, and are acutely aware that much of the divergence is likely due to history, drift, etc. The domestication can certainly be the driver for the higher expression in AC - but we need transcriptomes from more populations/morphs to address that point. And yes, the variance in RNA levels from different parts of the mtDNA does indeed suggest differential half-life of the various RNA species. Some are certainly degraded and others most probably actively utilized/protected. We decided not to follow that thought further though, as the MS already consists of quite a few threads. We also added sentences on the genetic polymorphism, before focusing more on the mtDNA. The main reason we don't want to elaborate too much on the SNPs is that we feel these data are mainly for generating hypotheses, and that more work is needed to substantiate SNPs and study their distribution in other populations. Minor Comments. Introduction: a) Paragraph 3 – “Furthermore, recent estimates from the rainbow trout….by utilizing multiple data sources the genome assembly problem of this family can be solved”.
I am not sure how this statement is relevant to this particular study. This and the following statement seem more appropriate for the methods/discussion to me. Reply: We deleted this sentence and simplified the paragraph. b) The morphs being discussed should be clarified throughout the paper. For example, the authors often state “among morphs/among charr populations” but it is not clear which of the many morphs they are referring to (e.g. Paragraph 5, first sentence on allozymes and mtDNA and later sentence on MCHIIa – do you mean all 4 morphs or specific 2-way comparisons? Are some morphs more differentiated than others?) Reply: We tried to clarify this in various places in the manuscript, but in some cases we refer to morphs in general. Genetic separation can be estimated with Fst values either between pairs or over a larger set of groups (populations, morphs). In the intro we cite the work done to date in Iceland, which highlights the need for more population genetic analyses. Methods: a) The authors should note why they did not use the PI (large piscivorous) morph in any qPCR studies (in the methods or discussion) as this would be a nice morph to use in their tests for parallelism. Reply: The PI charr is very rare in the lake and hard to catch. We later captured a few sexually mature individuals and generated a couple of families, which were used for one study (Ahi et al. EvoDevo 2015). b) Page 5 (last paragraph) – the methods used to remove particular variants need to be clarified. In particular, why the assumptions used to remove variants are valid by referencing past studies. Reply: Many of the principles are common to most pipelines for removing spurious variants. In addition we applied filters necessitated by the properties of our dataset (pool of individuals), the mapping to an outgroup and paralogs due to salmonid genome complexity. Figures & Results: a) Figure 2.
The key for Figure 2 should include a specific heading for morph and time-point with the abbreviations restated [e.g. Timepoint: 141 dpf, Morph: Small Benthic (SB)]. Reply: Now fixed. b) Figure 6 – would be helpful to label the protein coding genes in this figure as well as the 12s and 16s RNAs. Reply: Now fixed. c) Figure 7 – It is not clear to me which variant is present in which morph. Adding the nucleotide to the x-axis (i.e. frequency of m1829G for B) would make this figure easier to quickly interpret. The “A.charr_WT” and “A.charr_M” should also be defined in the legend and it would be more appropriate to use scientific names for all species. Reply: Now fixed. Discussion: a) Discussion of reference 32 – The discussion of reference 32 is not put into the proper context. Figure 6 of this paper (Berthelot et al. 2014) shows that there are many genes that have no correlation among expression patterns and/or differences in expression levels (1,573 + 1,248 + 1,895 = 4,716 paralog pairs), and that together these represent more than the 1,407 correlated/similar expression level paralogs. This section of the discussion needs to be modified. Reply: Really valuable point, for which we are especially grateful. We have added this fact to the intro and altered our interpretations in the discussion. b) The Norman et al. (2014) paper should be mentioned earlier – if this is available why was it not used for their analyses? As well, the last sentence in this paragraph can be cut as it is evident. Reply: The Norman papers are now presented more clearly in the intro. There are historical reasons for not including their data in our analyses: we had completed the analyses for this manuscript when they became available, and have since then focused our data analyses efforts on another transcriptome generated in the lab (with longer reads). c) Page 18 – “Our new data also demonstrate differences in craniofacial elements between AC- and SB-charr, along a limnetic vs. benthic axis 79”.
Are you referring to ref 79 or data from this study? If you are referring to 79, clarify and note what you found. This occurs a few times in the discussion. Reply: Ref 79 is a related study that built in part on the data presented here. We have now rephrased this in the manuscript, hopefully for the better."
}
]
}
]
version: 1
main: https://f1000research.com/articles/4-136
link: https://f1000research.com/articles/5-2809/v1
date: 01 Dec 16
{
"type": "Method Article",
"title": "A data-driven approach to estimating the number of clusters in hierarchical clustering",
"authors": [
"Antoine E. Zambelli"
],
"abstract": "DNA microarray and gene expression problems often require a researcher to perform clustering on their data in a bid to better understand its structure. In cases where the number of clusters is not known, one can resort to hierarchical clustering methods. However, there currently exist very few automated algorithms for determining the true number of clusters in the data. We propose two new methods (mode and maximum difference) for estimating the number of clusters in a hierarchical clustering framework to create a fully automated process with no human intervention. These methods are compared to the established elbow and gap statistic algorithms using simulated datasets and the Biobase Gene ExpressionSet. We also explore a data mixing procedure inspired by cross validation techniques. We find that the overall performance of the maximum difference method is comparable or greater to that of the gap statistic in multi-cluster scenarios, and achieves that performance at a fraction of the computational cost. This method also responds well to our mixing procedure, which opens the door to future research. We conclude that both the mode and maximum difference methods warrant further study related to their mixing and cross-validation potential. We particularly recommend the use of the maximum difference method in multi-cluster scenarios given its accuracy and execution times, and present it as an alternative to existing algorithms.",
"keywords": [
"Clustering",
"Hierarchy",
"Dendrogram",
"Gene Expression",
"Empirical"
],
"content": "1 Introduction\n\nHierarchical clustering analysis (HCA) is an extensively studied field of unsupervised learning. Very useful in dimensionality reduction problems, we will study ways of using this clustering method with the aim of reducing (or removing) the need for human intervention.\n\nThe problem of human intervention stems from the fact that HCA is used when the correct number of clusters in a dataset is not known (otherwise we might use, for example, K-means). While the ability to cluster data with an unknown number of clusters is a powerful one, a researcher often needs to interpret the results - or cutoff the algorithm - to recover a meaningful cluster number. While our work was prompted by DNA microarray analysis and gene expression problems, these methods can be applied to general hierarchical clustering scenarios. Specifically, we analyze different existing automated methods for cutting off HCA and propose two new ones.\n\nIn Section 2 we discuss background material on HCA and the existing methods and in Section 3 we present some technical details on these methods and introduce our own. Section 4 contains results on simulated and actual data, and Section 5 examines data sampling procedures to improve accuracy.\n\n\n2 Background\n\nHierarchical clustering, briefly, seeks to pair up data points that are most similar to one another. With the agglomerative (or bottom-up) approach, we begin with N data points forming singleton clusters. For each point, we measure the distance between it and its N − 1 neighbors. The pair with the shortest distance between the two points is taken to form a new cluster. We then look at the distance between the N − 2 points remaining and the newly formed cluster, and again pair off the two points with the shortest distance (either adding a data point to our 2-cluster, or forming another cluster from two new data points). 
This process is repeated until there is a single cluster with N points (regardless of the absolute distance between points).\n\nNaturally, this is a very good dimensionality reduction algorithm. Unfortunately, it continues until the data is flattened to 1 dimension. In cases where there are n ≥ 2 clusters, this is problematic.\n\nThe results of a HCA are often expressed as a dendrogram, a tree-like graph that contains vital information about the distances measured in the clustering and the pairings generated. An example of a dendrogram is shown in Figure 1. Briefly, horizontal lines denote pairings, and the height of those lines represent the distance that needs to be bridged in order to cluster the points together. That is, the smaller the height (or jump) of a pairing, the closer the points were to begin with.\n\nOur goal is to find a way to stop the algorithm from arbitrarily flattening our data to 1 dimension. If another run would cluster two very dissimilar points together, it likely isn’t a scientifically sound choice. In that case we would stop the algorithm and keep the cluster structure built up to that point.\n\nThe problem of cutting off a dendrogram is one that researchers encounter often, but there are no reliable automated methods for doing so 1. Often, the gap statistic is the only proposed automated method, as in 1. As such, many researchers inspect the finished dendrogram and manually select a cutoff point, based on their own judgment. Apart from the obviously slow nature of this exercise, there is also the question of human error to consider - as well as bias. In cases where the cutoff is not easily determined, two different researchers may arrive at different conclusions as to the correct number of clusters - which could both be incorrect. Algorithmic approaches aim to eliminate this, and range from simpler methods to more complex ones. 
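For readers who want to experiment, the agglomerative procedure and the jump sizes read off a dendrogram can be reproduced with SciPy. This is an illustrative sketch on toy data of our own, not the paper's code:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy data: two well-separated clusters of 20 points each
# (an illustration only, not the paper's simulated datasets).
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(20, 2) + (-3, -3),
               rng.randn(20, 2) + (3, 3)])

# Agglomerative clustering; average linkage with euclidean distance
# is the configuration used throughout the paper.
Z = linkage(X, method='average', metric='euclidean')

# Each row of Z records one merge; column 2 holds the merge distance,
# i.e. the height ("jump") of that pairing in the dendrogram.
jumps = Z[:, 2]

# Cutting the tree at 2 clusters recovers the construction.
labels = fcluster(Z, t=2, criterion='maxclust')
```

With average linkage the merge heights are non-decreasing, and the final merge (which bridges the two true clusters) towers over the rest: exactly the signal that the cutoff methods discussed below exploit.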
An excellent summary of existing methods is given in 2, which is referenced in 3.\n\nThe latter, more importantly, develops the gap statistic. We present the technical aspects in Section 3, but quickly discuss some properties here. First, the gap statistic is one of few methods that is capable of accurately estimating single clusters (in the case where all our data belongs to one cluster), a situation often undefined for other methods 1. While it is rather precise overall, it requires the use of a “reference distribution”, which must be chosen by the researcher. In 3, the authors put forward that the uniform distribution is in fact the best choice for unimodal distributions. A powerful result, it is still limited in other cases, and thus many researchers still take the manual approach. However, it generally outperforms other complex methods (see 3) and, as such, we focus on the gap statistic.\n\nOn the other side of the complexity spectrum are variants of the “elbow method”. The elbow method explains the variance in data as a function of the number of clusters assigned. The more clusters assigned, the more variance that can be explained. However, as we add more clusters we begin to get diminishing returns and each new cluster explains less and less of the variance - we choose the point where returns begin to diminish as the number of clusters. A variant of this method, often applied to dendrograms, looks for the largest acceleration of distance growth 4. While this method is very flexible, it cannot handle the single-cluster case 4.\n\n\n3 Approaches\n\nWe look at both the elbow method variant and the gap statistic, as well as our own two methods. While there are many other methods to compare to, the gap statistic is quite representative of a successful (if more complex) solution - and tends to outperform other known methods 3. The elbow method is representative of the more accepted simple approaches. 
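Since the gap statistic is the main point of comparison, here is a sketch of it as defined in 3, applied to the same average-linkage hierarchy. The uniform reference distribution over the data's bounding box follows 3, while the toy data and the B and k_max values are illustrative choices of ours:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform

def within_dispersion(X, labels):
    # W_r: for each cluster, the sum of pairwise distances within it,
    # scaled by 1 / (2 * cluster size), summed over all clusters.
    D = squareform(pdist(X))
    W = 0.0
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        W += D[np.ix_(idx, idx)].sum() / (2.0 * len(idx))
    return W

def gap_statistic(X, k_max=4, B=10, seed=0):
    # Unoptimized sketch: B reference datasets are drawn uniformly
    # over the bounding box of X, as suggested in 3.
    rng = np.random.RandomState(seed)
    lo, hi = X.min(axis=0), X.max(axis=0)
    Z = linkage(X, method='average')
    gaps, s = [], []
    for k in range(1, k_max + 1):
        labels = fcluster(Z, t=k, criterion='maxclust')
        log_w = np.log(within_dispersion(X, labels))
        ref = []
        for _ in range(B):
            R = rng.uniform(lo, hi, size=X.shape)
            r_labels = fcluster(linkage(R, method='average'),
                                t=k, criterion='maxclust')
            ref.append(np.log(within_dispersion(R, r_labels)))
        gaps.append(np.mean(ref) - log_w)
        s.append(np.std(ref) * np.sqrt(1.0 + 1.0 / B))
    # Smallest k with Gap(k) >= Gap(k + 1) - s_{k+1}.
    for k in range(1, k_max):
        if gaps[k - 1] >= gaps[k] - s[k]:
            return k
    return k_max

rng = np.random.RandomState(1)
X = np.vstack([rng.randn(40, 2) + (-3, -3),
               rng.randn(40, 2) + (3, 3)])
k_hat = gap_statistic(X)
```

On this two-cluster toy data the decision rule rejects the single-cluster solution; note that each evaluation reruns the clustering B times per candidate k, which is why the gap statistic is so much more expensive than the jump-based rules.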
In all tests considered in this paper, an agglomerative hierarchy with average linkage and Euclidean distance measure is used.\n\nThe gap statistic is constructed from the within-cluster distances, comparing their sum to the expected value under a null distribution. Specifically, as given in 3, for r clusters Cr\n\nWr = ∑j (1/(2nj)) ∑i,i′ ∈ Cj d(i, i′), where nj is the number of points in cluster Cj\n\nThat is, we are looking at the sum of the within-cluster distances d, across all r clusters Cr. Computationally, we estimate the gap statistic and find the number of clusters to be (as per 3)\n\nk̂G = smallest r such that Gap(r) ≥ Gap(r + 1) − sr+1, where Gap(r) = E*[log Wr] − log Wr and sr+1 accounts for the simulation error of the reference distribution\n\nThis variant of the elbow method, which looks at the acceleration, is seen in 4. A straightforward method, we simply look at the acceleration in jump sizes. So given the set of distances from our clustering {d1, … , dN}, the acceleration can be written as\n\nai = di+1 − 2di + di−1\n\nWe choose our number of clusters as the jump with the highest acceleration, giving us\n\nWhile very simple and very fast, this method will never find the endpoints, i.e., the N singleton clusters and the single N-element cluster cases.\n\nThe first method we propose is based on the empirical distribution of jump sizes. Specifically, we use the mode of the distribution D = {d1, … , dN}, denoted D̂, adjusted by the standard deviation (σD). Our motivation is that the most common jump size likely does not represent a good cutoff point, and we should consider a higher jump threshold. As such, we take the number of clusters to be\n\nOur second method is even simpler, but is surprisingly absent from the literature. Inspired by the elbow method, we look at the maximum jump difference - as opposed to acceleration. 
Our number of clusters is then given by\n\nThis method shares the elbow method’s drawback that it cannot solve the single-cluster case (though it can handle singleton clusters), but we thought it prudent to examine it, as the literature seemed to focus on acceleration and not velocity.\n\n\n4 Results\n\nWe present results of simulations on several numbers of true clusters, drawn from a 2-dimensional normal distribution. Each cluster comprises 100 samples. We are most interested in tracking the success rate and the error size given an incorrect estimate. That is, how often can we correctly estimate the number of clusters k and, when we can’t, by how much are we off? Formally, this is given by\n\nsuccess rate = #{k̂ = k}/n, error size = mean(|k̂ − k|) over the incorrect estimates\n\nThe data used was drawn from a standard normal distribution, with cluster centers at (−3, −3), (3, 3), (−3, 3), (3, −3), shown in Figure 2. In the case of 1 cluster, the first is taken; for 2 clusters, the first two; and so on. We present the results of the methods on n = 200 simulations below in Table 1–Table 3, with the best results in bold.\n\nAs shown, the best-performing methods so far are the gap statistic and, surprisingly, the maximum difference.\n\nThe maximum difference method has a near-perfect success rate on this simple example, besting the gap statistic in most areas. As noted though, it suffers from the same problem as the elbow method in that it cannot handle the single-cluster case. It is our recommendation that if the reader suspects their data may be a single cluster, they should consider the gap statistic method. Note, however, that it is much more computationally intensive - by a factor of ~ 50000, as measured by Python’s timeit function.\n\nTo get a better sense for the behavior of these methods, we look at clusters drawn from normal distributions with different parameters. For the clusters centered at (−3, −3), (3, 3), (−3, 3), (3, −3), we scale the standard deviations to use, respectively: 𝟙₂, 𝟙₂, 2𝟙₂, 0.5𝟙₂. 
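The three simpler cutoff rules can be sketched in code and applied to the scaled four-cluster data just described. The index arithmetic and the histogram-based mode estimate are our reading of the definitions in Section 3, not the authors' exact code, so treat them as assumptions:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

def max_diff_k(jumps):
    # Cut just before the merge with the largest jump difference.
    n = len(jumps) + 1                # number of data points
    return n - (int(np.argmax(np.diff(jumps))) + 1)

def elbow_k(jumps):
    # Cut just before the merge with the largest jump acceleration
    # (second difference of the merge distances).
    n = len(jumps) + 1
    return n - (int(np.argmax(np.diff(jumps, 2))) + 2)

def mode_k(jumps, bins=10):
    # Our reading of the mode method: merges whose jump exceeds the
    # histogram mode of the jump sizes plus one standard deviation
    # are treated as between-cluster merges.
    hist, edges = np.histogram(jumps, bins=bins)
    i = int(np.argmax(hist))
    mode_hat = 0.5 * (edges[i] + edges[i + 1])
    return 1 + int(np.sum(jumps > mode_hat + jumps.std()))

# Four clusters of 100 samples each, covariances I, I, 2I and 0.5I.
rng = np.random.RandomState(0)
centers = [(-3, -3), (3, 3), (-3, 3), (3, -3)]
scales = [1.0, 1.0, np.sqrt(2.0), np.sqrt(0.5)]
X = np.vstack([s * rng.randn(100, 2) + c
               for c, s in zip(centers, scales)])
jumps = linkage(X, method='average')[:, 2]
k_est = max_diff_k(jumps)
```

On well-separated data the maximum difference rule recovers the true count; by construction neither it nor the elbow rule can ever return a single cluster, matching the drawback noted above.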
Hopefully, these different distributions will stress the methods, highlight any weaknesses, and likely serve as a better proxy for real data. The results are detailed in Table 4–Table 6.\n\nIn this more complicated case, we see similar results. The 3-cluster case (in this example) seems problematic for our method and the elbow method. The gap statistic once again performs well for the single-cluster scenario, but shows some weakness at 4 clusters. Overall, it seems that for k = 2, 3, 4 clusters, the maximum difference method is at the very least equal to the gap statistic, and improves on it in certain cases.\n\nReturning to the equal-distribution 4-cluster problem, we now look at how the metrics evolve as the distance between the clusters is increased. On the x-axis in Figure 3, a value of m corresponds to a cluster arrangement with coordinates: (−m/2, −m/2), (m/2, m/2), (−m/2, m/2), (m/2, −m/2). We expect all methods to perform better as the clusters drift further apart, since they are then more distinct.\n\nThis is indeed the case for the elbow, maximum difference and mode methods, which converge to a success rate of 1 and an error size of 0 (note some increased stability in the maximum difference). However, the gap statistic appears to do worse as the clusters separate - which could point to some underlying issues and should be explored more fully.\n\nAs mentioned in Section 1, our primary motivation for this problem was that of DNA microarray data and gene expression problems. It is also always prudent to test new methods using real data. As such (and to help with reproducibility), we test our methods on the ExpressionSet data from the R package Biobase (Bioconductor) v3.2 5. This is a sample of 26 different elements, with reconstructions presented in Figure 4 and Figure 5.\n\nIn this case, the maximum difference and elbow methods were in agreement and selected k̂E = k̂D = 3 clusters (with samples R and Z being singleton clusters). 
The mode, however, chose to add Z into the main cluster, producing k̂M = 2 final clusters. On the other hand, the gap statistic selects only k̂G = 1 cluster.\n\nThe author finds both the mode and gap results to be somewhat dubious - but they highlight an important issue. How can we know that a clustering is correct? Even if we examine the dendrogram as we did here, it is likely that in many examples the cutoff point could be debated. In this dataset, we find it more challenging to determine a correct clustering between 2 and 3 clusters - though 3 seems more natural to the author. This calls back to the previously mentioned issue with manual cutoff selection.\n\n\n5 Data mixing\n\nIn an effort to improve our new methods, we look at data sampling. Inspired by cross-validation methods, we will randomly sample M = N/2 points L = 100 times. For each of the L samples j, we then run our method and get a k̂X(j). We then set our estimated number of clusters to be\n\nk̂X = mode(k̂X(1), … , k̂X(L))\n\nWhile this requires running our method L times, for L = 100, it is still roughly 500 times faster than the gap statistic. Hopefully, this will improve our methods by averaging out any outlying errors or points in our data.\n\nWe present results on the same k = 2, 3, 4 cluster construction detailed above, each with the same distribution (Table 7–Table 9). Due to computational times, we did not perform data mixing on the gap statistic. Though given that these methods are much faster even with mixing, we believe that comparing them remains a fruitful exercise. We provide the gap statistic results from Table 1–Table 3 here for convenience.\n\nAs shown above, both the elbow method and the maximum difference seem to perfectly capture the simulated data. Perhaps even more surprising, the mode method now has results that are comparable to the gap statistic and not far behind the other methods.\n\nWe now return to the ExpressionSet to see if we can come to a consensus on its clustering. 
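The mixing procedure described above can be written as a thin wrapper around any of the estimators; the function names and the default M = N/2 are our choices, following the text:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

def max_diff_k(X):
    # Maximum difference estimator on an average-linkage hierarchy.
    jumps = linkage(X, method='average')[:, 2]
    n = len(jumps) + 1
    return n - (int(np.argmax(np.diff(jumps))) + 1)

def mixed_estimate(X, estimator, M=None, L=100, seed=0):
    # Run the estimator on L random subsamples of M points
    # (M = N/2 by default) and report the mode of the L estimates.
    rng = np.random.RandomState(seed)
    N = len(X)
    M = N // 2 if M is None else M
    ks = [estimator(X[rng.choice(N, size=M, replace=False)])
          for _ in range(L)]
    values, counts = np.unique(ks, return_counts=True)
    return int(values[np.argmax(counts)])

rng = np.random.RandomState(2)
X = np.vstack([rng.randn(60, 2) + (-3, -3),
               rng.randn(60, 2) + (3, 3)])
k_mixed = mixed_estimate(X, max_diff_k, L=25)  # 2 on this toy data
```

Taking M = N − 1 and L = N approximates the leave-one-out variant examined at the end of this section.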
We run the same M = N/2 and L = 100 mixing as for the simulated data to obtain Figure 6.\n\nWith mixing, there are slight differences in clustering. The elbow and maximum difference methods now agree on 2 clusters instead of 3. The mode method agrees with the gap statistic and sets the number of clusters to 1. Again, we find it difficult to argue in favor of 1 cluster, but maintain that 2 or 3 clusters seem viable - with a preference for 3. It is possible we somehow over-mixed the data when working with such a small sample.\n\nOur data mixing procedure resembles leave-(N − M)-out cross-validation. In that spirit, let’s examine a method resembling leave-one-out cross-validation (note that computation times will now increase with data size). In our case, this means taking M = 1 and removing each data point once (in a sense, L = N). For each “sampled” set (of N − 1 points), we compute the number of clusters and again take the mode of the L sample cluster numbers as our estimate. With this we obtain Table 10–Table 12 and Figure 7.\n\nWhile the maximum difference method seems robust in the face of different sampling, it seems that this exercise has revealed some instability in the mode method, which has reverted to a lackluster performance. To a much lesser extent, the elbow method has some trouble as well. It seems more likely that the choice of sampling parameters could be the cause of the clustering in the Biobase data in Figure 6. More generally, we should look into determining optimal mixing parameters M and L and/or their impact on these methods.\n\nThis mixing method does appear to perform better for the ExpressionSet than the previous choice of mixing parameters, which seems to confirm our hypothesis that there is perhaps an oversampling effect, or something along those lines, which must be explored more fully.\n\n\n6 Conclusion\n\nWe have developed two new empirical methods for clustering data in a hierarchical framework. 
While our methods are substantially faster than the existing gap statistic, they do not handle the single-cluster case. In other cases, our maximum difference method is at least comparable to the gap statistic and outperforms the elbow method.\n\nIn addition, the use of the data mixing procedure presented here can greatly improve performance (especially for the mode method), leading to the maximum difference method outperforming the other three. Lastly, these methods can be implemented in a few lines of code, allowing researchers to utilize them quickly at low computational cost.\n\nIn the future we will study the possibility of finding optimal mixing numbers M and L and the impact of the choice of these parameters on our results. We suspect they are related to the instability detected in the mode method when using leave-one-out mixing.\n\n\nData availability\n\nWe provide base code to generate the simulated data used in this paper. The code is written in Python 2.7.\n\n\n\nIn order to access the ExpressionSet data, we use the following R code.\n\n",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nHastie T, Tibshirani R, Friedman J: The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, 2009. Publisher Full Text\n\nGordon AD: Classification. 2nd Edition. CRC Press, 1999; 272. Reference Source\n\nTibshirani R, Walther G, Hastie T: Estimating the number of clusters in a data set via the gap statistic. J Roy Stat Soc B. 2001; 63(2): 411–423. Publisher Full Text\n\nHees J: SciPy Hierarchical Clustering and Dendrogram Tutorial. Online resource, 2015. Accessed 02-Aug-2016. Reference Source\n\nHuber W, Carey VJ, Gentleman R, et al.: Orchestrating high-throughput genomic analysis with Bioconductor. Nat Methods. 2015; 12(2): 115–121. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "19976",
"date": "07 Feb 2017",
"name": "Xin Zou",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors proposed two simple methods for estimating the number of clusters. However, the results failed to support the motivation.\n\nThe authors criticized that the current methods are subject to subjective bias. The new methods also rely on human judgement.\n\nComputational cost is not a problem nowadays and mathematically the new methods do not show obvious advantages in accelerate the calculation.\n\nThis is not a valuable work for the community and not appropriate to be indexed.",
"responses": []
},
{
"id": "20087",
"date": "13 Feb 2017",
"name": "Alok Sharma",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis work addresses the concern of clustering analysis. There are mainly two issues in performing clustering: how to locate the clusters in the best possible way, and, how to determine the number of clusters. In most cases the determination of the number of clusters is rather intuitive. Recently some researchers also highlighted this point for hierarchical clustering and other clustering methods. There is a growing need to establish an automated system of estimating the number of clusters accurately for various applications. I hope this work will give due consideration in this field of research.\nTwo strategies were considered to find the number of clusters: jump size and maximum difference. It would be better if more theoretical aspect can be given to select parameter α, as different values of this parameter can lead to different solution. The same can be argued to maximum difference method which is given as maximum difference plus a constant, again changing this constant can vary the final results. It would be helpful to survey more methods published in this field of research.",
"responses": []
},
{
"id": "20844",
"date": "28 Mar 2017",
"name": "Dylan M. Owen",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nOverall I thought it was a very good article with well presented results.\nWhere it was slightly lacking was the Introduction - I would have liked more background on the different applications of this type of clustering in the broader scientific field. Are there, for example, applications in physics and cosmology? Ecology and the environment? Cell biology, neurobiology, immunology?? Psychology and sociology?\nA greater comparison to not hierarchical methods would also be appropriate here (or, alternatively in the conclusion). When should such methods be employed in comparison to simple clustering algorithms like Ripley's K-function or DBSCAN for example?",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2809
|
https://f1000research.com/articles/5-2808/v1
|
01 Dec 16
|
{
"type": "Opinion Article",
"title": "Updating Darwin: Information and entropy drive the evolution of life",
"authors": [
"Irun R. Cohen"
],
"abstract": "The evolution of species, according to Darwin, is driven by struggle – by competition between variant autonomous individuals for survival of the fittest and reproductive advantage; the outcome of this struggle for survival is natural selection. The Neo-Darwinians reframed natural selection in terms of DNA: inherited genotypes directly encode expressed phenotypes; a fit phenotype means a fit genotype – thus the evolution of species is the evolution of selfish, reproducing individual genotypes. Four general characteristics of advanced forms of life are not easily explained by this Neo-Darwinian paradigm: 1) Dependence on cooperation rather than on struggle, manifested by the microbiome, ecosystems and altruism; 2) The pursuit of diversity rather than optimal fitness, manifested by sexual reproduction; 3) Life’s investment in programmed death, rather then in open-ended survival; and 4) The acceleration of complexity, despite its intrinsic fragility.\n\nHere I discuss two mechanisms that can resolve these paradoxical features; both mechanisms arise from viewing life as the evolution of information. Information has two inevitable outcomes; it increases by autocatalyis and it is destroyed by entropy. On the one hand, the autocalalysis of information inexorably drives the evolution of complexity, irrespective of its fragility. On the other hand, only those strategic arrangements that accommodate the destructive forces of entropy survive – cooperation, diversification, and programmed death result from the entropic selection of evolving species. Physical principles of information and entropy thus fashion the evolution of life.",
"keywords": [
"Evolution",
"Information",
"Complexity",
"Entropy",
"Cooperation",
"Death",
"Entropic Selection",
"Natural Selection"
],
"content": "Time for taking stock\n\nEvolution of living organisms is probably the one incontrovertible “law” of biology, and Charles Darwin sponsored the idea1. It only remained for biology to combine Darwin’s concept of survival of the fittest with Mendelian genetics and the discovery of DNA to generate what has been called the Neo-Darwinian synthesis – the ruling paradigm of today’s biology2. After a half century of research, any ruling paradigm, however revered, needs reexamination in the light of the findings that have emerged since its conception. Science advances by periodic review of its most cherished teachings; an outdated paradigm is not mere excess baggage: it actually obstructs new ideas and new experiments3.\n\n\nThe Neo-Darwinian narrative\n\nSurvival of the fittest is the mechanism behind the natural selection that drives the evolution of life, as originally proposed by Darwin1. The core idea is that replicating, autonomous living agents must compete for the necessarily limited resources provided by their environments. Consequently, living agents exist in a state of continuous struggle for survival – each agent fighting for its exclusive advantage. Since a phenotype is the expression of the individual’s genomic DNA, fitness is transmitted by the fittest individuals to their offspring. As a result, only those individuals most fit to their particular environments will win out, and the fittest genotypes will dominate and, by reproductive success, will eventually replace the less fit genotypes in the species. Succinctly said, the evolution of species is driven by individual selfishness. 
Living agents naturally shun death and strive to go on living; however, since death is inevitable, the payoff for fitness is success in generating fit offspring.\n\nWith the discovery of DNA as both the organism’s hereditary endowment and the code that determines protein sequence, biologists, and the public at large, have come to envision DNA as the master program of life4; the body and its functions are the overt expressions of one’s genotype; one’s inherited genome is one’s ultimate individuality. Hence, a fit individual who has passed on fit DNA to its offspring can be seen to compensate, at least conceptually, for the individual’s mortality; one’s genes residing in one’s offspring can persist despite one’s physical death. Successful breeding is not merely the payoff for fitness, it is life’s response to death. William Shakespeare, anticipating DNA, put it this way; “And nothing ‘gainst Time’s scythe can make defence save breed to brave him when he takes thee hence”, Sonnet XII. Figure 1 summarizes the Neo-Darwinian understanding of evolution.\n\nThe seminal idea is that the inherited, individual genotype encodes the individual phenotype that undergoes fitness testing imposed by the environment. Organisms within a species differ in their genotypes/phenotypes and the individual with the fittest genotype/phenotype survives and, through reproductive success, enriches the species with the fittest genes. The process continues to cycle, continuously adapting the species to the changing environment.\n\nSurvival of the fittest is often modeled using game theory: the struggle for dominance between competing agents is viewed as a zero-sum or zero-determinant game in which the winner prospers at the expense of the losers, and selfishness emerges as the most stable strategy of evolution5. 
The selfishness of the individual phenotype is thus linked to the exclusivity of its formative genotype.\n\nThe game of life is assumed to reflect rational economics – the winning strategy is the one that is optimally efficient, under the circumstances, in exploiting time, energy and materials; evolution generates optimal genetic solutions – or so teach Neo-Darwinians.\n\n\nShortcomings of the Neo-Darwinian narrative\n\nMore recent advances in the biomedical sciences have highlighted four universal characteristics of multi-cellular organisms that are not easily explained by selfish individual struggle and dominant reproduction:\n\n1) Mutual cooperation marks living systems. A telling example of the importance of cooperative living – symbiosis – is the discovery that individual multi-cellular organisms, humans included, house as many bacterial cells – the microbiota – as eukaryote cells; indeed, healthy microbiota genes (the microbiome), which usually outnumber one’s inherited genes, are essential for health, and health is essential for adaptive survival6,7. Universal symbiosis contradicts the Neo-Darwinian assumption that the individual’s phenotype is the functional expression of the individual’s genotype; it is not. The living phenotype is actually a consortium that expresses the individual’s inherited genotype along with the genotypes of the individual’s symbionts – the individual phenotype emerges from cooperative interactions between multiple genotypes (Figure 2); even identical twins house different symbionts, brains and immune systems. Hence, the individual is an ecosystem and the environment tests the fitness of the individual ecosystem, and not just the fitness of the individual’s inherited genes. Consequently, the survival and procreation of the composite individual feeds back on the frequency of particular genes in the breeding species and on the genes composing the symbiotic microbiome. 
In other words, even the basic individual is a group – the fittest individual expresses the fittest collective of interacting cells, prokaryote and eukaryote, within a single body; Neo-Darwinian natural selection now has to be considered as some type of group selection8. Essential symbiosis obliges us to revise our understanding of the game plan of evolution5.\n\nBeyond mortal individuals, life on earth is dependent on higher scale ecosystems involving networks of interacting species (Figure 3). No species functions autonomously; a living system depends on supporting interactions within and between other systems9: DNA, proteins, and other molecules exist thanks to network interactions within and outside of cells; cells cooperate with other cells to form organisms; even bacteria survive by social network interactions; organisms persist only through ongoing network interactions with other organisms, cells and molecules; species exist only within supporting ecosystems. Mutually supportive ecosystems characterize life on earth (Figure 3). It is now clear that the evolution of life is the evolution of cooperation10.\n\nA third example of the prevalence of unselfish cooperation is altruism – the sacrifice of one’s resources (even of one’s life) for the benefit of others. The very idea of altruism is clearly at odds with Darwin’s teaching that evolution is driven by selfishness. Neo-Darwinian explanations, however, can be derived, in principle, from positing some ultimate benefit to the self-sacrificing agent or to its genes, such as kin selection or other deferred advantages10. Below, we shall see that cooperation arises more “naturally” when we consider a different force that molds evolution.\n\n2) Living systems invest in diversity and avoid uniform fitness by sexual reproduction. A most obvious example of programmed diversification is sexual reproduction. 
According to Neo-Darwinian thinking, fitness should reward the fittest with equally fit offspring (Figure 1) – if not, what could be the meaning of reproductive fitness? On the contrary, sexual reproduction guarantees that your offspring will never replicate your exact fitness no matter how optimally fit you may be – sexual reproduction randomly mixes half your genes with the genes of another, whose fitness has not been tested by your ability to survive – your offspring could be more or less fit than you (Figure 4). Indeed, a mutation encoding parthenogenesis should replace sexual reproduction by its efficiency alone11. Some have attempted to explain sexual reproduction as insurance against rapid shifts in the environment that require a shift in the optimal genome; sexual reproduction, nevertheless, remains an unsolved paradox for the Neo-Darwinian worldview12.\n\nSexual reproduction is not the only example of the evolution of less than optimal fitness. Indeed, the handicap principle proposes that exaggerated or burdensome overt traits serve as signals to prospective mates that prove the selective value of the suitor’s other, more covert evolutionary traits13; the cumbersome tail of the peacock assures the peahen that he must bear very fit genes to have escaped predators till now; here, a covert advantage (strength) is seemingly advertised by a wasteful or dangerous overt display (the ungainly tail). Mass migrations of salmon across oceans14 and migrations of lemmings to their deaths15 do not seem very economical. The reader can supply his or her own favorite example of the irrationality of evolution in devising less than optimal creatures and illogical behaviors.\n\n3) The paradox of organized death: The relentless struggle for survival envisioned by Darwin is challenged by the fact, long ignored, that much of the internal molecular machinery of the cell – genes included – is devoted to multiple processes of self-inflicted death by apoptosis and other means16. 
Organisms cannot develop and survive unless large numbers of their constituent cells commit suicide or are killed at regular intervals and under specific circumstances. Cells with irreparable DNA damage kill themselves for the good of the individual; in many social species, weak males refrain from reproduction, and altruistic humans even sacrifice themselves, for the good of the species.\n\n4) The paradox of accumulating complexity: An obvious feature of evolution has been the emergence of increasing complexity – from prokaryote to human culture. There is yet no accepted definition of complexity, but however you choose to define it, if humans are not more complex than are bacteria, there is something wrong with your definition of complexity. Likewise, however you choose to define fitness, if bacteria are not more fit than are humans, then there is something wrong with your definition of fitness. Bacteria have never undergone any of the mass extinctions that have visited other more complex forms of life; indeed, the more complex creature is the more fragile creature17. Who will better survive global warming, the rise of the oceans, and the destruction of biodiversity – we mammals or the bacteria? If survival of the fittest is the driving force of evolution and its measure of success, why did evolution not stop with bacteria? How could survival of the fittest drive evolution to fashion and maintain more complex, but more fragile organisms? Darwinian explanations for the evolution of complexity have been explored in detail18. But let us consider a non-Darwinian explanation for the evolution of complexity, cooperation, sex and programmed death.\n\nContrary to Neo-Darwinian teachings, the individual phenotype is not encoded exclusively by the inherited genotype. Rather, multiple genotypes generate the expressed, composite phenotype; the formative genotypes include the inherited genotype plus all the genotypes of the essential symbionts housed by the individual. 
The survival of the fit individual thus feeds back genetically both on the frequency of the heritable genetic alleles housed by the host species and on the frequency of the cooperating genotypes of the symbionts.\n\nA schematic view of multiple scales of life interacting within (internal) and between (external) living systems: interacting molecules form cells; interacting cells form organisms; interacting organisms form species; interacting species form ecosystems; and interacting ecosystems create the grand ecosystem that is the biosphere (not shown). The bonds between internally cooperating systems (red lines) and externally cooperating systems (dark blue lines) are encoded by eco-genes.\n\nThe cloud-like form represents a species and the stippled grey fill represents the frequency distribution of genes within the species. Two sexually reproducing individuals house genotypes that are each composed of a different combination of possible genes within the stippled fill of the species. The two offspring each bear their own species genotypes that are recombinations of parental genotypes. The individuals within the species are born, sexually reproduce and then die to maintain a frequency distribution of genes within the species that fits the life style and ecosystem arrangements that maintain the species.\n\n\nThe evolution of complex information and its adaptation by entropic selection\n\nSince Darwin’s discovery of evolution, we have succeeded in characterizing the molecules, cells and processes that form organisms and species. We now know that living systems owe their existence to particular arrangements of their internal component parts and to the dynamic interactions between them (Figure 3): cells are specific arrangements of interacting molecules; organisms are specific arrangements of interacting molecules and cells; species are interacting arrangements of organisms; ecosystems are interacting arrangements of species9. 
In each case, the details of what is arranged and what interacts are the domains of different branches of enquiry and experimentation. Nevertheless, we can adopt the formulation of Claude Shannon and define the common feature of any specific arrangement of any matter or process as information17,19. Information, according to Shannon, can be measured as the degree to which the particular arrangement of interest differs from a random collection of the same component parts. In other words, irrespective of whether we are dealing with carbon atoms in a macromolecule, molecules in a cell, cells in an organism, organisms in a species or species in an ecosystem – whatever manifests nonrandom arrangement bears information; information by this definition is not limited to words in a message. Information in the narrow sense of arrangement is distinct from information expressed as knowledge or meaning; knowledge and meaning express applications of information – they are ways arrangements can be used to achieve goals or support interactions17. Nevertheless, information, defined as arrangement per se, serves as a necessary substrate for the creation of knowledge and meaning. Since the essence of living systems at all scales is their particular arrangements of internal components and their internal and external interactions, we can say that living systems are formed by essential information, irrespective of just what materials or actions are arranged17,20.\n\nThis sounds uselessly abstract; immunologists, for example, are interested in lymphocytes, neurologists in neurons, ecologists in food chains, among other specifics – each field of biology strives to elucidate the details comprising a domain of study; what is to be gained by lumping diverse living systems together as forms of information? The gain is that this narrow definition of information provides a unifying concept for analyzing evolution applicable to all its varied manifestations. 
In our bird’s eye view of evolution as a seminal process, we are not interested in the fine details of this or that gene, protein, organism, society or ecosystem; we want to see the big picture in one encompassing perception. As we zoom out, we see that the strategic path of evolution can be heuristically reduced to a reconciliation between the two opposing fates of information: its amplification and its destruction. The instantiation of these fundamental forces of nature shapes the large-scale strategy of evolution.\n\n\nInformation is autocatalytic and generates complexity automatically\n\nHenri Atlan has formalized the idea that information in any system will increase automatically when two conditions are fulfilled: the replication of existing information into multiple copies and some variation in at least one of the replicates21,22. A detailed presentation of Atlan’s formulation is beyond our present scope, but empirically we know, for example, that evolution has been marked by processes of gene duplication and variation23 – Atlan’s two conditions for increasing complexity. Also note that evolution is associated with pleiotropy – any gene, molecule, cell, or organism almost always performs more than a single function24. Indeed, any living system engages in more than one network of interactions at different stages in its life history. Provided there exist a threshold concentration and flow of compatible arrangements, a given arrangement, such as a molecule or cell, is likely to roam into additional networks of interaction during development, maturation, or aging. In other words, information will tend to have unforeseen side effects25. 
Complexity increases because the more information there is, the more likely that that information will become available for new engagements and, hence, for increased complexity and pleiotropism.\n\nThe automatic growth of complexity can be seen in the evolution of prokaryote cells into eukaryote cells: Eukaryote cells are demonstrably more complex than are prokaryote cells because the eukaryotes feature nuclei (eu – true; kary – nucleus) and other complex organelles and more complicated organizations of DNA (introns and exons, for example). It is now generally accepted that the first eukaryote emerged from endosymbiosis, the amalgamation of several prokaryotes into a single unit cell26,27. What made a group of prokaryotes enter into a consortium to form a new, more complex cell type? Nobody really knows, but let me propose that once there emerged a sufficiently high concentration of different prokaryote types hanging out in the same neighborhood, sooner or later some of them, by chance, were likely to fuse their cell walls or otherwise ingest one another to form a more complex cell. The amalgamation happened by chance, but why did it take about a billion years to evolve a viable eukaryote? To my mind, prokaryote amalgamations were probably forming spontaneously and breaking up spontaneously all the time, but it took a billion years to obtain, again by chance, just the right combination of disparate prokaryote elements needed for the survival and replication of the seminal progenitor eukaryote. A threshold of complex information will always recombine and vary to generate even more complexity. 
Growing complexity, however, is sculpted by entropic selection – the complex arrangements that flourish are those that survive the trimming imposed by entropy – the inevitable and relentless destruction of information.\n\n\nEntropic selection channels the outcome of evolving complexity\n\nEntropy, like Shannon-type information, has a formal, computational definition – entropy is a fundamental concept in thermodynamics28. Actually, information and entropy are sister concepts; Shannon formulated his concept of information using Boltzmann’s theorem for entropy19: The measure of information in a particular arrangement is relative to its improbability as opposed to the likelihood of its randomness; the influence of entropy on an arrangement is relative to the probability of its random disorder.\n\nBut in the present context we only need note that entropy ensures that specific arrangements ultimately fall apart spontaneously; the more complex the arrangement, the greater its fragility and the more likely it is to disintegrate. Even the optimal solution in the end will fail. Therefore, the complexities that emerge during the evolution of living systems will be selected by their ability to forestall, circumvent, or persist in the face of their inevitable destruction wrought by entropy – the outcomes of evolution are thus channeled by what we shall call entropic selection.\n\nNote that Darwin’s natural selection takes place whenever variant individuals compete for survival in a resource-limited environment. Struggle is essential; natural selection will not operate in the absence of competition. Entropic selection, in contrast, operates wherever entropy operates, even in the absence of variant individuals or environmental straits. And entropy operates everywhere. In other words, principles of physics – including information and entropy – sculpt the evolution of life. 
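The kinship between the two measures noted above can be made explicit. As a brief sketch (standard textbook formulations, not derived in the original text): Shannon's entropy of an arrangement whose n possible states occur with probabilities p_i mirrors Boltzmann's entropy over W equiprobable microstates, and the "information" of a particular arrangement can be read as its departure from maximal, random disorder:

```latex
% Shannon entropy of a distribution p_1, ..., p_n over n states
H(p) = -\sum_{i=1}^{n} p_i \log_2 p_i

% Boltzmann entropy for W equally probable microstates
S = k_B \ln W

% "Information" of a particular arrangement: its departure from
% the maximal (uniform, i.e. random) entropy over the same n states
I(p) = H_{\max} - H(p) = \log_2 n - H(p)
```

On this reading, a highly improbable (nonrandom) arrangement carries high information I and low entropy H, which is why the two concepts act as opposing forces in the argument above.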
Let’s see how this informational-entropic view of evolution accounts for strategic characteristics of living systems that are not easily explained by the Neo-Darwinian paradigm:\n\n1. The prevalence of cooperative interactions: Entities that are involved in ongoing cooperative interactions are less likely to fall apart than are the same entities in a state of isolation or inactivity. Theoretical explanations exist but need not be invoked; the fact is readily observable: a house tends to fall apart unless somebody lives in it; enforced bed rest can quickly dispatch a previously active, apparently healthy old person; retirement can be deadly; active couples live longer than do lonely people; growing cells have a longer half-life than do static cell cultures, and so on. Mutual interactions sustain the interactors and delay their dissolution. Selfishness is not viable; entropic selection ensures stable, natural cooperation.\n\nOur symbiosis with our gut microbiota exemplifies advantageous mutual cooperation; we house and feed the bacteria and they, in turn, help us digest foodstuffs, provide needed vitamins and other metabolic products, prime the development of our guts and immune systems7, and possibly of our brains29. Altruism is good for bonding lovers, families, tribes, and societies; strong societies, in turn, are good for individual achievement, and especially for those in need such as orphans, the aged, and the debilitated. Finally, mutual cooperation between different species is essential for the ecosystems that maintain all life. The living world provides many more examples of cooperation than it does of selfish struggle10; we can thank entropic selection for that – we’ll return to the function of struggle below.\n\n2. The prevalence of sexual reproduction and the organization of species: Sexual reproduction guarantees genome reshuffling11,12, and so prevents any single dominant genome from taking over the species. 
Continuous diversification of individuals allows the species to survive the ultimate failure of its “most successful” individually expressed genotypes. Sexual reproduction is not merely Neo-Darwinian insurance against a possible future need for a new optimal genome; sex-mediated gene reshuffling is a sound response to entropic selection. The greater the variation within and between interacting systems, the greater the likelihood of survival – a diverse system manifests flexibility and resilience because it is not limited by a single essential set of components or a fixed plan of action. Entropic selection rewards diversity; Darwinian survival of the fittest in contrast, should reduce diversity by imposing a uniformly optimal fitness.\n\nEntropic selection also explains why life evolves as species – a species is a collective of arrangements that express a frequency distribution of alternative genomes and diverse interaction pathways (Figure 4). The individual members of the species are each particular instantiations of alternative collections of recombining genes and interaction networks; the failure of one or more individual alternatives does not wipe out the species. Thus, a species is materialized by its individual members who breed to maintain a frequency of alternative genotypes that fit the life style of the species in its environment. Successful species are composed of alternative individuals, not of optimal or master programs30. Perhaps there are no master programs around today because species dependent on single master programs were extinguished along the way; a species with only one or a few dominating genotypes, for example, would go extinct as soon as entropy inevitably would destroy that master genotype.\n\n3. 
Programmed death and turnover: Organized death16 and regulated turnover of molecules, cells and individuals enable species to resist their dissolution by entropy, which kills by irregular debility or accident; programmed death is more tolerable because it kills at appointed times, locations and states. Organized, regulated self-destruction, followed by rebirth, is a strategy for surviving entropic selection – systematic shut down and restart of its component parts keeps an individual body alive for a lifetime and a collective species extant for an eon17,24. The characteristic life spans of individuals have evolved to suit the life style of the species, be it 3 days or 100 years or, in the case of certain trees, a thousand years.\n\n4. The growth of complexity: Accumulating complexity, as we discussed above, can readily be explained by the autocatalysis of information21,22,24. Complexity, intrinsically fragile and contingent, is certainly not an outcome of survival of the fittest; complexity, despite its fragility, is inherent in the natural history of information – the emergence of complexity is inexorable.\n\n\nThe evolutionary role of competition\n\nWe have made the point that individual struggle and survival of the fittest do not easily explain many of the large-scale strategies of evolution. Nevertheless, competitive struggle is a fact of life so prominent that Western Society, in the wake of Darwin, continues to see competition as the driving force of evolution. So what might be the functions of competition and struggle in evolution driven by entropic selection?\n\nThe outcome of struggle depends on whether or not it results from co-evolution of the antagonists; co-evolved competition is characterized by an interaction that should be beneficial for both parties. 
For example, individual zebras are killed by lions, but such co-evolved predator-prey struggles support the health of both antagonists at the scale of species31; individuals are destined to die in any circumstance, but species of predators and prey thrive on organized individual predation. The monopoly on reproduction enjoyed by alpha males and females within a breeding population maintains the frequency of useful genes and establishes a functional social order within the species. Immune interactions with frequently encountered infectious agents are usually good for both host and parasite24. Social systems do better with open competition for positions of responsibility. Life at the scale of species has evolved to benefit from certain organized individual struggles. Struggle between co-evolved partners serves the adaptation of living species to entropic selection; co-evolved competition is just a different face of cooperation.\n\nIn contrast to co-evolved species competition, novel struggles between unaccustomed antagonists may destroy existing ecosystems and lead to a loss of biodiversity. The prehistoric migration, for example, of human hunters from Asia into the Americas led to the extinction of many species of animals32; the newly arrived human predators eradicated prey species that had previously supported other predators – many of the co-evolved predators and prey species became extinct. More recently, we have seen ecosystems destroyed by invading plant or animal species33. We can conclude that co-evolved competition is likely to benefit the antagonistic species, while new struggles may damage previously operating ecosystems – until a new state of equilibrium can emerge; some wolves, for example, have co-evolved with human culture to generate dogs and cynophiles34. 
Competition, in summary, has more than one role in evolution; it can help life resist entropic selection, but it can also act as an agent of disorderly destructive entropy, which, paradoxically, may from time to time engender new ecosystems.\n\n\nEntropic selection and natural selection compared\n\nTable 1 outlines the major distinctions between Neo-Darwinian natural selection and entropic selection. Neo-Darwinians see the genome as life’s program; the advance of evolution is driven by competition between diverse autonomous agents, each struggling for survival and successful procreation; the unit vehicle of evolution is the individual, autonomous agent; the evolutionary strategy is to win a zero-sum game in which the most fit agent in the particular environment survives at the expense of the less fit – survival of the fittest; the reward is the reproductive success of the fittest agents leading to enrichment of the species with optimal genes most suited to the environment (Figure 1); a new, divergent species can emerge in a new environment.\n\nEntropic selection, in contrast, sees living systems as the embodiment of information; evolution is thus driven by the two inevitable fates of information – amplification by autocatalysis and dissolution by entropy. Life’s program (Figure 3) emerges from enmeshed, mutually supportive networks30; the unit vehicle of evolution is the species – a continuously recombining population of alternative genotypes (Figure 4); the strategy of living systems is to accommodate entropic dissolution by evolving supportive cooperation within networks of ecosystems in a win-win game, which is played by continuous diversification through sexual reproduction and other means, and by organized death and restart to ensure survival of the fitted; the outcome is the evolution of increasing complexity in which the environment itself is fashioned by enmeshed, interacting biological networks35. 
Figure 5 summarizes the process.\n\nAn individual phenotype of a multi-cellular species expresses a composite genotype that emerges from three cooperating sources: a subset of genes inherited from the species (including eco-genes); genes borne and expressed by its symbionts; and eco-genes expressed by its ecosystem partners. The individual is then tested by its ability to survive in its environment – both the physical (temperature, oxygen tension, pH, etc.) and the biological (eco-interactions suitable for sustenance, reproduction, etc.). In parallel, entropic selection tests whether the individual manifests the requisite diversification, ability to cooperate, and programmed death and restart that suit the life style of the species. Successful individual survival and reproduction feed back on the distributions of genes in three compartments: in the species, in the symbionts and in the ecosystem partners.\n\n\nThe mechanism of entropic selection\n\nEntropic selection can account for the strategic features of evolution discussed here; but how does this selection actually operate? Ultimately, the effects of entropic selection must be based on the actions of genes; genes, after all, are the bearers of heredity. Sexual reproduction ensures continuous diversification of genotypes within species, but what genes encode the mutual cooperation inherent in symbiosis and in ecosystems? Moreover, how does entropic selection prevent the emergence of selfish genes36?\n\nSelfish genes that sabotage cooperation are destined to undergo negative entropic selection because cooperation is an essential element in preserving living systems; systems that try to stand alone fall. To counteract the emergence of selfishness, the gene pools of co-evolved species must also include genes that positively encode cooperation, including cooperative competition. Such cooperation-enhancing genes enable each system to identify and interact with signals specific for the collaboration. 
Let’s call these genes eco-genes (from Greek oikos, one’s house). Eco-genes are not a theoretical invention; existing eco-genes are already known:\n\nThe guts and the gut immune systems of mammals, for example, are equipped with receptors that accommodate symbiotic microbiota and distinguish them from microbial pathogens37; indeed, the tolerance of the immune system towards symbiotic microbiota highlights the impossibility of defining an immutable “immune self” – the “self” is an interactive process rather than a closed entity24,38; in fact, the individual immune system is an ecosystem of interacting cells39. Struggles between predators and prey are organized by sight, smell and taste receptors that mark what should be hunted and eaten by the predators and avoided by the prey: mice and cats, for example, innately recognize one another; zebras and lions signal each other clearly to establish a stable equilibrium between hunters and hunted40. Humans are born with visual systems that recognize faces – face recognition serves as a foundation for human bonding and social interactions24. The concept of eco-genes, in short, includes all the genes involved in essential interactions between individuals and between species that organize life in the face of entropic selection; particular eco-genes are too numerous to enumerate here, but their expressions operate to hold together species and ecosystems.\n\nBeyond the process of evolution, the concept of survival of the fittest fosters a mindset whose consequences impact human social, economic and political behaviors. In contrast to Darwin’s natural selection by selfish competition, I have argued here that we can explain evolution in a more factual way as the evolution of interacting information and its pruning by entropic selection. This view founds biology on fundamental physical laws, and attributes the evolution of life to the dance of creation and destruction. 
Best of all, it teaches us that cooperation is a saving grace; fitness is fittedness.",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nThis paper has emerged from discussions over the years with many colleagues and students; I could have never have done it without their help.\n\n\nReferences\n\nDarwin CR: On the origin of species by means of natural selection, or the preservation of favoured races in the struggle for life. John Murray, London (2nd edition); 1860. Reference Source\n\nMayr E, Provine WB, Eds: The Evolutionary Synthesis: Perspectives on the Unification of Biology. Harvard University Press, Cambridge MA; 1998. Reference Source\n\nKuhn TS: The Structure of Scientific Revolutions. (3rd edn.), University of Chicago Press, Chicago; 1996. Reference Source\n\nMayr E: Cause and effect in biology. Science. 1961; 134(3489): 1501–6. PubMed Abstract | Publisher Full Text\n\nHilbe C, Wu B, Traulsen A, et al.: Evolutionary performance of zero-determinant strategies in multiplayer games. J Theor Biol. 2015; 374: 115–24. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRosenberg E, Sharon G, Atad I, et al.: The evolution of animals and plants via symbiosis with microorganisms. Environ Microbiol Rep. 2010; 2(4): 500–6. PubMed Abstract | Publisher Full Text\n\nKau AL, Ahern PP, Griffin NW, et al.: Human nutrition, the gut microbiome and the immune system. Nature. 2011; 474(7351): 327–336. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKramer J, Meunier J: Kin and multilevel selection in social evolution: a never-ending controversy? [version 1; referees: 4 approved]. F1000Res. 2016; 5: pii: F1000 Faculty Rev-776. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCohen IR, Harel D: Explaining a complex living system: dynamics, multi-scaling and emergence. J R Soc Interface. 2007; 4(13): 175–82. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nNowak MA: Five rules for the evolution of cooperation. Science. 2006; 314(5805): 1560–63. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMaynard-Smith J: The Evolution of Sex. Cambridge Univ. Press, Cambridge; 1978. Reference Source\n\nHadany L, Beker T: Sexual selection and the evolution of obligatory sex. BMC Evol Biol. 2007; 7: 245–251. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZahavi A, Zahavi A: The Handicap Principle: A Missing Piece of Darwin’s Puzzle. Oxford Univ. Press, Oxford; 1997. Reference Source\n\nCrossin GT, Hatcher BG, Denny S, et al.: Condition-dependent migratory behaviour of endangered Atlantic salmon smolts moving through an inland sea. Conserv Physiol. 2016; 4(1): cow018. PubMed Abstract | Publisher Full Text | Free Full Text\n\nIms RA, Yoccoz NG, Killengreen ST: Determinants of lemming outbreaks. Proc Natl Acad Sci U S A. 2011; 108(5): 1970–1974. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKang TB, Ben-Moshe T, Varfolomeev EE, et al.: Caspase-8 serves both apoptotic and nonapoptotic roles. J Immunol. 2004; 173(5): 2976–2984. PubMed Abstract | Publisher Full Text\n\nCohen IR: Informational landscapes in art, science, and evolution. Bull Math Biol. 2006; 68(5): 1213–29. PubMed Abstract | Publisher Full Text\n\nKoonin EV: The logic of Chance: The Nature and Origin of Biological Evolution. FT Press, Upper Saddle River, New Jersey; 2012. Reference Source\n\nShannon CE: A Mathematical Theory of Communication. Bell Syst Tech J. 1948; 27: 379–423 & 623–656. Reference Source\n\nFarnsworth KD, Nelson J, Gershenson C: Living is information processing: from molecules to global systems. Acta Biotheor. 2013; 61(2): 203–22. PubMed Abstract | Publisher Full Text\n\nAtlan H, Cohen IR: Immune information, self-organization and meaning. Int Immunol. 1998; 10(6): 711–7. 
PubMed Abstract | Publisher Full Text\n\nLouzoun Y, Atlan H: The emergence of goals in a self-organizing network: a non-mentalist model of intentional actions. Neural Netw. 2007; 20(2): 156–71. PubMed Abstract | Publisher Full Text\n\nZhang J: Evolution by gene duplication: an update. Trends Ecol Evol. 2003; 18(6): 292–8. Publisher Full Text\n\nCohen IR: Tending Adam’s Garden: Evolving the Cognitive Immune Self. Academic Press, London; 2000. Reference Source\n\nGould SJ: The exaptive excellence of spandrels as a term and prototype. Proc Natl Acad Sci U S A. 1997; 94(20): 10750–5. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMargulis L: Symbiogenesis. A new principle of evolution rediscovery of Boris Mikhaylovich Kozo-Polyansky (1890–1957). Paleontol J. 2010; 44(12): 1525–39. Publisher Full Text\n\nSagan L: On the origin of mitosing cells. J Theor Biol. 1967; 14(3): 225–274. PubMed Abstract | Publisher Full Text\n\nNoy A, Sutthibutpong T, A Harris S: Protein/DNA interactions in complex DNA topologies: expect the unexpected. Biophys Rev. 2016; 8(3): 233–243. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCong X, Xu W, Romisher R, et al.: Gut Microbiome and Infant Health: Brain-Gut-Microbiota Axis and Host Genetic Factors. Yale J Biol Med. 2016; 89(3): 299–308. PubMed Abstract | Free Full Text\n\nCohen IR, Atlan H, Efroni S: Genetics as explanation: Limits to the Human Genome Project. In: eLS. JohnWiley & Sons, Ltd: Chichester; 2016. Publisher Full Text\n\nAbrams PA: Predators that benefit prey and prey that harm predators: Unusual effects of interacting foraging adaptation. Am Nat. 1992; 140(4): 573–600. Publisher Full Text\n\nBarnosky AD, Lindsey EL: Timing of Quaternary megafaunal extinction in South America in relation to human arrival and climate change. Quat Int. 2010; 217(1–2): 10–29. Publisher Full Text\n\nHowe HF, Brown JS, Zorn-Arnold B: A rodent plague on prairie diversity. Ecol Lett. 2002; 5(1): 30–36. 
Publisher Full Text\n\nArendt M, Cairns KM, Ballard JW, et al.: Diet adaptation in dog reflects spread of prehistoric agriculture. Heredity (Edinb). 2016; 117(5): 301–306. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHolland HD: The oxygenation of the atmosphere and oceans. Philos Trans R Soc Lond B Biol Sci. 2006; 361(1470): 903–915. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDawkins R: The Selfish Gene (30th anniversary edition). Oxford Univ. Press, Oxford; 2006. Reference Source\n\nSwiatczak B, Cohen IR: Gut feelings of safety: tolerance to the microbiota mediated by innate immune receptors. Microbiol Immunol. 2015; 59(10): 573–85. PubMed Abstract | Publisher Full Text\n\nTauber AI: The immune self: theory or metaphor? Immunol Today. 1994; 15(3): 134–136. PubMed Abstract | Publisher Full Text\n\nTauber AI: Immunity: The Evolution of an Idea. Oxford University Press, in press. Reference Source\n\nZu J, Wang J, Huang G: Evolutionary Diversification of Prey and Predator Species Facilitated by Asymmetric Interactions. PLoS One. 2016; 11(9): e0163753. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "18297",
"date": "07 Dec 2016",
"name": "Eugene Rosenberg",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a well-written, thought-provoking article that challenges the foundations of Neo-Darwinism and presents an alternative concept for the evolution of complexity. To begin with, Cohen argues that Neo-Darwinism fails to explain cooperation, biological diversity, programmed death and complexity, which we now know are all hallmarks of life on this planet. Although these phenomena appear to contradict the selfishness of Darwinian evolution, evolutionary biologists have attempted to reconcile them with existing theory, more or less successfully. The co-discoverer of evolution, Alfred R. Wallace, argued that a characteristic can evolve also because it benefits the group even though it may be harmful to the individual possessing it. Subsequently, evolutionary biologists referred to this latter concept as group selection. Recently, a number of prominent evolutionary theorists have come to see the body of ideas known loosely as \"multi-level selection theory\" as a potent explanatory principle. Selection can operate simultaneously at different levels of the biological hierarchy, including groups (cooperation).\n\nThe most interesting and important part of the article is the concept of viewing life as the evolution of information. Information can increase by autocatalysis and be destroyed by entropy. According to Cohen, physical principles of information and entropy drive the evolution of life. 
Entropic selection operates wherever entropy operates, even in the absence of variant individuals, environmental constraints or competition. This is a novel concept.\n\nThe paradigm-changing concept presented in this article raises many questions that should be addressed by scientists from different disciplines. How exactly do information and entropy drive complexity at the cell and molecular levels? How does enthalpy enter the picture? Most important, how can this novel idea be tested? Hopefully, this article will inspire an in-depth examination of Neo-Darwinism.",
"responses": []
},
{
"id": "18271",
"date": "28 Dec 2016",
"name": "Daniel M. Altmann",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn this piece Cohen gives a masterclass to young scientists that our aim is, not just to accrue datasets, but to ponder, synthesise and then write something thought-provoking, challenging, elegant and readable. Here he considers a timely update/iteration to the Neo-Darwinian paradigm, proposing a model of evolution of interacting information, with pruning by entropic selection.\n\nThe title and abstract are appropriate.\nHis case is elegantly made, drawing widely on biological evidence and models to construct the argument.\nOne might perhaps have wished in one or two areas for more detailed exposition of recent data in the light of his hypothesis: while interaction of the microbiota with host immunity is briefly alluded to, this has been one of the fastest growing areas in the past few years and surely offers food for thought. Also, any Neo-Darwinian reappraisal surely needs to incorporate space for evidence from environmentally-determined epigenetic modifications of the genome and impacts thereof on survival?",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2808
|
https://f1000research.com/articles/5-2806/v1
|
01 Dec 16
|
{
"type": "Method Article",
"title": "DWCox: A density-weighted Cox model for outlier-robust prediction of prostate cancer survival",
"authors": [
"Jinfeng Xiao",
"Sheng Wang",
"Jingbo Shang",
"Henry Lin",
"Doris Xin",
"Xiang Ren",
"Jiawei Han",
"Jian Peng",
"Sheng Wang",
"Jingbo Shang",
"Henry Lin",
"Doris Xin",
"Xiang Ren",
"Jiawei Han"
],
"abstract": "Reliable predictions on the risk and survival time of prostate cancer patients based on their clinical records can help guide their treatment and provide hints about the disease mechanism. The Cox regression is currently a commonly accepted approach for such tasks in clinical applications. More complex methods, like ensemble approaches, have the potential of reaching better prediction accuracy at the cost of increased training difficulty and worse result interpretability. Better performance on a specific data set may also be obtained by extensive manual exploration in the data space, but such developed models are subject to overfitting and usually not directly applicable to a different data set. We propose DWCox, a density-weighted Cox model that has improved robustness against outliers and thus can provide more accurate predictions of prostate cancer survival. DWCox assigns weights to the training data according to their local kernel density in the feature space, and incorporates those weights into the partial likelihood function. A linear regression is then used to predict the actual survival times from the predicted risks. In the 2015 Prostate Cancer DREAM Challenge, DWCox obtained the best average ranking in prediction accuracy on the risk and survival time. The success of DWCox is remarkable given that it is one of the smallest and most interpretable models submitted to the challenge. In simulations, DWCox performed consistently better than a standard Cox model when the training data contained many sparsely distributed outliers. Although developed for prostate cancer patients, DWCox can be easily re-trained and applied to other survival analysis problems. DWCox is implemented in R and can be downloaded from https://github.com/JinfengXiao/DWCox.",
"keywords": [
"DREAM",
"Prostate cancer",
"Cox model"
],
"content": "Introduction\n\nProstate cancer is the 2nd leading cause of cancer death in men in the United States1 and the 6th worldwide2. In the past 10 years more than 2 million men in the US suffered from prostate cancer, and about 5% of those patients had metastatic castrate-resistant prostate cancer (mCRPC), an advanced form of the disease whose outcomes are poor and treatment remains unclear. Survival analysis based on clinical records has attracted researchers’ attention, since it can hopefully direct cancer treatment and help elucidate the disease mechanism.\n\nThe Cox regression3, also known as the proportional hazards model, is a classic model in survival analysis. The simplicity and interpretability of the Cox model come from the proportional hazards assumption, which basically states that the risk can be estimated based on a linear combination of the predictive variables. A trained Cox model can calculate a relative risk score for a new patient based on his/her clinical information, and is thus able to rank patients with their expected order of death. It cannot, though, directly predict the expected time to death.\n\nThe Cox-based model proposed by Halabi et al. in 20144 (referred to as Halabi’s model in the rest of this manuscript) is a state-of-the-art method for clinical prediction of prostate cancer survival. Halabi’s model is outlined in Figure 1(a). It starts with 22 features (“Halabi’s 22 features”), including some previously defined predictors of overall survival and some clinical parameters, picks out the eight most important features (“Halabi’s 8 features”) using L1 regularization, and predicts patients’ risks using those eight features only.\n\nDWCox is also able to predict the days to death using linear regression with the risk scores (not demonstrated in this figure). N: number of patients. MICE: Multivariate Imputation by Chained Equations. L1: Lasso regularization. DW: Density-based weighting. 
Note that the objective functions in the Cox step of (a) and (b) are different, as discussed in the main text.\n\nWe propose DWCox, a density-weighted Cox model for predicting prostate cancer survival. DWCox was a best-performing method in the 2015 Prostate Cancer DREAM Challenge (PCDC), with performance better than or comparable to the best ensemble approaches. Simulations have shown that DWCox can achieve better performance than a standard Cox model when many sparsely distributed outliers exist in training data. DWCox is implemented in R in a way such that it can be easily re-trained and applied to other survival analysis problems, not restricted to prostate cancer. Please refer to the section “Data and software availability” for a download link and a citable link to the software.\n\n\nMethods\n\nDWCox assigns weights to the training data according to their local kernel density in the feature space, and then trains an adapted Cox model with those weights incorporated into the loss function, as demonstrated in Figure 1(b). DWCox can also predict the actual survival time from the predicted risk score using a linear regression.\n\nThe development of DWCox underwent two phases. It was first developed and tested during the PCDC, and then further refined after its success. In this paper, unless a step is explicitly stated to have happened during the PCDC, DWCox should be understood as the current model after the post-challenge refinements.\n\nTraining DWCox requires a training group of N patients whose clinical features X and survival outcomes Y are known. X is an N-by-M matrix, where M is the number of clinical features and each element Xij is the value of the jth clinical feature of the ith patient. Y is an N-by-2 matrix, where each row gives the survival outcome of a patient. The 1st column of Y is a vector of the last observed survival times t, and the 2nd column is a vector of binary event indicators d. A patient i with di = TRUE is known to have died at time ti. 
Conversely, a patient with di = FALSE is known to have been alive at time ti, but no information is available after ti. In the latter case, the record of that patient is said to be censored. In the data sets used in the PCDC, Y is known, while X needs to be constructed from clinical data.\n\nTo ensure fair comparison with Halabi’s model, DWCox constructed X in line with the way Halabi defined his 22 features, as summarized in Table 1 and described in detail in the Supplementary material. Note that two features Halabi’s model started with, namely the Charlson comorbidity index and the Biopsy Gleason score, were not considered by DWCox since during the PCDC the former was not available in the training data and the latter was 100% missing in the leaderboard data. (Data were split into training, leaderboard and final validation sets. Details will be described in the Experiments section). That means M = 20.\n\nAt this stage X was not complete (i.e. there were many missing elements in that matrix) due to missing information in the raw clinical records. Those missing values in X were imputed with the algorithm Multivariate Imputation by Chained Equations (MICE)5,6. The idea of MICE is to use Bayesian statistics to iteratively infer the missing values from other known and previously inferred values. Missing values in the training data were imputed with knowledge about the survival outcome, since it was argued that the outcomes could help generate less biased imputations7. The survival outcome was incorporated into the imputation in the form of the Nelson–Aalen estimator as suggested by White and Royston8. Imputation on the leaderboard and final validation data was done without using the survival outcome.\n\nDuring the PCDC, three more binary features were used to indicate the trial ID (described in the Experiments section) of each patient. 
Those features were removed in post-challenge analysis so that the performance of DWCox does not depend on prior knowledge about the data source.\n\nAfter the imputation, the N-by-M matrix X can be represented by N points scattered in an M-dimensional space 𝔽 (“feature space”). Each point represents a patient, and each coordinate is the value of one of his/her M clinical features. We assign each patient i a weight wi ∈ [0, 1] proportional to the estimated local Gaussian kernel density in the feature space. To calculate wi, we used the default settings of the function kepdf in the R package pdfCluster9. These weights were then divided by the maximum value. Thus a higher weight for a patient indicates that more patients have similar clinical features.\n\nAfter density-based weighting, we used the R package glmnet10 to maximize the weighted partial likelihood\n\nDuring the PCDC, L2 regularization was imposed on the objective function. The penalty weight was chosen to optimize the model performance (more specifically, iAUC, as defined in the next subsection) averaged over 100 repeated random sub-sampling validations on the training data. In each random sub-sampling validation experiment, 2/3 of all the training data were randomly selected to train the model with a wide range of possible penalty weights, and the iAUC was evaluated for each possible penalty weight on the remaining 1/3 of the training data. After the PCDC, the regularization was removed from DWCox since its contribution to the model performance was not obvious during the challenge and its removal sped up training.\n\nAfter model training, the risk vector r of the training patients was calculated as\n\nThe trained model was used to predict the risk rtest and the remaining lifetime ttest for a new group of patients whose clinical features Xtest could be constructed from clinical data while the outcome Ytest was not seen by the model. 
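The density-based weighting step described above can be sketched in a few lines. This is an illustrative re-implementation in Python/NumPy with a fixed Gaussian bandwidth, not the model's actual code: DWCox itself calls kepdf from the R package pdfCluster, which chooses its own bandwidth by default.

```python
import numpy as np

def density_weights(X, bandwidth=1.0):
    """Weight each patient by the Gaussian kernel density of his/her
    point in feature space, normalized so the maximum weight is 1."""
    X = np.asarray(X, dtype=float)
    # pairwise squared Euclidean distances between patients
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    # each patient's density is a sum of Gaussian kernels centered on all patients
    density = np.exp(-d2 / (2.0 * bandwidth ** 2)).sum(axis=1)
    return density / density.max()       # weights in (0, 1]

# a tight cluster of 20 similar patients plus one far-away outlier
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, size=(20, 3)),
               np.full((1, 3), 10.0)])   # last row: the outlier
w = density_weights(X)                   # the outlier gets the smallest weight
```

The outlier sits in a low-density region, so it receives a small weight and contributes little to the weighted partial likelihood, which is the mechanism behind DWCox's robustness to sparsely distributed outliers.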
The model performance was then evaluated by comparing rtest and ttest to Ytest.\n\nThe predicted risks rtest were evaluated with the integrated area under the ROC curve (iAUC) as described below. After obtaining β^ by maximizing Equation (1), we can estimate the risks of the patients rtest = Xtest β^. Then an estimated order of death ô can be constructed by sorting rtest (i.e. ôi = j where i = 1, 2, … , N and rtest,i is the jth smallest element of rtest). By comparing ô with the actual outcome Ytest, at any given time threshold ti we can calculate the area under the receiver operating characteristic curve AUCti. If we integrate AUCti with respect to ti from the 6th to the 30th month, we get the integrated area under the curve iAUC ∈ [0, 1]. The greater the iAUC, the better the predicted risks reflect the actual order of death.\n\nDWCox also gives the estimated time to death of the test set: t^test = k^ rtest + b^. In the PCDC t^test was evaluated by its root-mean-square error (RMSE) from ttest.\n\nThe open-source release of DWCox is coded in a way such that it can be easily re-trained and applied to other survival analysis problems, not restricted to prostate cancer. To re-train and apply DWCox to a new dataset, users simply need to:\n\n• Format their data into the three matrices X, Y and Xtest.\n\n• Hit enter and get some coffee.\n\n• Now they get the predicted risk rtest and time to event t^test.\n\nHere X and Xtest can have as many rows (i.e. subjects) and columns (i.e. features) as needed. They can have missing values as well. More details can be found in the documentation inside the package.\n\n\nExperiments\n\nDWCox has been developed and evaluated with data from the comparator arms of four phase III clinical trials with over 2,000 mCRPC patients in total treated with first-line docetaxel. 
Those four trials and the corresponding data providers are:\n\n• ASCENT-2 (Novacea, provided by Memorial Sloan Kettering Cancer Center)11,\n\n• MAINSAIL (Celgene)12,\n\n• VENICE (Sanofi)13, and\n\n• ENTHUSE-33 (AstraZeneca)14.\n\nDuring the PCDC those trials were referred to by their study IDs (Table 2).\n\nThe development and evaluation of DWCox began with the 2015 Prostate Cancer DREAM Challenge and continued after the challenge. The full anonymized information about the patients in trials ASCENT-2, MAINSAIL and VENICE was released to the challenge participants. As for trial ENTHUSE-33, the participants only knew the clinical records available at the beginning of the trial (\"baseline clinical records\"), while data obtained after the start of the trial, including the survival outcome, were visible only to the challenge organizers. The challenge goal was to develop models that used the baseline clinical records to predict the patients’ relative risk (sub-challenge 1a), days till death (sub-challenge 1b), and treatment discontinuation (sub-challenge 2) (Table 3).\n\nDWCox was trained on Trials ASCENT-2, MAINSAIL and VENICE (“PCDC training data”) by the authors, and evaluated on Trial ENTHUSE-33 (“PCDC validation data”) by the challenge organizers. Trial ENTHUSE-33 was further divided into a leaderboard set (157 patients) and a validation set (313 patients). The leaderboard set was used to run three leaderboard rounds. In each round, the challenge organizers randomly subsampled 80% of the patients from the leaderboard set, evaluated the participants’ models on that random sample, and returned feedback to the participants. After the 3rd leaderboard round, each participating team submitted a final model, whose performance on the validation set was used to rank the teams. Bootstrapping was performed by the challenge organizers to make sure the winning teams gave statistically significantly better predictions than the other teams and Halabi’s model.
DWCox was involved in the leaderboard rounds of sub-challenge 1a and the final scoring round of sub-challenges 1a & 1b.\n\nSimulation experiments were performed to evaluate the contribution of density-based weighting to the model performance. DWCox was trained and evaluated on 100 simulated data sets (one example is given in Figure 2) separately, each of which was designed to mimic the real challenge data to some extent, while the randomness in the data generation process ensured variation across simulations. In each simulation, three groups of patients were simulated. Each patient had 20 features and an outcome.\n\nEach point represents a patient. The shapes mark the mean of each group. (Best viewed in color).\n\nOne group (“signal group”) consisted of 1,000 patients that reflected the true correlation between the outcome and the features. The features were sampled from Gaussian distributions:\n\nWe would like to clarify a few things about Equation (4). Readers may get confused if they see an online manuscript with the same title and authors as those of Reference 15, where the minus sign of Equation (4) is outside the parenthesis. That placement is a typo, and it has been corrected in the version cited here. Although Equation (4) may not look like a Weibull distribution at first glance, the proof is straightforward and standard. The shape and scale parameters of the Weibull distribution are ν and (λe^(Xsignal,i^T β))^(−1/ν), respectively.\n\nSuch generated survival times follow a Cox model with the baseline hazard function h0(t) = λνt^(ν−1), as in Reference 15. The parameters λ, ν and β were estimated from the uncensored part of the PCDC training data as follows. First, we assumed β = 0 and fit a Weibull distribution to the distribution of tuncensored to estimate ν and λ. Then DWCox was applied to the PCDC training data to obtain β^.
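Given estimates of λ, ν and β, survival times with this Weibull baseline hazard can be drawn by inverse-transform sampling, T = (−log U / (λ e^(xᵀβ)))^(1/ν) with U ~ Uniform(0, 1), following the construction in Reference 15. In the sketch below the parameter values, feature dimension and sample size are illustrative assumptions, not the estimates obtained from the PCDC data.

```python
import numpy as np

def simulate_cox_weibull(X, beta, lam, nu, rng):
    """Draw survival times from a Cox model with Weibull baseline hazard
    h0(t) = lam * nu * t**(nu - 1) by inverse-transform sampling:
    T = (-log(U) / (lam * exp(X @ beta))) ** (1 / nu)."""
    u = rng.uniform(size=X.shape[0])
    return (-np.log(u) / (lam * np.exp(X @ beta))) ** (1.0 / nu)

# Illustrative parameters (the paper estimated lam, nu and beta from the
# uncensored PCDC training data; these values are made up)
rng = np.random.default_rng(0)
beta = np.array([0.5, -0.3, 0.1])
X = rng.normal(size=(1000, 3))
t = simulate_cox_weibull(X, beta, lam=1e-3, nu=1.2, rng=rng)

# Sanity check: a higher linear predictor (higher hazard) should give
# shorter survival times on average
lp = X @ beta
print(t[lp > 0].mean() < t[lp <= 0].mean())  # True
```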
At this stage β^ did not include β^0, a constant term that affected t^ but not iAUC, since β^0 played no role during the maximization process of Equation (1). We chose a β^0 value such that the mean of the survival times simulated with Equation (4) was close to the mean of the uncensored survival times in the PCDC training data. After getting the estimates of λ, ν and β, tsignal was simulated with Equation (4).\n\nWe then generated 1,000 more patients (“noise group”) to represent outliers, or noise, in the training data. We made the outliers more sparsely and widely distributed in the feature space than the signal group by simulating\n\nThe survival times of the noise group were simulated with a Weibull distribution independent of Xnoise:\n\nA third group of 500 patients (“validation group”) was generated in a fashion similar to that of the signal group.\n\nWe let\n\nAfter simulating the three groups of patients, we mixed the signal and noise groups together to form a training set. DWCox and a 20-feature standard Cox model were trained on this training set, and evaluated with iAUC on the validation group.\n\n\nResults\n\nDWCox was submitted to sub-challenges 1a & 1b (Table 3) of the 2015 Prostate Cancer DREAM Challenge. Sub-challenge 1a asked for predictions of the relative risks and order of death, evaluated with iAUC. Sub-challenge 1b evaluated the models using the RMSE between the predicted days to death and the actual times. While this manuscript is focused on our method, more details about other teams’ methods and performance can be found in papers from the challenge organizers and individual teams.\n\nAnalysis of the PCDC data suggests that there is considerable heterogeneity across the three training trials and the validation trial. The missing-rate profile of the 20 clinical features varies across trials (Figure 3).
The average values of the first two principal components of the 20 features of Trial ASCENT-2 are farther away from those of the validation trial, compared to those of the other two training trials (Figure 4). Leave-one-trial-out cross-validation (i.e. training on two training trials and evaluating on the left-out training trial) gives very different results when different trials are left out (Table 4).\n\nEach point represents a patient. The shapes mark the average values of each trial.\n\nThese observations suggest the following: if we consider the “true model” underlying the validation trial as the signal, it is very likely that the PCDC training data contain many outliers. Those outliers do not follow the “true model”, and thus tend to bring down the validation-set performance of models that fail to deal with the outliers properly during training. Therefore robustness against outliers is probably important for models aiming to win the PCDC.\n\nIndeed, several other winning teams of the PCDC worked hard to deal with the outliers in the training data. For example, the top performer (FIMM-UTU) of sub-challenge 1a decided to discard the entire ASCENT-2 trial, because after some manual exploration in the data space they found significant differences in clinical variables that set this trial apart from the other trials. Our team (Team Cornfield) instead used all available data and let DWCox automatically handle the outliers.\n\nDWCox obtained the best average ranking in sub-challenges 1a & 1b among about 50 models (Figure 5). On the PCDC validation data, DWCox gave an iAUC of 0.7789 and an RMSE of 194.8650 days, outperforming Halabi’s model, which gave an iAUC of 0.7581 and an RMSE of 196.6704 days. Bootstrapping showed that DWCox outperforms Halabi’s model with a Bayes Factor (BF) > 3. Note that while the other numbers in this paragraph are official results provided by the challenge organizers, the Halabi RMSE is not.
In order to get the Halabi RMSE, we implemented Halabi’s model and appended to it a linear regression step similar to the one in DWCox. After applying bootstrapping and the BF > 3 threshold against other teams’ submissions, the challenge organizers reported DWCox as a winner in sub-challenge 1b and a runner-up in sub-challenge 1a. The winner of sub-challenge 1a, FIMM-UTU, obtained an iAUC of 0.7915 and an RMSE of 201.3779 days. Their model is an ensemble of penalized Cox regressions developed with extensive manual exploration in the data space. More details about the challenge results can be found at https://www.synapse.org/#!Synapse:syn2813558/wiki/232674. Table 1 gives the regression coefficients determined by DWCox.\n\nThe six best teams of each sub-challenge are included. DWCox was submitted by the authors’ Team Cornfield.\n\nAn inverse correlation between the actual survival time t and the risk scores r was observed (Figure 6). Note that the adjusted R2 of the linear regression t^=k^r+b^ is small (0.1513), and the shape of the t vs r plot implies that there may be models better than linear regression at capturing this correlation.\n\nThe straight line is the linear regression line with slope = -234.6, intercept = 810.3 and adjusted R2 = 0.1513.\n\nIn the 100 repeated simulations (described in the Experiments section), DWCox performed better than a standard Cox model when as many as half of the training data were outliers. DWCox not only gave better average performance over the 100 experiments (Table 5, Figure 7), but also performed consistently better in each experiment (Figure 8, paired t-test p-value = 2.1 × 10^−20). The improvement in performance clearly resulted from the density-based weighting, since everything else was the same across the two models.\n\nThe boxes show the medians and inter-quartile ranges (IQR). The vertical black lines extend from the boxes by at most 1.5 IQR.
Black points represent experiments whose iAUC is more than 1.5 IQR away from the boxes.\n\nEach point is given by a simulation. The straight line has slope = 1 and intercept = 0.\n\nNote that in the simulations we used iAUC rather than RMSE to evaluate model performance. There are three reasons for this: (1) iAUC evaluates model performance on the validation data in a more comprehensive manner, while RMSE is based on individual predictions which are independent of each other; (2) DWCox’s time-to-event prediction is dependent on its predicted risks; (3) a standard Cox model does not directly give a predicted time-to-event.\n\n\nDiscussion\n\nWe propose DWCox, a density-weighted Cox model for survival analysis that is more robust against overfitting to outliers in the training data. In our simulations DWCox outperformed the standard Cox model when as many as half of the training data were noise. In the 2015 Prostate Cancer DREAM Challenge (the PCDC), DWCox obtained the best average ranking in sub-challenge 1, which was to predict the risk and survival time of prostate cancer patients from clinical data available at the beginning of trials.\n\nDWCox was one of only two models among the seven winners of the PCDC sub-challenge 1 that did not use super-learners (or ensemble methods). (The other model16 of the two was a standard Cox trained with different features. In Figure 5 the corresponding team name is M S.) This is a remarkable achievement, since super-learners usually give better results than single methods. Given that DWCox now gives results comparable to or better than ensemble methods, there are even more reasons to prefer DWCox over ensemble methods in real-world applications. During the training of ensemble methods, there often exist some empirical parameters (e.g. the number of base learners to use) that require more hyperparameter tuning, because people do not know exactly which value works best and why. In addition, some ensemble methods (e.g.
random forests) involve substantial built-in randomness and produce very complex models, and thus it is sometimes hard to interpret and understand the results they give. In contrast, the training phase of DWCox involves no empirical parameters or built-in randomness (except when the user wants DWCox to impute the missing data with MICE), and the results can be easily interpreted.\n\nDWCox’s success in the PCDC should be credited to its density-based weighting mechanism. There is inter-trial heterogeneity in the PCDC data, which implies some training trials may contribute more signal than others, while some may contain more outliers. It turned out that several top-performing methods of the PCDC recognized this problem and tried to handle it properly. DWCox achieved this by taking in all training data and automatically weighting away outliers with the local Gaussian kernel density. DWCox can be easily re-trained and applied to other data sets, not restricted to prostate cancer survival data.\n\nPerhaps the greatest limitation of DWCox also lies in its density-based weighting mechanism. This mechanism cannot weight away outliers falling inside the signal region of the feature space, or outliers that happen to cluster together in the feature space and thus have a local kernel density similar to that of the signal points. In the opposite extreme, where the data contain few outliers and follow a standard Cox model rather well, introducing weights into the partial likelihood function can make the performance worse. Therefore it is better to apply DWCox to cases where the data are expected to contain some sparsely distributed outliers.\n\n\nData and software availability\n\nThis publication is based on research using information obtained from www.projectdatasphere.org, which is maintained by Project Data Sphere, LLC.
Neither Project Data Sphere, LLC nor the owner(s) of any information from the web site have contributed to, approved or are in any way responsible for the contents of this publication.\n\nThe clinical trial data used in the PCDC, in its raw and processed format, can be accessed at: https://www.projectdatasphere.org/projectdatasphere/html/content/149?pcdc=true. Challenge documentation, including the detailed description of the Challenge design, overall results, scoring scripts, and the clinical trials data dictionary can be found at: https://www.synapse.org/ProstateCancerChallenge.\n\nAn R implementation of DWCox can be downloaded from https://github.com/JinfengXiao/DWCox. A citable snapshot of that GitHub repository has also been archived with the DOI: 10.5281/zenodo.16714317.",
"appendix": "Author contributions\n\n\n\nJX and JP designed DWCox and the simulations. JX, SW, JS, HL, DX and XR tested the performance of DWCox against other methods. JX, JS and HL drafted the manuscript, and the other authors helped revise it. JP and JH advised the team.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nResearch was sponsored in part by grant 1U54GM114838 awarded by NIGMS through funds provided by the trans-NIH Big Data to Knowledge (BD2K) initiative (http://www.bd2k.nih.gov), National Science Foundation IIS-1320617, IIS-1354329 and IIS 16-18481, and the U.S. Army Research Lab. under Cooperative Agreement No. W911NF-09-2-0053 (NSCTA). The views and conclusions contained in this document are those of the author(s) and should not be interpreted as representing the opinions of the funding agencies.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nThe authors would like to thank Sage Bionetworks, the DREAM organization, and Project Data Sphere for developing and supplying data for the Challenge.\n\n\nSupplementary material\n\nThis section gives details about the 20 predictors used by DWCox in the PCDC.\n\n• age: Categorical variable with 3 levels. 18–64 years old = 1; 65–74 years old = 2; at least 75 years old = 3.\n\n• alb: Albumin level in g/L. Continuous variable.\n\n• alp: Natural logarithm of the alkaline phosphatase level in U/L. Continuous variable.\n\n• alt: Natural logarithm of the alanine transaminase level in U/L. Continuous variable.\n\n• analgesics: Prior analgesics use. Binary variable. 1 means yes; 0 means no. Note that this is not exactly the \"opioid analgesic use\" as appeared in the baseline paper, since the latter is not contained in the challenge data set.\n\n• ast: Natural logarithm of the aspartate aminotransferase level in U/L. 
Continuous variable.\n\n• bili: Natural logarithm of the total bilirubin level in µmol/L. Continuous variable.\n\n• bmi: Natural logarithm of the body mass index in kg/m^2. Continuous variable.\n\n• ds: Disease site. Categorical variable with 3 levels. 0 means the disease sites are not at bones or viscera. 1 means the disease sites are at bones but not at viscera. 2 means at least some disease sites are at viscera.\n\n• ecogps: Eastern Cooperative Oncology Group performance status. Categorical variable with 3 levels (0, 1 and 2). The greater the value, the more severe the patient’s condition. Technically this variable should have 6 levels (0, 1, …, 5), but Halabi’s model only considers the first 3 levels. Besides, in the challenge training data there is only 1 patient whose ecogps is greater than 2 (and it is 3). Therefore DWCox sets all ecogps > 2 to 2.\n\n• hb: Hemoglobin level in g/dL. Continuous variable.\n\n• ldh: Lactate dehydrogenase level. Binary variable. 1 means the lactate dehydrogenase level is greater than 200 units/liter, which is considered the upper limit of normal (ULN)18. 0 means the opposite.\n\n• liver: Liver metastases. Binary variable. Yes = 1; No = 0.\n\n• lung: Lung metastases. Binary variable. Yes = 1; No = 0.\n\n• plt: Natural logarithm of the platelet count in 10^9/L. Continuous variable.\n\n• psa: Natural logarithm of the prostate-specific antigen level in ng/mL. Continuous variable. The reason for taking the logarithm is to make the distribution less skewed.\n\n• race: Categorical variable with 4 levels. White = 1; Asian = 2; Black = 3; Other or Hispanic = 4.\n\n• radio: Prior radiotherapy. Binary variable. Yes = 1; No = 0.\n\n• testo: Testosterone level in nmol/L. Continuous variable.\n\n• wbc: Natural logarithm of the white blood cell count in 10^9/L. Continuous variable.\n\n\nReferences\n\nSiegel RL, Miller KD, Jemal A: Cancer statistics, 2015. CA Cancer J Clin. 2015; 65(1): 5–29.
PubMed Abstract | Publisher Full Text\n\nGarcia M, Jemal A, Ward EM, et al.: Global cancer facts & figures 2007. Atlanta, GA: American Cancer Society. 2007; 1(3): 52. Reference Source\n\nCox DR: Regression models and life-tables. In Breakthroughs in Statistics. Springer, 1992; 527–541. Publisher Full Text\n\nHalabi S, Lin CY, Kelly WK, et al.: Updated prognostic model for predicting overall survival in first-line chemotherapy for patients with metastatic castration-resistant prostate cancer. J Clin Oncol. 2014; 32(7): 671–677. PubMed Abstract | Publisher Full Text | Free Full Text\n\nvan Buuren S, Boshuizen HC, Knook DL, et al.: Multiple imputation of missing blood pressure covariates in survival analysis. Stat Med. 1999; 18(6): 681–694. PubMed Abstract | Publisher Full Text\n\nvan Buuren S, Groothuis-Oudshoorn K: mice: Multivariate imputation by chained equations in R. J Stat Softw. 2011; 45(3). Publisher Full Text\n\nMoons KG, Donders RA, Stijnen T, et al.: Using the outcome for imputation of missing predictor values was preferred. J Clin Epidemiol. 2006; 59(10): 1092–1101. PubMed Abstract | Publisher Full Text\n\nWhite IR, Royston P: Imputing missing covariate values for the Cox model. Stat Med. 2009; 28(15): 1982–1998. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAzzalini A, Menardi G: Clustering via nonparametric density estimation: the R package pdfCluster. arXiv preprint arXiv:1301.6559. 2013. Reference Source\n\nSimon N, Friedman J, Hastie T, et al.: Regularization Paths for Cox's Proportional Hazards Model via Coordinate Descent. J Stat Softw. 2011; 39(5): 1–13. PubMed Abstract | Publisher Full Text | Free Full Text\n\nScher HI, Jia X, Chi K, et al.: Randomized, open-label phase III trial of docetaxel plus high-dose calcitriol versus docetaxel plus prednisone for patients with castration-resistant prostate cancer. J Clin Oncol. 2011; 29(16): 2191–2198.
PubMed Abstract | Publisher Full Text\n\nPetrylak DP, Vogelzang NJ, Budnik N, et al.: Docetaxel and prednisone with or without lenalidomide in chemotherapy-naive patients with metastatic castration-resistant prostate cancer (MAINSAIL): a randomised, double-blind, placebo-controlled phase 3 trial. Lancet Oncol. 2015; 16(4): 417–425. PubMed Abstract | Publisher Full Text\n\nTannock IF, Fizazi K, Ivanov S, et al.: Aflibercept versus placebo in combination with docetaxel and prednisone for treatment of men with metastatic castration-resistant prostate cancer (VENICE): a phase 3, double-blind randomised trial. Lancet Oncol. 2013; 14(8): 760–768. PubMed Abstract | Publisher Full Text\n\nFizazi K, Higano CS, Nelson JB, et al.: Phase III, randomized, placebo-controlled study of docetaxel in combination with zibotentan in patients with metastatic castration-resistant prostate cancer. J Clin Oncol. 2013; 31(14): 1740–1747. PubMed Abstract | Publisher Full Text\n\nBender R, Augustin T, Blettner M: Generating survival times to simulate Cox proportional hazards models. Stat Med. 2005; 24(11): 1713–1723. PubMed Abstract | Publisher Full Text\n\nShiga M: Two-step feature selection for predicting survival time of patients with metastatic castrate resistant prostate cancer [version 1; referees: awaiting peer review]. F1000Res. 2016; 5: 2678. Publisher Full Text\n\nXiao J: DWCox: A Density-Weighted Cox Model for Outlier-Robust Prediction of Prostate Cancer Survival [Data set]. Zenodo. 2016. Data Source\n\nJoseph J, Badrinath P, Basran GS, et al.: Is the pleural fluid transudate or exudate? A revisit of the diagnostic criteria. Thorax. 2001; 56(11): 867–870. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "18159",
"date": "13 Dec 2016",
"name": "Motoki Shiga",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper’s academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis paper proposed a weighted Cox proportional hazards model (DWCox) to reduce the effects of outliers. Experimental results demonstrated that DWCox outperforms the standard Cox model. The proposed method is interesting. This manuscript is well-written.\n\nMajor comments:\nTable 1 shows that the features selected by Halabi’s model (Halabi’s 8 features) and DWCox are quite different. A performance comparison of the proposed model with a Cox model using Halabi’s 8 features would be a good demonstration of the proposed method.\n\nThe performance of DWCox and a standard Cox model was compared using only simulated data. A performance comparison using real datasets by leave-one-trial-out CV, as in Table 4, is an important experiment to evaluate the proposed method.\n\nMinor comments:\np. 5 (in the above section of Eq. (2)): “sped up training” -> “speed up training”.",
"responses": [
{
"c_id": "2421",
"date": "10 Jan 2017",
"name": "Jinfeng Xiao",
"role": "Author Response",
"response": "Dear Dr. Shiga, Thank you for reviewing our manuscript! We appreciate your feedback. Here is our response to your major comments. DWCox versus Halabi's model Halabi's model is the baseline method of the Prostate Cancer DREAM Challenge (PCDC). As described in the \"Results on the PCDC data\" subsection under the \"Results\" section, the better performance of DWCox compared to Halabi's model was validated by the challenge organizers using bootstrapping. DWCox versus Cox in leave-one-trial-out cross-validation We tried both DWCox and a standard Cox in leave-one-trial-out cross-validations. The difference in iAUC is less than 1%, which is much smaller than the difference across the three leave-one-trial-out cross-validation experiments (Table 4). In this case the difference in iAUC is dominated by the inter-trial heterogeneity, and thus the contribution of density-based weighting is masked. It is also interesting that DWCox's iAUC (0.779) on the validation data set is much higher than its highest iAUC (0.685) in leave-one-trial-out cross-validation experiments. It indicates that the validation trial is better represented by the three training trials, compared to how well each training trial is represented by the other two."
}
]
},
{
"id": "20018",
"date": "13 Feb 2017",
"name": "Sebastian Pölsterl",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe authors present an interesting extension of the well-known Cox proportional hazards model for when data contain outliers. They demonstrate the advantage over the traditional Cox model on synthetic data and applied their proposed model to a real world problem in the context of the 2015 Prostate Cancer DREAM challenge.\nMajor issues Unfortunately, the authors provide little insight into the motivation for choosing a kernel density estimator to determine the sample weights. In particular, traditional kernel density estimation is only applicable to continuous random variables, whereas feature vectors comprised of clinical variables can contain continuous as well as categorical variables. It is unclear how density estimation was performed when feature vectors are a mix of continuous and categorical variables. Moreover, I strongly suggest to explicitly mention the assumption of the proposed density-weighted Cox model. The authors state that their proposed model is suitable when data “contain some sparsely distributed outliers.” A more systematic approach to thoroughly formulate this assumption would be highly appreciated.\nMinor issues\nPage 2, paragraph 2:\nReference 3, please cite the original paper Cox 1972.\n\nPage 4, paragraph 3:\nI would suggest to change reference 5 to the original work on multiple imputation by Rubin: D. B.
Rubin, Multiple imputation for nonresponse in surveys, John Wiley & Sons Inc., 1987.1\n\nPage 5, paragraph 2:\nHow were the candidate values for the L2 penalty chosen?\n\nPage 5, paragraph 3:\nIt is not clear what the coefficients \\hat{b} represent. It seems they are not associated with any features, only \\hat{k} is. Is the error e assumed to be normally distributed? If yes, such a choice might be problematic, because survival times usually follow a skewed distribution. Representing the log survival time as a linear model, as in the case of the accelerated failure time model, is usually preferred.\n\nPage 5, paragraph 5:\nPlease cite the original work on iAUC:\nH. Hung and C. T. Chiang, “Estimation methods for time-dependent AUC models with survival data,” Canadian Journal of Statistics, vol. 38, no. 1, pp. 8–26, 2010.2 H. Uno, T. Cai, L. Tian, and L. J. Wei, “Evaluating prediction rules for t-year survivors with censored regression models,” Journal of the American Statistical Association, vol. 102, pp. 527–537, 2007.3\n\nIt should be mentioned that the RMSE used in the challenge was only with respect to uncensored survival times in the test set.\n\nPage 5, extended applications:\nI would suggest to remove this short section, because it is already clear from the description in the text that the authors propose a general model that can be applied to any survival data.\n\nPage 7, paragraph 2:\nPlease cite the recently published paper summarising the Prostate Cancer DREAM challenge\n\nPage 7, Results on the PCDC data:\nThe authors stated earlier that the Halabi model is based on a Cox model and that a Cox model is not able to directly predict time to death. However, the authors mention that the Halabi model achieved an RMSE of 196.6704. How was this value obtained, if the model is not applicable for this task?
It would be helpful if the authors could add the exact Bayes factor of the proposed model.\n\nPage 9, Results on simulated data\nTo better understand the benefit of the proposed density-weighted Cox model, it might be interesting to plot the i-th weight against the i-th residual in the unweighted Cox model. I would assume that samples with high residuals are assigned a low weight, leading to an overall better prediction.\n\nGrammar The text contains several grammatical errors and convoluted formulations, which dampens the overall presentation. I strongly suggest to improve the grammar and wording. I’m only highlighting some obvious errors below.\nPage 3, paragraph 2:\nThe simplicity and interpretability of the Cox model come from the proportional hazards assumption\n\nPage 4, paragraph 5:\nEach point represents a patient whose each coordinate is the value of one of his/her M clinical features.\n\nPage 6, paragraph 1:\nThe challenge goal was to develop models\n\nPage 7, paragraph 1:\nThe shape and scale parameters of the Weibull distribution is",
"responses": []
},
{
"id": "20019",
"date": "17 Feb 2017",
"name": "Riku Klen",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nGeneral comments\nThe article introduces a density-weighted Cox model (DWCox). The model was created by Team Cornfield in the 2015 Prostate Cancer DREAM Challenge for outlier-robust prediction of survival. The article is well written and the introduced method is novel. The only major comment about the article is that the comparison of the new method and the existing methods could be more complete.\nThe authors test the DWCox method with the 2015 Prostate Cancer DREAM Challenge data and simulated data. They compare the DWCox method with the Cox model and Halabi’s model introduced in reference 4. I suggest that the authors also consider as a fourth alternative method the Smaletz method1, since it was also used in the Halabi et al. article. To make the study complete these 4 methods should be compared and the results reported for the simulated data and the 2015 Prostate Cancer DREAM Challenge. It would be interesting to see complete results in the spirit of Table 5 and Figure 7.\nTable 1 shows the 22 features ranked by DWCox. It would be interesting to know how DWCox behaves when the features with missing values (namely race, testo, ldh or alb) are omitted. Based on Table 1 these features have small weight and they might have little effect on the prediction results.\nAdditionally, it would be interesting to see how the method behaves on other data sets.
However, this general study might be out of the scope of this article.\nFurthermore, it would be more descriptive to explain how the missing values are handled in the DWCox approach on page 5.\nDetailed comments:\n- page 1, line 6 of abstract: The result will not be worse in interpretation, but the model’s interpretability will decrease. Hence the last word “result” should be substituted by “model”\n- page 3, line 18: It should be specified that the reason Cox is not appropriate for testing time dependency is due to the nature of semi-parametric models that have no assumption on the shape of the hazard function.2\n- page 3, line 30: It would be nice to have a citation to clarify the statement that DWCox was performing “better than or comparable to the best ensemble approaches”\n- page 10, line 3: the comparison represented in Figure 8 was done using a t-test. Maybe the Wilcoxon test would have been more appropriate.",
"responses": []
}
] | 1
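The reviewer's last point — preferring a Wilcoxon test over a t-test for the comparison in Figure 8 — comes down to robustness against outliers, the same motivation behind DWCox itself. A minimal, self-contained Python sketch (not the reviewer's or the authors' code; the toy data are chosen purely for illustration) shows how a single outlying pair can drag the paired t statistic toward zero while the Wilcoxon signed-rank statistic still reflects that most differences point the same way:

```python
import math

def paired_t_statistic(x, y):
    """Paired t statistic on the differences x_i - y_i."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

def wilcoxon_signed_rank(x, y):
    """Wilcoxon signed-rank statistic W+ (sum of ranks of positive
    differences). Zero differences are dropped; ties on |d| get the
    average of the ranks they span."""
    d = [a - b for a, b in zip(x, y) if a != b]
    order = sorted(range(len(d)), key=lambda i: abs(d[i]))
    ranks = [0.0] * len(d)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank over the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return sum(r for r, v in zip(ranks, d) if v > 0)

if __name__ == "__main__":
    x = [1, 2, 3, 4, 5]
    y = [0, 1, 2, 3, 10]  # one outlying pair
    print("t  =", paired_t_statistic(x, y))    # pulled toward 0 by the outlier
    print("W+ =", wilcoxon_signed_rank(x, y))  # 10 of a possible 15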
|
https://f1000research.com/articles/5-2806
|
https://f1000research.com/articles/4-86/v1
|
02 Apr 15
|
{
"type": "Software Tool Article",
"title": "MetaGenSense : A web application for analysis and visualization of high throughput sequencing metagenomic data",
"authors": [
"Damien Correia",
"Olivia Doppelt-Azeroual",
"Jean-Baptiste Denis",
"Mathias Vandenbogaert",
"Valérie Caro",
"Damien Correia",
"Jean-Baptiste Denis",
"Mathias Vandenbogaert",
"Valérie Caro"
],
"abstract": "The detection and characterization of emerging infectious agents has been a continuing public health concern. High Throughput Sequencing (HTS) or Next-Generation Sequencing (NGS) technologies have proven to be promising approaches for efficient and unbiased detection of pathogens in complex biological samples, providing access to comprehensive analyses. As NGS approaches typically yield millions of putatively representative reads per sample, efficient data management and visualization resources have become mandatory. Most usually, those resources are implemented through a dedicated Laboratory Information Management System (LIMS), solely to provide perspective regarding the available information. We developed an easily deployable web-interface, facilitating management and bioinformatics analysis of metagenomics data-samples. It was engineered to run associated and dedicated Galaxy workflows for the detection and eventually classification of pathogens. The web application allows easy interaction with existing Galaxy metagenomic workflows, facilitates the organization, exploration and aggregation of the most relevant sample-specific sequences among millions of genomic sequences, allowing them to determine their relative abundance, and associate them to the most closely related organism or pathogen. The user-friendly Django-Based interface, associates the users’ input data and its metadata through a bio-IT provided set of resources (a Galaxy instance, and both sufficient storage and grid computing power). Galaxy is used to handle and analyze the user’s input data from loading, indexing, mapping, assembly and DB-searches. Interaction between our application and Galaxy is ensured by the BioBlend library, which gives API-based access to Galaxy’s main features. Metadata about samples, runs, as well as the workflow results are stored in the LIMS. 
For metagenomic classification and exploration purposes, we show, as a proof of concept, that integration of intuitive exploratory tools, like Krona for representation of taxonomic classification, can be achieved very easily. In the trend of Galaxy, the interface enables the sharing of scientific results to fellow team members.",
"keywords": [
"High Throughput Sequencing",
"Next-Generation Sequencing",
"Laboratory Information Management System",
"Galaxy",
"Django"
],
"content": "Introduction\n\nThe detection and characterization of emerging infectious agents has been a continuing public health concern. High Throughput Sequencing (HTS) Next-Generation Sequencing (NGS) technologies have proven to be promising for unbiased detections of pathogens in complex biological samples. They are efficient and provide access to comprehensive analyses.\n\nIn most large-scale genomic (re)sequencing initiatives involving both sequencing technology, genotyping expertise and computational analyses, the ultimate goal targets analysis of the data in a reference-free context. Depending on sufficient sequencing throughput and availability of reference genomes, raw sequence reads are to be handled by de novo assembly protocols. The choice of the most appropriate assembly algorithm will both depend on the number of sequenced DNA fragments and the genome size of the targeted species. Most well-acknowledged computational bottlenecks for those short-read assemblers concern memory footprints and difficulties in correctly handling repetitive sequences. Assembly very often results in discontinuous sequence contigs and hence insufficient genome coverage. Currently, de novo assembly yields better coverage for small genomes (i.e. bacterial and/or viral species), though assembly of genomes in a metagenomics setup is nowadays considered as complicated and very challenging. Concerning species with no reference in public databases, pre-processing steps are required to increase genome coverage. For example, the use of paired-end sequence data using different insert size libraries is a well established technique to increase assembly scaffold sizes.\n\nGenotype calling from low coverage data may require extra steps of imputation, filling the gaps that remain due to lack of coverage, and results in more accurate genotypes. 
Identifying candidate haplotypes and inferring the genotype, by either “phasing” the data to known haplotypes or deriving it from external reference panels, makes it possible to better characterize missing genotypes among individuals.\n\nCurrent NGS platforms, including Illumina, Ion Torrent/Life Technologies, Pacific Biosciences and Nanopore, can generate reads 100–10,000 bases long, allowing better coverage of the genome at lower cost. However, these platforms also generate huge amounts of raw data. For example, the raw data produced by an Illumina HiSeq-2500 platform adds up to 1 TB per run. Sequencing reads are recorded as FastQ-formatted files along with the corresponding quality score for each nucleotide.\n\nIn addition to those sequence files, it has become important to also consider and store associated sample-related metadata (collection date, location, etc.). Thus, NGS projects usually produce such a huge amount of relevant sample-specific sequences that efficient data management and visualization resources have become mandatory. The challenges accompanying HTS technologies raise the following issues: (1) How do we best manage the enormous amount of sequencing data? (2) What are the most appropriate choices among the available computational methods and analysis tools? The growing amount of data can be managed through a dedicated Laboratory Information Management System (LIMS), solely to organize and provide perspective regarding the information contained. The lack of adapted interoperability among the wide spectrum of available tools was in part addressed by workflow management systems, even though using them still requires fairly advanced knowledge of the tools available at hand.\n\nIndeed, hundreds of bioinformatics tools are available today, each with specific parameters and each available either through a GUI or the command line. 
Galaxy1–3 is a scientific workflow management system that provides the means to build multi-step computational data processing, quality control and aggregation of analytic results, while additionally ensuring analysis reproducibility. In addition to a system for composing pipelines, there is a need for an adapted computational infrastructure capable of doing the processing and data storage in a scalable manner.\n\nMetaGenSense is a management and analysis bioinformatics framework that is engineered to run dedicated Galaxy workflows for the detection and, eventually, classification of pathogens. It aims to integrate the capacity for large-scale genomic analysis and technical expertise in sequencing and genotyping technology among project partners. The web application was produced in order to facilitate access to high-throughput sequencing analysis tools, acting as an information resource for the project and interacting research partners. This user-friendly interface has been designed to associate bio-IT provider resources (a local Galaxy instance, sufficient storage and grid computing power) with the input data to analyse and its metadata. The use of the available Galaxy tools is automated with MetaGenSense. Galaxy, as pipeline management software, lets users define workflows and pushes the data through those pipelines. The pipeline manager ensures that all the tools in the pipeline run successfully, typically spreading the workload over a computational cluster. MetaGenSense is used at the Pasteur Institute to do the bulk of the data processing for a number of HTS projects, and can be adapted to launch any of the software packages available in the Galaxy workflow designer interface. A dedicated LIMS (PostgreSQL-based) was developed to ensure data coherence. In more detail, the web interface design is based on the Django web framework (http://www.djangoproject.com). 
Moreover, the communication with Galaxy is ensured by the BioBlend library5, which provides a high-level interface for interacting with the Galaxy application, promoting faster interaction and facilitating the reuse and sharing of scripts.\n\n\nSoftware tool - implementation\n\nMetaGenSense is a bioinformatics application that is geared to ease scientists’ work in the management of NGS project-related data and results. MetaGenSense is built upon three major components, two of which are specific to the project: a dedicated LIMS and a Django-based web user interface. The third component is Galaxy, the bioinformatics workflow management system. In the following paragraphs, we describe the interface’s implementation and discuss how communication between the different parts takes place, in a smooth and user-friendly managing web user interface.\n\nA LIMS can be described as a software system that offers a set of key features supporting modern laboratory operations. Those systems have become mandatory to manage the quantity of metadata related to both raw data and their analysis results, obtained through bioinformatic tools. In this project, the LIMS is based on a PostgreSQL database. It was designed and structured with expert knowledge from biologists and bioinformaticians with sequencing competence, in order to answer the specific needs arising from sample management. Notably, it was also designed to store interesting and worth-sharing information obtained from the analyses, as well as information about the type of workflow that was used to perform the bioinformatics treatment. The database’s schema is available in Supplementary Figure 1. 
We provide here an excerpt of the existing tables (divided into three categories): (1) experimental data (LIBRARY_PREPARATION, SAMPLE, TECHNOLOGY, RUN, GEOGRAPHIC_LOCATION, GPS_COORDS), (2) bioinformatic metadata (RAW_DATA, FILE_INFORMATION, WORKFLOW_DATA, RUN_WORKFLOW, WORKFLOW), and (3) user and project data (PROJECT, PROJECT_SUBSCRIBERS, AUTH_USER).\n\nDjango is a high-level Python web framework. It encourages rapid development and clean, pragmatic design, and is used by many well-known websites. Moreover, the Python language (https://www.python.org/) has become a reference for scientific applications.\n\nMetaGenSense is divided into four sub-applications: 1) user_management, 2) lims, 3) workflow_remote, 4) analyse. Each has a specific function, and the task partitioning has been designed to allow independent evolution of each part according to the users’ needs.\n\n1. user_management: manages user authentication. Implementations include communication with an LDAP user-authentication database, but it can also be used as a standalone user-management database.\n\n2. lims: ensures the organization and partitioning of data according to the selected project. A project contains sample metadata and enables sharing them only with selected users. This part of the application handles sample traceability, an important component of any present-day core resource laboratory.\n\n3. workflow_remote: is in charge of the communication with Galaxy. It manages: (a) the instance connection, (b) the user histories, (c) the data from Galaxy libraries, (d) the import of data from a data library to a Galaxy user-history instance and (e) the execution of the selected Galaxy workflow. This application handles data storage and links the samples to the selected workflow. In practice, this application could access any of the BioBlend functionalities.\n\n4. analyse: deals with the workflow result files. 
The user can choose to “save” a file in order to share the results with the other users involved in the project. Large result files can be exported using the Galaxy export functionality or can be downloaded (if the result file can be dealt with through a web browser).\n\nThe following paragraphs discuss communication between MetaGenSense and Galaxy. Scientists and data managers use Galaxy to facilitate bioinformatics analysis. A large number of XML-formatted tool-configuration files have already been integrated, facilitating the execution of, e.g., a mapping tool like BWA4 through Galaxy instead of executing it on the command line.\n\nFor programming purposes, and in order to interact with Galaxy from the command line, the Galaxy team initially implemented a Galaxy API (which allowed, for example, retrieving the user list of a Galaxy instance or creating a library for a specific user). However, this project was rapidly replaced by a dedicated and specific Python library called BioBlend5. This API gives access to most Galaxy functionalities through scripts and command lines. We prototyped our instances of BioBlend and validated each task that MetaGenSense was submitting to Galaxy (Figure 1). At the time of development, specific functionalities were not fully ready to use (e.g. the Tools.run_tools function), which led us to interact with the BioBlend development team for the concomitant finishing and perfecting of the tools and accompanying API.\n\nAs mentioned earlier, the sub-application workflow_remote of the web interface uses the BioBlend functionalities described in Figure 1.\n\nEverything is integrated and automated except the management of big data. 
Indeed, MetaGenSense senses when new files are copied into the Galaxy exchange project directory, but those data need to be copied there using a UNIX terminal or a FileZilla-like solution (https://filezilla-project.org/).\n\nThe MetaGenSense project was initially implemented and validated for metagenomic analyses; most of its uses concern two prototyped workflows designed to preprocess raw fastq data, analyse it and determine the taxonomic distribution within the sample. However, any other type of workflow can be associated with the MetaGenSense application. This only requires an admin user and a workflow identifier.\n\nWe exemplify the use of MetaGenSense through the analysis of a batch of biological samples for a dedicated project. The necessary steps to obtain a running MetaGenSense instance, with management of project data and analysis using workflows, are the following:\n\n0/ Log onto MetaGenSense.\n\n1/ Create a new project, with a name, a context, a short description and (most importantly) the other persons involved in the project.\n\n2/ Start filling the LIMS database. Enter: a. the sample information, b. the library sequencing protocol, c. the run details and d. the raw data file list. The raw data will be subjected to bioinformatic analysis.\n\n3/ At this step, use a terminal (or a FileZilla-like tool) to connect to your transfer directory. Create a subdirectory named after the project, and copy the raw data into that directory. 
This protocol enables MetaGenSense to detect (“sense”) the files that will be copied into the Galaxy instance and analysed.\n\n4/ Back on the MetaGenSense GUI, click on the “Workflows” button, then on the “import new files” button to import into Galaxy the inputs that were transferred at the previous step.\n\n5/ Create a Galaxy history.\n\n6/ Select the workflow.\n\n7/ Select the workflow input(s).\n\n8/ Launch the analysis.\n\n9/ Follow the workflow status.\n\n10/ At each step, the user has three choices: result files larger than 2 GB can be exported (using the native Galaxy tools); files smaller than 2 GB can be downloaded, or saved in the LIMS, tagged as interesting and shared with other project members.\n\n11/ Visualize the results by clicking on the “Analyse” button. All workflow inputs as well as LIMS result files are visible on this tab. A Krona6 representation can easily be visualized if stored in HTML files.\n\n\nDiscussion and conclusions\n\nThe technology evolution in molecular biology, especially in NGS, has moved biology into the big-data era (encompassing data handling, computation requirements, efficient workflow design and knowledge extraction). With this trend, the challenges faced by life scientists have shifted from data acquisition to data management, processing and knowledge extraction. While many studies have recognized the big-data challenge, few systematically present approaches to tackle it. New findings in the biological sciences usually come out of multi-step data pipelines (workflows). Galaxy is one such workflow-management tool dealing with big data. However, it is still necessary to globally optimize the data flow in an overall multi-step workflow in order to eliminate unnecessary data movement and redundant computation. On the other hand, data traceability has become an inevitable requirement in a present-day laboratory setup. 
In the meantime, knowledge-embedded data and workflows are expected to be an integral part of future scientific publications.\n\nWe therefore engineered MetaGenSense, a Django-based web interface which helps biologists who are unfamiliar with the design of Galaxy workflows to quickly obtain analysis results from HTS sequencing projects. It uses Galaxy as workflow management software and the BioBlend API to remotely manage data upload, workflow execution and the analysis of results. MetaGenSense covers data processing up to the presentation of data and results in a genome-browser-compatible data format. Its main advantages encompass data handling through its incorporated LIMS, user and project handling in a cooperative context, data sharing without compromising data confidentiality, and automated workflow execution, altogether decreasing the data and analysis delivery time. MetaGenSense is available as open source from GitHub, and can be deployed very easily. Though the prototyped tool is mainly focused on metagenomic sample analysis, its modularity allows it to be easily complemented, through project-specific Galaxy workflows, for a variety of other NGS-related initiatives.\n\n\nSoftware availability\n\nhttps://github.com/pgp-pasteur-fr/MetaGenSense\n\nhttps://github.com/F1000Research/MetaGenSense\n\nDOI: 10.5281/zenodo.16510\n\nLicense: GPLv2",
"appendix": "Author contributions\n\n\n\nDC, ODA, MV, JBD and VC designed and implemented the software. DC, ODA and MV wrote the manuscript. VC supervised the project, contributed to discussion and reviewed the manuscript. All authors approved the final manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nDamien Correia and Olivia Doppelt-Azeroual were financed by the “COMMISSARIAT A L’ENERGIE ATOMIQUE ET AUX ENERGIES ALTERNATIVES” in the scope of a national anti-terrorism fight NRBC project.\n\nI confirm that the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nPathoquest, CEA/DSV/IG/Genoscope, Fabien Mareuil (part of the CIB team).\n\n\nSupplementary materials\n\n\n\n\nReferences\n\nGoecks J, Nekrutenko A, Taylor J: The Galaxy Team. Galaxy: a comprehensive approach for supporting accessible, reproducible, and transparent computational research in the life sciences. Genome Biol. 2010; 11(8): R86. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBlankenberg D, Von Kuster G, Coraor N, et al.: Galaxy: a web-based genome analysis tool for experimentalists. Curr Protoc Mol Biol. 2010; Chapter 19: Unit 19.10.1–21. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGiardine B, Riemer C, Hardison RC, et al.: Galaxy: a platform for interactive large-scale genome analysis. Genome Res. 2005; 15(10): 1451–5. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLi H, Durbin R: Fast and accurate short read alignment with Burrows-Wheeler transform. Bioinformatics. 2009; 25(14): 1754–60. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSloggett C, Goonasekera N, Afgan E: BioBlend: automating pipeline analyses within Galaxy and CloudMan. Bioinformatics. 2013; 29(13): 1685–1686. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nOndov BD, Bergman NH, Phillippy AM: Interactive metagenomic visualization in a Web browser. BMC Bioinformatics. 2011; 12: 385. PubMed Abstract | Publisher Full Text | Free Full Text"
}
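The "Sense" step in the use case above (step 3: MetaGenSense detects new raw files copied into the Galaxy exchange project directory) can be sketched in a few lines of plain Python. This is a hypothetical reconstruction, not the application's actual code: the function name `sense_new_files` and the suffix filter are illustrative, and in MetaGenSense the detected files would then be imported into Galaxy via BioBlend rather than simply listed.

```python
from pathlib import Path

# Illustrative filter; the real application may accept other raw-data formats.
RAW_SUFFIXES = {".fastq", ".fq", ".gz"}

def sense_new_files(exchange_dir, registered):
    """Return raw-data files in the project's exchange directory that are
    not yet registered in the LIMS (a sketch of the 'Sense' step)."""
    found = []
    for path in sorted(Path(exchange_dir).iterdir()):
        if (path.is_file()
                and path.suffix in RAW_SUFFIXES
                and path.name not in registered):
            found.append(path.name)
    return found
```

A caller would pass the project's exchange subdirectory and the set of file names already recorded in the RAW_DATA table, then queue whatever comes back for upload into a Galaxy data library.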
|
[
{
"id": "8901",
"date": "19 Jun 2015",
"name": "Peter Li",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nGeneral commentsThe authors have developed MetaGenSense, a web-based application for analysing metagenomics data. MetaGenSense also acts as a laboratory information management system, managing the metagenomics data and the results of their analysis by Galaxy workflows.The work done by the authors sounds promising since I am a fan of Galaxy and of the Django framework. However, I believe the paper can be much improved by providing additional details in a number of places.For example, it would be interesting to learn more about the two prototyped workflows for analysing metagenomics data which are alluded to in the \"Pre-designed Galaxy workflow\" section on page 5.The \"Case study - use example\" section could also be improved by providing screenshots of the MetaGenSense GUI which are relevant to each or some of the steps. At the moment, I have no idea what the GUI for MetaGenSense looks like since there is also no example instance of MetaGenSense available on the Web which would have been useful for reviewing this paper.In the final paragraph, it is stated that MetaGenSense can be easily deployed but looking at the source code in https://github.com/F1000Research/MetaGenSense, there appears to be a lack of documentation to enable me to do this. For example, which version of Django should I use with MetaGenSense? How do I get the MetaGenSense source code integrated with the Django framework? 
The LIMS for MetaGenSense uses PostgreSQL; how do I set up this database and get it linked to MetaGenSense? How do I provide MetaGenSense with access to a Galaxy server? I think the authors need to provide this type of information to help readers install MetaGenSense for the authors' source code to be more useful.Minor correctionsPage 3:Paragraph 4Pacific Bioscience should be Pacific Biosciences.Paragraph 7BiobBlend should be BioBlend.Page 4worflow_remote should be workflow_remotelike BWA through galaxy - should be, \"like BWA through Galaxy...\"Page 5Case study - use example: Need to be consistent by starting bullet points with capital letters.",
"responses": [
{
"c_id": "1507",
"date": "22 Aug 2016",
"name": "Olivia Doppelt-Azeroual",
"role": "Author Response",
"response": "Thank you for your review. For each of your remarks, we wrote responses in bold: For example, it would be interesting to learn more about the two prototyped workflows for analysing metagenomics data which are alluded to in the \"Pre-designed Galaxy workflow\" section on page 5. This publication aims to present an application combining as you wrote above a lims, a direct link to any Galaxy and a way to sort and manage Galaxy results. The workflow you choose to use is totally arbitrary. In essence, MGS was designed in a way that any Galaxy workflow can be plugged-in. To facilitate testing, we implemented a Virtual Machine image pre-configured to test MetaGenSense directly on a web browser. Instructions for download and use are available in the GitHub README file. However, as instructions for installation of the framework are now available, any developer can download and link MetaGenSense to his Galaxy. The \"Case study - use example\" section could also be improved by providing screenshots of the MetaGenSense GUI which are relevant to each or some of the steps. At the moment, I have no idea what the GUI for MetaGenSense looks like since there is also no example instance of MetaGenSense available on the Web which would have been useful for reviewing this paper. Yes we agree with you, we omitted to add screenshots in the first version of the article. We added a new figure that resumes all MetaGenSense steps and functionalities. It is available at: https://github.com/pgp-pasteur-fr/MetaGenSense/blob/master/doc/images/metagensense_complete.jpeg. It is now the 2nd figure of the article. Moreover all the small figures (steps) are also available in the doc directory of our GitHub repository. In the final paragraph, it is stated that MetaGenSense can be easily deployed but looking at the source code in https://github.com/F1000Research/MetaGenSense, there appears to be a lack of documentation to enable me to do this. 
For example, which version of Django should I use with MetaGenSense? How do I get the MetaGenSense source code integrated with the Django framework? The LIMS for MetaGenSense uses PostgreSQL; how do I set up this database and get it linked to MetaGenSense? How do I provide MetaGenSense with access to a Galaxy server? I think the authors need to provide this type of information to help readers install MetaGenSense for the authors' source code to be more useful. We improved the documentation on our github repository (https://github.com/pgp-pasteur-fr/MetaGenSense) with elements facilitating deployment. We completed a README file containing well-informed installation and configuration instructions. We also wrote a UserGuide working with readTheDocs documentation tool (http://metagensense.readthedocs.io). Don’t hesitate to tell us if parts of the documentation are still a bit blurry. We will adapt the manuscript with all your minor corrections and suggestions. Thank you very much for your review."
},
{
"c_id": "2204",
"date": "27 Sep 2016",
"name": "Peter Li",
"role": "Reviewer Response",
"response": "To test the authors' software, it will take over 6 hours for me to download the 2.7 GB machine image provided by the authors. I think this is due to my office internet connection being very slow and because of the fact that I am based in the Far East. Unfortunately, waiting 6 hours plus for the machine image to download is not really practical for me but this is not a fault of the authors. I had a look at the instructions to install MetaGenSense which are available from the README.md file in their GitHub repository in an attempt to manually install the software. I feel that the instructions are minimal, for example, the installation requires a requirements.txt file but I could not find one in the MetaGenSense repo. You would need to know that the contents of requirements.txt are listed at the start of the manual installation instructions. The installation instructions also rely on the user doing background reading to find out how to install and use MetaGenSense's dependencies, for example, virtualenv and a database server."
},
{
"c_id": "2301",
"date": "22 Nov 2016",
"name": "Olivia Doppelt-Azeroual",
"role": "Author Response",
"response": "Dear Peter Li, Thank you for your review and remarks. Since last August, we asked a non familiar with MetaGenSense colleague to read and install locally the application. This exercise has enabled us to clarify some points. We simplified the README file (https://github.com/pgp-pasteur-fr/MetaGenSense) and added significant details to the installation procedure part in the readthedocs documentation: As you mentioned, the virtualenv installation procedure has been added. We also clarified the part concerning the requirements, i.e. the developer needs to copy the three requirements in a requirements.txt file and use the “pip” command to install them. http://metagensense.readthedocs.io/en/latest/installation.html#requirements The part concerning the database configuration has also been a bit modified in order to be clearer. We hope the changes applied recently enable you to successfully install MetaGenSense. Best regards, Olivia Doppelt-Azeroual"
}
]
},
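The manual installation path this review thread converges on (a virtualenv, a requirements.txt holding three dependencies, then pip) can be sketched with the Python standard library alone. Everything here is an assumption-laden illustration: the package names are guesses consistent with the article (Django, BioBlend, a PostgreSQL driver), not the repository's actual requirement pins, and the pip command is returned as a string rather than executed because it needs network access.

```python
import venv
from pathlib import Path

# Illustrative only -- check the repository's README for the real three requirements.
ASSUMED_REQUIREMENTS = ["Django", "bioblend", "psycopg2"]

def prepare_environment(root):
    """Create a virtualenv and a requirements.txt under `root`, then return
    the pip command a user would run next (mirrors the documented steps)."""
    root = Path(root)
    env_dir = root / "metagensense-env"
    # with_pip=False keeps this sketch runnable offline; a real install
    # would keep pip so the returned command works inside the env.
    venv.EnvBuilder(with_pip=False).create(env_dir)
    req = root / "requirements.txt"
    req.write_text("\n".join(ASSUMED_REQUIREMENTS) + "\n")
    return f"{env_dir}/bin/pip install -r {req}"
```

Calling `prepare_environment(".")` leaves a `metagensense-env/` directory and a `requirements.txt` in place, after which the database setup and Galaxy linkage discussed in the thread would follow the project's readthedocs guide.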
{
"id": "8217",
"date": "22 Jun 2015",
"name": "Manuel Corpas",
"expertise": [],
"suggestion": "Not Approved",
"report": "Not Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nMetaGenSense is intended to help find pathogen data in metagenomic data created through next generation sequencing. Measured data including the sequencing reads and metadata are fed into a Laboratory Information Management System (LIMS). The application can fetch that information and pipe it into predefined Galaxy workflows, run them and visualise the output via a framework called KRONA.The introduction to the article is perhaps too long (almost half of the article). There are sections that are not necessarily related to the research presented here, e.g., the paragraph focusing on the assembly problem of next generation sequencing reads. It would be useful, however, that the authors give a more comprehensive introduction into metagenomics as this topic is only covered very briefly at the beginning of the introduction.The section on the software tool itself is very technical. I have trouble identifying a clear train of thought. Also it could be shorter and more precise. The case study does not really seem to be a case study on how the application can be used to actually find pathogen information in metagenomic data but is more like a step by step protocol on how to use the application. I suggest that this kind of information should be moved to the documentation and that instead a concrete biological example is demonstrated in the article. Moreover, the title says that MetaGenSense can visualise its results. However, this is not shown in the article. 
Therefore I would advise the authors to consider replacing the current figure with a figure demonstrating the results of a concrete biological use case.The discussion and conclusion seem to be a summary rather than a discussion.MetaGenSense seems to lack many of the standard requirements of a quality software productWe could not find any documentation. The README file on GitHub does not contain any information. The last update to the code was months ago, suggesting that the program is not being developed and maintained actively. We could not find any tests. We could not find any examples, demos or even screenshots of the interface.Therefore we are not convinced that MetaGenSense adheres to the journal’s quality standards.We believe that the article should be revisited and documentation, live examples and tests should be added to the software before the article should be indexed.",
"responses": [
{
"c_id": "1540",
"date": "22 Aug 2016",
"name": "Olivia Doppelt-Azeroual",
"role": "Author Response",
"response": "We tried to answer each of your comments. A second version of the publication will be available soon. Thank you for your critical review of our article. Reviewer Comment: MetaGenSense is intended to help find pathogen data in metagenomic data created through next generation sequencing. Measured data including the sequencing reads and metadata are fed into a Laboratory Information Management System (LIMS). The application can fetch that information and pipe it into predefined Galaxy workflows, run them and visualise the output via a framework called KRONA. The introduction to the article is perhaps too long (almost half of the article). There are sections that are not necessarily related to the research presented here, e.g., the paragraph focusing on the assembly problem of next generation sequencing reads. It would be useful, however, that the authors give a more comprehensive introduction into metagenomics as this topic is only covered very briefly at the beginning of the introduction. Introduction is now shorter in the second version of the publication. It was modified in a way that more emphasis is made on the importance of a true metagenomics component for HTS-type analyses. Information regarding assembly problems are indeed of secondary importance, and they were removed for the new version. Reviewer Comment: The section on the software tool itself is very technical. I have trouble identifying a clear train of thought. Also it could be shorter and more precise. The case study does not really seem to be a case study on how the application can be used to actually find pathogen information in metagenomic data but is more like a step by step protocol on how to use the application. I suggest that this kind of information should be moved to the documentation and that instead a concrete biological example is demonstrated in the article. Moreover, the title says that MetaGenSense can visualise its results. However, this is not shown in the article. 
Therefore I would advise the authors to consider replacing the current figure with a figure demonstrating the results of a concrete biological use case. Thank you for this interesting remark; the use-case part of the article was modified to be more precise. Moreover, a user can now use MetaGenSense on a concrete biological dataset. We created a virtual machine image preconfigured to directly use MetaGenSense with a web browser. A fastq file, as well as a light version of our metagenomic Galaxy workflow, is included. The user can connect to the framework, launch an analysis, and explore the framework as well as the results. Moreover, we would also like to change the title of the publication to better fit our approach, replacing the word visualisation with exploration, which is really the goal of an application like MetaGenSense. The title of the second version of the article is: \"MetaGenSense : A web application for analysis and exploration of high throughput sequencing metagenomic data.\" Reviewer Comment: The discussion and conclusion seem to be a summary rather than a discussion. MetaGenSense seems to lack many of the standard requirements of a quality software product. We could not find any documentation. The README file on GitHub does not contain any information. The MetaGenSense README file is now complete. We have also written a user guide, available directly from our GitHub repository through readthedocs: http://metagensense.readthedocs.io. Reviewer Comment: The last update to the code was months ago, suggesting that the program is not being developed and maintained actively. The code in the GitHub repository was committed just before the submission of the article. A few bug fixes and add-ons have been implemented since the previous release. Reviewer Comment: We could not find any tests. As mentioned earlier, we implemented a Virtual Machine Image containing the infrastructure to test our framework. 
It is pre-configured so that any user can start using MetaGenSense with a web browser. It is available on the Institut Pasteur server as it was too big to be uploaded to GitHub (http://webext.pasteur.fr/metagensense/metagensense.ova). Documentation about this image is available in the README file of our GitHub repository. As metagenomic analyses are time- and storage-consuming, we made available a very light version of our workflow with a small fastq file. However, it is enough to test the framework and to understand how the database, the Django framework and the related Galaxy instance work together. Reviewer Comment: We could not find any examples, demos or even screenshots of the interface. For the screenshots, we added a picture of the framework at each step of its use. It is also available at the URL: https://github.com/pgp-pasteur-fr/MetaGenSense/blob/master/doc/images/metagensense_complete.jpeg Moreover, as the screenshots are small, they are also available in our GitHub repository in the doc directory."
}
]
},
{
"id": "8902",
"date": "24 Jun 2015",
"name": "Saskia Hiltemann",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nGeneral Comments:The authors describe their application, MetaGenSense, a web-based application for analysing metagenomic data. It provides a user-friendly interface which combines a LIMS system with a Galaxy backend for computation and workflow management.The Django framework is very nice, and I think the integration of Galaxy with a LIMS system is very useful and something many readers will be interested in.However, many aspects of this application are tailored specifically to the authors' local setup. I installed parts of the application, but since no information is provided on how to install the various components (LIMS/Django/KRONA/BioBlend), and since the Galaxy server used in the code was not accessible to me, it was not fully functional, and because documentation was lacking, it was unclear to me how to proceed with the setup.To make this work more valuable to the readers, the following additions would be helpful:Installation instructions for the code on GitHub. The readme file is empty at the moment.How to install the various components (LIMS, Django UI, KRONA, BioBlend)? And how to connect the different components together? How to configure the webserver correctly (apache/nginx/other)? Which parts of the code are specific to the authors' local setup and need to be adapted when readers install their own MetaGenSense instance? A description of the Galaxy workflows used by the authors would also be very interesting, which tools are used? 
Are they available from the Galaxy tool shed? Either create a demo server with an example project or add screenshots of the application to the manuscript. The UI looks quite nice, show it to the readers. The case study section is very technical, and would be enhanced by showing the use-case in terms of a real biological example; add screenshots of a real-world analysis to the various steps in this section.\n\nMinor Edits: Capitalize the word \"Galaxy\" throughout.\n\nIn section \"Bioinformatics and HTS projects\", BiobBlend --> BioBlend",
"responses": [
{
"c_id": "1506",
"date": "22 Aug 2016",
"name": "Olivia Doppelt-Azeroual",
"role": "Author Response",
"response": "Thank you very much for your critical review. We posted the answers for your remarks underneath them in bold. To make this work more valuable to the readers, the following additions would be helpful: Installation instructions for the code on GitHub. The readme file is empty at the moment. Yes, the README file is now complete. It contains installation information. Please tell us if you feel that it gathers all useful information to install and configure MetaGenSense. Along with the version 2 of MetaGenSense publication, we built a release of the software. It is available at the URL (https://github.com/pgp-pasteur-fr/MetaGenSense/releases/tag/v1.0) How to install the various components (LIMS, Django UI, KRONA, BioBlend)? And how to connect the different components together? How to configure the webserver correctly (apache/nginx/other)? Which parts of the code are specific to the authors' local setup and need to be adapted when readers install their own MetaGenSense instance? Concerning the LIMS, Django, BioBlend, everything is in the application itself so the connection between the components is natively implemented. For the apache, it is directly linked to Django which is deployed on an apache server. It is very well documented on this url: https://docs.djangoproject.com/en/1.8/howto/deployment/ For KRONA, the javascript which enables the taxonomy distribution exploration is actually generated by a tool, installed in Galaxy. At the Institut Pasteur, it is part of an in-house package which gathers several tools for taxonomy analyses. Those tools are available on GitHub now (it was not a year ago) at the URL: https://github.com/C3BI-pasteur-fr/taxo_pack. To test those tools, we implemented a Virtual Machine image pre-configured to test MetaGenSense directly on a web browser. Instructions are also available in the GitHub README file. For your last question, please look at the “set the settings” part of the README file. 
A description of the Galaxy workflows used by the authors would also be very interesting: which tools are used? Are they available from the Galaxy tool shed? The Galaxy instance in the Virtual Machine includes a light version of our metagenomic analysis workflow. A small fastq file is also included to test it. Either create a demo server with an example project or add screenshots of the application to the manuscript. The UI looks quite nice, show it to the readers. Thank you for that. Yes, another reviewer suggested that we put screenshots of the UI. We added a figure that summarizes all the windows and their use at each step. The figure is available at this URL: https://github.com/pgp-pasteur-fr/MetaGenSense/blob/master/doc/images/metagensense_complete.jpeg as well as each of the steps (in larger pictures). This figure is now added to the publication. The case study section is very technical, and would be enhanced by showing the use-case in terms of a real biological example; add screenshots of a real-world analysis to the various steps in this section. Please tell us if the figure summarizes the various steps well. For the minor remarks, they are all included in the second version of the publication. Thank you for your critical review of our article."
}
]
}
] | 1
|
https://f1000research.com/articles/4-86
|
https://f1000research.com/articles/5-2803/v1
|
30 Nov 16
|
{
"type": "Research Article",
"title": "The impact of wearing single vision soft contact lenses on the peripheral refractive error",
"authors": [
"Kareem Allinjawi",
"Sharanjeet-Kaur Sharanjeet-Kaur",
"Saadah Mohamed Akhir",
"Haliza Abdul Mutalib",
"Kareem Allinjawi",
"Saadah Mohamed Akhir",
"Haliza Abdul Mutalib"
],
"abstract": "Aim: The purpose of this study was to determine the changes in the relative peripheral refractive error produced by soft single vision contact lenses in myopic schoolchildren. Methods: 27 myopic schoolchildren aged between 13 to 15 years were included in this study. The measurements of central and peripheral refraction were made only on the right eye using a Grand-Seiko WR-5100K open-field autorefractometer without contact lens (WL), and with wearing single vision contact lens (SVCL). Refractive power was measured at center and horizontal eccentricity between 35° temporal to 35° nasal visual field (in 5° steps). Results: SVCL showed an increase in peripheral hyperopic defocus at the nasal and temporal visual field compare with baseline, but this change was not statistically significant (p=0.129). Conclusion: Wearing single vision soft contact lenses increases the relative peripheral hyperopic defocus in myopic schoolchildren.",
"keywords": [
"Myopia",
"hyperopic defocus",
"peripheral retina",
"soft contact lenses"
],
"content": "Introduction\n\nMyopia is the most common type of refractive errors1. It is considered a global health problem. Studies have shown that the myopic eye has a more prolated retinal shape compared to emmetropes2–4. Soft contact lenses are commonly used to correct refractive errors especially in young adults. Using these lenses has become the most widely affordable correction of myopia with an extensive range of power and designs available, and such lenses are frequently worn by myopic children and adults. The contact lens usually affords a wider visual field, better life-style, and better appearance compare with spectacles. However, the incidence rate of microbial keratitis infection ranges from 2.2 to 4.1/10,000 per year for users of ‘daily-wear soft contact lenses, and this rate increases for users of extended-wear soft contact lenses from 13.3 to 20.9/10,000 per year5.\n\nAlthough contact lenses are prescribed to correct the central vision, using them has no effects on the blurred image at the peripheral visual field, which eventually could influence the axial growth of the eyeball. The myopic eye typically demonstrates peripheral retinal hyperopic defocus which results in the eye growing axially backward in order to overcome the blur at the periphery. Studies conducted by Smith and colleagues in monkeys have shown that not only the fovea, but also the peripheral retina, is capable of regulating the emmetropization process6–8. This shows that the peripheral retina is important in determining the ocular development and refractive error. Evidence suggests that hyperopic defocus is associated with progression of myopia in humans9,10. Lin et al. (2010) reported that hyperopic defocus worsens with higher degree of myopia and eccentricity11.\n\nThe impacts of single vision soft contact lenses (SVCL) on the peripheral refraction profile is still in debate. 
A recent study reported a reduction of relative peripheral hyperopic defocus with the use of Acuvue 2 SVCL by Johnson & Johnson12. However, another study by Kang et al. (2012)13, which recruited 34 young adults aged between 18 and 29 years, found an increase in relative peripheral hyperopic defocus when full-correction Proclear SVCL by CooperVision were compared with the naked eye. Since there is evidence of the potential impact of peripheral refraction defocus on the progression of myopia, this study set out to determine the changes in relative peripheral refractive error (RPRE) produced by soft single vision contact lenses in myopic schoolchildren. To date, no study has evaluated the impact of SVCL wearing on the retinal profile in myopic children.\n\n\nMethods\n\nA total of 27 myopic Malaysian schoolchildren (24 females, 3 males) aged between 13 and 15 years were recruited for this cross-sectional study. The study was conducted at the Universiti Kebangsaan Malaysia (UKM) Optometry Clinic and Vision Science Lab. Written informed consent was obtained before enrolment into the study. This research was approved by the Ethics Committee of Universiti Kebangsaan Malaysia (UKM 1.5.3.5/244/NN-144-2013) and followed the tenets of the Declaration of Helsinki for using human subjects. The purpose and procedure of the study were explained to all participants and their parents.\n\nChildren eligible for this study were required to have 6/9 visual acuity or better with best correction, to have been myopic for more than 6 months, to have a spherical refractive error component between -3.00 and -6.00 D under non-cycloplegic refraction and astigmatism of not more than -1.00 D, and to be in normal ocular and systemic health. 
Children with manifested strabismus, amblyopia, any ocular conditions associated with myopia, a history of bifocal or progressive spectacle wear, orthokeratology contact lens wear, or current soft contact lens wear were excluded from participation in this study.\n\nA comprehensive ocular examination, which included fundus evaluation, anterior segment assessment, and A-scan ultrasound, was conducted by an experienced optometrist to select the candidates. An ultrasound A-scan (Tomey AL-2000) was used to measure axial length using a handheld probe. The final outcome was calculated as the mean of five measurements.\n\nThe spherical equivalent refractive error (M) for each subject was determined using non-cycloplegic objective and subjective refraction. Central and peripheral refraction were measured using an open-view autorefractometer Grand-Seiko WR-5100K (Grand Seiko Co., Ltd., Hiroshima, Japan). The examination room illumination was dimmed (mean of three measurements: 9.91 ± 1.73 lux, measured using a Topcon luxmeter) to obtain a pupil size large enough to measure the peripheral retina without using dilatation drops. Measurements were obtained first without correction lenses (WL), then repeated with a single vision soft contact lens (SVCL). The subjects were instructed to view fixation targets (green light laser) located at 4 meters, arranged horizontally in positions corresponding to eccentricities from 35º temporal to 35º nasal, in 5º steps. The straight-ahead viewing technique was used in this study, where the subjects rotated their eyes to view a series of fixation targets. Five refraction measurements were taken at each target fixation for the right eye only, while the left eye was occluded. For statistical analysis, the sphero-cylindrical refractive error measurements were converted into vector components of refraction M, J0, J45 using the equations recommended by Thibos et al. 
(1997)14, according to Fourier analysis:\n\n\n\nM = sph + (cyl/2),\n\nJ0 = (-cyl/2) cos (2 α),\n\nJ45 = (-cyl/2) sin (2 α),\n\nwhere sph, cyl, and (α) represent sphere, cylinder, and axis, respectively. The relative peripheral refractive error (RPRE) was calculated as the difference between eccentric peripheral refraction and central refraction (the eccentricity point minus the centre value).\n\n\nContact lens design and materials\n\nAll subjects were fitted with single vision contact lenses to their right eyes. Lens powers fully corrected the central refractive error. The lens used in this study was ‘2 week Pure’ by SEED Co. Ltd, Japan. It is a spherical biweekly disposable soft contact lens made of the zwitterionic material SIB (FDA Group IV), 58% water content, with a diameter of 14.2 mm and a base curve of 8.6 mm. The SIB is SEED's original material with superior biocompatibility; by containing both positive and negative ions, it is similar in structure to proteins, which are basic constituents of the human body.\n\n\nStatistical analysis\n\nThe statistical analysis software (IBM SPSS version 20, SPSS Inc., IL, USA) for Windows was used to evaluate the data. Only data from the right eye were analysed. Normality of the data distribution was tested using the Shapiro-Wilk test. A paired t-test was used to compare the baseline (without correction) and single vision contact lens at the different eccentricities. The differences were considered statistically significant when the p value was lower than 0.05.\n\n\nResults\n\nA total of 27 myopic schoolchildren with a mean age of 14.18 ± 0.88 years (range: 13 years to 15 years) participated in this study, with a mean axial length of 24.72 ± 0.92 mm (range: 23.51 mm to 26.39 mm). Table 1 illustrates the mean spherical equivalent value and the peripheral refraction along the horizontal visual field. 
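The power-vector conversion and the RPRE definition above can be sketched in a few lines of Python (a minimal illustration of the Thibos et al. equations; the function names are ours, not from the paper):

```python
import math

def power_vectors(sph, cyl, axis_deg):
    """Convert a sphero-cylindrical refraction (sphere, cylinder, axis in
    degrees) into Fourier vector components M, J0, J45 (Thibos et al., 1997)."""
    a = math.radians(axis_deg)
    m = sph + cyl / 2.0                    # spherical equivalent
    j0 = (-cyl / 2.0) * math.cos(2 * a)    # horizontal/vertical astigmatism
    j45 = (-cyl / 2.0) * math.sin(2 * a)   # oblique astigmatism
    return m, j0, j45

def rpre(peripheral_m, central_m):
    """Relative peripheral refractive error: eccentric M minus central M.
    Positive values indicate relative peripheral hyperopic defocus."""
    return peripheral_m - central_m
```

For example, power_vectors(-4.00, -1.00, 180) gives M = -4.50 D, J0 = +0.50 D and J45 ≈ 0 D; an eccentric M of -3.50 D against a central M of -4.50 D then yields an RPRE of +1.00 D, i.e. relative peripheral hyperopic defocus.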
The mean central refractive error for the baseline and single vision contact lens was -4.39±0.95 D and -0.22±0.22 D, respectively. The minus value decreased farther off-axis, indicating a hyperopic shift at the peripheral retina.\n\nValues are expressed in dioptres (D). N is nasal visual field; T is temporal visual field; C is centre.\n\nCompared with the baseline (without contact lens), the ‘2 week Pure’ single vision contact lens from SEED® caused an increase in relative peripheral hyperopia from 10º and beyond in the nasal visual field (VF), and from 20º and beyond in the temporal VF. Table 2 illustrates the mean and standard deviation for the relative peripheral refractive error (RPRE) for the baseline and when using SVCL, as well as the P values from paired t-tests comparing the centre refraction with each off-axis measurement. The baseline showed a statistically significant difference between centre refractive error and 30º and 35º in nasal VF (p=0.001), and from 25º and beyond in temporal VF (p<0.05). However, statistically significant differences started from 20º and beyond in nasal and temporal VF when using the single vision contact lens (p<0.05). Moreover, J0 and J45 showed no statistically significant difference between the centre refraction and any eccentricity point at baseline or with SVCL, except J45 at 5º nasal VF with SVCL (p=0.026).\n\nLegend: Values are expressed in diopters (D). N is nasal visual field; T is temporal visual field; C is center. Bold indicates statistically significant power difference from central point (95% confidence).\n\nFigure 1 illustrates the RPRE without contact lenses and with SVCL for the spherical equivalent value M, horizontal astigmatism component J0, and oblique astigmatism component J45. The hyperopic defocus is greater at the nasal and temporal visual field in the spherical equivalent values M graph. 
However, the J0 and J45 graphs show flat curves from the centre toward both peripheral sides, which indicates no change in the relative peripheral refraction at baseline or with SVCL.\n\nThe spherical equivalent value M, horizontal astigmatism component J0, and oblique astigmatism component J45 at baseline and with a single vision contact lens (SVCL).\n\nAlthough the mean spherical equivalent value M was greater when using a single vision soft contact lens (SVCL) compared with no correction (WL) at all eccentricities, the paired sample t-test used to compare the mean spherical equivalent M between baseline and SVCL showed no statistically significant difference at any eccentricity point (p>0.05) except at 25º temporal VF (p=0.013) and 20º nasal VF (p=0.004). The horizontal astigmatism component J0 and oblique astigmatism component J45 showed no statistically significant difference at any eccentricity point between baseline and SVCL.\n\n\nDiscussion\n\nWith the extensive range of powers and materials of single vision soft contact lenses available, these lenses have become one of the most popular myopia correction modes widely used by young adults. In this study, we evaluated the relative peripheral refractive error along the horizontal meridian. The myopic children who participated in this research were found to have relative hyperopic defocus in the peripheral retina. However, full central myopia correction with SVCL was found to cause a greater relative hyperopic shift at all eccentricities along the horizontal meridian compared with baseline (WL). The results are in agreement with a study by Kang et al. (2012)13, who found an increase of relative hyperopic defocus for young adults wearing Proclear soft contact lenses when compared with the peripheral refraction measurement of the eye without correction.\n\nFew studies have attempted to observe myopia progression when SVCL are used for full correction. 
In the early 1970s, several studies reported an increase in the rate of myopia progression among soft contact lens wearers15,16. However, a recent study found a decrease in progression rate17, while Andreo (1990)18 reported no difference in myopia progression between daily soft contact lens wearers and spectacle wearers after one year of study. A three-year randomized clinical study reported no difference in myopia progression between SVCL and spectacle lens wearers among children aged between 11 and 14 years19. Moreover, Walline et al. (2008)20 conducted a study of 247 soft contact lens and 237 spectacle wearers aged between 8 and 11 years. The study confirmed that SVCL has no significant effect on axial length or corneal curvature progression compared with the spectacles group.\n\nFulk et al. (2003)21 found, in a 1-year study, a threefold higher rate of myopia progression (-0.75D/year) in children who chose to switch from spectacle lenses to SVCL, compared with children who continued to wear spectacles (average progression -0.23D/year). Moreover, a recent non-randomized longitudinal study over 2 years reported a statistically, but not clinically, significant increase in myopia progression for children who chose to switch from spectacles to soft contact lenses (-0.52±0.46 D) compared with children who remained in spectacles (-0.25±0.39 D)22. This higher rate of progression found with SVCL wearers is in agreement with our findings of an increased relative peripheral retinal hyperopic defocus when using SVCL compared with baseline (without contact lens). Taking the results of these studies together, we expect that when children wear SVCL for prolonged periods of time, the induced higher relative hyperopic defocus will speed up the progression of myopia.\n\nIt is believed that hyperopic defocus in the peripheral retina stimulates myopia progression and axial elongation23. Wagner et al. 
(2013)24 demonstrated optical power profile measurements from the central to the peripheral retina for six popular single vision commercial soft contact lenses. They found that most of the standard single vision soft contact lenses for myopes have more minus power toward the peripheral zone. Therefore, prescribing contact lenses which increase the minus power at the periphery would explain the results of previous studies which showed an increase of myopia progression in SVCL wearers compared with spectacle lens wearers. This might also explain the increase of relative peripheral hyperopic refraction in myopic children we observed in this study compared with the baseline (without correction), which could be due to the increase of minus power at the periphery of the ‘2 week Pure’ single vision soft contact lens.\n\nA study by Blacker et al. (2009)25 in the United States compared the progression of myopia in children wearing single vision soft contact lenses of two different material groups. The study reported myopia progression of +0.02 D for those who used silicone hydrogel lenses compared with -0.41 D for those wearing low-Dk/t hydrogel contact lenses over 3 years. However, the study was non-randomized and the sample sizes were not matched between the groups (54 patients wearing hydrogel and 230 patients wearing silicone hydrogel contact lenses), and the mean age was 38±11 years for silicone hydrogel contact lens wearers compared with 23±12 years for low-Dk/t contact lens wearers. A previous randomized study on 92 adult subjects reported myopia progression of +0.18±0.33D in the silicone hydrogel group compared with -0.23±0.36D in the low-Dk/t hydrogel group after 6 months of extended wear26. The authors hypothesized that pressure related to contact lens wearing has a direct impact on the redistribution of corneal tissue when using high-Dk/t silicone hydrogel soft contact lenses. 
Moreover, low-Dk/t hydrogel material could lead to hypoxia-associated corneal thinning, which could temporarily influence the results. In this research we used a hydrogel single vision contact lens made of the zwitterionic material SIB with high water content.\n\n\nConclusions\n\nThis study demonstrates that wearing single vision soft contact lenses increases the relative peripheral hyperopic defocus in myopic schoolchildren. Although the higher values found with SVCL compared with the naked eye were not statistically or clinically significant, it is possible that wearing soft contact lenses in childhood for a prolonged period might speed up the progression rate of myopia. However, a longitudinal study using the ‘2 week Pure’ single vision contact lens from SEED® is needed to evaluate the impact of wearing this contact lens on myopia progression in children.\n\n\nData availability\n\nF1000Research: Dataset 1. Spherical equivalent refractive error M, 10.5256/f1000research.10080.d14397827\n\nF1000Research: Dataset 2. Horizontal astigmatism components J0, 10.5256/f1000research.10080.d14397928\n\nF1000Research: Dataset 3. Oblique astigmatism components J45, 10.5256/f1000research.10080.d14398029\n\nF1000Research: Dataset 4. Relative peripheral refractive error M, 10.5256/f1000research.10080.d14398130\n\nF1000Research: Dataset 5. Relative peripheral horizontal astigmatism J0, 10.5256/f1000research.10080.d14398231\n\nF1000Research: Dataset 6. Relative peripheral oblique astigmatism J45, 10.5256/f1000research.10080.d14398332",
"appendix": "Author contributions\n\n\n\nSK = designed the experiment and conceived the study. HAM: Data analysis.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis study was supported by a grant (to SK) from Universiti Kebangsaan Malaysia (DPK-2014-002).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe thank SEED Co. Japan for supplying the contact lenses and solutions used in this study.\n\n\nReferences\n\nPan CW, Ramamurthy D, Saw SM: Worldwide prevalence and risk factors for myopia. Ophthalmic Physiol Opt. 2012; 32(1): 3–16. PubMed Abstract | Publisher Full Text\n\nMutti DO, Sholtz RI, Friedman NE, et al.: Peripheral refraction and ocular shape in children. Invest Ophthalmol Vis Sci. 2000; 41(5): 1022–1030. PubMed Abstract\n\nAtchison DA, Jones CE, Schmid KL, et al.: Eye shape in emmetropia and myopia. Invest Ophthalmol Vis Sci. 2004; 45(10): 3380–3386. PubMed Abstract | Publisher Full Text\n\nAtchison DA, Pritchard N, Schmid KL, et al.: Shape of the retinal surface in emmetropia and myopia. Invest Ophthalmol Vis Sci. 2005; 46(8): 2698–2707. PubMed Abstract | Publisher Full Text\n\nLiesegang TJ: Contact lens-related microbial keratitis: Part I: Epidemiology. Cornea. 1997; 16(2): 125–31. PubMed Abstract\n\nSmith EL 3rd, Kee CS, Ramamirtham R, et al.: Peripheral vision can influence eye growth and refractive development in infant monkeys. Invest Ophthalmol Vis Sci. 2005; 46(11): 3965–3972. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSmith EL 3rd, Ramamirtham R, Qiao-Grider Y, et al.: Effects of foveal ablation on emmetropization and form-deprivation myopia. Invest Ophthalmol Vis Sci. 2007; 48(9): 3914–3922. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSmith EL 3rd, Hung LF, Huang J: Relative peripheral hyperopic defocus alters central refractive development in infant monkeys. 
Vision Res. 2009; 49(19): 2386–2392.\n\nHoogerheide J, Rempt F, Hoogenboom WP: Acquired myopia in young pilots. Ophthalmologica. 1971; 163(4): 209–215.\n\nSchmid GF: Association between retinal steepness and central myopic shift in children. Optom Vis Sci. 2011; 88(6): 684–690.\n\nLin Z, Martinez A, Chen X, et al.: Peripheral defocus with single-vision spectacle lenses in myopic children. Optom Vis Sci. 2010; 87(1): 4–9.\n\nShen J, Clark CA, Soni PS, et al.: Peripheral refraction with and without contact lens correction. Optom Vis Sci. 2010; 87(9): 642–655.\n\nKang P, Fan Y, Oh K, et al.: Effect of single vision soft contact lenses on peripheral refraction. Optom Vis Sci. 2012; 89(7): 1014–1021.\n\nThibos LN, Wheeler W, Horner D: Power vectors: An application of Fourier analysis to the description and statistical analysis of refractive error. Optom Vis Sci. 1997; 74(6): 367–375.\n\nHarris MG, Sarver MD, Polse KA: Corneal curvature and refractive error changes associated with wearing hydrogel contact lenses. Am J Optom Physiol Opt. 1975; 52(5): 313–319.\n\nBarnett WA, Rengstorff RH: Adaptation to hydrogel contact lenses: variations in myopia and corneal curvature measurements. J Am Optom Assoc. 1977; 48(3): 363–366.\n\nLazon de la Jara P, Sankaridurg P, Ho A, et al.: A silicone hydrogel contact lens produced less myopia progression than single vision spectacles in Chinese children over a 6-month period. Invest Ophthalmol Vis Sci. 2010; 51(13): E-Abstract 2198.\n\nAndreo LK: Long-term effects of hydrophilic contact lenses on myopia. Ann Ophthalmol. 1990; 22(6): 224–227, 229.\n\nHorner DG, Soni PS, Salmon TO, et al.: Myopia progression in adolescent wearers of soft contact lenses and spectacles. Optom Vis Sci. 1999; 76(7): 474–479.\n\nWalline JJ, Jones LA, Sinnott L, et al.: A randomized trial of the effect of soft contact lenses on myopia progression in children. Invest Ophthalmol Vis Sci. 2008; 49(11): 4702–4706.\n\nFulk GW, Cyert LA, Parker DE, et al.: The effect of changing from glasses to soft contact lenses on myopia progression in adolescents. Ophthalmic Physiol Opt. 2003; 23(1): 71–77.\n\nMarsh-Tootle WL, Dong LM, Hyman L, et al.: Myopia progression in children wearing spectacles vs. switching to contact lenses. Optom Vis Sci. 2009; 86(6): 741–747.\n\nSmith EL 3rd, Hung LF: The role of optical defocus in regulating refractive development in infant monkeys. Vision Res. 1999; 39(8): 1415–1435.\n\nWagner S: Power profiles of single vision soft contact lenses. Poster 52, British Contact Lens Association Conference, 6–9 June 2013; Manchester, UK.\n\nBlacker A, Mitchell GL, Bullimore MA, et al.: Myopia progression during three years of soft contact lens wear. Optom Vis Sci. 2009; 86(10): 1150–1153.\n\nJalbert I, Strotton S, Naduvilath T, et al.: Changes in myopia with low-Dk hydrogel and high-Dk silicone hydrogel extended wear. Optom Vis Sci. 2004; 81(8): 591–596.\n\nAllinjawi K, Kaur S, Akhir SM, et al.: Dataset 1 in: The impact of wearing single vision soft contact lenses on the peripheral refractive error. F1000Research. 2016.\n\nAllinjawi K, Kaur S, Akhir SM, et al.: Dataset 2 in: The impact of wearing single vision soft contact lenses on the peripheral refractive error. F1000Research. 2016.\n\nAllinjawi K, Kaur S, Akhir SM, et al.: Dataset 3 in: The impact of wearing single vision soft contact lenses on the peripheral refractive error. F1000Research. 2016.\n\nAllinjawi K, Kaur S, Akhir SM, et al.: Dataset 4 in: The impact of wearing single vision soft contact lenses on the peripheral refractive error. F1000Research. 2016.\n\nAllinjawi K, Kaur S, Akhir SM, et al.: Dataset 5 in: The impact of wearing single vision soft contact lenses on the peripheral refractive error. F1000Research. 2016.\n\nAllinjawi K, Kaur S, Akhir SM, et al.: Dataset 6 in: The impact of wearing single vision soft contact lenses on the peripheral refractive error. F1000Research. 2016."
}
|
[
{
"id": "20014",
"date": "08 Feb 2017",
"name": "Li-Fang Hung",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nI would suggest the authors change the topic and revise the conclusion (in text and abstract) of the manuscript. Due to the differences in contact lens power profiles made by different contact lens manufacturers, the peripheral refractions would be influenced when the subjects wear SVCLs made by different manufacturers. This study only used one particular brand of contact lenses and hence cannot conclude that all SVCLs would have the same impact on the peripheral refraction.\n\nFirst line of the second paragraph in the introduction: Although contact lenses focus on correcting central vision, the peripheral image, in most conditions, will still be improved, just not as accurate (or adequate) as that of central vision. Please revise the sentence.\n\nLast sentence in the introduction: Currently, there are more published studies that evaluate SVCL wear and the retinal profile in myopic children. Please revise the last sentence and add references.\n\nFor the statistical analysis: Comparing different eccentricities involves a multiple comparison problem; it is preferable to use multiple comparison corrections.",
"responses": []
},
{
"id": "21458",
"date": "15 May 2017",
"name": "Pauline Kang",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThis study evaluated changes in peripheral refraction induced by ‘2 week pure’ soft contact lenses in a group of children. There are issues which need to be addressed as described below:\nGeneral comments:\nWhy do authors expect there to be a difference in peripheral refraction changes to published reports on adults, if peripheral refraction has been shown to be relatively similar between myopic children and adults?\nFurthermore, these results cannot be generalised to all contact lenses and are specific to the included lens design.\nIf results are not statistically significant, authors should not state that SVCLs increased relative hyperopic defocus.\nThe discussion details many studies exploring differences in myopia progression between SVCLs, spectacles, and between different materials of SVCLs. Instead, authors should compare peripheral refraction profiles to previous publications in children.\nLimitations of instrumentation have not been discussed – authors only measured peripheral refraction along the horizontal meridian and it has been shown that the vertical meridian tends to demonstrate myopic defocus in myopic individuals. 
Furthermore, the autorefractor samples within a small arc, which may mask subtle changes; this may be a possible reason for the non-significant changes reported in this study.\nAuthors need to be careful in statements regarding peripheral vision/refraction and myopia/emmetropization as these are hypotheses or proposals that have not been fully validated.\nMinor comments:\nAbstract:\nConclusion: Results are not statistically significant, therefore authors should not state that SVCLs increased relative hyperopic defocus.\nIntroduction - first paragraph:\nWhy is myopia considered a global health problem?\n\nProlate, not prolated\n\nReplace affordable with a more appropriate word\nIntroduction – second paragraph:\nNumerous studies have characterised changes in peripheral vision or defocus with various single vision and multifocal soft contact lenses.\n\nPeripheral hyperopic defocus is believed to cause axial length elongation\n\nMore recent evidence has suggested that peripheral hyperopic defocus may be a consequence rather than a cause of myopia development.\nMethods:\nWere p values adjusted to take into account multiple comparisons?\n\nHow long were contact lenses worn before measurements were taken?\n\nDid authors note any decentration in their subjects, as this may influence results?\nResults – first paragraph:\nRemove the last sentence\nFigure keys:\nbaseline, not base\nTable 1, 2 and Figure 1:\nI am surprised by such low J0 values in the periphery.\nDiscussion – first paragraph:\nThe results were not statistically significant, therefore authors should not state that the contact lenses induced any changes in relative peripheral refraction.\nFourth paragraph:\nThis is true if peripheral defocus is to some extent responsible or involved in myopia development. This is currently still under debate.\n\nIs the work clearly and accurately presented and does it cite the current literature? 
Partly\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nPartly\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? No",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2803
|
https://f1000research.com/articles/5-2802/v1
|
30 Nov 16
|
{
"type": "Research Article",
"title": "Approaches to R education in Canadian universities",
"authors": [
"Michael A. Carson",
"Nathan Basiliko"
],
"abstract": "Introduction: The R language is a powerful tool used in a wide array of research disciplines and owes a large amount of its success to its open source and adaptable nature. The popularity of R has grown rapidly over the past two decades and the number of users and packages is increasing at a near exponential rate. This rapid growth has prompted a number of formal and informal online and text resources, the volume of which is beginning to present challenges to novices learning R. Students are often first exposed to R in upper division undergraduate classes or during their graduate studies. The way R is presented likely has consequences for the fundamental understanding of the program and language itself; user comprehension of R may be better when the language itself is learned first and analyses follow, compared to someone who is learning another subject (e.g. statistics) while using R for the first time. Consequently, an understanding of the approaches to R education is critical. Methods: To establish how students are exposed to R, we used a survey to evaluate its current use in Canadian university courses, including the context in which R is presented and the types of uses of R in the classroom. Additionally, we looked at the reasons professors either do or don’t use/teach R. Results: We found that R is used in a broad range of course disciplines beyond statistics (e.g. ecology) and just over one half of Canadian universities have at least one course that uses R. Discussion and Conclusions: Developing programming-literate students is of utmost importance and our hope is that this benchmark study will influence how post-secondary educators, as well as other programmers, approach R, specifically when developing educational and supplemental content in online, text, and package-specific formats, aiding in students’ comprehension of the R language.",
"keywords": [
"R",
"RStudio",
"open source",
"higher education",
"statistics",
"data analysis",
"programming"
],
"content": "Introduction\n\nThe R language was developed in the early 1990s by Ross Ihaka and Robert Gentleman in an attempt to write a statistical computing language that combined desirable aspects of two other languages, Scheme1 and S2. For all non-developer user purposes, R is an interpreted object-oriented language that relies heavily on packages, which contain functions that users apply to their data (see Ihaka and Gentleman, 19963 for a more thorough explanation of the details and thought process behind the development of R). It could be argued that the success of R was by luck or by design, but the choice to target usage at statisticians meant that it had a reasonably large and dedicated user base from its inception, and subsequently, it has gained attention across academic and professional disciplines4. In a general sense, the concept of user-developed packages is the reason R has gained a lot of ground over other statistical software, as the broader community is given the tools and freedom to write specific code for their disciplines and research questions, which is formatted into functions and grouped into a package. These packages are then vetted by the R Core Team and made available through the CRAN repository5. This flexibility and R’s social organization have led to a rapid growth of R use and the R community, which is reflected in a number of areas, including the expansion of the Core Team, an exponential increase in the number of packages in CRAN (ca. 100 in 2001 vs. ca. 7,000 in 2016), the rise of email list traffic6, the number of downloads per year, and general R activity7. Additionally, based on download history from CRAN, there are millions of current R users8, R has had a consistent rise in Google Scholar hits (SAS and SPSS are declining)9, and more packages were added in 2015 than have existed in all of the SAS Institute’s history9. 
Taken together, these metrics indicate the rise in popularity of R, and highlight the importance of teaching the next generation of students and researchers the most applicable skills.\n\nWe are living in a time of rapid technological advancement and an age where the free sharing of ideas is becoming a standard practice10,11. Evidence of this is seen in the proven effectiveness of the open source framework, within which R is developed12. For R users, open source means not reinventing the wheel every time a new problem arises. Instead they can search for packages to address specific analyses that others have written and made publicly available on CRAN or through sources like GitHub13 and Omegahat14. The open source nature not only means that primary R resources are freely available, but that the R community at large is also willing to provide troubleshooting support, as evidenced by the multiple independent support websites (e.g. Quick-R and Cookbook for R) and community forums that address user questions and problems (e.g. Stackoverflow and R-bloggers). This means that an average user has a diverse toolset to pull from, and an even larger support community to help them accomplish the task at hand. The open source nature of R and its sharing community are two important reasons that R is gaining popularity so rapidly in many business, research, and educational sectors.\n\nWhile the R language is not specifically limited to data analysis, in science, technology, engineering, and math (STEM) disciplines it is commonly used for this purpose. For example, there were approximately 35,000 scholarly articles published across all disciplines (STEM and others) in 2015 with R as the primary analysis tool, second only to SPSS, whose use had decreased by 25% from the previous year9. 
This is most likely because, unlike other analysis tools, R is adaptable to specific problems, while remaining versatile enough to address more common data management, analysis, and graphing needs as well; users can easily write new code or adapt other users’ code to address their specific needs. In this way R promotes an active learning process, which has been proven to increase students’ performance in STEM education15. Additionally, R is an “all in one” environment that streamlines the data analysis workflow from data management and analysis to graphical data presentation and text processing. The concept of packages is also in line with many STEM disciplines and the nature of the scientific process and dissemination, where a reader can find the exact package used by others and do a similar analysis for their study. Finally, R gives STEM users multiple options, with many packages that do nearly the same thing in slightly different ways. For example, if a user wants to create a general plot, that capability is in the base16 package, but there are also options to use an array of other packages that generate plots in slightly different ways (e.g. lattice17 and ggplot218). In short, R gives users options and is easily adaptable to the exact tasks at hand, greatly benefiting STEM users as well as the R community at large.\n\nThe importance of programming education is becoming evident and universities have a significant role to play19. R is a prime language to use in undergraduate classrooms because it is extremely versatile, free, has a large user community, is relatively easy to learn in terms of programming (see Fox 20096), and is supported across multiple computing platforms. This means that a student could encounter R in a wide array of classes ranging from traditional statistics to, for example, an ecological modeling or bioinformatics course. 
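The point about multiple plotting options made earlier can be illustrated with a minimal sketch: the same scatterplot drawn once with base graphics and once with ggplot2 (the built-in mtcars data set and the axis labels are our own illustrative choices):

```r
# The same scatterplot two ways, using R's built-in mtcars data set.

# Base graphics: plotting is available with no extra packages
plot(mtcars$wt, mtcars$mpg,
     xlab = "Weight (1000 lbs)", ylab = "Miles per gallon")

# ggplot2: a grammar-of-graphics alternative (install.packages("ggplot2") first)
library(ggplot2)
ggplot(mtcars, aes(x = wt, y = mpg)) +
  geom_point() +
  labs(x = "Weight (1000 lbs)", y = "Miles per gallon")
```

Both produce essentially the same figure; the choice between them is largely one of style and extensibility.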
The programming skills learned in one course would easily transfer to other courses, and departments could benefit by coordinating course content to better capitalize on this continuum. Along this line, R allows students to perform practical applications rapidly upon learning the language, whereas languages geared more towards software development require more base knowledge before writing more meaningful code. This means that R is a compelling language to learn for novice programmers. Furthermore, a solid foundation in R better prepares undergraduate students for postgraduate education or for seeking employment in a broad range of sectors. While there are other programming languages, the overall versatility and open source nature of R mean that many research institutions and corporate entities are using R at an increasing rate. Even if R were not the primary coding language used later in a career, learning any programming language often means that a student is better equipped to enter the job market20; however, most other data management and statistical programs an undergraduate is likely to encounter use a point-and-click format (e.g. Excel and SPSS), so they gain little practical coding experience.\n\nThe goal of this survey-informed study was to highlight R usage at Canadian universities, shedding light on which types of courses use R, as well as overall R training offerings at the institutional level. Additionally, we look at some of the benefits and challenges professors encounter teaching R to their students, and motivations for using R in their research programs and teaching R in the classroom. 
To our knowledge this is the first study to look specifically at R usage in an educational context, and thus may also help serve as a benchmark for future characterization of R usage in universities in general and in Canadian universities through time.\n\n\nMethods\n\nA survey of 70 Canadian universities was conducted using Google Forms (https://www.google.com/forms/about/) from June 1, 2016 to June 15, 2016 to estimate the number of universities offering courses that either use or teach R. Universities were identified as recognized institutions of higher education in Canada that offer four-year degree programs. The survey was developed to specifically address how many universities offered (a) course(s) using R and in what capacity the program was used within courses. Following research ethics approval, the survey was sent to ca. 2,500 professors in Biology, Ecology, Chemistry, Statistics, Mathematics, and Computer Science departments (considered to be the most likely sources of R usage in a university). Contact information for individual professors was obtained from departmental websites at each university in May, 2016. Only full time active faculty were sent the initial request (i.e. the survey was not sent to adjunct/emeritus professors, graduate students, or technologists). Additionally, a request was made to forward the survey to any other faculty or departments that a respondent thought appropriate or knew to have knowledge of R usage at their particular university. The survey was formatted with conditional responses and ranged from 10 to 22 questions depending on the respondents’ answers. For example, if a respondent answered “yes” to teaching R they were taken to a different section than if they answered “no” to the same question. Survey questions and a figure diagramming the conditional response layout are available in Supplementary File 1 and Supplementary File 2, respectively. 
Following the response period, results were downloaded and analyzed to determine the extent of R usage across Canada and evaluate usage patterns.\n\nBoth individual question responses, as well as combined question information, were used to evaluate R usage. For example, the response rate of universities was simply calculated by taking the number of universities with at least one respondent divided by the number of universities surveyed, while the calculation of R usage at universities was reflected by the number of universities with at least one respondent that also had at least one class utilizing R divided by the number of respondent universities regardless of R usage. All data are expressed as counts and formal statistical tests were not performed. As with any voluntary surveying method, it must be noted that positive sampling bias is potentially a factor; that is, it is probable that respondents were at least familiar with what R is and people unfamiliar with the program were less likely to take the time to respond. All analysis and plotting were carried out in R version 3.3.116.\n\nEthics approval was granted on May 20, 2016 from the Laurentian University Research Ethics Board (REB) under REB file number 2016-04-14. Consent was obtained through a participant consent statement (Supplementary File 3) and electronic approval, which led participants to the survey. This information is available in Dataset 121, and only one participant opted out of taking part in the survey.\n\nConditional requirements of the REB were to retain the anonymity of individual participants. To ensure this, but preserve the ability to analyze and deposit data, university name information has been removed from the dataset and replaced with number designations. Additionally, all comments or other potential individual level identifiers have been removed.\n\n\nResults\n\nOf the 2,500 professors from 70 Canadian universities invited to participate, 157 responded. 
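The two simple proportions described in the Methods can be sketched in R itself; the counts are those reported in this section, and the variable names are our own illustrative choices:

```r
# Survey counts (variable names are illustrative)
universities_surveyed   <- 70
universities_responding <- 61  # at least one respondent
universities_using_r    <- 40  # at least one course using R in some manner

# Response rate: responding universities / universities surveyed
response_rate <- universities_responding / universities_surveyed  # ~0.87

# Usage rate: universities with >= 1 R course / responding universities
usage_rate <- universities_using_r / universities_responding      # ~0.66
```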
Of these, only one participant elected not to take the survey, giving a total of 156 respondents. At least one response was recorded from 61 of the universities for an 87% response rate (i.e. at least one key informant per institution). Of the 61 responding universities, 65% (40) had at least one course that used R in some manner, while 36% (22) of responding universities had courses that were either specific to the R language or used it as the primary data analysis tool. Of respondents, 51% used R in at least one course. Based on the courses taught by all respondents, R was used in 26% of courses in some capacity, and of the courses that used R, 16% taught the R language.\n\nOf courses using R, 60% were offered to both undergraduate and graduate students while only 8% were graduate-only, and the remaining 32% undergraduate-only (Figure 1). By far the most frequent use of R in the classroom was geared towards statistics, followed by courses explicitly focusing on the R language itself and ecological modeling, respectively (Figure 2). Professors who taught R felt the biggest advantages included that it is free, followed by multiple-platform support, diverse packages, and being open source; the latter three were all weighted similarly (Figure 3). Cited disadvantages to teaching R were dominated by a steep learning curve, followed by the students not actually learning the language itself (e.g. using code that is “plug and play” and not written or altered by students; Figure 4).\n\nBreakdown of course offerings for 80 professors who teach with R, where “both” means a class contains undergraduates and graduate students or the professor teaches both an undergraduate and graduate course using R.\n\nResponses from 80 professors who teach R, regarding the subjects they teach in their courses that use R (multiple responses were allowed). 
Other includes climatology, population genetics, econoinformatics, and plotting.\n\nResponses from 80 professors who teach R, regarding the biggest advantages to using R in the classroom (multiple responses were allowed). Other includes facilitates problem solving, teaches job applicable skills, the R community, graphics, flexibility, and reproducibility.\n\nResponses from 78 professors who teach R, regarding the biggest disadvantages to using R in the classroom (multiple responses were allowed). Other includes requires coding, colleagues cooperating, pushback from SAS users, students using multiple platforms in classroom, mainstream texts lack R examples, and R is used less in industry.\n\nA total of 76 professors did not teach with R at all. The most common reasons for not teaching with R are presented in Figure 5. Key reasons for not teaching R included teaching non-analytical courses or being unfamiliar with R. Many of the “other” responses included what could be classified as “departmental issues” (e.g. lack of time, perceived difficulty of learning R vs. programs like Excel, cooperation in coordinating between courses/professors). Professors who used R in their own research, but don’t teach R, were more open to teaching R in the future when compared to professors who were unfamiliar with R (Figure 6). Overall, the majority of professors were open to teaching a class using R in the future.\n\nResponses of 76 professors who don’t use R in any classes (multiple responses were allowed). Other includes time restrictions and classes that are already using other stats programs with limited departmental cooperation on switching over.\n\nResponses of 73 professors who don’t use R in any classes, regarding their willingness to use R in future classes. 
Groupings are by professors who use R in their research, but don’t teach it (green), and those who don’t teach or use R themselves (red).\n\nProfessors’ use of R in their research did not clearly correspond to their teaching (with) it in the classroom. Figure 7 shows four groups based on whether professors taught and/or used R themselves in their research. The majority of professors (66%) used R themselves, while only 51% of professors actually taught R. In total, 19% of professors who used R themselves did not teach it. Professors who used R tended to use only R, but SAS/SPSS and MATLAB were also used along with an assortment of other programs (Figure 8). In comparison to reasons to teach with R, professors who used R still felt its being free was a good reason to use it, but also placed more emphasis on packages and its being a discipline standard (Figure 9). All professors who used R (100%) did so for descriptive statistical analyses, while modeling and figure generation were other common uses (Figure 10).\n\nSummary of how 156 professors interact with R. Note the large portion of professors who don’t teach R, but use it in their own research.\n\nResponses of 100 professors who use R (multiple responses were allowed). Other includes Excel, LINDO, BMDP, Prism, PAST, MEGA, Statistica, Sigmaplot, Stata, JMP, DataDesk, Systat, STAN, OpenBUGS, Minitab, Mathematica.\n\nReasons 103 professors use R themselves (multiple responses allowed). Other includes new code/package development, multiple platform support, and user configuration/flexibility.\n\nUses of R for 103 professors. Other includes data manipulation, simulations, and data exploration.\n\nOf the 156 respondents, 154 indicated a department affiliation; these were grouped into biology/life sciences, math/statistics, and others, the last including professors who had multiple appointments in biology, math, and/or stats, or were in completely unique departments, e.g. decision sciences. 
A total of 64% of respondents were in the biology/life sciences, and 48.5% taught R. Professors who identified with math and stats departments made up 27.5% of respondents, and 56% of these taught R. Of professors who were in statistics alone, 100% taught R in at least one course. The remaining 8.5% of respondents were categorized as “others” and 54.5% taught at least one class using R.\n\n\nDiscussion\n\nThe R language is beginning to make its way into Canadian universities, with a wide range of courses spanning both graduate and undergraduate levels already in place. Over half of Canadian universities offer at least one course that uses R, but these courses are often not geared at the R language specifically, greatly diminishing the benefits to students. While a number of universities did offer multiple classes that use R, this was the exception and not the norm, indicating that R is not being adopted by professors and expanding throughout Canadian universities as fast as it perhaps should be. There appeared to be a positive sampling bias towards people who use R themselves; that is, a professor who was unfamiliar with R was unlikely to respond to the survey. However, this is not uncommon in surveys of this type22. Taking this into consideration, it is likely that these results represent the current state of R usage at Canadian universities relatively well. There was a diverse range of professors’ experience with R, as well as of the subjects being taught using R. This reflects common trends in the R community, where the language has been adapted beyond a statistical tool for use in an array of applications, for example interactive maps (rMaps package) and developing applications23. 
Taken together, both professors who currently teach and those who do not teach R need to consider new ways to adapt their coursework to include R in interactive and engaging ways.\n\nBy far the most common application in the classroom was statistics, which is likely due to the origins of R being geared at that community3. Bioinformatics usage was a less common theme, but this is an area that will likely see significant growth in the coming years, with a large amount of new package development promoted by Bioconductor (a collection of packages specific to bioinformatics usage) and prompted by the drop in DNA sequencing cost and the rapid increase in sequence data being produced (NCBI). While the number of courses taught that explicitly teach the R language is perhaps lower than ideal, it must be noted that courses dedicated specifically to the R language may be a lofty goal, and incorporating R into courses in any manner is a useful learning exercise. This is also in line with the general need for more computer literate students regardless of academic discipline24. Overall, there weren’t many professors that responded who didn’t teach or use R themselves. The most common reason for not using R was that their classes were non-analytical. This appears justifiable, as some subjects rely less on data management and analysis. However, a large portion of professors in this category were totally unfamiliar with the capabilities of R, and it may be that they don’t realize that R and packages within R are not exclusively focused on descriptive statistical analyses. For example, modeling transmission of a pathogen in a virology class or movement of animals across an ecosystem in ecology classes could both be incorporated into labs in these courses using R. The importance of adapting course material to match current trends in technology is highlighted by other examples of technological change whose importance for early classroom adoption is, in retrospect, easy to understand25–27. 
For example, consider the broad movement from handwriting to typewriters to computers, the change from film to digital cameras, or, at a finer resolution, software, which is updated on a much more frequent basis. While preservation of older technologies is important, keeping students at the cutting edge of technologies and the programs/systems that operate them is key to current education. Along this same line, the most concerning reasons for not teaching R included time restrictions and/or limited departmental cooperation, as well as general apathy towards adapting course material28. To us, these are potentially poor excuses for not altering courses to expose students to a useful, widely accessible tool, and they emphasize a general lack of professor engagement, which is detrimental in the classroom29.\n\nBringing R into the classroom has a number of advantages. First, it is free, so it does not strain student or department budgets, and it is compatible with multiple platforms (Mac, PC, Linux), allowing students to download it on their personal computer instead of having to do assignments on university computers with restricted licenses. Second, it is also open source and has a large support community online with a number of forums to address virtually any sort of problem (e.g. Stackoverflow). Third, a major advantage to students is the current applicability of R in the classroom and beyond. The near exponential growth of R6,7 highlights the importance of learning the language and is indicative of a desirable skillset across academic disciplines and career paths. This is due in part to the adoption of R in many areas outside of academia, but also because R (and coding languages in general) is a skillset that many employers look for in a potential employee. 
That is to say, learning to code is desirable for today’s students largely due to the fact that coding is a skill that is transferrable between languages and a process that teaches critical thinking and problem solving20,30,31. So even if a student never codes again, the process of learning to code may benefit the way they approach future work. It is worth noting that with the advantages come some disadvantages, the largest being a “steep learning curve”. However, as sociologist John Fox6 points out, this is really in comparison to the point-and-click types of software that students are used to. In reality R is a relatively easy coding language to learn once the basic conventions are mastered, making it accessible to novice programmers.\n\nThe feasibility of introducing R into the classroom is highlighted in our study by the fact that many professors who don’t teach R are open to teaching it in the future. Furthermore, it is possible to teach classes in R even if the teacher doesn’t use it themselves, and we showed that a number of professors who don’t use R themselves already teach R. After all, there are numerous other skills professors pass on to students that they themselves don’t use on a regular basis, if at all (e.g. a professor teaching an introductory course would typically do research on only a very small subset of what they teach). Of particular interest are the professors who use R themselves, but don’t teach R. This group could be a catalyst for universities and/or departments to introduce R into course material, greatly expanding the number of courses offering R and the subject areas using R. Willingness to adopt new technologies in the classroom is a common hurdle32, but fortunately many professors in these positions are open to teaching R in the future; they just need to find the motivation to bring new material into their classroom28. 
Admittedly, it takes time and effort to adapt a class that is already “refined”, and it can be difficult to be the first to take that step within a department or institution28,33, but professors should realize that the benefits greatly outweigh the costs, and they can take the time to gradually incorporate R into their course content. For example, a professor could promote R over other “less useful” programs (e.g. Excel), even if R will only be used for minor assignments, such as mean calculations and basic plotting. Expansion of the material could then be done incrementally: throughout the semester from the student’s perspective, and across multiple years of lectures from the professor’s perspective. Additionally, professors should expand their own R knowledge and look for the new and exciting ways R is being used. R is no longer a purely “analytical” tool, and lab courses could, for example, use R for lab report writing (markdown34 is great for this), combining data management, plots, and text all in one file.\n\nIndividual comments provided valuable insights into problems with R in education, and the “learning curve” was a common theme amongst users and non-users, both personally and for their students. As discussed before, this is in our view a misperception promoted by comparing R to “point and click” programs. While R is not as intuitive initially, once a foundation is established its adaptability and power relative to point-and-click platforms are substantial. Recently there has been an expansion of resources available to learn R in a fun and interactive way (e.g. Datacamp and the swirl package35). These could serve as useful companions to professors looking to use R in their classroom, as an effective way of “outsourcing” much of the initial learning process. Furthermore, it is our general thought that the R community needs to expand the currently available startup material to get people familiarized with R in a more interactive way. 
More specifically, we feel that the R education community would greatly benefit from a more centralized location for material related to course content and examples of lesson plans that incorporate R. While some examples of this are available through sources like GitHub, these are collections of individual educators, and there is no comprehensive location for educational material related to R. At an institutional level, some professors suggested the idea of workshops, which are a great tool in university settings36. These can range from a weekend crash course to a semester-long in-depth introduction, which sets students and professors that are new to R on the right path from the beginning. From our personal experience, the lead author is in a trial period of teaching an R workshop, open to graduate students, upper-level undergraduates, and faculty, using hours normally devoted to teaching undergraduate labs; it is being met with positive reviews. Other comments indicated that these less formal forms of instruction may be a way to promote R in universities, ideally leading to broader acceptance through time.\n\n\nConclusions\n\nIt is apparent that Canadian universities are beginning to put the R language to practice in classes, with nearly 2/3 of the responding universities offering at least one course that uses R. However, fewer professors teach classes that are more specific to learning the language itself. While this is a good start to exposing students to R, it appears that Canadian universities in general are lacking R-based coursework. To our knowledge, there are no similar data for R usage at universities in other countries, but a comprehensive understanding of R usage at all levels of academics is necessary and would provide critical insights. Future work could use surveys to identify broad R usage trends as we did, but would benefit even more from obtaining detailed information from syllabi or course material itself. 
Surveys do depend on people’s willingness to participate, so perhaps individual case study reports from departments or individual teachers who have incorporated R might be of use, encouraging others to put forth the effort and use R in the classroom. Based on broad data on downloads and references to R, it is apparent that R is rapidly becoming a programming and data analysis language of choice for researchers, academics, and industry. With this in mind, it is in institutions’ and students’ best interest to promote R in coursework among all of the STEM disciplines. Furthermore, the only “cost” to a university, department, or educator is the time required to rework course material into the R language. While this takes initial effort, we feel that the long-term benefit to students greatly outweighs this initial input. The R community is rapidly developing more “user friendly” graphical user interfaces and will continue to be at the forefront of data analysis and presentation for the foreseeable future. Without doubt, an understanding of R will benefit students beyond their coursework in postgraduate and professional settings.\n\n\nSupplementary material\n\nSupplementary File 1. Survey questions. A list of all questions that participants could have been asked. Some questions are repeated for the conditional response survey. See Supplementary File 2 for question pathways.\n\nSupplementary File 2. Question flow diagram. Diagram showing potential survey “paths” with conditional responses; numbers correspond to questions in Supplementary File 1.\n\nSupplementary File 3. Participant consent statement. The full consent statement that participants agreed to prior to taking the survey, approved by the REB.\n\n\nData availability\n\nDataset 1: R survey results. 
Raw data from the survey questions with the university names converted to numbers and other potential respondent identifiers removed. DOI: 10.5256/f1000research.10232.d14434521",
"appendix": "Author contributions\n\n\n\nMC conceived the study, wrote and distributed the survey, analyzed data, and wrote the manuscript. NB commented and revised the survey questions and design, guided analysis, and edited the manuscript. Both authors have agreed on the final content of the manuscript.\n\n\nCompeting interests\n\n\n\nNB is an active professor at a Canadian university and had the opportunity to participate in the survey. The authors do not feel his participation, or lack of participation, conflicts with the interests or conclusions of the study.\n\n\nGrant information\n\nMC is supported by the Ontario Trillium Foundation through an Ontario Trillium Scholarship for International PhD Students, by fellowships from Laurentian University, and by NSERC Discovery (#341381) and a Canada Research Chair Grant (#228457) held by NB.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe would like to thank all of the participants who took the time to complete the survey and those who responded with valuable additional information about R usage at their university and in their classrooms. A travel grant was provided by the Laurentian University Graduate Student Association to support a poster presentation of the results at the 2016 UseR! conference in Stanford, CA, USA, and we recognize the valuable input we received from attendees that helped develop this manuscript into a comprehensive and cohesive survey of R usage in the Canadian university setting.\n\n\nReferences\n\nSussman GJ, Steele GL Jr: Scheme: An interpreter for extended lambda calculus. Higher-Order Symb Comput. 1998; 11(4): 405–439. Publisher Full Text\n\nBecker RA, Chambers JM, Wilks AR: The New S Language: a programming environment for data analysis and graphics. Chapman and Hall, 1988. Reference Source\n\nIhaka R, Gentleman R: R: A Language for Data Analysis and Graphics. J Comput Graph Stat. 
1996; 5(3): 299–314. Publisher Full Text\n\nVance A: Data Analysts Captivated by R’s Power. New York Times. 2009. Reference Source\n\nCRAN: The Comprehensive R Archive Network. 2016. Reference Source\n\nFox J: Aspects of the Social Organization and Trajectory of the R Project. The R Journal. 2009; 1: 5–13. Reference Source\n\nRapporter: R activity around the world. R-bloggers. 2014. Reference Source\n\nJames: Where is the R Activity? R-bloggers. 2013. Reference Source\n\nMuenchen RA: The Popularity of Data Analysis Software. 2016. Reference Source\n\nHeron MJ, Hanson VL, Ricketts I: Open Source and Accessibility: Advantages and Limitations. J Interact Sci. 2013; 1: 2. Publisher Full Text\n\nKhan MA, Urrehman F: Free and Open Source Software: Evolution, Benefits and Characteristics. 2012; 1(3). Reference Source\n\nWeber S: The Success of Open Source. Harvard University Press, 2004. Reference Source\n\nHarrison E: RStudio and GitHub. R-bloggers. 2015. Reference Source\n\nChambers JM, Lang DT: Omegahat Packages for R. R News. 2001; 1: 1–32. Reference Source\n\nFreeman S, Eddy SL, McDonough M, et al.: Active learning increases student performance in science, engineering, and mathematics. Proc Natl Acad Sci U S A. 2014; 111(23): 8410–5. PubMed Abstract | Publisher Full Text | Free Full Text\n\nThe R Core Team: R: A language and environment for statistical computing. 2016. Reference Source\n\nSarkar D: Lattice: Multivariate Data Visualization with R. Springer, 2008. Publisher Full Text\n\nWickham H: ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag, 2009. Publisher Full Text\n\nBoard A: Why Undergraduates Should Learn the Principles of Programming Languages. Language (Baltim). 2010; 1–9. Reference Source\n\nPea RD, Kurland DM: On the cognitive effects of learning computer programming. New Ideas Psychol. 1984; 2(2): 137–168. Publisher Full Text\n\nCarson M, Basiliko N: Dataset 1 in: Approaches to R education in Canadian universities. F1000Research. 2016. 
Data Source\n\nSax LJ, Gilmartin SK, Bryant AN: Assessing Response Rates and Nonresponse Bias in Web and Paper Surveys. Res High Educ. 2003; 44(4): 409–432. Publisher Full Text\n\nBaier T, Neuwirth E, De Meo M: Creating and Deploying an Application with (R) Excel and R. The R Journal. 2011; 3(2): 5–11. Reference Source\n\nMcDonald DS: Computer Literacy Skills for Computer Information Systems Majors: A Case Study. J Inf Syst Educ. 2004; 15(1): 19–33. Reference Source\n\nGillard S, Bailey D, Nolan E: Ten Reasons for IT Educators to be Early Adopters of IT Innovations. J Inf Technol Educ. 2008; 7: 21–33. Reference Source\n\nGuzey SS, Roehrig GH: Teaching science with technology: Case studies of science teachers’ development of technology, pedagogy, and content knowledge. Contemp Issues Technol Teach Educ. 2009; 9(1): 25–45. Reference Source\n\nRogers EM: A prospective and retrospective look at the diffusion model. J Health Commun. 2004; 9(Suppl 1): 13–19. PubMed Abstract | Publisher Full Text\n\nHodas S: Technology Refusal and the Organizational Culture of Schools. Educ Policy Anal Arch. 1993; 1(10). Publisher Full Text\n\nFink LD: Creating Significant Learning Experiences: An integrated approach to designing college courses. John Wiley & Sons Inc., 2013. Reference Source\n\nAkcaoglu M: Learning problem-solving through making games at the game design and learning summer program. Educ Technol Res Dev. 2014; 62(5): 583–600. Publisher Full Text\n\nRobins A, Rountree J, Rountree N: Learning and Teaching Programming: A review and discussion. Comput Sci Educ. 2003; 13(2): 137–172. Publisher Full Text\n\nGbomita V: The adoption of microcomputers for instruction: Implications for emerging instructional media implementation. Br J Educ Technol. 1997; 28(2): 87–101. 
Publisher Full Text\n\nConroy CA, Bruening TH: School subcultures as factors affecting technology refusal: An examination of applied academics implementation in Pennsylvania and resulting implications for agricultural teacher education. 1994; 21. Reference Source\n\nAllaire JJ, Horner J, Marti V, et al.: markdown: ‘Markdown’ Rendering for R. 2015. Reference Source\n\nKross S, Carchedi N, Bauer B, et al.: swirl: Learn R, in R. 2016. Reference Source\n\nMjelde L: The magical properties of workshop learning. Peter Lang; 2006. Reference Source"
}
|
[
{
"id": "18124",
"date": "12 Dec 2016",
"name": "Colin W. Rundel",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors describe the results of a survey of Canadian academics on their use of R in their courses and in their own work. While there is good information on the increasing popularity of R on the web, in industry, and in scholarly articles, there is far less information on how R is being taught. It is in this important area that the paper provides some much needed insight.\nIn particular, I think many researchers would be surprised to find courses being taught using R in more than half of Canada's universities. Giving other educators this kind of information is tremendously valuable in inducing other educators to also decide to make the jump to R. In particular, being able to point to other universities and courses where R is being successfully taught is a strong argument against common complaints like the learning curve being too steep.\n\nWhile a more systematic examination of R offerings across universities would have more reliable results than this survey, I believe that it still offers valuable (if potentially slightly biased) insights into the basic patterns of R education. 
The authors' results are very encouraging to me as an educator interested in teaching R, but they also show that there is much more we can do in promoting R at other universities as well as growing our own course offerings locally.\n\nMajor Comments\nIntroduction\n\nI think the history as stated underplays the importance of R being free software (particularly as compared to S); this aspect also is clearly a hugely influential factor for professors based on the survey results.\n\nthere have been more packages added in 2015 than have existed in all of the SAS institute's history - this comparison is based on the number of packages added to CRAN vs a rough estimate of the number of procs contained in SAS 9.3 (from r4stats.com). This is a weak comparison to begin with, which is then confusingly stated in the paper.\n\nWhile touched on tangentially, I think reproducible research is worth mentioning explicitly, particularly in reference to the strength of programming languages vs. point and click tools.\n\nResults\nBased on the courses taught by all respondents, R was used in 26% of courses in some capacity and of the courses that used R, 16% taught the R language. I believe that this would be more interesting to see broken down based on the respondent's field. In general, additional cross tabulation by discipline would give more insight into the data.\n\nFigure 1 is not really needed; giving the values in the text is sufficient in my mind. Also it is somewhat confusing about which subset of classes this breakdown applies to - this is generally true for many of the other figures. See my Figures comment below.\n\nDiscussion\nindicating that R is not being adopted by professors and expanding throughout Canadian universities as fast as it perhaps should be. I don't entirely follow the logic here; it is not clear how to establish how fast it is or should be expanding. 
While I don't disagree with the sentiment, this comes across as an unsupported opinion.\n\nTaking this into consideration, it is likely that these results represent the current state of R usage at Canadian universities relatively well. Again, this comes off as more of an opinion than something supported by the survey results, and a claim like this needs additional support. The later conclusions are not invalidated by removing this claim.\n\nFigures\nMost figures could be shrunk considerably without negatively affecting readability, e.g. Figs 2-5. In some cases it might improve readability to combine plots into facets within a single figure (connecting figures to subsections).\n\nMinor Comments\nIntroduction\nThis is the first I've ever heard of omegahat; it seems to be on a very different scale than github or even r-forge.\n\nWhile not without their issues, it seems worthwhile to also mention R's mailing lists and special interest groups.\n\nMethods\nA survey of 70 Canadian universities was conducted using Google Forms (https://www.google.com/forms/about/) from June 1, 2016 to June 15, 2016 to estimate the number of universities offering courses that either use or teach the R\n\nResults\nThe basic organizational structure is based on subsets of the respondents; it would be helpful to indicate the size of each of these subsets. For example, n=80 for professors who teach with R is only given explicitly in the Figure 2 label.\n\nProfessors who taught R felt the biggest advantages included that it is free, followed by multiple platform support, diverse packages, and being open source;\n\nDiscussion\n(e.g. 
Datacamp and the swirl package)\n\nFurthermore, it is our general thought that the R community needs to expand the currently available startup materials to get people familiarized with R in a more interactive way.\n\nFigures\nBar plot labels are rotated, this is not needed - it makes the labels harder to read and takes up unnecessary space.\n\nConclusions\n\nHowever, fewer professors teach classes that are more specific to learning the language itself.*",
"responses": []
},
{
"id": "18120",
"date": "14 Dec 2016",
"name": "Eliezer Gurarie",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nI feel odd reviewing this paper - since I have little technical expertise in assessing human survey-based studies. My interest in this topic is as a highly biased object of the study, specifically, as an enthusiastic user of R in the classroom, both for statistics and ecological modeling courses. [Is this what the animals I do research on feel like when they read my papers?]\n\nOverall, the information presented is useful. The survey appears to have been conducted responsibly, with a reasonably high response rate (though with possible self-selection bias). The introduction provides a useful overview of the history and context of R, and the discussion is comprehensive and thoughtful. The availability of the survey results is a welcome contribution.\n\nWhile the main purpose of the article seems to be to argue and advocate for the use of R in classrooms, it is not always clear how the survey results inform that argument. On the one hand, the fact that there are courses in 65% of institutions and across fields might encourage other professors to adopt R. On the other hand (given my bias) it is a shocking disservice to students that 35% of institutions use R in ZERO courses. There was a bit of a missed opportunity in that the article presents just a snapshot in time. It would have been interesting to see how the rate of R use has increased (which would have been possible by asking professors about their use of R 5 or 10 years ago). 
I imagine the rate of increase would have been very nearly explosive.* In any case, the claim that \"R is not being adopted ... as fast as it perhaps should be\" is more of an opinion (even if softly put, and one I completely agree with) than supported by the results. There are other slight disconnects between claims in the discussion and the survey, but then for this kind of pseudo-advocating article/essay this is perhaps more acceptable than what I am used to.\n\n* - As a single datapoint: when I first proposed incorporating R into an introductory undergraduate statistics course at the University of Washington in 2012 - not even five years ago - the idea was met with surprise and some scepticism by other instructors that the students could \"hack it\". The experiment ended up being an unequivocal success, with many students claiming it was the most useful portion of the course [certainly compared to looking up t-values in a table at the back of a textbook!] and I believe it is now standard in the curriculum.\n\nResults and Figures:\nThe weakest point in this paper is the ugliness of the figures (which is ironic, considering that one of the main selling points of R is the ability to make beautiful graphics). I understand that the results are simple counts, but the presentation could still be improved. Figure 1 is completely unnecessary, unless it were cross-tabulated against, e.g., subject of course (an important missing bit of information), for example sorted into statistics/ mathematics/ computer science vs. life/ social sciences.\n\nIn almost all of the bar plots, you could use horizontal bars, ordered top to bottom from highest count to lowest count, and go ahead and include all of the \"Other\" categories [e.g., in figure 2, climatology, population genetics, econoinformatics, and plotting]. Those results are interesting, and there's plenty of room if you abandon the fat vertical bars. 
These could also be cross-tabulated and color-stacked against subject, or at least \"graduate\" / \"undergraduate\".\n\nFigure 6 (though seasonally appropriate) would be much improved if it were presented as a mosaic plot (i.e.: mosaicplot(table(R.Use, Willingness))), which is much better for comparing the relative shift across categories, while reflecting the sample sizes as well. It is, incidentally, interesting that so few people answer \"9\" compared to \"8\" (I guess 9/10 of \"Very\" is a more slippery concept than 4/5 of \"Very\"!) There's a psychological effect here somewhere, but in the meantime you might be better off pooling 1-3, 4-6, 7-9.\nI must confess I would have no idea how to answer the question in Figure 10 - there is so much overlap. I really don't see how one can separate \"Modelling\" (and, often, \"Statistics\") from \"Data manipulation\" \"Simulation\", \"Data Exploration\", \"Visualization\", etc.\n\nDiscussion\n\nAmong the tools which facilitate the use of R in the classroom, one of the most important is the use of 'knitr' and 'Rmarkdown' to easily generate documents that combine text, math, code, figures and output. This is a very important omission. Perhaps the single most practical use of \"knitting\" documents is for teaching material - including lectures, labs and homework assignments - in particular for learning R. Also, report generation itself is a useful and totally accessible skill to teach, particularly considering the importance of reproducibility of analysis (another important advantage of R over point-and-click tools).\n\nMinor:\n\nDiscussion - there's a \"does'nt\" that should be \"doesn't\".",
"responses": []
},
{
"id": "18125",
"date": "15 Dec 2016",
"name": "Luc F. Bussiere",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThis article provides useful survey data on aspects of instruction using the R programming language at Canadian Universities. The authors report intriguing data on the numbers of respondents who use R for teaching and research, the subject areas in which the respondents work, and their willingness to teach future classes using R.\nThese data provide a useful glimpse of the adoption of R software in Canadian Universities, and the transparent inclusion of the survey and data makes this publication a valuable addition to the literature. My comments below are intended to provoke further critical analysis if possible.\nAlthough I am sympathetic to the authors’ opinions (as an instructor who uses R in my own research and teaching), I am not consistently convinced that these data support the authors’ conclusions, even though those are made somewhat tentatively. My skepticism comes from a few sources, as detailed below. I think most of my concerns could be addressed through a follow-up survey and additional analyses.\nMuch of the discussion is devoted to the argument that we need more teaching of R (especially in classes dedicated to the programming language itself, rather than its applications). I do not object to this assertion in principle (teaching with R has personally been a rewarding experience for me and most of my students), but the conclusion does not derive from the survey data, and the logic that underpins it is not always clear. 
The authors cite some pedagogical papers on the general importance of programming knowledge, but the relative value of programming per se (as opposed to its applications) for disciplines apart from computing science is not self-evident given the assumed cost to other portions of the curriculum. One could indeed use R markdown for lab report submissions, as the authors suggest on p.10, but I am not convinced that this would often be worth implementing if the main learning outcome sought is written communication skills. I think it would be useful if the authors could more clearly separate the discussion that derives directly from their survey findings from those that represent advocacy of a particular pedagogical opinion.\nAs the authors acknowledge, there is a risk of positive bias in their survey because respondents unfamiliar with R may have been less likely to respond. The importance of the bias could be estimated through attempts to contact nonrespondents, and contrasts of the scores with the original surveys, and methods for computing estimates of response survey quality seem to be reasonably well established and (of course) have been developed for analysis with R1. Such an effort could help clarify the importance of biases in this study.\nFor a paper about a language developed explicitly for conducting statistical analyses, the lack of statistics is quite jarring. The authors draw many conclusions about differences among categories of response based on apparent patterns, but it would be quite useful to know how much confidence we should place in the relative numbers of responses. Like the analyses of survey quality mentioned above, methods for conducting multinomial models and extracting multinomial CIs are readily available within R (e.g., see Villacorta 2012), and would allow the authors to both quantify uncertainty in their proportions and illustrate confidence limits for each response measure.\nSome of the comparisons suffer from a lack of context. 
For example, Fig. 1 concerns the relative provision of R courses to undergraduates vs graduate students, but this contrast is difficult to interpret without more information on the number of courses in total that are offered to graduates and undergraduates. Is the rate of provision higher at the graduate level, given the smaller number of total courses on offer? I wonder if the authors can hint at the answer by assessing numbers of courses in each category at a few institutions.\nIn addition to a dissatisfying lack of measures of confidence in effects, the figures are not consistently laid out to permit effective consideration of the data. For example, in Figure 6, the key response variable is a scaled measure of willingness to teach R in future classes, but that variable appears on the x-axis instead of the y. Since the most meaningful contrast is between users and non-users of R, the authors could produce a plot that illustrates the numerical response scores in the two groups (e.g., in a strip chart) along with a measure of means and confidence limits: such a presentation would support the presumed difference much more persuasively, in my opinion, than the current layout.\nMinor comments:\nI spotted a few typographic errors, including the use of the word “preform” for perform on pp. 3 and 4, and “does’nt” on p. 10.\nThe Education Board that authored citation 19 is incorrectly attributed as if it were a single author, whereas there are 8 individuals listed as authors on the report who could be acknowledged.
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2802
|
https://f1000research.com/articles/5-2634/v1
|
04 Nov 16
|
{
"type": "Method Article",
"title": "Design of chemical space networks incorporating compound distance relationships",
"authors": [
"Antonio de la Vega de León",
"Jürgen Bajorath",
"Antonio de la Vega de León"
],
"abstract": "Networks, in which nodes represent compounds and edges pairwise similarity relationships, are used as coordinate-free representations of chemical space. So-called chemical space networks (CSNs) provide intuitive access to structural relationships within compound data sets and can be annotated with activity information. However, in such similarity-based networks, distances between compounds are typically determined for layout purposes and clarity and have no chemical meaning. By contrast, inter-compound distances as a measure of dissimilarity can be directly obtained from coordinate-based representations of chemical space. Herein, we introduce a CSN variant that incorporates compound distance relationships and thus further increases the information content of compound networks. The design was facilitated by adapting the Kamada-Kawai algorithm. Kamada-Kawai networks are the first CSNs that are based on numerical similarity measures, but do not depend on chosen similarity threshold values.",
"keywords": [
"Chemical space",
"bioactive compounds",
"coordinate-free space representations",
"chemical similarity",
"similarity-based compound networks",
"force-directed network layouts"
],
"content": "Introduction\n\nIn chemoinformatics, molecular network representations have thus far mostly been applied to study similarity relationships between compounds and visualize structure-activity relationships (SARs)1–3. In such networks, molecules are represented as nodes and edges indicate pairwise similarity relationships. Potency information can be added, for example, through node coloring, which provides a basis for SAR visualization2. A prototypic network representation specifically designed for SAR analysis was the ‘network-like similarity graph’ (NSG)3, a precursor of more generally defined ‘chemical space networks’ (CSNs)4, which are characterized using statistical concepts from the interdisciplinary field of network science5. As SAR-oriented network representations, NSGs provide immediate visual access to local communities (subsets) of active compounds with interesting SAR characteristics.\n\nA major distinguishing feature of different CSNs is the way in which molecular similarity relationships are established5. The use of alternative similarity measures often changes local and global network properties of CSNs5. When numerical similarity measures are used, pairwise compound comparisons yield a similarity matrix that contains similarity values for all compound pairs in a data set. The application of a similarity threshold value then transforms the similarity matrix into an adjacency matrix, which serves as input for layout algorithms to generate a graphical representation6. In fact, network appearance is often strongly influenced by chosen layout algorithms.\n\nConventional chemical space representations used in chemoinformatics are mostly generated on the basis of vectors of numerical descriptors. The resulting coordinate-based space representations are multi- or high-dimensional, with each chosen descriptor adding another dimension to the space. 
In such coordinate-based spaces, compound positions are unambiguously defined and so are distances between compounds that are quantified as a measure of dissimilarity, i.e. the larger the distance is, the more dissimilar the compounds are. By contrast, CSNs have become a paradigm of coordinate-free chemical space representations, which are entirely determined by pairwise similarity relationships4,5. If substructure-based similarity measures are employed, binary relationships are obtained (i.e. two compounds are either ‘similar’ or not); if similarity threshold values are applied to numerical measures, pairs of compounds reaching the threshold are classified as similar (and appear in the adjacency matrix). Hence, distance relationships between compounds are typically not considered in coordinate-free chemical space representations.\n\nIn this work, we introduce a novel layout for CSNs that does not depend on chosen threshold values, but takes distances derived from pairwise similarity values into account. Thus, in contrast to currently available CSNs, distances between compounds and communities in the resulting networks become chemically relevant (at least with respect to chosen descriptors), which further increases the information content of these representations.\n\n\nMethods\n\nFor network design, one large and three small compound sets (active against human targets with defined equilibrium constants) were taken from ChEMBL (version 21) (https://www.ebi.ac.uk/chembl/)7, as reported in Table 1. We note that there was no specific reason to focus on these sets; many others could have been selected instead.\n\n‘ID’ is the ChEMBL target identifier and ‘# CPDs’ means number of compounds.\n\nCompounds were represented using the MACCS fingerprint8 (consisting of 166 structural keys or patterns), which were generated using an in-house Python implementation. Pairwise similarity values were calculated using the Tanimoto coefficient (Tc)9. 
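For binary fingerprints such as MACCS, the Tanimoto coefficient reduces to the ratio of shared to combined 'on' bits. A minimal sketch (the bit positions below are toy values, not real MACCS keys):

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient Tc = |A & B| / |A | B| for binary
    fingerprints represented as sets of 'on' bit positions."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 0.0

# Toy fingerprints (invented bit positions, not actual MACCS keys)
a = {1, 5, 9, 42}
b = {1, 5, 9, 77}
assert tanimoto(a, b) == 0.6  # 3 shared bits / 5 total bits
```

Applying this function to all compound pairs yields the similarity matrix described above.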
Fingerprint descriptors of different design might have been selected instead, but for our proof-of-principle investigation, the relatively simple MACCS fingerprint was readily sufficient.\n\nPairwise similarity values were transformed into distances using the formula\n\ndistance = 1 – CDF(similarity)\n\nwhere CDF is the cumulative distribution function for an assumed normal distribution. For each compound set, the mean and standard deviation were calculated from its pairwise similarity values. The CDF was used to emphasize compound pairs with large Tc values and de-emphasize pairs with small values compared to a linear relationship.\n\nAlternative CSN layouts were generated with in-house Java programs based upon the JUNG library (http://jung.sourceforge.net/doc/JUNG_journal.pdf). Please also see the ‘Data availability’ section.\n\nFruchterman-Reingold. The Fruchterman-Reingold (FR) algorithm10 has so far consistently been used for NSGs3 and CSNs5. FR is a force-directed algorithm that brings together subsets of densely connected objects and separates different subsets from each other through repulsion (until equilibrium positions are obtained). Only similarity values reaching a pre-defined threshold are considered in FR layout construction (all other similarity values are ignored). In FR-based network views, distances between compounds have no chemical meaning.\n\nKamada-Kawai. The Kamada-Kawai (KK) algorithm11, adapted herein for CSN design, is also a force-directed layout method. However, KK uses all distances derived from similarity values as input, and optimizes (threshold-independent) edge lengths with respect to inter-compound distances. Thus, the KK approach incorporates distance relationships into network layouts. In principle, KK-based networks are completely connected. Thus, edges between distant compounds might be omitted for clarity. 
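The similarity-to-distance transform described in the Methods can be sketched in a few lines of standard-library Python (the Tc values below are illustrative; the original in-house implementation is not published):

```python
from statistics import NormalDist, mean, stdev

def similarities_to_distances(sims):
    """distance = 1 - CDF(similarity), where the CDF is that of a
    normal distribution whose mean and standard deviation are
    estimated from the data set's own pairwise similarity values."""
    nd = NormalDist(mu=mean(sims), sigma=stdev(sims))
    return [1.0 - nd.cdf(s) for s in sims]

# Toy Tc values: higher similarity must map to smaller distance
tcs = [0.15, 0.30, 0.45, 0.60, 0.75, 0.90]
dists = similarities_to_distances(tcs)
assert all(d1 > d2 for d1, d2 in zip(dists, dists[1:]))
```

Because the normal CDF is steepest around the mean, this transform compresses distances among highly similar pairs relative to a linear 1 − Tc mapping, as intended.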
Although all similarity values and corresponding distance relationships are considered for network construction, for selective edge display, similarity threshold values can also be applied.\n\nAs similarity-based compound networks, KK network representations are covered by the general definition of CSNs4,5 and are in the following also referred to as KK CSNs.\n\n\nResults and discussion\n\nThe characteristic feature of the KK approach is that it takes distances derived from all pairwise similarity values quantitatively into account during network construction. The resulting layout reflects relative compound distances, which principally increases the chemical information contained in KK CSNs compared to threshold-dependent FR CSNs. Independent of the KK network structure, which remains constant, edges in KK CSNs can be selectively displayed at varying similarity threshold values to optimize the clarity of the presentation.\n\nFor an initial proof-of-principle assessment, a model data set was generated by combining four subsets (A–D) of five hypothetical data points, each with well-defined intra-set similarity value ranges, as reported in Table 2. Subsets A–C contained highly similar data points with varying inter-subset similarity values (Table 2), whereas subset D consisted of dissimilar data points (singletons). The KK CSN of this model data set is shown in Figure 1. All three subsets of similar data points formed separate clusters in the network, whereas data points from subset D were widely distributed. Furthermore, clusters of subsets A and B, which displayed largest inter-subset similarity values (Table 2), were located close to each other and removed from the less similar subset C. Moreover, the KK CSN also correctly accounted for the smaller distance between A and C compared to B and C. 
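The layout behaviour described for the model set can be reproduced in miniature: the KK objective penalizes deviations of layout distances from target distances, so even a plain gradient-descent stand-in (a toy sketch, not the Newton-type scheme of the original algorithm or the authors' JUNG-based implementation) places similar points close together:

```python
import math
import random

def kk_stress(pos, dist):
    """Kamada-Kawai stress: sum over pairs of ((|p_i - p_j| - d_ij) / d_ij)^2."""
    s = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            e = (math.dist(pos[i], pos[j]) - dist[i][j]) / dist[i][j]
            s += e * e
    return s

def kk_layout(dist, iters=2000, lr=0.01, seed=0):
    """Minimize the stress by simple gradient descent (toy stand-in
    for the Newton-based optimization of Kamada & Kawai, 1989)."""
    rng = random.Random(seed)
    n = len(dist)
    pos = [[rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0)] for _ in range(n)]
    for _ in range(iters):
        for i in range(n):
            gx = gy = 0.0
            for j in range(n):
                if i == j:
                    continue
                d = dist[i][j]
                dx = pos[i][0] - pos[j][0]
                dy = pos[i][1] - pos[j][1]
                r = math.hypot(dx, dy) or 1e-9
                coef = 2.0 * (r - d) / (d * d * r)  # gradient of the stress term
                gx += coef * dx
                gy += coef * dy
            pos[i][0] -= lr * gx
            pos[i][1] -= lr * gy
    return pos

# Target distances mimic the model set: points 0 and 1 are 'similar'
# (small distance), point 2 is 'dissimilar' to both.
dist = [[0.0, 0.2, 1.0], [0.2, 0.0, 1.0], [1.0, 1.0, 0.0]]
pos = kk_layout(dist)
assert math.dist(pos[0], pos[1]) < math.dist(pos[0], pos[2])
```

After optimization, layout distances approximate the targets, which is exactly the property that makes inter-cluster distances in KK CSNs chemically interpretable.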
Thus, the KK CSN accounted for the various distance relationships present in the model set, an encouraging finding.\n\nFor each subset of compounds in the model data set, intra-set (diagonal) and inter-set MACCS Tc value ranges are given.\n\nShown is the KK CSN of the model data set according to Table 2. Data points are colored on the basis of compound subset membership: A, blue; B, red; C, green; D, gray.\n\nFigure 2 shows KK CSNs for data sets 11638 and 222 (Table 1). In each case, edges were selectively displayed at three different similarity threshold values, which enabled viewing edge distributions on a “sliding scale”. The KK CSN of set 11638 revealed a clear clustering of similar compounds with comparably high or low potency, corresponding to the presence of locally continuous SARs1. By contrast, the KK CSN of set 222 revealed a cluster of highly similar compounds with large potency variations, corresponding to a high degree of local SAR discontinuity1. This cluster was distant from other compounds of set 222, consistent with the presence of unique structural features.\n\nKK CSNs are displayed for two data sets (11638 and 222 according to Table 1) at three similarity threshold values of 0.8, 0.7, and 0.6, respectively. Nodes are colored on the basis of potency values applying a color gradient from green (highest potency) over yellow (intermediate) to red (lowest potency).\n\nFigure 3 compares the KK and FR CSNs for set 100476, revealing the presence of distinct layouts. In the KK CSN a larger cluster of similar –and mostly weakly potent– compounds emerged that was distant from other data set compounds. The corresponding FR CSN provided a completely different view of the compound set with several clusters that were essentially evenly distributed across the layout (consistent with its threshold-dependent force-directed design). For each of these clusters, a corresponding cluster was also identified in the KK CSN. 
In three cases, the corresponding compounds were so similar –and the resulting distances so small– that these clusters needed to be magnified for a detailed inspection, as shown in Figure 4a. Hence, the KK and FR CSNs also provided complementary network views of the data set.\n\nFor data set 100476, KK and FR CSNs are compared at a similarity threshold value of 0.8 for selective edge display (KK) and network generation (FR). Nodes are colored according to Figure 2. At the bottom, similarity values and corresponding network distances of all compound pairs are compared in scatter plots and correlation coefficients are reported.\n\nIn (a), corresponding compound communities are highlighted in the KK and FR CSNs from Figure 3 and enlarged. Compounds in each community are numbered. In (b), compounds forming the top cluster in (a) are shown.\n\nThe scatter plots in Figure 3 reveal that there was no correlation between similarity values and network distances in the FR CSN, consistent with its design principles. By contrast, with a correlation coefficient of -0.79, significant inverse correlation (i.e. large similarity values corresponding to small distances) was observed for the KK CSN, which was largely determined by compound pairs with similarity values greater than 0.5. For small similarity values, correlation was only weak. This observation was consistent with the use of the CDF in the distance function, which emphasized distance relationships between similar compounds, as discussed above. For data sets 222 and 11638 (Figure 2), KK CSNs yielded correlation coefficients of -0.84 and -0.88, respectively.\n\nIn Figure 4a, corresponding compound communities in KK and FR CSNs are compared in detail. FR CSN clusters contain edges of comparable length and have similar topology, which is a characteristic feature of this layout. 
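The correlation analysis above (e.g. −0.79 for the KK CSN of set 100476) compares pairwise similarity values with the corresponding layout distances; the coefficient is presumably Pearson's r, which can be computed without external libraries. A self-contained sketch with made-up numbers:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Made-up similarity/distance pairs: high similarity <-> small distance,
# so the correlation should be strongly negative, as observed for KK CSNs.
sims = [0.9, 0.7, 0.5, 0.3]
dists = [0.1, 0.3, 0.6, 0.9]
assert pearson(sims, dists) < -0.9
```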
By contrast, KK CSN clusters display different topologies and contain edges of different length that further differentiate intra-cluster similarity relationships and position similar compounds closely together. For example, compounds 3, 4, and 5 from the cluster at the top in Figure 4a only differ by the (ortho, meta, or para) position of a benzene ring and are more similar to each other than to compounds 1, 2, 6, and 7 that have different substituents (Figure 4b).\n\nFigure 5 shows a KK CSN representation for three analog series (A, B, and C) that were extracted from compound set 51. Series A and B had chemically related core structures, whereas the core of series C was distinct from A and B. In the KK CSN, the three series formed communities that were separated from each other. Consistent with the structural relationship between their cores, series A and B were positioned closer to each other than to series C. A single compound from series A was found to form a bridge between the communities of A and B. This compound contained a cyclobutyl substituent at R1 and thus closely resembled the core of series B. Taken together, these observations indicated that the KK CSN captured similarity relationships between these analog series in a meaningful way.\n\nShown are three analog series from the KK CSN of data set 51. For clarity, a similarity threshold value of 0.88 was applied for edge display. Each analog series is encircled and its common core structure is displayed. Compounds in each series were distinguished by substituents at a single site (R1).\n\n\nConclusions\n\nWe have introduced an approach to incorporate compound distance relationships into CSNs that are coordinate-free representations of chemical space. For this purpose, the KK algorithm was adapted, which takes into account all inter-compound distances during network construction and does not depend on chosen similarity threshold values, in contrast to the FR algorithm. 
As such, KK networks also represent the first threshold-independent CSNs for numerical similarity measures, which further extends the current CSN spectrum. Initial results obtained for KK CSNs were encouraging, as demonstrated by the study of a model data set, for which subset relationships were correctly reproduced. Informative KK CSNs were also obtained for sets of bioactive compounds. Furthermore, we have shown that KK and FR CSNs may provide complementary representations that make it possible to view and compare compound communities in different ways. KK CSNs were also found to capture chemical relationships between analog series, which provided an advantage compared to FR CSNs.\n\nIn summary, the results of our proof-of-principle investigation suggest that KK CSNs should be of considerable interest for further exploring biologically relevant chemical space.\n\n\nData availability\n\nThe data sets used in this study are freely available in ChEMBL (https://www.ebi.ac.uk/chembl/) via the identifiers reported in Table 1. The NSG (FR CSN) software is freely available as a part of the SARANEA program suite12 in an open access deposition (DOI: 10.12688/f1000research.3713.1)13. The implementation can be adapted to generate KK CSNs.",
"appendix": "Author contributions\n\n\n\nAVL and JB conceived the study, AVL carried out the analysis, AVL and JB wrote the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nWe thank Dagmar Stumpfe for help with illustrations.\n\n\nReferences\n\nPeltason L, Bajorath J: Systematic computational analysis of structure-activity relationships: concepts, challenges and recent advances. Future Med Chem. 2009; 1(3): 451–466. PubMed Abstract | Publisher Full Text\n\nStumpfe D, Bajorath J: Methods for SAR visualization. RSC Adv. 2012; 2(2): 369–378. Publisher Full Text\n\nWawer M, Peltason L, Weskamp N, et al.: Structure-activity relationship anatomy by network-like similarity graphs and local structure-activity relationship indices. J Med Chem. 2008; 51(19): 6075–6084. PubMed Abstract | Publisher Full Text\n\nMaggiora GM, Bajorath J: Chemical space networks: a powerful new paradigm for the description of chemical space. J Comput Aided Mol Des. 2014; 28(8): 795–802. PubMed Abstract | Publisher Full Text\n\nVogt M, Stumpfe D,Maggiora GM, et al.: Lessons learned from the design of chemical space networks and opportunities for new applications. J Comput Aided Mol Des. 2016; 30(3): 191–208. PubMed Abstract | Publisher Full Text\n\nBrandes U: Drawing on physical analogies. In: Drawing graphs: methods and models. Kaufmann M, Wagner D (eds.); Springer Berlin Heidelberg. 2001; 71–86. Publisher Full Text\n\nGaulton A, Bellis LJ, Bento AP, et al.: ChEMBL: a large-scale bioactivity database for drug discovery. Nucleic Acids Res. 2012; 40(Database issue): D1100–D1107. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDurant JL, Leland BA, Henry DR, et al.: Reoptimization of MDL keys for use in drug discovery. J Chem Inf Comput Sci. 2002; 42(6): 1273–1280. 
PubMed Abstract | Publisher Full Text\n\nWillett P, Barnard J, Downs GM: Chemical similarity searching. J Chem Inf Comput Sci. 1998; 38(6): 983–996. Publisher Full Text\n\nFruchterman TM, Reingold EM: Graph drawing by force-directed placement. Softw Pract Exp. 1991; 21(11): 1129–1164. Publisher Full Text\n\nKamada T, Kawai S: An algorithm for drawing general undirected graphs. Inform Process Lett. 1989; 31(1): 7–15. Publisher Full Text\n\nLounkine E, Wawer M, Wassermann AM, et al.: SARANEA: a freely available program to mine structure-activity and structure-selectivity relationship information in compound data sets. J Chem Inf Model. 2010; 50(1): 68–78. PubMed Abstract | Publisher Full Text\n\nHu Y, Bajorath J: Follow up: Compound data sets and software tools for chemoinformatics and medicinal chemistry applications: update and data transfer [version 1; referees: 3 approved]. F1000Res. 2014; 3: 69. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "17816",
"date": "21 Nov 2016",
"name": "Alexandre Varnek",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nChemical space networks (CSN) technique is an efficient way to visualize and analyze the content of chemical databases. Typically, CSNs are built using the Fruchterman-Reingold algorithm in which the distances between objects are determined for layout purposes. In this paper, for CNS construction the authors suggest to use the Kamada-Kawai algorithm providing with a graph in which the edges lengths correspond to similarity measures. Thus, CSNs obtained with the above algorithms provide with two complementary views of a chemical space. I believe that reported results are of the great interest for chemoinformatics community. The title is appropriate for the content of the article; the abstract represents a suitable summary of the work. I recommend indexing this paper as is.",
"responses": []
},
{
"id": "17426",
"date": "22 Nov 2016",
"name": "Gerhard Hessler",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nChemical space networks are an interesting method to visualize chemical space and to analyse SAR of chemical series. Recently, behavior of CSN dependent on data sets and parameter settings have been carefully analysed esp. by Prof. Bajorath and co-workers. Typically the layout of CSNs is optimized for visualization purposes and thus, distances between nodes do not have a chemical meaning. Here, a method is presented, which takes into account chemical distance information in generating the network layout, which is particularly helpful in SAR analysis. The effect of the algorithm is nicely illustrated, first with a model data set and then with real SAR data sets. In addition, a comparison between distances in the graph and similarity values shows, that the design goal to reflected compound similarity in the graph distance is achieved. The publication is well written, clearly structured and adds an interesting, valuable feature to CSNs. Some small additions might be considered by the authors. Instead of referencing the target Chembl ID in the text naming the target might make reading easier. Is it possible to discuss the effect of network density on layout and SAR interpretation, esp. for larger data sets?",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2634
|
https://f1000research.com/articles/5-2791/v1
|
29 Nov 16
|
{
"type": "Research Article",
"title": "Mutations of the CHEK2 gene in patients with cancer and their presence in the Latin American population",
"authors": [
"Sandra Guauque-Olarte",
"Ana-Lucia Rivera-Herrera",
"Laura Cifuentes-C",
"Sandra Guauque-Olarte",
"Ana-Lucia Rivera-Herrera"
],
"abstract": "Background: CHEK2 (Checkpoint Kinase 2) encodes CHK2, a serine/threonine kinase involved in maintaining the G1/S and G2/M checkpoints and repair of double-strand DNA breaks via homologous recombination. Functions of CHK2 include the prevention of damaged cells from going through the cell cycle or proliferating and the maintenance of chromosomal stability. CHEK2 mutations have been reported in a variety of cancers including glioblastoma, ovarian, prostate, colorectal, gastric, thyroid, and lung cancer in studies performed mainly in White populations. The most studied mutation in CHEK2 is c.1100delC, which was associated with increased risk of breast cancer. The objective of this study was to compile mutations in CHEK2 identified in cancer genomics studies in different populations and especially in Latin American individuals. Methods: A revision of cancer genomics data repositories and a profound literature review of Latin American studies was performed. Results: Mutations with predicted high impact in CHEK2 were reported in studies from Australia, Japan, United States, among other countries. The TCGA cancer types with most mutations in CHEK2 were breast, colorectal, and non-small cell lung cancer. The most common mutation found was E321* in three patients with uterine cancer. In Latin American individuals nine mutations were found in melanoma, lymphoma, and head and neck cohorts from TCGA and ICGC. Latin American studies have been restricted to breast and colorectal cancer and only two mutations out of four that have been interrogated in this population were identified, namely c.1100delC and c.349A>G. Conclusions: This study presents a compilation of mutations in CHEK2 with high impact in different cancer types in White, Hispanic and other populations. We also show the necessity of screening CHEK2 mutations in Latin American in cancer types different than breast and colorectal.",
"keywords": [
"CHEK2",
"CHK2",
"cancer",
"Latin America",
"databases",
"mutations",
"CHEK2*1100delC",
"genomics"
],
"content": "Introduction\n\nCHEK2 (Checkpoint Kinase 2) (OMIM +604373) encodes CHK2 a serine/threonine kinase that is the human homolog of Saccharomyces cerevisiae RAD53 and Schizosaccharomyces pombe CDS11. In mammalian cells, ATM activates CHK2 in response to ionizing radiation through phosphorylation. This leads to a variety of cellular responses, such as cell cycle checkpoint activation2, where CHK2 is involved in maintaining the G1/S and G2/M checkpoints by phosphorylation of CDC25A, CDC25C and p533 and in the repair of double-strand DNA breaks via homologous recombination (HR) through phosphorylation of BRCA14 and BRCA25. CHK2 is also involved in the induction of p53-dependent apoptosis through phosphorylation of p53 on Ser206, and, in a p53-independent manner, via phosphorylation of PML and E2F13. These responses prevent damaged cells from going through the cell cycle or proliferating. CHK2 also plays an important role during mitosis by maintaining chromosomal stability7.\n\nCHEK2 c.1000delC, a truncating mutation in exon 10 that abolishes kinase activity of the protein, was the first mutation being reported for this gene and was found in a woman with breast cancer and family history of Li-Fraumeni syndrome-28. The role of this mutation in breast cancer was confirmed by Meijers-Heijboer et al.9 and in several other studies10–22. Based on these studies, CHEK2 has been proposed as a moderate penetrance breast cancer susceptibility gene9 and mutations in this gene are associated with almost a 3-fold increase in the risk of breast cancer in women and a 10-fold increase in the risk of breast cancer in men23.\n\nGiven the role of CHEK2 in maintaining genomic stability and the fact that the CHEK2 protein is expressed in a wide range of tissues, it was not surprising that alterations in this protein were found in other cancers, including glioblastoma, ovarian, prostate, colorectal, gastric, thyroid, and lung cancer18,24–28. 
Studies of CHEK2 have mainly included individuals from the United States and Europe, while Latin American individuals were underrepresented. In order to infer the role of the CHEK2 gene in cancer etiology in the Latin American population, we compiled mutations in the CHEK2 gene registered in genomics data repositories and the literature that had been reported in this population.\n\n\nMethods\n\nMutations in CHEK2 were identified in The Exome Aggregation Consortium (ExAC, RRID:SCR_004068, http://exac.broadinstitute.org/)29 browser, the Cancer Genome Atlas (TCGA, RRID:SCR_003193)30 data sets extracted from the cBioPortal for Cancer Genomics (RRID:SCR_014555, http://www.cbioportal.org/)31, and The International Cancer Genome Consortium (ICGC) (http://icgc.org/)32. From the GWAS catalog (RRID:SCR_012745, https://www.ebi.ac.uk/gwas/)33 a list of SNPs mapped to CHEK2 and associated with a disease was also downloaded. Data obtained from cell line studies were not included.\n\nICGC, the cBioPortal and ExAC use prediction tools to assess the functional impact of non-synonymous (SO term: missense_variant) somatic mutations on protein coding genes. ICGC uses FatHMM (http://fathmm.biocompute.org.uk/)34, Mutation Assessor (RRID:SCR_005762)35 and SIFT (RRID:SCR_012813)36 to compute functional impact scores and assign impact categories (High, Medium, Low and Unknown). The cBioPortal uses Mutation Assessor and reports the same impact categories. We used those functional impact categories to filter the mutations and extract possible pathogenic mutations by selecting only high and medium impact mutations and nonsense alterations. The percentage of mutations in CHEK2 per cancer study and the percentage of cases altered per cancer type was also calculated. The filter used for the ExAC information was based on the annotation of possibly damaging and deleterious mutations made by two in silico tools: Polyphen2 (RRID:SCR_013200)37 and SIFT36. 
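The impact-category filter just described can be illustrated with a minimal sketch; the record fields and values below are invented for illustration and do not reflect the actual export schema of cBioPortal or ICGC:

```python
# Hypothetical mutation records mimicking database impact annotations
mutations = [
    {"protein_change": "E321*", "type": "nonsense", "impact": "high"},
    {"protein_change": "K373E", "type": "missense", "impact": "neutral"},
    {"protein_change": "T367Mfs", "type": "frameshift", "impact": "high"},
    {"protein_change": "I157T", "type": "missense", "impact": "low"},
]

def keep(m):
    """Apply the filter described above: retain high/medium-impact
    mutations plus all nonsense and frameshift alterations."""
    return m["impact"] in ("high", "medium") or m["type"] in ("nonsense", "frameshift")

kept = [m["protein_change"] for m in mutations if keep(m)]
assert kept == ["E321*", "T367Mfs"]
```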
The assessment of stop gained, splice site disrupting and frameshift variants was made through the Loss of Function Transcript Effect Estimator (LOFTEE), a plugin of the Ensembl Variant Effect Predictor (VEP) (RRID:SCR_007931)38. The Latino annotation was examined in the databases that reported ethnicity data; this search was done before filtering the datasets, with the purpose of reporting all genetic alterations found in Latin American populations.\n\nThe plots were generated with R version 3.3.1 (RRID:SCR_001905)39.\n\nIn order to include all the studies identifying CHEK2 gene mutations in Latin America, a thorough literature search was conducted using the terms “CHEK2”, “CHEK2 Latin America”, and “CHEK2 cancer” in electronic academic literature search engines. PubMed (RRID:SCR_004846) was the main database used, followed by Google Scholar (RRID:SCR_008878). References of the retrieved articles were also screened for relevant studies. This search strategy was performed iteratively up to and including 10 October 2016.\n\n\nResults\n\nThe complete list of mutations in CHEK2 reported in the cBioPortal and ICGC, before applying filters, is available in Dataset 1 and Dataset 2, respectively.\n\ncBioPortal. The available data sets consisted of 147 studies that included only cancer samples. Mutations in CHEK2 were reported in 39 out of the 147 studies. 
Before applying filters, cholangiocarcinoma (8.6%), uterine carcinosarcoma (7.0%), and colorectal adenocarcinoma (6.9%) were the types of cancer that showed the highest percentages of mutated cases (Figure 1); meanwhile, breast, colorectal and non-small cell lung cancer (NSCLC) had more mutations in CHEK2 than other cancer types (Figure 2).\n\nThe X axis shows the type of cancer in which at least one case has a mutation in CHEK2, the Y axis indicates the percentage of cases per study that have mutations in CHEK2 (source: cBioPortal).\n\nThe X axis shows the type of cancer in which at least one mutation in CHEK2 was identified, the Y axis indicates the percentage of mutations in CHEK2 per cancer type (source: cBioPortal). n unique mutations = 159. Synonymous mutations are not included in the cBioPortal database.\n\nUsing the Mutation Assessor annotation from the cBioPortal, we filtered out mutations labeled as having neutral or low impact. Table 1 reports the mutations with high and medium impact, together with nonsense and frameshift mutations: 78 mutations remained after the filtering process, 38 of which were classified as high impact. 51.2% of mutations were missense mutations, 20.5% were frameshift mutations, 19.2% were nonsense mutations and 9% were in splice sites. The type of cancer with the most mutations (13/78) was breast cancer, followed by uterine, lung, and colorectal cancer. The rest of the cancer types had six or fewer mutations. The most frequent mutation was E321*, reported in three patients with uterine cancer.\n\nBefore filtering the mutations found in the cBioPortal, we identified Latino individuals using the ethnicity data obtained from the TCGA clinical data available at the NCI's Genomic Data Commons portal (GDC, RRID:SCR_014514, https://gdc-portal.nci.nih.gov/) (Table 2). Two patients with three mutations in the gene were found. 
One of the samples was a Latino patient from the head and neck squamous cell carcinoma cohort (HNSC); this patient carries the neutral variant K373E. Because this is a neutral variant it was not included in Table 2. The second Latino patient was part of the diffuse large B-cell lymphoma (DLBC) cohort; this patient carries a frameshift and a nonsense mutation.\n\n*The nomenclature used for the mutation annotation is as follows: ICGC (ENST00000328354), ExAC (NP_665861) and TCGA (NP_009125).\n\n\nICGC\n\nA total of 279 mutations including up- and down-stream mutations were reported in 185 donors. Of these, seven mutations were predicted to have high impact (Table 3). For the Latin American population in ICGC, the Brazilian melanoma study (SKCA-BR) reported four mutations inside the gene, one of them with high impact (Table 2 and Table 3).\n\n*Depending on the transcript. All mutations are single base substitutions. MELA-AU: melanoma, Australia. BRCA-EU: breast ER+ and HER- cancer, European Union. ESAD-UK: esophageal adenocarcinoma, United Kingdom. SKCA-BR: skin adenocarcinoma, Brazil. LINC-JP: liver cancer, Japan. BRCA-FR: breast cancer, France.\n\n\nExAC browser\n\nA total of 742 mutations for the CHEK2 gene were reported in this database and 132 of them were present in the Latino population before filtering (Dataset 3). After applying the filter of possibly damaging and deleterious alterations, 23 mutations in the Latino population were left. In this group, the mutation p.Leu279Pro was the most frequent (0.003112). CHEK2 c.1100delC (p.Thr410MetfsTer15*), the most interrogated mutation in CHEK2, was found in two samples (Table 2).\n\n\nGWAS catalog\n\nMutations rs132390-C and rs17879961-A mapped to or near CHEK2 were associated in European populations with breast and lung cancer, respectively. Mutations rs4822983-T and rs2239815-T were associated with esophageal squamous cell carcinoma in individuals with Han Chinese ancestry. 
In addition, in a Han Chinese cohort of esophageal and gastric cancer the mutation rs738722-T was also associated with those cancers (Dataset 4).\n\n\nCHEK2 mutations in Latinos reported in the literature\n\nIn total, we found nine studies in which mutations in CHEK2 were evaluated in Latino populations. Two of these studies were international and included Latin American cancer patients10,22, while the other seven studies were country-based. The country in which most studies have been performed was Brazil, with four studies40–43. In Argentina44, Chile45, and Mexico46 one study per country was identified. In eight out of the nine studies, the presence of variants in CHEK2 was interrogated in breast cancer patients. Only one study used samples of patients with hereditary breast and colorectal cancer. The mutation most frequently evaluated in these investigations was c.1100delC (in six studies), while two other studies42,44 interrogated the other two most frequent mutations in the CHEK2 gene (c.470T>C and c.444+1G>A) in addition to c.1100delC. Additionally, Chaudury et al. performed a complete sequencing of the gene and found a different mutation, c.478A>G (p.Arg160Gly)46. Table 4 shows the Latin American studies that reported the presence of CHEK2 mutations and their frequencies.\n\n\nDiscussion\n\nA search in cancer genomics data repositories and the literature was performed to identify mutations in CHEK2 in different cancer types, with specific emphasis on mutations found in Latin American populations. The database with the largest number of mutations reported in CHEK2 for Latino populations was ExAC with 132 mutations, followed by ICGC with four mutations, and TCGA with three mutations. After filtering, 30 mutations with high and medium impact according to the databases’ functional impact categories were kept: seventeen missense, eight ‘stop gain’ mutations, one frameshift mutation, two mutations in the 5’UTR, and two mutations in splice donor sites of CHEK2. 
These mutations included the most analyzed mutation of CHEK2, c.1100delC (p.Thr367Metfs) (Table 2).\n\nWorldwide, according to our findings in the ICGC and TCGA databases, CHEK2 mutations were reported in 23 cancer types, while in the Latin American population CHEK2 mutations were only found in head and neck cancer, lymphoma and melanoma. In this context, it is important to highlight that Latino populations have been underrepresented in worldwide studies. As shown in Dataset 5, the cohorts of TCGA are biased toward the inclusion of white individuals, and individuals from other ethnicities are underrepresented. The same was observed in ICGC, in which only one Latin American cohort, from Brazil, was available for our analysis. Regarding the data found in our literature review, CHEK2 has only been studied in the Latin American population in breast and colorectal cancer.\n\nIn the ExAC repository, the mutations c.1100delC and c.478A>G were found twice and once, respectively, in the Latino population (Dataset 3). In TCGA, c.1100delC was found in a patient with breast cancer, but information about this patient’s ethnicity was not available (Table 1). Up to now, only nine studies evaluating mutations in CHEK2 have been performed in Latin America and only six of them found mutations in the gene: five studies found the c.1100delC mutation and one found c.478A>G (p.Arg160Gly)10,22,40,43,46. Two mutations, c.1100delC and c.478A>G, were classified in the ClinVar archive (https://www.ncbi.nlm.nih.gov/clinvar/) as pathogenic and likely pathogenic, respectively. These mutations are the only ones in common with those found in the genomics data repositories.\n\nAlthough c.1100delC is the CHEK2 mutation most evaluated in the Latin American population, it should be noted that its frequency, as seen from literature reports and data repositories, is rather low. 
Because the highest frequency of this mutation is found in populations from Northern and Western Europe, c.1100delC is proposed as an allele with a population gradient: it originated in these populations and its frequency decreases toward the southern regions of Europe (Basque Country, Spain, and Italy)47. Taking into account the European genetic component of Latin American populations, it is expected that if the frequency of c.1100delC is low in the Spanish population, in our admixed populations the frequency would be even lower.\n\nBecause cancer types other than breast and colorectal cancer, such as uterine, lung, bladder and head and neck cancer, presented mutations in CHEK2 in several populations, it is relevant to extend the search for mutations to these types of cancer in Latin American populations. Additionally, the interrogation of CHEK2 mutations in the Latin American population has focused mainly on the c.1100delC mutation, but the data obtained from the ExAC database showed that in Latin American samples there are 23 germline mutations (Table 2) that could confer cancer susceptibility. It would therefore be important to examine the frequencies of these mutations in the Latin American population and their association with the development of cancer.\n\nThis study has limitations; for example, information about race and ethnicity was not available for at least 28 studies in the cBioPortal, and consequently some Latinos may be hidden in those studies. Thus, the small number of Latinos included in the genomics data repositories could be a reason why we found a small number of mutations in CHEK2 in this population. It is important to highlight that the use of different transcripts for reporting mutations makes correlating mutations found in different studies laborious.\n\nThis study presents a compilation of mutations in CHEK2 with high impact in different cancer types in White, Hispanic and other populations. 
We also showed the necessity of performing studies in Latin America on cancer types other than breast and colorectal cancer, and of screening other mutations in addition to the most commonly analyzed ones, such as c.1100delC.\n\n\nData availability\n\nF1000Research: Dataset 1: A complete list of mutations, before applying filters, in CHEK2 reported in the cBioPortal 10.5256/f1000research.9932.d14212948.\n\nF1000Research: Dataset 2: A complete list of mutations, before applying filters, in CHEK2 reported in the ICGC 10.5256/f1000research.9932.d14213049.\n\nF1000Research: Dataset 3: Mutations in CHEK2 identified in Latino American samples before applying filters (source: ExAC) 10.5256/f1000research.9932.d14213150.\n\nF1000Research: Dataset 4: Variants reported in CHEK2 that have been associated with cancer according to data in the GWAS catalog. All of these variants were found in the cBioPortal or ICGC data 10.5256/f1000research.9932.d14213251.\n\nF1000Research: Dataset 5: Number of individuals per cancer study and ethnicity in the TCGA cohort. Only studies in which at least one mutation in CHEK2 was found were included 10.5256/f1000research.9932.d14213352.",
"appendix": "Author contributions\n\n\n\nConception and design of the work: CCL, GOS and RHAL. Data collection: RHAL and GOS. Data analysis: CCL, GOS and RHAL. Drafting of the article and critical revision: CCL, GOS, and RHAL. All authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nCCL received funding from CONADI-Universidad Cooperativa de Colombia (Grant ID1450).\n\n\nReferences\n\nBrown AL, Lee CH, Schwarz JK, et al.: A human Cds1-related kinase that functions downstream of ATM protein in the cellular response to DNA damage. Proc Natl Acad Sci U S A. 1999; 96(7): 3745–50. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZannini L, Delia D, Buscemi G: CHK2 kinase in the DNA damage response and beyond. J Mol Cell Biol. 2014; 6(6): 442–57. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJekimovs CR, Chen X, Arnold J, et al.: Low frequency of CHEK2 1100delC allele in Australian multiple-case breast cancer families: functional analysis in heterozygous individuals. Br J Cancer. 2005; 92(4): 784–90. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLee JS, Collins KM, Brown AL, et al.: hCds1-mediated phosphorylation of BRCA1 regulates the DNA damage response. Nature. 2000; 404(6774): 201–4. PubMed Abstract | Publisher Full Text\n\nBahassi EM, Ovesen JL, Riesenberg AL, et al.: The checkpoint kinases Chk1 and Chk2 regulate the functional associations between hBRCA2 and Rad51 in response to DNA damage. Oncogene. 2008; 27(28): 3977–85. PubMed Abstract | Publisher Full Text\n\nShieh SY, Ahn J, Tamai K, et al.: The human homologs of checkpoint kinases Chk1 and Cds1 (Chk2) phosphorylate p53 at multiple DNA damage-inducible sites. Genes Dev. 2000; 14(3): 289–300. 
PubMed Abstract | Free Full Text\n\nStolz A, Ertych N, Kienitz A, et al.: The CHK2-BRCA1 tumour suppressor pathway ensures chromosomal stability in human somatic cells. Nat Cell Biol. 2010; 12(5): 492–9. PubMed Abstract | Publisher Full Text\n\nBell DW, Varley JM, Szydlo TE, et al.: Heterozygous germ line hCHK2 mutations in Li-Fraumeni syndrome. Science. 1999; 286(5449): 2528–31. PubMed Abstract | Publisher Full Text\n\nMeijers-Heijboer H, van den Ouweland A, Klijn J, et al.: Low-penetrance susceptibility to breast cancer due to CHEK2*1100delC in noncarriers of BRCA1 or BRCA2 mutations. Nat Genet. 2002; 31(1): 55–9. PubMed Abstract | Publisher Full Text\n\nBell DW, Kim SH, Godwin AK, et al.: Genetic and functional analysis of CHEK2 (CHK2) variants in multiethnic cohorts. Int J Cancer. 2007; 121(12): 2661–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCHEK2 Breast Cancer Case-Control Consortium: CHEK2*1100delC and susceptibility to breast cancer: a collaborative analysis involving 10,860 breast cancer cases and 9,065 controls from 10 studies. Am J Hum Genet. 2004; 74(6): 1175–82. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCybulski C, Wokolorczyk D, Huzarski T, et al.: A deletion in CHEK2 of 5,395 bp predisposes to breast cancer in Poland. Breast Cancer Res Treat. 2007; 102(1): 119–22. PubMed Abstract | Publisher Full Text\n\nDe Jong MM, van der Graaf W, Nolte IM: Increased CHEK2 1100delC genotype frequency (also) in unselected breast cancer patients. J Clin Oncol. 2004; 22(suppl): 844s. Reference Source\n\nGhadirian P, Robidoux A, Zhang P, et al.: The contribution of founder mutations to early-onset breast cancer in French-Canadian women. Clin Genet. 2009; 76(5): 421–6. PubMed Abstract | Publisher Full Text\n\nKleibl Z, Novotny J, Bezdickova D, et al.: The CHEK2 c.1100delC germline mutation rarely contributes to breast cancer development in the Czech Republic. Breast Cancer Res Treat. 2005; 90(2): 165–7. 
PubMed Abstract | Publisher Full Text\n\nOffit K, Pierce H, Kirchhoff T, et al.: Frequency of CHEK2*1100delC in New York breast cancer cases and controls. BMC Med Genet. 2003; 4: 1. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRashid MU, Jakubowska A, Justenhoven C, et al.: German populations with infrequent CHEK2*1100delC and minor associations with early-onset and familial breast cancer. Eur J Cancer. 2005; 41(18): 2896–903. PubMed Abstract | Publisher Full Text\n\nThompson D, Seal S, Schutte M, et al.: A multicenter study of cancer incidence in CHEK2 1100delC mutation carriers. Cancer Epidemiol Biomarkers Prev. 2006; 15(12): 2542–5. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVahteristo P, Bartkova J, Eerola H, et al.: A CHEK2 genetic variant contributing to a substantial fraction of familial breast cancer. Am J Hum Genet. 2002; 71(2): 432–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWeischer M, Bojesen SE, Ellervik C, et al.: CHEK2*1100delC genotyping for clinical assessment of breast cancer risk: meta-analyses of 26,000 patient cases and 27,000 controls. J Clin Oncol. 2008; 26(4): 542–8. PubMed Abstract | Publisher Full Text\n\nWeischer M, Bojesen SE, Tybjaerg-Hansen A, et al.: Increased risk of breast cancer associated with CHEK2*1100delC. J Clin Oncol. 2007; 25(1): 57–63. PubMed Abstract | Publisher Full Text\n\nZhang S, Phelan CM, Zhang P, et al.: Frequency of the CHEK2 1100delC mutation among women with breast cancer: an international study. Cancer Res. 2008; 68(7): 2154–7. PubMed Abstract | Publisher Full Text\n\nNarod SA: Testing for CHEK2 in the cancer genetics clinic: ready for prime time? Clin Genet. 2010; 78(1): 1–7. PubMed Abstract | Publisher Full Text\n\nBak A, Janiszewska H, Junkiert-Czarnecka A, et al.: A risk of breast cancer in women - carriers of constitutional CHEK2 gene mutations, originating from the North - Central Poland. Hered Cancer Clin Pract. 2014; 12(1): 10. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nIngvarsson S, Sigbjornsdottir BI, Huiping C, et al.: Mutation analysis of the CHK2 gene in breast carcinoma and other cancers. Breast Cancer Res. 2002; 4(3): R4. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSiołek M, Cybulski C, Gasior-Perczak D, et al.: CHEK2 mutations and the risk of papillary thyroid cancer. Int J Cancer. 2015; 137(3): 548–52. PubMed Abstract | Publisher Full Text\n\nTeodorczyk U, Cybulski C, Wokołorczyk D, et al.: The risk of gastric cancer in carriers of CHEK2 mutations. Fam Cancer. 2013; 12(3): 473–8. PubMed Abstract | Publisher Full Text\n\nZhang P, Wang J, Gao W, et al.: CHK2 kinase expression is down-regulated due to promoter methylation in non-small cell lung cancer. Mol Cancer. 2004; 3: 14. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLek M, Karczewski KJ, Minikel EV, et al.: Analysis of protein-coding genetic variation in 60,706 humans. Nature. 2016; 536(7616): 285–91. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCancer Genome Atlas Research Network, Weinstein JN, Collisson EA, et al.: The Cancer Genome Atlas Pan-Cancer analysis project. Nat Genet. 2013; 45(10): 1113–20. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCerami E, Gao J, Dogrusoz U, et al.: The cBio cancer genomics portal: an open platform for exploring multidimensional cancer genomics data. Cancer Discov. 2012; 2(5): 401–4. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZhang J, Baran J, Cros A, et al.: International Cancer Genome Consortium Data Portal--a one-stop shop for cancer genomics data. Database (Oxford). 2011; 2011: bar026. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWelter D, MacArthur J, Morales J, et al.: The NHGRI GWAS Catalog, a curated resource of SNP-trait associations. Nucleic Acids Res. 2014; 42(Database issue): D1001–6. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nShihab HA, Gough J, Cooper DN, et al.: Predicting the functional, molecular, and phenotypic consequences of amino acid substitutions using hidden Markov models. Hum Mutat. 2013; 34(1): 57–65. PubMed Abstract | Publisher Full Text | Free Full Text\n\nReva B, Antipin Y, Sander C: Predicting the functional impact of protein mutations: application to cancer genomics. Nucleic Acids Res. 2011; 39(17): e118. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNg PC, Henikoff S: Predicting deleterious amino acid substitutions. Genome Res. 2001; 11(5): 863–74. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAdzhubei IA, Schmidt S, Peshkin L, et al.: A method and server for predicting damaging missense mutations. Nat Methods. 2010; 7(4): 248–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcLaren W, Gil L, Hunt SE, et al.: The Ensembl Variant Effect Predictor. Genome Biol. 2016; 17(1): 122. PubMed Abstract | Publisher Full Text | Free Full Text\n\nR Core Team: R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria, 2013. Reference Source\n\nAbud J, Koehler-Santos P, Ashton-Prolla P, et al.: CHEK2 1100DELC germline mutation: a frequency study in hereditary breast and colon cancer Brazilian families. Arq Gastroenterol. 2012; 49(4): 273–8. PubMed Abstract | Publisher Full Text\n\nCarraro DM, Koike Folgueira MA, Garcia Lisboa BC, et al.: Comprehensive analysis of BRCA1, BRCA2 and TP53 germline mutation and tumor characterization: a portrait of early-onset breast cancer in Brazil. PLoS One. 2013; 8(3): e57581. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFelix GE, Abe-Sandes C, Machado-Lopes TM, et al.: Germline mutations in BRCA1, BRCA2, CHEK2 and TP53 in patients at high-risk for HBOC: characterizing a Northeast Brazilian Population. Hum Genome Var. 2014; 1: 14012. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nPalmero EI, Alemar B, Schuler-Faccini L, et al.: Screening for germline BRCA1, BRCA2, TP53 and CHEK2 mutations in families at-risk for hereditary breast cancer identified in a population-based study from Southern Brazil. Genet Mol Biol. 2016; 39(2): 210–22. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJablonski P, Alterman L, Pastene E, et al.: Argentinean Jewish population frequencies for common mutations in BRCA1, BRCA2, and CHEK2. Journal of Clinical Oncology. [Abstract]. 2014; 32(15_suppl): 1539. Reference Source\n\nGonzalez-Hormazabal P, Castro VG, Blanco R, et al.: Absence of CHEK2 1100delC mutation in familial breast cancer cases from a South American population. Breast Cancer Res Treat. 2008; 110(3): 543–5. PubMed Abstract | Publisher Full Text\n\nChaudhury A, Laukaitis C, Mauss C, et al.: Abstract P3-07-05: Frequent BRCA1 and BRCA2 mutations are found in Mexican and Mexican-American women with breast cancer. Cancer Research. 2013; 73(24_suppl). Publisher Full Text\n\nMartínez-Bouzas C, Beristain E, Guerra I, et al.: CHEK2 1100delC is present in familial breast cancer cases of the Basque Country. Breast Cancer Res Treat. 2007; 103(1): 111–3. PubMed Abstract | Publisher Full Text\n\nGuauque-Olarte S, Rivera-Herrera AL, Cifuentes-C L: Dataset 1 in: Mutations of the CHEK2 gene in patients with cancer and their presence in the Latin American population. F1000Research. 2016. Data Source\n\nGuauque-Olarte S, Rivera-Herrera AL, Cifuentes-C L: Dataset 2 in: Mutations of the CHEK2 gene in patients with cancer and their presence in the Latin American population. F1000Research. 2016. Data Source\n\nGuauque-Olarte S, Rivera-Herrera AL, Cifuentes-C L: Dataset 3 in: Mutations of the CHEK2 gene in patients with cancer and their presence in the Latin American population. F1000Research. 2016. 
Data Source\n\nGuauque-Olarte S, Rivera-Herrera AL, Cifuentes-C L: Dataset 4 in: Mutations of the CHEK2 gene in patients with cancer and their presence in the Latin American population. F1000Research. 2016. Data Source\n\nGuauque-Olarte S, Rivera-Herrera AL, Cifuentes-C L: Dataset 5 in: Mutations of the CHEK2 gene in patients with cancer and their presence in the Latin American population. F1000Research. 2016. Data Source"
}
|
[
{
"id": "20668",
"date": "03 Mar 2017",
"name": "Claire Palles",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations: A number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nSandra Guauque-Olarte et al. provide an overview of both somatic and germline mutations in CHEK2 that have been identified in Latin-American populations. The authors interrogate cBioPortal and ICGC databases to identify somatic mutations and ExAC and a review of existing literature to identify germline mutations.\n\nMy reservations about the manuscript are as follows:\n\nCurrently the authors do not make it clear throughout the manuscript whether they are describing somatic mutations or germline mutations/variants. Please add a column to Table 2 to show clearly which are somatic and which are germline.\n\nIn the abstract it says: “Latin American studies have been restricted to breast and colorectal cancer and only two mutations out of four that have been interrogated in this population were identified, namely c.1100delC and c.349A>G”. Table 4, which lists the mutations reported in the literature in Latin American studies, does not show the c.349A>G mutation but a c.478A>G mutation, and I can see no further mention of c.349A>G in the rest of the manuscript. Please resolve this.\n\nResults: The text description of the difference between Figure 1 and 2 at the start of the results section is unclear. As far as I can see Figure 1 shows data per cancer type in TCGA and Figure 2 shows data per study in TCGA. 
I don’t see the need to have both figures; Figure 1 is sufficient and the text should read “breast, colorectal and non-small cell lung cancer had more CHEK2 mutations than other cancer types”. At the start of the results section the authors describe mutations “before filtering”. Please be clearer and state before filtering steps to include only likely functional mutations.\nOn page 5 the sentence beginning “The type of cancer with the most mutations….” should read “After filtering for likely functional variants the cancers with the highest numbers of mutations in CHEK2 were breast followed by uterine, non-small cell (?) lung and colorectal”.\nTable 1 describes mutations in non-Hispanic/Latino samples. A98Mfs*13 and Q100*, which are found in a white Hispanic or Latino sample, should be removed (none of the other mutations in Latin American populations are in Table 1). The ethnicity column in Table 1 also needs to be formatted properly: remove duplicated words and “_” between words.\nGWAS catalogue section in results: need to insert P-values for the associations that you report. Table 4: insert OR and P-values for associations.\nDataset 5: the %s should not have a “–” in front of them; it adds confusion as to what these values are.\n\nDiscussion:\n\nThe authors make reference to a CHEK2 1100delC mutation picked up in the TCGA datasets and refer to Table 1. I can’t find 1100delC in Table 1; I can only find it in Table 2 in ExAC. Please clarify.\n\nThe authors state that other patients with cancer types such as uterine, lung, bladder and head and neck cancer should be screened for CHEK2 mutations. Here they are trying to show that because a gene is somatically mutated in a particular cancer type, there might also be a germline mutation that increases predisposition. Some of the mutations listed in Tables 1 and 2 (mutations post filtering for likely functional impact) are missense or UTR, and so it would be important to show that these somatic mutations are functional. 
Could the authors please annotate the TCGA/ICGC mutations with information on which domain they map to.\n\nThe authors state there are 23 germline mutations which could cause cancer susceptibility. Were any of these examined for a functional effect on CHEK2 in the paper by Bell et al. 20071 or other studies of variants in CHEK2 on protein activity? I think it would be important to include this and to also state that functional assays would be helpful to determine which of these should be screened for in Latin American and other populations.",
"responses": []
},
{
"id": "21485",
"date": "03 Apr 2017",
"name": "Muhammad Usman Rashid",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe manuscript by Guauque-Olarte and colleagues is an overview of the CHEK2 variants reported in the Latin American population, searched from the literature or the cBioPortal, ICGC and ExAC databases. Overall the concept of the manuscript is interesting; however, the data are poorly presented and the scientific writing is not up to the mark. The manuscript title also needs modification, for example “An overview of CHEK2 variants associated with cancer in Latin American population”.\nI have the following reservations about the manuscript:\nFor missense variants or variants in 5’UTRs it is suggested to write “DNA sequence variants” instead of “mutations” throughout the manuscript, so that these can be differentiated from clearly pathogenic mutations, i.e. frameshift, nonsense or splice site mutations. As the study objective was to compile the CHEK2 mutations reported in Latin Americans, Table 1 describes the CHEK2 variants identified in other populations, or the ethnicity is even unknown for the majority of the variants presented in this table. The table is also not presented properly. It is suggested to omit this table or present it as a “Dataset”, and just mention in the text that 78 deleterious or potentially deleterious mutations were reported in TCGA studies. The authors did not state the origin (somatic or germline) of the CHEK2 variants presented in the tables. It would be of interest if a column were added to all tables for this information. 
Results section: Data presented in Figure 1 and Figure 2 are not concordant as mentioned in the text. Please resolve this issue.\n\nResults section: “…..after the filtering process, 38 of which were classified as with high impact”. It is not clear which those 38 nucleotide variants are in Table 1. Please add a column for this information.\n\nResults section: The paragraph “Two patients with three mutations …….this patient carry a frameshift and a nonsense mutation” is confusing. Is the patient with DLBC a compound heterozygote for a frameshift and a nonsense CHEK2 mutation, simultaneously?\n\nTable 2: Column Genomic DNA change: The nucleotide change can’t be seen in this column, there is just the nucleotide position. Please modify this column.\n\nResults section, GWAS catalog: Authors should be cautious about whether the SNPs rs132390-C and rs2239815-T are present in the CHEK2 gene or not.\n\nTable 2: Two variants in the 5’UTR are not clear; the population is also not mentioned.\n\nTable 2: “Effect” column; please correct that stop gain mutations are also called nonsense mutations.\n\nTable 2: The data in the table are not presented properly. c.1590+2T>G and c.573+2T>G are the nucleotide changes and these are presented in the column AA change. The authors should follow HGVS nomenclature, both for the nucleotide change and the AA change. There should be a column for the pathogenicity of missense mutations (high or medium impact) in this table.\n\nTable 3: Column Consequences: I think there is no need to mention the amino acid change referring to all CHEK2 transcripts. Just follow the GenBank reference sequence for transcript variant 1 for reporting the nucleotide or AA change and follow the HGVS nomenclature.\n\nDiscussion, paragraph 1: “….eight stop gain mutations, one frameshift mutation…” Please correct: there are four stop gain mutations (also called nonsense mutations) and five frameshift mutations.",
"responses": []
},
{
"id": "22138",
"date": "24 Apr 2017",
"name": "Ewa Grzybowska",
"expertise": [
"Reviewer Expertise cancer genetics",
"molecular biology of carcinogenesis",
"epidemiology of cancer",
"pharmacogenetics"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe authors worked out the compilation of germline mutations in the CHEK2 gene in patients diagnosed with different cancer types and in different populations, focusing on the Latin American population. CHEK2 mutations have been linked with Li-Fraumeni syndrome; germline mutations are also thought to confer a predisposition to sarcomas, breast cancer and brain tumors. The most frequent CHEK2 mutation, c.1100delC, is a low-penetrance mutation and has a low impact on the risk of breast and other cancers. The rest of the mutations or SNPs are much less clearly connected with cancer risk. Therefore, it is difficult to evaluate the increase in risk of different cancers for carriers of germline mutations in the CHEK2 gene. It is particularly difficult to do so if there are only one or two carriers of these mutations in the population under study.\n\nI totally agree with both reviewers, especially on two issues:\nThe text is written in such a way that the reader may think the authors analyzed somatic mutations in the CHEK2 gene in different cancer types, whereas they searched for germline mutations. In Table 4 they indicate that blood was the tissue used to analyze mutations, so the text should be rewritten so that there is no doubt that germline mutations were under study. Tables 1 and 2 should also be changed. The description of the ethnic minorities is strange. 
In Table 2 the data from the ExAC database do not contain information about the disease connected with the mutation, so it does not make sense to include these data if the title of the manuscript is “Mutations of the CHEK2 gene in the patients with cancer…”; these data should be excluded from the analysis because they do not bring any important information about CHEK2 mutations in different cancer sites.\n\nIs the work clearly and accurately presented and does it cite the current literature? No\n\nIs the study design appropriate and is the work technically sound? No\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Yes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2791
|
https://f1000research.com/articles/5-2784/v1
|
29 Nov 16
|
{
"type": "Research Article",
"title": "In vitro comparison of three earwax removal formulations for the disintegration of earwax",
"authors": [
"Janice Knebl",
"Barbara Harty",
"C. Eric Anderson",
"W. Dennis Dean",
"Joseph Griffin",
"Janice Knebl",
"Barbara Harty",
"C. Eric Anderson",
"W. Dennis Dean"
],
"abstract": "Introduction: Impacted cerumen is a widespread reason that patients visit their health care providers. It affects approximately 2-6% of the general population and disproportionately impacts up to 65% of patients over 65. This study compared a new cerumen (earwax) removal product (Solution 1; EOS-002; a glycolic acid/bicarbonate formulation) versus two commercially available products (Solution 2 and Solution 3; both containing carbamide peroxide 6.5%) for their cerumenolytic activity in vitro. Methods: Samples of human cerumen were placed in 10 x 75 mm polypropylene test tubes. Approximately 1 mL of each test solution was added and incubated at room temperature for 30 minutes. The vials were shaken at the 15- and 30-minute time points to simulate rinsing in a clinical setting. Breakdown of the cerumen was graded at 5-, 10-, 15-, and 30-minute time points in a masked manner on a 5-point scale (Grade 0 = no change; Grade 4 = complete disintegration). Results: Significantly greater disintegration of the cerumen was observed in the samples exposed to EOS-002 at every time point (P < 0.0001). At 5 minutes, disintegration was observed in 39 out of 43 samples exposed to EOS-002, 0 out of 24 samples exposed to Solution 2, and 1 out of 19 samples exposed to Solution 3. Mean disintegration scores at 5, 10, 15, and 30 minutes were 1.65, 2.38, 2.95, and 3.24 for EOS-002; 0, 0, 0, and 0.2 for Solution 2; and 0.05, 0.13, 0.16, and 0.21 for Solution 3, respectively. Discussion: EOS-002 exhibited a significantly greater ability to break down cerumen than the two other products. Disintegration of cerumen occurred with EOS-002 within 5 minutes in 91% (39/43) of the samples. Therefore, EOS-002 provides rapid disintegration of human cerumen in vitro.",
"keywords": [
"Earwax",
"Cerumen",
"Cerumenolytic",
"Cerumen impaction"
],
"content": "Introduction\n\nThe excess accumulation of cerumen (earwax) is a common cause for patients to seek treatment by a general physician, family physician, or otolaryngologist1. At least 8 million ear irrigations are performed each year for this condition2. Cerumen impaction is estimated to affect between 2 and 6% of the general population in the United States. As many as 65% of individuals over 65 years of age and up to 36% of those with mental retardation experience cerumen impactions2–5.\n\nCerumen impaction has important clinical implications in terms of the general well-being of patients and may be associated with temporary hearing loss, pain, itching, tinnitus, external otitis, vertigo, and even chronic cough5. Cerumen impaction can temporarily decrease hearing acuity by as much as 45 dB6. For the elderly, this hearing impairment can have a negative impact on quality of life by causing difficulties with communication, cognition, social isolation, anxiety, depression, and even physical mobility1,7,8. All too often, decreased hearing with advancing age, either gradual or acute, is perceived by the patients and/or their caregivers as a natural, almost expected, phenomenon, which does not warrant intervention1. However, studies have shown that hearing is significantly improved following the removal of impacted cerumen4,9.\n\nThere are currently several commercially available cerumen removal products. These products include oil-based (e.g., almond oil), water-based (e.g., acetic acid), and non-water, non-oil-based (e.g., propylene glycol) preparations10. Unfortunately, these preparations are minimally effective at disintegrating cerumen impactions and often require multiple doses per day over several days to achieve satisfactory results11,12.\n\nNone of the agents that are currently available has shown a clear advantage in terms of efficacy in removing cerumen2,10,13,14. 
Previous studies have found that these products are often less effective or no better than deionized water12,15. Moreover, they typically clear cerumen less than half of the time10,16. Systematic reviews have found no topical cerumenolytic clearly superior to any other or to saline or sterile water10,13,14.\n\nThese results have prompted the search for a better cerumenolytic agent, and we have identified ingredients that could quickly, effectively, and safely break down or dissolve human cerumen when combined. Consequently, a new product has been developed, which benefits from a dual-action mechanism for breaking down human cerumen. The current study compared the new cerumen removal product (EOS-002) with two commercially available products for their ability to break down or disintegrate samples of human cerumen in vitro.\n\n\nMethods\n\nInstitutional Board Approval of the University of North Texas Health Science Center (UNTHSC) and patient informed consent were obtained prior to commencement of this study.\n\nHuman cerumen samples (approximately 30 to 50 μg each) were placed in 10 x 75 mm polypropylene test tubes at room temperature. The samples were taken without restriction in terms of patient characteristics. The physician utilized a curette to remove the cerumen from the subject's outer ear canal. The samples were placed in small plastic storage tubes with lids, labeled with date of extraction along with a general description of the physical characteristics (dry, wet or mixed). The samples were required to be at least 30 μg in size. Approximately 1 mL of each test solution was added to each test tube, and the samples were incubated at room temperature for 30 minutes, with grading recorded at 5, 10, 15 and 30 minutes. Photographs were taken for representative samples at 2.5-minute intervals. Each comparison for each time point was performed in replicate tubes (n = 24 or 19). 
The sample size was driven by the availability of subjects willing to participate in the collection trial. A total of 86 cerumen samples were available during the duration of the testing. The comparative products were used as controls, as these products are well recognized by physicians and consumers.\n\nThe samples were graded at the 5-minute and 10-minute time points, without moving the tubes. However, the test tubes were shaken at the 15-minute and 30-minute time points to simulate the rinse procedure that would normally occur in the clinical use setting.\n\nThe test solutions were as follows:\n\n• Solution 1 - Glycolic acid/bicarbonate formulation (EOS-002; Eosera Inc., Fort Worth, TX; 2016)\n\n• Solution 2 - Carbamide peroxide 6.5% (Debrox, Prestige Brands, Tarrytown, NY; 2016)\n\n• Solution 3 - Carbamide peroxide 6.5% (Murine Earwax Removal System, Prestige Brands, Tarrytown, NY; 2016)17\n\nA grader (affiliated with the sponsor company), blinded as to the identity of the test solutions, assessed the disintegration (breakdown) of cerumen at 5, 10, 15, and 30 minutes. A 5-point disintegration grading scale was developed for assessing the effects of different formulations on human cerumen (Table 1). This grading scale was adapted from those of Jimenez et al.18 and Fraser19.\n\nAdapted from Jimenez et al., 200818 and Fraser, 197019.\n\nMeans and standard deviations were calculated for each treatment group at the 5-, 10-, 15-, and 30-minute time points. Between-group comparisons were performed using Student’s t test. A P value of ≤ 0.05 denoted a statistically significant difference between treatment groups. Statistical analysis was conducted with Microsoft Excel for Mac 2011, version 14.6.0.\n\n\nResults\n\nFor the comparison between EOS-002 and Solution 2, 24 samples each were available for each time point. 
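As a side note on the analysis, the unpaired Student's t test named in the methods can be sketched from the reported 5-minute summary statistics using only the standard library. This is an illustrative reconstruction, not the authors' calculation: it assumes a pooled-variance two-sample test (the paper states only that Student's t test was run in Microsoft Excel), and the ~3.5 critical value is an approximate reference figure for 46 degrees of freedom.

```python
import math

def pooled_t(mean1, sd1, n1, mean2, sd2, n2):
    """Two-sample Student's t statistic with pooled variance (assumed form)."""
    sp2 = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2)
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return (mean1 - mean2) / se

# 5-minute summary statistics from the Results:
# EOS-002 (1.63 +/- 0.7, n = 24) versus Solution 2 (0 +/- 0, n = 24).
t = pooled_t(1.63, 0.7, 24, 0.0, 0.0, 24)
df = 24 + 24 - 2  # 46 degrees of freedom

# t lands far beyond ~3.5, an approximate two-tailed critical value for
# P < 0.001 at 46 df, consistent with the reported P < 0.0001.
```

With the raw per-sample scores in the linked datasets, the same function would apply at each time point.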
The time course found significant differences between EOS-002 and Solution 2 (P < 0.0001) in grading scores at all time points (5 min, 10 min, 15 min, and 30 min) (Figure 1). The mean disintegration scores at 5 minutes were 1.63 ± 0.7 for EOS-002 and 0 ± 0 for Solution 2. No sample out of the 24 samples in the Solution 2 group had a score above 0 at 5 minutes compared with 24 out of 24 for EOS-002 (range 1 to 3).\n\nAll incubations were performed at room temperature. *P < 0.0001.\n\nFor the evaluations of EOS-002 and Solution 3, 19 samples each were available for each time point. Similarly, the time course demonstrated significant differences between EOS-002 and Solution 3 (P < 0.0001) in grading scores at all time points (Figure 2). The mean disintegration scores at 5 minutes were 1.68 ± 1.0 for EOS-002 and 0.05 ± 0.2 for Solution 3. Only 1 out of 19 samples in the Solution 3 group had a score above 0 (1) at 5 minutes, compared with 16 out of 19 samples for EOS-002 (range 0 to 3).\n\nAll incubations were performed at room temperature. *P < 0.0001.\n\nWhen the data for both comparisons were combined, the mean disintegration scores at 10 minutes were 2.38 ± 1.1 for the EOS-002-treated samples and 0.06 ± 0.3 for the carbamide peroxide 6.5%-treated samples (n = 43 for both groups; Figure 3). As expected, all time points showed significant differences in favor of EOS-002 in terms of the disintegration scores.\n\nAll incubations were performed at room temperature. *P < 0.0001.\n\nFor the comparison between EOS-002 and Solution 2, the cerumen samples started to swell and disintegrate within 2.5 minutes of exposure to EOS-002 (Figure 4). At 15 minutes, these samples were noticeably disrupted and dispersed compared with their appearance prior to treatment. 
However, after 15 minutes of exposure to Solution 2, there was no discernible change to the samples.\n\nAll incubations were performed at room temperature.\n\nAs with the above experiments, for the evaluations of EOS-002 and Solution 3, within 2.5 minutes of exposure to EOS-002, the cerumen samples started to swell and disintegrate (Figure 5). At 15 minutes, the EOS-002 sample was noticeably disrupted and dispersed compared with its appearance before treatment. However, after 15 minutes of exposure to Solution 3, there was little to no change to the sample.\n\nAll incubations were performed at room temperature.\n\n\nDiscussion\n\nBoth photographic records and the time course studies for disintegration scores demonstrated that EOS-002 was effective at quickly breaking down human cerumen under room temperature conditions. Samples incubated in EOS-002 demonstrated significantly higher disintegration scores than the two comparators at every time point measured (P < 0.0001). From the photographic studies, differences between EOS-002 and the other two products could be seen within 2.5 minutes. Differences in disintegration scores were also observed within 5 minutes (the earliest graded time point). Only a small amount of disintegration was observed for the samples exposed to the 2 products containing carbamide peroxide 6.5%, even after 30 minutes.\n\nAn in vitro study, conducted by Saxby et al.15, evaluated the cerumenolytic activity of 6 different preparations (distilled water; olive oil; sodium bicarbonate 5%; dexamethasone 0.05% + framycetin sulphate 0.5% + gramicidin 0.005% [Sofradex, Sanofi-Aventis, Guildford, UK]; urea + hydrogen peroxide 5% in glycerol; and betamethasone sodium phosphate 0.1% [Vistamethasone, Cardinal Health Martindale Products, Brentwood, UK]). Each cerumen sample (5 mm in diameter and 3 mm thick) was placed into a test tube that contained 5 mL of one of the test solutions and allowed to incubate at room temperature. 
At 30 minutes of exposure, the aqueous-based solutions had caused a slight amount of disintegration, while the oil-based solutions (olive oil or urea + hydrogen peroxide) produced no visible change to the cerumen samples (Table 2). Distilled water and sodium bicarbonate 5% produced the greatest amounts of disintegration. It should be noted that it might not be feasible for a patient to treat their ears with a cerumenolytic for 30 minutes prior to irrigation. The current study suggests substantial disintegration of cerumen might be possible in as little as 5 minutes of exposure with the novel glycolic acid/bicarbonate formulation.\n\n1Grading scale adapted from Fraser et al., 1970, and Jimenez et al., 2008. Grade 0 = no change; Grade 1 = slight disintegration; Grade 2 = moderate disintegration; Grade 3 = substantial disintegration; Grade 4 = complete disintegration.\n\n2 – = no visible change; + = slight disintegration; ++ = partial disintegration; +++ = substantial disintegration.\n\n3 – = no visible change; + = coloration of the agent; ++ = slight disintegration; +++ = partial disintegration; ++++ = substantial disintegration; +++++ = complete disintegration.\n\n4 – = no visible change; + = slight solvent effect; ++ = partial disintegration; +++ = complete disintegration.\n\nMin = minutes; h = hours; d = days\n\nBellini et al.20 performed an in vitro study on eight different preparations (Waxsol, dioctyl sodium sulphosuccinate 0.5% in a water-miscible base; Cerumol, paradichlorobenzene 2%, chlorbutol 5%, and turpentine oil 10%; Earex, arachis oil 33.3% v/v, almond oil 33.3% v/v, rectified camphor oil 33.3% v/v; dioctyl sodium sulpho-succinate 5% w/v; olive oil; sodium bicarbonate; distilled water; and acetone). The tubes containing the samples (40 mg) and test solutions (0.5 mL) were incubated at 36.4˚C for up to 2 hours in either pasteur pipettes (Series 1) or plastic centrifuge tubes (Series 2) (Table 2). 
Bellini and colleagues found a modest amount of cerumenolytic activity with sodium bicarbonate, a component of the EOS-002 formulation. No changes were observed at 15 minutes for Earex, the preparations containing arachis oil, or olive oil. Conversely, the present study found moderate to substantial disintegration of cerumen with EOS-002 at 15 minutes at room temperature. One might expect even more rapid disintegration with this formulation at body temperatures.\n\nFraser and colleagues19 also conducted their studies of different cerumenolytic preparations in test tubes incubated at 37˚C for up to 3 days. Interestingly, they found no visible change with any of the preparations at 15 minutes (Table 2). Contrast this with the cerumen samples in the current study exposed to EOS-002, which showed observable disintegration within 5 minutes.\n\nAnother in vitro study, by Uppal et al.21, compared 5 ear drop formulations (5% NaHCO3; 3% H2O2; dexamethasone sodium metasulphobenzoate 0.05% + framycetin sulphate 0.5% + gramicidin 0.005%; 0.33% acetic acid; and 0.9% NaCl) for clearing grommets blocked with freshly harvested thick middle ear effusion fluid. These grommets were housed in models of the ear canal constructed using 2 mL syringes. In the intervention groups, five drops of each formulation were instilled into each syringe 3 times a day for a total of 7 days. The number of grommets cleared ranged from 1.3% (no drops) to 36.7% (29/79; 5% NaHCO3). Acetic acid 0.33% produced clearing in 27.2% (22/81) of the grommets. The glycolic acid/bicarbonate formulation of EOS-002 has similar but enhanced characteristics compared with some of the most effective eardrops tested in the Uppal et al. study21.\n\nAnother in vitro study evaluated a liquid enzyme-based cerumenolytic formulation18. Samples of cerumen (30 mg) were incubated in glass test tubes at 37˚C without agitation. After 5 minutes of exposure, there was evidence of disintegration with the enzyme-based formulation. 
However, at 30 minutes, there was almost no qualitative change in the samples exposed to the commercial formulations, one of which was the same product as Solution 3 in the current study. The results of the current study corroborate this previous observation. Little change to the samples was observed after 15 minutes with Solution 3.\n\nIt is proposed that EOS-002 uses a dual-action mechanism to disintegrate human cerumen. Wax ester and fatty acid lipid components of the cerumen are disrupted by the bicarbonate system of the formulation22,23. This system breaks carboxylic acids down to their more water-soluble carboxylate salts. The glycolic acid system of the product chelates calcium ions from the calcium-dependent cell adhesion molecules, resulting in the disruption of cadherins, which allows the cells of the keratin sheet to break apart24,25. It is plausible that the glycolic acid also works in conjunction with an osmolarity difference between the formulation and the keratinocytes, leading to an influx of water into the cells, causing swelling and disruption of the wax mass26.\n\nThe current study is limited by its in vitro design. The incubations were conducted at room temperature, and results could vary at body temperatures in vivo. These results should be confirmed in a prospective randomized clinical study.\n\nOverall, evidence from the literature suggests aqueous preparations are better for disrupting human cerumen than oil-based preparations15,19,20. Furthermore, bicarbonate formulations have demonstrated efficacy for causing the disintegration of cerumen in vitro. Another study showed that an acidic preparation had moderate efficacy in breaking down cerumen in vitro21. These findings support the results of the current study, which demonstrated the rapid disintegration of cerumen in samples exposed to EOS-002 comprising a glycolic acid/bicarbonate formulation. 
Conversely, two commercially available products, both containing carbamide peroxide 6.5%, had minimal effects on the cerumen samples. The in vitro results with EOS-002 are promising. A small exploratory study in humans has recently been performed, which demonstrated efficacy of the product in disintegrating cerumen, in order to aid in the removal of impactions (unpublished study; Fullington, D, Song, J, Gilles, A, Guo, X, Hua, W, Anderson, C, Griffin, J).\n\n\nData availability\n\nDataset 1: Raw data for Figure 1. EOS-002 vs Solution 2. DOI, 10.5256/f1000research.10279.d14437327.\n\nDataset 2: Raw data for Figure 2. EOS-002 vs Solution 3. DOI, 10.5256/f1000research.10279.d14437428.\n\nDataset 3: Raw data for Figure 3. EOS-002 vs combined data from Solutions 2 & 3. DOI, 10.5256/f1000research.10279.d14437529.\n\n\nEthics statement\n\nInstitutional Board Approval of the University of North Texas Health Science Center (UNTHSC IRB Project # 2015-114) and patient informed consent were obtained prior to commencement of this study.",
"appendix": "Author contributions\n\n\n\nJG and CA conceived, designed the study and carried out the research with consultation from JK and BH. JG prepared the first draft of the manuscript. CA and WD contributed to the experimental design, discussion and mechanism of action sections of the manuscript. All authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nJK, BH, and WD declare no conflict of interest. CA was a temporary employee of the sponsoring company (Eosera, Inc.) during the development of the product, was provided fair market compensation during his employment, and is an author on the pending patent application. JG is an employee and equity holder in Eosera, Inc., and is an author on the pending patent application.\n\n\nGrant information\n\nThis study was funded by Eosera, Inc. (Fort Worth, TX, USA).\n\n\nAcknowledgements\n\nWe would like to thank Julie Crider, PhD for medical writing/editing contributions.\n\n\nReferences\n\nOron Y, Zwecker-Lazar I, Levy D, et al.: Cerumen removal: comparison of cerumenolytic agents and effect on cognition among the elderly. Arch Gerontol Geriatr. 2011; 52(2): 228–232. PubMed Abstract | Publisher Full Text\n\nRoland PS, Smith TL, Schwartz SR, et al.: Clinical practice guideline: cerumen impaction. Otolaryngol Head Neck Surg. 2008; 139(3 Suppl 2): S1–S21. PubMed Abstract | Publisher Full Text\n\nGarahan MB, Waller JA, Houghton M, et al.: Hearing loss prevalence and management in nursing home residents. J Am Geriatr Soc. 1992; 40(2): 130–134. PubMed Abstract | Publisher Full Text\n\nMoore AM, Voytas J, Kowalski D, et al.: Cerumen, hearing, and cognition in the elderly. J Am Med Dir Assoc. 2002; 3(3): 136–139. PubMed Abstract\n\nRoeser RJ, Ballachanda BB: Physiology, pathophysiology, and anthropology/epidemiology of human earcanal secretions. J Am Acad Audiol. 1997; 8(6): 391–400. PubMed Abstract\n\nMeador JA: Cerumen impaction in the elderly. 
J Gerontol Nurs. 1995; 21(12): 43–45. PubMed Abstract | Publisher Full Text\n\nJones DA, Victor CR, Vetter NJ: Hearing difficulty and its psychological implications for the elderly. J Epidemiol Community Health. 1984; 38(1): 75–78. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMulrow CD, Aguilar C, Endicott JE, et al.: Quality-of-life changes and hearing impairment. A randomized trial. Ann Intern Med. 1990; 113(3): 188–194. PubMed Abstract | Publisher Full Text\n\nSugiura S, Yasue M, Sakurai T, et al.: Effect of cerumen impaction on hearing and cognitive functions in Japanese older adults with cognitive impairment. Geriatr Gerontol Int. 2014; 14(Suppl 2): 56–61. PubMed Abstract | Publisher Full Text\n\nHand C, Harvey I: The effectiveness of topical preparations for the treatment of earwax: a systematic review. Br J Gen Pract. 2004; 54(508): 862–867. PubMed Abstract | Free Full Text\n\nRoland PS, Eaton DA, Gross RD, et al.: Randomized, placebo-controlled evaluation of Cerumenex and Murine earwax removal products. Arch Otolaryngol Head Neck Surg. 2004; 130(10): 1175–1177. PubMed Abstract | Publisher Full Text\n\nRojahn R: Summaries of nursing care-related systematic reviews from the Cochrane Library: Ear drops for the removal of ear wax. Int J Evid Based Healthc. 2010; 8(3): 151–152. PubMed Abstract | Publisher Full Text\n\nBurton MJ, Doree C: Ear drops for the removal of ear wax. Cochrane Database Syst Rev. 2009; (1): CD004326. PubMed Abstract | Publisher Full Text\n\nBurton MJ, Dorée CJ: Ear drops for the removal of ear wax. Cochrane Database Syst Rev. 2003; (3): CD004400. PubMed Abstract | Publisher Full Text\n\nSaxby C, Williams R, Hickey S: Finding the most effective cerumenolytic. J Laryngol Otol. 2013; 127(11): 1067–1070. PubMed Abstract | Publisher Full Text\n\nMcCarter DF, Courtney AU, Pollart SM: Cerumen impaction. Am Fam Physician. 2007; 75(10): 1523–1528. PubMed Abstract\n\nAvenova™ with Neutrox™ [Package Insert]. Emeryville, CA. 
NovaBay Pharmaceuticals, Inc.; 2014.\n\nJimenez N, Garcia ML, Galan J, et al.: Development of a liquid enzyme-based ceruminolytic product. J Pharm Sci. 2008; 97(11): 4970–4982. PubMed Abstract | Publisher Full Text\n\nFraser JG: The efficacy of wax solvents: in vitro studies and a clinical trial. J Laryngol Otol. 1970; 84(10): 1055–1064. PubMed Abstract | Publisher Full Text\n\nBellini MJ, Terry RM, Lewis FA: An evaluation of common cerumenolytic agents: an in-vitro study. Clin Otolaryngol Allied Sci. 1989; 14(1): 23–25. PubMed Abstract | Publisher Full Text\n\nUppal S, Sharma R, Nadig SK, et al.: A blinded in-vitro study to compare the efficacy of five topical ear drops in clearing grommets blocked with thick middle ear effusion fluid. Clin Otolaryngol. 2005; 30(1): 29–34. PubMed Abstract | Publisher Full Text\n\nBortz JT, Wertz PW, Downing DT: Composition of cerumen lipids. J Am Acad Dermatol. 1990; 23(5 Pt 1): 845–849. PubMed Abstract | Publisher Full Text\n\nCarr MM, Smith RL: Ceruminolytic efficacy in adults versus children. J Otolaryngol. 2001; 30(3): 154–156. PubMed Abstract\n\nKornhauser A, Coelho SG, Hearing VJ: Applications of hydroxy acids: classification, mechanisms, and photoactivity. Clin Cosmet Investig Dermatol. 2010; 3: 135–142. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWang X: A theory for the mechanism of action of the alpha-hydroxy acids applied to the skin. Med Hypotheses. 1999; 53(5): 380–382. PubMed Abstract | Publisher Full Text\n\nRobinson AC, Hawke M, MacKay A, et al.: The mechanism of ceruminolysis. J Otolaryngol. 1989; 18(6): 268–273. PubMed Abstract\n\nKnebl J, Harty B, Anderson CE, et al.: Dataset 1 in: In vitro comparison of three earwax removal formulations for the disintegration of earwax. F1000Research. 2016. Data Source\n\nKnebl J, Harty B, Anderson CE, et al.: Dataset 2 in: In vitro comparison of three earwax removal formulations for the disintegration of earwax. F1000Research. 2016. 
Data Source\n\nKnebl J, Harty B, Anderson CE, et al.: Dataset 3 in: In vitro comparison of three earwax removal formulations for the disintegration of earwax. F1000Research. 2016. Data Source"
}
|
[
{
"id": "19185",
"date": "11 Jan 2017",
"name": "Yehudah Roth",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a brief report on an in-vitro assessment which may or may not be applicable to earwax removal, hence the title is somewhat misleading.\nThe test reported is of very simple methodology, of preliminary and partial nature, compared with other, similar in-vitro studied that are cited by the authors. There are not sufficient details on the new, tested agent. Clearly, visible disintegration does not reflect the cerumen clearance ability, certainly with little attempt to imitate bodily conditions , i.e. temperature.\nOne cannot deduce from this succint report about possible safety issues, towards a possible clinical trial.\nI would augment this interesting data with more details on the \"exploratory study in humans\" mentioned at the end of the report, or accumulate some more data before re-submission.\nThe topic is important and the authors are encouraged to review and re-submit their info.\nReference should include the newly published AAO guidelines on cerumen removal.",
"responses": []
},
{
"id": "23502",
"date": "15 Jun 2017",
"name": "Carlotta Pipolo",
"expertise": [],
"suggestion": "Not Approved",
"report": "Not Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nPersistence of Earwax during otoscopy still represents a challenge especially for paediatricians and general practitioners during diagnosis. Finding appropriate and efficacious solutions that can make removal easier is surely very important.\nThe article by Knebl et al tries to evaluate the efficacy of their new ceruminolytic solution. However their experimental setting may be misleading in respect to the real value of their product.\nFirst their in vitro setting is flawed due to the chosen temperature (they partially acknowledge this during the discussion) and the shaking and position of the earwax in the test-tube (earwax is compacted inside the earcanal and only one side of it is exposed to drops or solutions) this makes the conclusions about timing completely misleading.\nSecondly they chose two solutions with the same component (Carbamide peroxide 6.5%) for comparison; furthermore a component that has already been shown to not be efficacious (Roland PS, Eaton DA, Gross RD, et al.: Randomized, placebo-controlled evaluation of Cerumenex and Murine earwax removal products. Arch Otolaryngol Head Neck Surg. 2004; 130(10): 1175–1177). This is very problematic as to their statement of superiority (albeit this component is used in daily practice).\nLast but not least they insert a citation into the discussion regarding the clearing of middle ear effusion in grommets (Uppal et al), suggesting the addition of glycolic acid to NaHCO3 as superior for that purpose. 
This use is also very misleading as glycolic acid should surely not come into contact with the middle ear mucosa.\nThis study shows promise regarding their solution, however it is flawed and therefore misleading.\n\nSurely a better setting will be able to demonstrate the superiority of their solution for earwax removal.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? No",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2784
|
https://f1000research.com/articles/5-2074/v1
|
25 Aug 16
|
{
"type": "Research Article",
"title": "The invariances of power law size distributions",
"authors": [
"Steven A. Frank"
],
"abstract": "Size varies. Small things are typically more frequent than large things. The logarithm of frequency often declines linearly with the logarithm of size. That power law relation forms one of the common patterns of nature. Why does the complexity of nature reduce to such a simple pattern? Why do things as different as tree size and enzyme rate follow similarly simple patterns? Here I analyze such patterns by their invariant properties. For example, a common pattern should not change when adding a constant value to all observations. That shift is essentially the renumbering of the points on a ruler without changing the metric information provided by the ruler. A ruler is shift invariant only when its scale is properly calibrated to the pattern being measured. Stretch invariance corresponds to the conservation of the total amount of something, such as the total biomass and consequently the average size. Rotational invariance corresponds to pattern that does not depend on the order in which underlying processes occur, for example, a scale that additively combines the component processes leading to observed values. I use tree size as an example to illustrate how the key invariances shape pattern. A simple interpretation of common pattern follows. That simple interpretation connects the normal distribution to a wide variety of other common patterns through the transformations of scale set by the fundamental invariances.",
"keywords": [
"Measurement",
"maximum entropy",
"information theory",
"statistical mechanics",
"extreme value distributions"
],
"content": "Introduction\n\nThe size of trees follows a simple pattern. Small trees are more frequent than large trees. The logarithm of frequency declines linearly with the logarithm of size1. Log-log linearity defines a power law pattern. Power laws are among the most common patterns in nature2.\n\nPower laws arise by aggregation over a multiplicative process, such as growth. Many processes in nature apply a recursive repetition of a simple multiplicative transformation, with some randomness2.\n\nAggregation over a random multiplicative process often erases all information except the average logarithm of the multiplications3,4. That average determines the slope of the power law line. In the case of tree size, we must also account for the fact that trees cannot grow to the sky. The upper bound on growth causes the frequencies of the largest trees to drop below the power law line.\n\nThat simple view of aggregation and the regularity of power laws contrasts with an alternative view. By the alternative view, the great regularity of a power law pattern suggests that there must be a very specific and particular underlying generative process. If the pattern of tree size is so regular, then some specific process of trees must have created that regularity.\n\nTo support the simple view of aggregation and regularity, I show that a normal distribution contains the same information as a power law size distribution. The distributions differ only in the scaling used to measure the distance of random variations in size from the most common size5.\n\nThe normal distribution calls to mind the great regularity in pattern that arises solely from the aggregation of an underlying stochastic process. Stochasticity and aggregation alone are sufficient to explain the regularity6. There is no need to invoke a detailed generative process specific to trees. 
Given the observed power law of sizes, maybe all we can reasonably say is that growth is a stochastic multiplicative process and that trees do not grow to the sky.\n\nThe trees provide an example of deeper principles about pattern and process in biology. What exactly are those principles? How can we use those principles to gain insight into biological problems?\n\nTo start on those questions, the next section presents an example of tree size data. Those data follow a power law with an upper bound on size. I show that those data also match almost exactly to a normal distribution when scaled with respect to a natural metric of growth.\n\nThe normal distribution and the power law pattern express the same underlying relation between pattern and process. That underlying relation arises from a few simple invariance principles. I introduce those invariance principles and how those principles shape the common patterns of nature5.\n\n\nTree size\n\nFigure 1A shows the distribution of tree size in a tropical forest1. Most of the trees lie along the green power law. The largest trees, beyond the line, comprise only a small fraction of all trees, because of the logarithmic scaling of frequency.\n\n(A) Tree size, z = d^2, in which the squared diameter, d^2, is proportional to the cross sectional area of the stem, and d ranges over approximately 11–2800 mm. The green line shows great regularity of pattern as a power law over the range that covers almost all probability. The largest trees, beyond the green power law line, comprise only a small fraction of all trees, because of the logarithmic scaling of frequency. (B) The blue line is log qz = log k − λTz, with Tz = log(1 + az) + γz, and parameters λ = 1.06, a = 0.004, and γ = 7 × 10^−7, with log k shifting curve height and total probability. (C) The fitted blue line in panel B is a classic normal distribution with variance 1/(2λ) when plotted as qz ∝ e^(−λTz) versus ±√Tz, with respect to z as a positive parameter. 
In this plot, the metric is shifted so that the most common type associates with a value Tz = 0. Data approximated from Figure 4 in Farrior et al.1\n\nThe blue curve in Figure 1B closely fits the observed pattern. That curve expresses the natural metric for variation in tree size, z, as\n\nTz = log(1 + az) + γz.     (1)\n\nThis metric relates size to a logarithmic term for multiplicative growth plus a linear term for an upper bound on size. There is no additional information in the fitted curve beyond this natural metric.\n\nThe normal distribution in Figure 1C expresses exactly the same information about the distribution of tree sizes as the fitted curve in Figure 1B. The normal distribution follows from the expression of size variation in terms of the natural metric, Tz. I derive these conclusions in the following sections.\n\n\nNatural metrics\n\nThe pattern of tree size can be understood by considering Tz as a natural metric for size. A natural metric expresses a shift and stretch invariant scale for an observed probability pattern5. Shift, by adding a constant to a natural metric, does not change observed pattern. Stretch, by multiplying the metric by a constant, does not change pattern.\n\nIdeally, a natural metric also expresses the relation between underlying process and observed pattern. However, we can be right about the proper natural description of observed pattern but wrong about its underlying cause. It is important to distinguish description from causal interpretation.\n\nThe next section describes the natural metric for tree size with respect to the fundamental invariances of shift and stretch. I discuss the panels of Figure 1 as simple expressions of the natural metric. The following sections consider how to interpret natural metrics, the description of observed pattern, and the analysis of underlying process. The presentation here extends the underlying abstract theory to the interpretation and intuitive understanding of empirical pattern. 
Technical details can be found in the cited articles.\n\n\nThe metric of tree size: affine invariance\n\nThe data1 in Figure 1 arose from measurements of trunk diameter, d. I sought a natural metric based on d that describes the data in a shift and stretch invariant manner5.\n\nHow does one find a shift and stretch invariant natural metric that matches an observed pattern? In practice, one uses the extensive underlying theory and prior experience in what often works3,4,7,8. I achieved an excellent fit to the observed tree size data in Figure 1B based on the metric, Tz, in equation 1. I summarize the steps by which I arrived at that metric.\n\nThe data form a probability distribution. Probability patterns have a generic form. Measurements, z, relate to the associated probability, qz. The natural metric, Tz, transforms measurements such that the probability pattern has the exponential form\n\nqz = k e−λTz,      (2)\n\nin which λ adjusts the stretch of Tz, and k adjusts the total probability to be one.\n\nProbability patterns in the exponential form are shift and stretch invariant with respect to the metric, Tz. In particular, the affine transformation of shift and stretch, Tz ↦ α + βTz, is exactly compensated by adjustments of k and λ, leaving the probability pattern invariant.\n\nIntuitively, we can think of affine invariance as defining a ruler that is linear in the metric, Tz. In a linear ruler, it does not matter where we put the zero point. The information in measurement depends only on the distance from where we set zero to where the observation falls along the ruler. That independence of the starting point is shift invariance.\n\nSimilarly, if we uniformly stretch or shrink the ruler, we still get the same information about the relative values of different measurements. All we have to do is multiply all measurements by a single number to recover exactly the same distances along the original ruler.
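The compensation of an affine change by k and λ can be made concrete. Writing the exponential form as qz = k·exp(−λTz), the transform T ↦ α + βT is undone by λ ↦ λ/β and k ↦ k·exp(λα/β). A minimal numerical sketch (parameter values purely illustrative):

```python
import math

lam, k = 1.06, 1.0        # original parameters (illustrative values)
alpha, beta = 3.0, 2.5    # an arbitrary affine shift and stretch of the metric

def q(T, k_, lam_):
    """Exponential form of a probability pattern: q = k * exp(-lam * T)."""
    return k_ * math.exp(-lam_ * T)

# Compensating adjustments: lam -> lam/beta, k -> k*exp(lam*alpha/beta).
lam2 = lam / beta
k2 = k * math.exp(lam * alpha / beta)

T = 0.7
print(q(T, k, lam), q(alpha + beta * T, k2, lam2))   # identical values
```

Because any affine change of the metric is absorbed this way, the zero point and the unit of the Tz ruler carry no information, which is exactly the shift and stretch invariance described above.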
The metric Tz provides information that is stretch invariant.\n\nTo fit the data of Figure 1A, we have to find the matching affine invariant metric, Tz, for probability expressed in the exponential form of equation 2.\n\n\nThe metric of tree size: scale\n\nMost natural metrics are simple combinations of linear, logarithmic, and exponential scaling4,8. For example, in the metric Tz = log z+γz, the logarithmic term dominates when z is small, and the linear term dominates when z is large. The metric scales in a log-linear way. Change in scale with magnitude often occurs in natural metrics.\n\nRoughly speaking, the linear, logarithmic, and exponential scales correspond to addition, multiplication, and exponentiation. Those arithmetic operations are the three primary ways by which quantities combine. One can think of numbers combining additively, multiplicatively or exponentially at different magnitudes, depending on the way in which process changes with magnitude.\n\nSmall trees tend to grow multiplicatively, and large trees tend to scale linearly as they approach an upper size limit. Farrior et al.1 used logarithmic scaling at small magnitudes and linear scaling at large magnitudes. However, they did not express a metric that smoothly changed the proportion of the two scalings with magnitude. Instead, they switched from log to linear scaling at some transition point.\n\nThe observed data fit roughly to a pure log-linear metric, Tz = log z + γz, with z = d as tree diameter. I obtained a better fit by modifying this metric in two ways to obtain the expression in equation 1.\n\nFirst, I used the square of the diameter, z = d2, which is proportional to the cross sectional area of the trunk at the point of measurement. Various intuitive reasons favor area rather than diameter as a measure of size and growth. However, I ultimately chose area because it fit the data.\n\nSecond, I replaced log z by log(1+az). 
On a pure log scale, log z explodes to negative infinity as z approaches zero. In application to positive data, such as size, it almost always makes sense to use log(1 + az). This expression becomes smaller in magnitude as z declines. The parameter a scales the rate of change with respect to the point of origin.\n\nSize distributions often follow the metric, Tz = log(1+az)+γz. Of course, not all distributions follow that pattern. But one can use it as a default. When observations depart from this default, the particular differences can be instructive.\n\n\nInterpretation of natural metrics\n\nThe natural metric of a probability pattern transforms observed values on the scale z into probability values on the scale Tz. Through the natural metric, the particular pattern on the observed scale, z, becomes a universal probability pattern in the natural metric, Tz.\n\nOne can understand the intuitive basis of natural metrics by considering the properties of the universal probability scale. Probability patterns are often discussed with words such as information or entropy9. Those words have various technical and sometimes conflicting definitions. But all approaches share essential intuitive concepts.\n\nSurprise expresses the intuition10. Rare events are more surprising than common events. Suppose a particular size, z, occurs in one percent of the population, and another size, z′, occurs in two percent of the population. We will be more surprised to see z than z′. How much more surprised?\n\nSurprise is relative. We should be equally surprised by comparing probabilities of 0.01 versus 0.02 and 0.0001 versus 0.0002. Each contrast compares one event against another that is twice as common.\n\nWhat is a natural metric of probability that captures these intuitive notions of surprise? For probability, qz, the surprise is defined as\n\n𝓈z = −log qz.\n\nWe compare events z and z′ by taking the difference\n\n𝓈z − 𝓈z′ = −log(qz/qz′).\n\nThis natural metric, 𝓈z, leads to affine invariant comparisons of surprise values.
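The equal-surprise intuition can be checked directly: with surprise defined as the negative log of probability, the two comparisons in the text give exactly the same difference. A short sketch:

```python
import math

def surprise(q):
    """Surprise (self-information) of an event with probability q."""
    return -math.log(q)

# Each pair contrasts one event against another that is twice as common,
# so the surprise differences are equal (both log 2).
d1 = surprise(0.01) - surprise(0.02)
d2 = surprise(0.0001) - surprise(0.0002)
print(d1, d2)
```

The logarithm is what makes surprise relative: only ratios of probabilities matter, so the same factor-of-two contrast yields the same surprise difference at any magnitude.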
In the affine transformation, 𝓈 → α + β𝓈, the shift α cancels in the difference 𝓈z − 𝓈z′. The stretch β causes a constant change in length independently of location, so the metric retains the same information at all magnitudes of the scale.\n\nThe relation between the universal metric of probability, 𝓈z, and the natural metric for a particular observed scale, Tz, follows from the exponential form for probability5 in equation 2. From that exponential form, we can write 𝓈z = λTz − log k. Because 𝓈z is shift invariant, we can ignore the constant log k term, yielding\n\n𝓈z = λTz.\n\nThe natural metric, Tz, transforms an observed scale, z, into the universal metric of probability patterns, 𝓈z. The fitted curve in Figure 1B is a plot of 𝓈z = λTz versus log z.\n\nTo interpret a scale, it is useful to think about what happens along each increment of the scale. Define d𝓈z and dTz as small increments along the scales at the point associated with z. Then\n\nd𝓈z = λ dTz,\n\nwhich means that the scales 𝓈z and Tz change in the same way at all magnitudes of z, with λ as the constant of proportionality in the translation from one scale to the other.\n\nHow do small increments in the natural metric, dTz, relate to increments in the observed values, dz? If we assume that Tz increases with z, and define Tz′ = dTz/dz as the derivative (slope) of Tz with respect to z, then\n\nd𝓈z = λTz′ dz.\n\nHere Tz′ transforms increments along the observable scale, dz, into increments along the universal scale of probability pattern, d𝓈z. All of the information that relates observation to probability pattern is summarized by the natural metric, Tz.\n\n\nGenerative process: generic vs particular\n\nWhat underlying generative process leads to an observed pattern? We must separate two aspects. Generic aspects arise from general properties of aggregation, measurement and scale that apply to all problems.
Particular aspects arise from the special attributes of each problem.\n\nConfusing generic and particular aspects leads to the greatest misunderstandings of pattern and process3,4. For example, the observed pattern in Figure 1 perfectly expresses generic properties. Aggregation leads to the normal distribution by the central limit theorem (Figure 1C). The natural metric of size, Tz, relates the normal distribution to power law and exponential scaling in Figure 1A,B, when probability is plotted with respect to the logarithm of the observed values, z.\n\nIn the tree size data, simple generic properties account for all of the observed pattern. I do not mean that there is nothing particular about trees or that we cannot study how ecological processes influence tree size. I mean that we must not confuse the generic for the particular in our strategy of inference3,6,11.\n\nThis article focuses on generic aspects of pattern. The following sections discuss those generic aspects in more detail.\n\n\nThe normal distribution and generic pattern\n\nOne often observes great regularity in probability patterns. Tree size follows a power law with an upper bound. Other measurements, such as height, weight, and enzymatic rate, also express regularity, but with different patterns.\n\nA single underlying quantity captures the generic regularity in seemingly different patterns. That underlying quantity is the average distance of observations from the most common type6. The key is to get the correct measure of distance, which is the natural metric.\n\nThe normal distribution is a pure expression of the generic regularity in probability patterns. In the normal distribution, the variance is the average distance of fluctuations from the mean.\n\nIn the normal distribution, the natural metric is the squared deviation from the mean, Tz = z2. Here, z is the observed deviation from the mean, and Tz is the natural metric for distance.
The normal distribution follows from the standard expression of probability patterns in equation 2, repeated here with √(λ/π) = k, as\n\nqz = k e−λz2.\n\nThe average of the squared deviations, Tz = z2, is the average distance of fluctuations from the most common type, which is the definition of the variance, σ2. We can express the parameters in terms of the variance as λ = 1/(2σ2), which yields the standard form\n\nqz = (1/√(2πσ2)) e−z2/(2σ2).      (6)\n\nThe normal distribution is universally known but rarely understood. Interpreting the powerful generic aspect of probability patterns often reduces to correctly reading this equation.\n\nThe standard expression for the normal distribution in equation 6 seems obscure. By understanding that equation 4 expresses the same information in a much more general and broadly applicable way, we learn to read the simple generic aspect of common pattern. The key arises from the relation between the natural metric, Tz, and the measurement scale, z, used to express the pattern.\n\n\nMetrics of probability and measurement\n\nThis section discusses key aspects of the natural metric transformations, Tz, of the underlying measurements, z. The understanding of probability pattern arises from these key aspects of the natural metric.\n\nSuppose that two observers measure the same pattern. One uses a ruler that follows the scale, z. Another has a logarithmic ruler that returns logarithmic values, log z, for the same underlying values. The two observers do not know that they are using different scales.\n\nWhen the two observers plot their data, each will see a different probability pattern. The plot of qz versus z differs from the plot of qz versus log z.\n\nSimilarly, two observers may see different patterns of human size if they measure different things. Suppose one observer measures femur length, the other measures cross sectional area of the chest. The probability patterns of femur and chest size differ.
But the different patterns reflect the same information about the underlying size variation in the population.\n\nWhat is the best way to find the relation between different observed values and the common underlying information about variation? Often, the natural metric for each observed scale provides the universally comparable scale for probability pattern. That universally comparable scale can be used to express variation as a normal distribution.\n\nWhen an observed probability pattern matches the normal distribution, then the variance summarizes all of the information in the pattern6. We can write the variance, σ2, which is the average of the squared distance for fluctuations from the mean, as\n\nσ2 = ⟨z2⟩z,\n\nin which the angle brackets denote the average value of z2, and the subscript z means that the average is taken with respect to the underlying scale, z.\n\nThe great generality of the normal distribution arises from a broader concept of the average distance of fluctuations from a central location\n\nσ2 = ⟨z2⟩z  →  σ˜2 = ⟨Tz⟩√Tz.      (7)\n\nThe left shows the standard definition of the variance as the average squared distance from a central location. The right generalizes that notion of average squared distance by using the average of the natural metric, Tz, in which the average is taken with respect to the square root of the natural metric, √Tz. Here, Tz is shifted so that the most common type associates with Tz = 0, and the metric expresses fluctuations from the most common type5.\n\nOn the left, we average z2 with respect to z. On the right, we average Tz with respect to √Tz. The general form on the right-hand side includes the left-hand side as the special case of Tz = z2.\n\nThe key conclusion is that common probability patterns expressed in their natural metric\n\nqz = k e−λTz\n\nare normal distributions when plotting qz versus ±√Tz.\n\nThe following sections present examples.
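As a concrete check of the generalized variance in equation 7, the average of Tz on the ±√Tz scale can be computed numerically for the exponential form. The sketch below (with an arbitrary illustrative λ) recovers 1/(2λ), the variance value quoted in the Figure 1 caption for the normal form:

```python
import math

lam = 1.6                       # arbitrary illustrative stretch parameter
k = math.sqrt(lam / math.pi)    # normalization so total probability is one

# Average T = s**2 against the density k*exp(-lam*s**2) on the square-root
# scale s = ±sqrt(T); this is the generalized variance of equation 7.
ds = 1e-4
sigma2 = sum(k * math.exp(-lam * s * s) * s * s * ds
             for s in (i * ds for i in range(-100000, 100000)))
print(sigma2, 1 / (2 * lam))   # the two values agree
```

On the ±√Tz scale the density is exactly the bell curve k·exp(−λs²), so the average of Tz = s² is the ordinary variance of that curve, 1/(2λ).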
Later sections show why the square root is a natural measurement scale for common probability patterns.\n\n\nNatural metrics and generic forms\n\nThe tree size data match almost perfectly to the generic normal distribution (Figure 1C). I discuss that match in terms of universal properties of the normal distribution, given in the prior sections.\n\nTree size variation follows a simple log-linear natural metric, Tz. That metric and its associated probability pattern\n\nqz = k e−λTz, with Tz = log(1 + az) + γz,\n\nclosely fit the data. Figure 1B shows the fit when plotting log qz versus log z. Figure 1C shows that the same observed variation closely fits a normal distribution when plotting qz versus ±√Tz.\n\nThe generalized variance is the average squared fluctuation of tree size from the most common type, when squared fluctuations are expressed by the natural metric, and fluctuations are measured by the square root of the natural metric. By the generalized notion of the variance in equation 7, all of the information in the observed distribution of tree size is contained in the average distance of fluctuations, measured in the natural metric.\n\nThe transformation of data into a normal distribution is sometimes considered a trivial step in the statistical analysis of significance levels. Here, in contrast, the natural metric and the associated expression in normal form provide an essential step in the general understanding of pattern and process.\n\nLater sections discuss why the normal distribution arises as the simple expression of pattern in relation to natural metrics. Before turning to those concepts, I present another example.\n\n\nDimensional inversion and metric pairs\n\nNatural metrics sometimes come in pairs4,7. For example, rates and frequencies follow dual metrics. Rates have dimensional units S/t, in which S is a generic size or number unit, and t is a time unit. A growth rate for trees may be given in terms of the change in size per year.
A chemical reaction rate may be given as the number of molecules produced per unit time.\n\nThe inverse of a rate has units t/S. That inverse expresses the time to grow larger or smaller by a particular size unit, or the time to produce a particular number of molecules.\n\nThis section illustrates the common dual metrics for rates and times. The dual metrics yield different probability patterns that contain exactly the same underlying information. Each metric takes on the same common normal distribution form when stochastic fluctuations are measured by the metric relative to its square root.\n\nTo illustrate the dual metrics, I use the measured rates of chemical reactions for individual enzyme molecules given by Iversen et al.12. The measurements produce a probability pattern for the distribution of reaction rates. The measurements are not sufficiently precise to determine exactly which natural metric fits the data.\n\nI made an approximate fit to the data by using the natural metric in equation 1, which I previously used to fit tree size. My only purpose here is to illustrate typical aspects of rate and frequency patterns, rather than to over-analyze the limited data available in this particular study.\n\nFigure 2A shows the fitted distribution of reaction rates. The rates are in molecules per second, r, with units S/t. The colors in the curve express the change in the scaling relations of the natural metric as magnitude increases. The natural metric from equation 1, repeated here with r = z, is\n\nTr = log(1 + ar) + γr.\n\nWhen r is small, linear scaling of Tr dominates, as shown by the blue coloring. As r increases, logarithmic scaling dominates, as shown by the gold coloring. Figure 2C, covering a greater range of r values, shows that further increase in r leads to linear dominance of scale, as shown by the green color. The upper linearity expresses the bound on size or number. Trees do not grow to the sky. Reaction rates do not become infinitely fast.
Figure 3 shows the tree size data colored by the linear-log-linear transitions.\n\n(A) The probability distribution based on the natural metric in equation 1, with Tr = log(1 + ar) + γr. This plot uses a linear abscissa, compared with the logarithmic abscissa of Figure 1A. The curve approximately fits the enzymatic rate data in Figure 2B of Iversen et al.12, in which r has units S/t measured as number of molecules per unit time (seconds). Here, r varies between 0 and 8. The approximately fitted parameters are a = 0.5, γ = 0.05, and λ = 1.6. (B) The Laplace transform of the upper panel yields a shifted gamma probability distribution that expresses the identical information with a natural metric Tτ = (1/λ − 1) log(τ − γλ) + τ/(λa). The inverted measure τ has units t/S as time per molecule, varying in the plot between γλ and 4. (C) The same probability distribution as in panel A, on a double log scale over the range of r values 0.6 to 50. (D) Both the original distribution in A and the Laplace inverted distribution in B are normal distributions when expressed in relation to the square root of their respective natural metrics, with generalized variance σ˜2 in equation 7.\n\nThis distribution has the same natural metric as in Figure 2C, but with different parameters. The curve is colored to show the change in the scaling of the natural metric with increasing magnitude as linear (blue), logarithmic (gold), and linear (green).\n\nThe probability pattern for rates, S/t, has a natural dual pattern expressed by inverted units for time, t/S. We can invert units by the Laplace transform4,7. The inversion leads to an altered probability pattern based on the natural metric\n\nTτ = (α/λ) log(τ − d) + τ/(λa),\n\nwith α = 1 − λ and d = γλ. The parameters match the paired metric, Tr. The common value of λ shared by the paired distributions arises from the full expression for probability patterns in equation 2.
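The dimensional inversion can be checked numerically: Laplace-transforming the shifted gamma pattern on the time scale reproduces the rate pattern e−λTr up to a constant factor. A sketch, using the approximate parameter values from the Figure 2 caption (densities left unnormalized):

```python
import math

# Approximate fitted parameters from the Figure 2 caption.
lam, a, gamma = 1.6, 0.5, 0.05
d = gamma * lam                 # shift of the gamma distribution

def q_time(tau):
    """Shifted gamma pattern (unnormalized): exp(-lam * T_tau)."""
    return (tau - d) ** (lam - 1) * math.exp(-tau / a) if tau > d else 0.0

def q_rate(r):
    """Rate pattern (unnormalized): exp(-lam * T_r), T_r = log(1+a*r) + gamma*r."""
    return (1 + a * r) ** (-lam) * math.exp(-lam * gamma * r)

def laplace(r, dt=1e-3, tmax=10.0):
    """Numerical Laplace transform of q_time, evaluated at rate r (midpoint rule)."""
    n = int(tmax / dt)
    return sum(math.exp(-r * (d + (i + 0.5) * dt)) * q_time(d + (i + 0.5) * dt) * dt
               for i in range(n))

# The ratio laplace(r) / q_rate(r) is the same constant at every r,
# so the two patterns carry identical information.
c = laplace(1.0) / q_rate(1.0)
print(c, laplace(2.0) / q_rate(2.0))
```

The constant ratio reflects the closed-form identity that the Laplace transform of a gamma density in τ, shifted by d = γλ, is proportional to (1 + ar)−λ e−λγr, which is exactly the rate-side exponential form.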
The probability pattern for time, arising from Tτ, is a gamma distribution shifted by d.\n\nThe time-per-molecule pattern in Figure 2B matches the dual enzyme rate pattern of molecules per time in Figure 2A. The dual distributions express the identical information.\n\nDimensional inversion associates the various linear-log-linear scales between the two forms4,7. The linear, blue component at small magnitude in the upper panel matches the long blue tail at large magnitude in the lower panel. Put another way, slow rates, r, correspond to long waiting times, τ.\n\nIn the top, the gold logarithmic component for high rates matches the lower gold component for short waiting times. For very high rates, r, we have to look at Figure 2C. The upper green linear tail corresponds to the rapid decline in the probability of observing extremely high rates, associated with the natural upper bound on rates. The green upper bound on rates matches the green lower limit on times in Figure 2B. If extremely rapid rates of reaction, r, are very rare, then no reactions will produce molecules in very short time periods, τ. That limitation produces the green shift at small times in Figure 2B.\n\nThe dual natural metrics of rate, Tr, and time, Tτ, correspond to similar expressions of the normal distribution5 in Figure 2D. In general, different probability patterns expressed in different metrics, T, become normal distributions when fluctuations from the most common value are measured by ±√T.\n\n\nAggregation and asymptotic invariance\n\nWhy do tree sizes and enzyme rates match a simple natural metric? Why do a few simple natural metrics match most of the commonly observed patterns? Part of the answer arises from the way in which aggregation leads to simple invariant pattern.\n\nThe top rows of Figure 4 illustrate aggregation and invariance. Each row begins on the left with two regular polygons, randomly rotated about their center. Columns to the right add more randomly rotated components.
As the random rotations aggregate, the shape converges asymptotically to an invariant circular form.\n\nThe top shows polygons randomly rotated about their center. Aggregation leads asymptotically to loss of all information about rotational orientation. A circle purely expresses that rotational invariance. The bottom shows the aggregate summing of observations from arbitrary probability distributions. Aggregates combine to produce normal distributions, purely expressing the loss of all information except the average distance (variance) from the most common observation. The normal distribution is invariant to the order in which observations are combined. Order invariance is similar to rotational invariance (Figure 5). Thus, the asymptotic circle and the asymptotic normal distribution express similar aspects of information loss and invariance.\n\nRandom rotation causes loss of information about the angle of orientation. In the aggregate, the asymptotic form is rotationally invariant. In other words, the circular shape remains invariant no matter how it is rotated. A circle expresses pure rotational invariance.\n\nThe bottom two rows illustrate aggregation and the invariant pattern of the normal distribution. Each row begins on the left with a probability distribution. For each distribution, the horizontal axis represents observable values, and the vertical axis represents the relative probability of each observed value. I chose the shapes of the distributions to be highly irregular and to differ from each other.\n\nThe second column is the probability distribution for the sum of two randomly chosen values from the distribution in the left column. The third, fourth, and fifth columns are, respectively, the sum of four, eight, and 16 randomly chosen values. The greater the aggregation of randomly chosen values, the more perfectly the pattern matches a normal distribution. 
Adding randomly chosen values often causes an aggregate sum to converge asymptotically to the invariant normal form.\n\n\nNatural metrics and a universal scale\n\nThe invariant normal form expresses a universal scale. That universal scale clarifies the concept of natural metrics. To understand the universal scale, we begin with the fact that the same pattern can be described in different ways.\n\nConsider enzyme catalysis. Fluctuations can be measured as the rate of molecules produced per unit time. Alternatively, fluctuations can be measured as the interval of time per molecule produced. Figure 2A, B show the dual expression of the same underlying information.\n\nThe dual measurement scales each have their own natural metric. A natural metric transforms a particular measurement scale into a universal scale that expresses the common underlying information. A metric is natural in the sense that it connects a particular scale of observation to a common universal scale.\n\nThe normal distribution purely expresses the universal scale. Suppose we begin with different scales of measurement, such as the rate of molecules produced per unit time and the interval of time per molecule produced. Each scale has its own distinctive pattern of random fluctuations, as in Figure 2A, B. When we transform each scale to its natural, universal metric, Tz, the pattern of random fluctuations follows the normal distribution (Figure 2D).\n\nA normal distribution expresses information only about the average distance of fluctuations from the most commonly observed value. 
If we measure distance for different underlying measurements in their natural metrics, then that distance is the universal form of variance in equation 7 as\n\nσ˜2 = ⟨Tz⟩√Tz.\n\nThe generalized variance expresses the average deviation of the natural metric relative to the square root of the natural metric.\n\nWhy is the relation between a natural metric and its square root the universal measure of scale and also the expression of the normal distribution? The answer concerns how rotation and aggregation lose information and leave an invariant pattern (Figure 4).\n\nThe next section discusses rotational invariance and its relation to the universal scaling of the normal distribution. The following sections return to tree size and other commonly observed size distributions. The concepts of rotational invariance and the normal distribution clarify why the natural metric for tree size, given in equation 1 as Tz = log(1 + az) + γz, is a common natural metric for size patterns.\n\n\nRotational invariance\n\nTo understand the universal scale of the normal distribution, we begin with circles and rotational invariance (Figure 5). Simple geometric concepts provide the key to natural metrics, universal scales, and the structure of commonly observed patterns.\n\nA circle expresses a rotationally invariant radial distance from a central location. A natural metric can be thought of as a measure of radial distance. Different component observations that add to the same radial distance define a rotationally invariant circle.\n\nA circle expresses a rotationally invariant radial distance from a central location. In Euclidean geometry, squared distance is the sum of squared values along each dimension. Invariant radial distance in two dimensions, x1 and x2, may be written as R2 = x12 + x22. The points (x1, x2) at constant radial distance lie along the circle. The radial distance is rotationally invariant to the angle of orientation.
The circular pattern is also invariant to interchange of the order of x1 and x2.\n\nWe can think of the rotationally invariant circle as a way to decompose a given value into components. If we start with any observed value and equate that value with a radial distance, R2, then the observed value is equally consistent with all points (x1, x2) that satisfy the circular constraint, R2=x12+x22.\n\nWe can break up a given value into n components, R2=∑xi2, which is the invariant radial distance of a sphere in n dimensions. Changing the order of the components does not change the radial distance. Rotational invariance implies order invariance of the component dimensions.\n\nFigure 4 illustrates how aggregation leads to invariant distance. The top two rows aggregate randomly rotated shapes. Initially, the rows differ, because they begin with different shapes in different orientations. However, after adding many shapes, the aggregate patterns converge to the same circular form, because the order no longer matters in a large sample. The pattern of distance from the center becomes the same in every direction.\n\nThe lower two rows of Figure 4 show a similar aggregate tendency to an invariant measure of distance. On the left, the initial patterns differ. As more samples are added, all information is lost except the average distance of fluctuations from the center.\n\nThe rotational invariance of circles relates to the invariance of average distance in the normal distribution5. In both cases, the squared distance is the standard Pythagorean definition of Euclidean geometric distance as the sum of squares. To see the connection between the rotational invariance of circles and the average distance of fluctuations in the normal distribution, we begin with an observed value and consider how it might have arisen by the aggregation of underlying components.\n\n\nAggregation and natural metrics\n\nSuppose we transform an observed value, z, into a natural metric value, Tz. 
What different aggregations would lead to the same value of Tz? If we think of Tz = Rz2 as a radial distance, we can evaluate the combinations of underlying values that lead invariantly to the same radial distance5.\n\nPreviously, we partitioned squared radial distance as\n\nRz2 = x12 + x22.\n\nWe can equate the explicitly squared radial distance to the implicitly squared natural metric, Rz2 = Tz. Similarly, we can equate the explicitly squared component dimensions to the implicitly squared dimensions, x2 = y, or equivalently, x = √y. Then Rz2 = Tz can be written as\n\nTz = y1 + y2.\n\nIn two dimensions, the points (x1, x2) form a circle with radius Rz. The points ±(√y1, √y2) form an equivalent circle with radius Rz = √Tz.\n\nTo partition a natural metric, Tz, of the observed value, z, we can write each component dimension, zi, in its natural metric, T(zi) = Ti = yi, and thus\n\nTz = T1 + T2 + ⋯ + Tn.\n\nThis equation shows the different component observations of an aggregate that lead to the same rotationally invariant squared radial distance, Rz2 = Tz, or equivalently, distance as Rz = √Tz.\n\nFor the natural metric, Tz, the square root scale, √T, is the natural scale of distance, aggregation, and rotational invariance.\n\n\nThe normal distribution\n\nThe prior section emphasized that the natural metric Tz = Rz2 has the square root √Tz = Rz as its natural scale of distance. This section relates the normal distribution to this association between natural metrics and radial distance. See Frank5 for additional details.\n\nWe can write the standard form of probability distributions from equation 2 as\n\nqz = k e−λTz,      (8)\n\nmeasured in relation to the incremental scale d√Tz = dRz. Using the expression for the generalized variance, σ˜2, in equation 7, we have\n\nλ = 1/(2σ˜2),\n\nand k = √(λ/π). If we shift Tz so that it is expressed as a deviation from its minimum value, then for many natural metrics, Tz, the probability pattern in equation 8 is a normal distribution with respect to the incremental scale d√Tz = dRz.
The distribution is centered at the minimum of Tz and has average distance of fluctuations from the central location as the generalized variance, σ˜2.\n\nDifferent natural metrics can often be expressed in this normal form. Thus, the rotationally invariant normal form expresses a universal scale (Figure 2D).\n\nRotational invariance often implies invariance with respect to the order of observations in an aggregate. Order invariance connects the asymptotic rotational invariance of circles and natural metrics to the asymptotic form of the normal distribution in Figure 4. Thus, the normal distribution, expressed in natural metrics, provides a universal scale for understanding probability pattern.\n\n\nInductive: observed metric to universal scale\n\nHow does one find natural metrics? For tree size and chemical reaction rates, I began with the observed probability pattern. From those data, I found a natural metric that fit the observed pattern. In those cases, I chose the natural metric based on the fact that patterns of size and reaction rate tend to follow a particular, commonly observed natural metric.\n\nThis inductive approach matches a natural metric to a particular problem. The natural metric can then be used to transform the observed pattern into the universal scale of the normal distribution. What do we learn by this inductive fit of a metric and subsequent transformation to the normal form?\n\nWe have a good sense of the normal distribution as the outcome of simple aggregation and its connection to rotational invariance (Figure 4). Thus, once we find the proper scaling through the natural metric, we can think of an observed probability pattern as an expression of the normal form on a different scale.\n\nFor example, we can think of tree size as following a normal distribution when we express size, z, in the natural metric Tz = log(1+az)+γz.
The normal form follows by expressing Tz relative to the most common size as the squared distance of a random fluctuation in relation to the distance, Tz.\n\nBy recognizing the universal normal form, we can see that different measurements of the same underlying pattern express the same information. In Figure 2, the different probability patterns for rate and time have a common normal expression. Of course, many patterns that arise from unrelated processes also have the normal form.\n\nThe key is that the structure of commonly observed pattern arises from the generic processes of aggregation and rotational invariance, when evaluated with the proper natural metric, rather than from the special attributes of particular processes. That conclusion is simply the well known principle of statistical mechanics.\n\nThe principle of statistical mechanics is both well known and frequently ignored in the study of pattern. The reason is that the different scales on which observed patterns arise tend to obscure the underlying commonality. The point here is that one can understand natural metrics and universal scales in a rational way, and thus connect abstract principles to real problems in ways that have often been missed.\n\n\nDeductive: universal scale to predicted metric\n\nThe inductively fit metric expresses the essence of an observed pattern. But the fit does not tell us about the generative process that led to that particular metric.\n\nIdeally, one would deduce the appropriate natural metric for a problem by considering the generative process and the necessary invariances that must be satisfied. For example, tree size must depend on growth processes, and the consequent probability pattern likely satisfies shift, stretch, and rotational invariance. However, three difficulties arise.\n\nFirst, the relations between process, measurement and pattern can be obscure. 
For tree size, what is the proper scale on which to measure the consequences of growth, competition, and other processes? We could use trunk diameter, d, or cross-sectional area, proportional to d², or a fractal exponent of diameter, d^s, or another size measure correlated with diameter.\n\nThe natural metric is often the scale that aggregates additively, leading to patterns that tend to be shift, stretch, and rotationally invariant. However, what we measure may be a complex transformation of that underlying scale. Inductive fit gets around the problem by describing the pattern and its associated invariant scale, rather than trying to deduce the processes that caused the observed pattern.\n\nSecond, multiple processes may shape pattern. Different processes may dominate at different scales. For example, exponential growth may dominate among smaller trees, whereas a bound on maximum size may dominate among larger trees. In general, different processes may dominate at different magnitudes. Predicting the metric that fits observations requires proper combination of the different underlying processes.\n\nThird, natural metrics express the patterns that arise by loss of information, subject to a few minimal constraints of invariance. Because aggregation dissipates information, many seemingly distinct processes will generate the same observable pattern. Common patterns are common exactly because they match so many distinctive underlying processes3. The natural metrics of common patterns reflect only the similarities of the simple invariances. Most of the special attributes of different generative processes tend to disappear in the aggregate.\n\n\nDeductive: tree size example\n\nTree size depends on growth, on limits to maximum size, and on a variety of other factors. Here, I give a simple introduction to natural metrics that arise from growth. I do not include bounds on size or other processes. I do not include difficulties of measurement. 
In spite of those limitations, this simplified analysis of growth and natural metrics provides insight into commonly observed probability patterns.\n\nI begin with the form\n\nwhich is a normal distribution when we measure increments on the square root scale, d√Tz. The normal distribution arises when we consider Tz values to be an aggregate sum of component values.\n\nFor tree size, the problem concerns how the aggregation of random growth increments leads to the observed size. We can split total growth into t increments. Each incremental unit multiplies current size by e^gi, in which gi is the growth rate in the ith increment. The average growth per increment is\n\nTotal growth is the product of all the growth increments\n\nin which w = ḡt is the sum of the t incremental growth rates.\n\nThe variable w provides a natural base scale for growth, because it expresses the aggregate sum of growth components. The sum is invariant to the order of the components. Thus, the total of the incremental growth rates can be thought of as a rotationally invariant radial distance.\n\nNatural metrics arise from shift and stretch (affine) invariance to transformations of their base values4,7,8. Thus, a natural metric, T(w) ≡ Tw, for the base scale, w, arises from affine invariance to a generator transformation, G(w), such that\n\nfor some constants α and b. If we consider\n\nto be a shift of the growth rates, so that the shape of probability patterns for size does not depend on adding a constant value to growth rates, then a natural metric for size with respect to growth is\n\nin which β is a positive parameter. This metric remains affine invariant to a shift of the base scale, w ↦ δ + w, because\n\nfor b = e^βδ. The metric Tw is perhaps the most generic and important form of all natural metrics. Its application to growth is a special case of its underlying generality. I discussed this metric extensively in earlier articles4,8. 
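The aggregation of multiplicative growth increments described above can be sketched numerically (the incremental rates below are hypothetical, and their distribution is an assumption for illustration only):

```python
import numpy as np

# Sketch: total growth is a product of per-increment multipliers e^gi, so its
# logarithm is the sum w of the incremental rates gi, which is invariant to
# the order of the increments.  The rates below are hypothetical.
rng = np.random.default_rng(1)
g = rng.normal(0.05, 0.02, size=100)            # assumed incremental growth rates
total_growth = np.prod(np.exp(g))               # product of the t multipliers
w = g.sum()                                     # sum of the t incremental rates
assert np.isclose(total_growth, np.exp(w))      # product equals e^w
assert np.isclose(w, rng.permutation(g).sum())  # order invariance of the sum
```

The order invariance of the sum is the discrete analogue of the rotational invariance of radial distance used throughout the article.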
Here, I confine myself to the problem of growth in relation to size.\n\nThe natural metric Tw associates with the probability pattern\n\nwhen measured with respect to the incremental scale, dTw. If we wish to express the probability pattern with respect to measurements of growth rate, on the incremental scale dw, note that\n\nyielding the probability pattern when measured with respect to the incremental base scale, dw, as\n\nin which, as always, k adjusts so that the total probability is one.\n\nSuppose we wish to transform from growth, w, to size, z, in which w(z) expresses growth as a function of size. If w increases with z, then we can write\n\nin which w′ is the derivative of w with respect to z. The generic probability pattern becomes\n\nwith respect to the incremental measurement scale, dz.\n\nIn the tree size example, w is the aggregate growth rate. Let z0 + z be size, with z0 as initial size, and z as the increase in size by growth, thus\n\nimplying that w as a function of z is\n\nIn this particular derivation, a = 1/z0. However, one should not interpret parameters literally. Different generative processes will lead to the same form, with alternative assumptions about process and parameters. Ultimately, the invariant properties of the metric capture the essence of common pattern. This particular derivation is meant only to show one way in which a metric arises.\n\nWe can use equation 12 to write the probability pattern of equation 10 explicitly in terms of the increase in size by growth, z, as\n\nwith respect to the incremental scale, dz, yielding\n\nfor β < 1, and dropping constants of proportionality. For certain parameter combinations, this probability pattern will be similar to the pattern for the size metric Tz = log(1+ az) + γz.\n\nI presented this derivation to encourage future study. 
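The change of variables used above, p_z(z) = p_w(w(z))·w′(z) for increasing w(z), can be checked numerically. In the sketch below, the exponential density for w and the map w(z) = log(1+az) are illustrative stand-ins, not the article's equations:

```python
import numpy as np

# Sketch of the change of variables in the text: for increasing w(z), the
# density transforms as p_z(z) = p_w(w(z)) * w'(z).  The exponential density
# for w and the map w(z) = log(1 + a*z) are illustrative assumptions only.
a, lam = 1.0, 2.0
z = np.linspace(0.0, 5.0, 1001)
w = np.log(1.0 + a * z)                  # w as a function of size z
p_w = lam * np.exp(-lam * w)             # assumed density of w
p_z = p_w * (a / (1.0 + a * z))          # multiply by w'(z) = a / (1 + a*z)
# the transformed density should carry the same probability mass
mass_z = np.sum((p_z[1:] + p_z[:-1]) / 2 * np.diff(z))   # trapezoid rule
mass_w = 1.0 - np.exp(-lam * np.log(1.0 + a * 5.0))      # exact mass of w on [0, w(5)]
assert abs(mass_z - mass_w) < 1e-3
```

Conservation of probability mass under the transform is exactly what makes the size pattern a re-expression, rather than a new distribution, of the growth pattern.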
The proper way to relate general growth processes to invariant probability patterns remains an open problem.\n\n\nConclusion\n\nProbability patterns often follow a few simple scaling relations. Those scaling relations define natural metrics. A natural metric transforms measurements to a universal scale. On the universal scale, the average distance of random fluctuations from the most commonly observed value defines a generalized variance. When observed values arise by aggregation of random processes, that aggregation erases all information except the average fluctuation, the generalized variance.\n\nMany different probability patterns become a normal distribution when expressed on the universal scale of natural metrics. The only information in each distribution is the generalized variance. Transforming the natural metric distance back to the underlying observed values yields the standard description for probability pattern on the scale of the observed measurements.\n\nThe great regularity of observed patterns, such as power laws, often arises from the same aspects of aggregation and invariance that lead to the normal distribution. A power law pattern and a normal distribution may simply be different transformations of the same underlying pattern.\n\nThe transformations arise from measurement and from the invariances that define scaling relations and natural metrics4,5,7,8. Understanding these key aspects of scale provides the framework in which to study the relations between pattern and process.",
"appendix": "Author contributions\n\n\n\nSAF did all the research and wrote the article\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nNational Science Foundation grant DEB-1251035 supports my research.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nFarrior CE, Bohlman SA, Hubbell S, et al.: Dominance of the suppressed: Power-law size structure in tropical forests. Science. 2016; 351(6269): 155–157. PubMed Abstract | Publisher Full Text\n\nMandelbrot B: The Fractal Geometry of Nature. W. H. Freeman, 1983. Reference Source\n\nFrank SA: The common patterns of nature. J Evol Biol. 2009; 22(8): 1563–1585. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFrank SA: How to read probability distributions as statements about process. Entropy. 2014; 16(11): 6059–6098. Publisher Full Text\n\nFrank SA: Common probability patterns arise from simple invariances. Entropy. 2016; 18(5): 192. Publisher Full Text\n\nJaynes ET: Probability Theory: The Logic of Science. Cambridge University Press: New York; 2003. Reference Source\n\nFrank SA, Smith E: Measurement invariance, entropy, and probability. Entropy. 2010; 12(3): 289–303. Publisher Full Text\n\nFrank SA, Smith E: A simple derivation and classification of common probability distributions based on information symmetry and measurement scale. J Evol Biol. 2011; 24(3): 469–484. PubMed Abstract | Publisher Full Text\n\nCover TM, Thomas JA: Elements of Information Theory. Wiley: New York; 1991. Reference Source\n\nTribus M: Thermostatics and Thermodynamics: An Introduction to Energy, Information and States of Matter, with Engineering Applications. Van Nostrand: New York; 1961. Reference Source\n\nHarte J: Maximum Entropy and Ecology: A Theory of Abundance, Distribution, and Energetics. Oxford University Press: New York; 2011. 
Reference Source\n\nIversen L, Tu HL, Lin WC, et al.: Molecular kinetics. Ras activation by SOS: allosteric regulation by altered fluctuation dynamics. Science. 2014; 345(6192): 50–54. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "15871",
"date": "25 Oct 2016",
"name": "Neil McRoberts",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThis paper continues Prof Frank’s investigation of the connections between observation and probability patterns in the natural sciences, in this case dealing specifically with power law size distributions. The paper is written in a tutorial style which is probably appropriate given that the material covered is not standard reading in the life sciences. For those who are completely unfamiliar with the general subject area, I strongly advise reading reference [3] from the list in the current paper before or in conjunction with it.\nStudents in the life sciences often have a difficult relationship with concepts of probability and statistics. It should not be that way, of course, since biological systems are inherently stochastic, but it seems that as biologists we often approach the existence of variance in our observations as a problem to be got round, rather than as the very stuff of biology which it is our job to explain. Worse yet, many of us encounter the idea of “transformation” to make data more normal as an entirely ad hoc, opaque, process that seems to rely on rules of thumb learned by rote, with no explanation as to why a particular transformation would be appropriate in some situations and not others. Frank’s paper offers an altogether more satisfying perspective on the subject of transformation. 
In addition to a wealth of other insights, this paper lays out a well-grounded theoretical basis for understanding which transformations to seek if one wishes to preserve the information content of original, non-normal, observations but express it in terms of a corresponding normal distribution; the paper focuses in particular on the case of tree size data that conform to a power-law on their original scale of measurement. Of course, the paper is not intended as a tutorial on data transformation (the fact that students could learn to think of transformation in a new, richer, sense from reading this work is a by-product) but more an introduction to a different perspective on biological observation and its relationship with the probability distributions to which the observations conform.\nThree key ideas carry the paper along:\n“A single underlying quantity captures the generic regularity in seemingly different patterns. That underlying quantity is the average distance of observations from the most common type.” A natural metric will exist, as some transformation of the original scale of measurement, such that when considered in terms of the square root of the natural metric, average distance of observations from the most common type will follow a normal distribution. With the transformation to the natural metric properly chosen, the information content of the original data distribution and the normal distribution for the data, when expressed in its natural metric, are the same (the invariance property).\n\nThe general form p(y) = Kexp(-L(y,x)) in which K adjusts so that p(y) is a proper density function and L(y,x) is any desired distance measure is, in the words of Jorma Rissanen, a simple device used already by Gauss. In the present case it forms a density function for the distance between individual observations and the most common type. 
In other cases (such as those which Rissanen had in mind) it may measure the distance between model predictions and observed data. The simple device forms a link between the work presented by Frank and the extensive literature on coding, model selection and statistical inference. Exploring those links lies well beyond the scope of Frank’s paper, but their shared basis in information theory and the notion of how much, and what, information can be obtained from Nature and then modeled is an area of research that biologists have largely ignored.\nThe paper includes a number of other points during the exposition of its central ideas. These, together with the complexity of the ideas themselves, and the fact that the notation used to lay out the numerous (but necessary) equations is subject to somewhat arbitrary-seeming substitutions mean that the paper needs close reading, in spite of the clarity of Frank’s writing. Although signposts are provided along the way (often in the form of rhetorical questions) to let us know where we’re going next, the paper would benefit from a more comprehensive section by section guide in the introduction, so that the whole journey can be seen in a single view.\nReturning to my initial point concerning the way that the process of making data “more normal” is often learned by rote, the sections on deduction of appropriate natural metrics to express data should be particularly useful from a pedagogic perspective; they show that (at least in theory) an approach based on argument from principles is possible. Those who are unfamiliar with the ideas will still probably be left with the unwelcome impression that it will take considerable experience to become proficient at recognizing approaches that are likely to work, but one of the wider lessons of Frank’s publications in this general area is that rather few approaches are likely to account for the majority of observations most of us will encounter.",
"responses": [
{
"c_id": "2254",
"date": "28 Oct 2016",
"name": "Steven Frank",
"role": "Author Response F1000Research Advisory Board Member",
"response": "Neil McRoberts’ summary provides an excellent overview of the broader subject and the place of my article within that wider context. McRoberts mentions the duality between the understanding of probability patterns and the complementary problems of inference. I originally came to this subject through that connection, particularly through my study of Jaynes’ pioneering work (see references to my earlier work in my article). However, I was not aware of several of the explicit connections mentioned by McRoberts, which I appreciate learning about from his review. McRoberts suggests that I provide “a more comprehensive section by section guide in the introduction.” I often provide such a guide in my longer articles. In this case, I had that kind of guide in my early drafts. However, the underlying technical nature of the work made the overview into what seemed like more of an obstacle than an invitation to the article. So I dropped it, allowing me to move the article very quickly into the example of tree size that I use throughout to help connect the underlying abstractions to real-world problems. Perhaps it would be possible to write a helpful introductory guide, but I have not yet found the right expression. The main difficulty with the current structure is that some readers may mistakenly focus on the tree size problem as the central message of the article. It is not. The main message is that we can understand almost all common probability patterns by a few simple underlying invariances. That understanding provides great insight into many aspects of commonly observed patterns, including patterns such as tree size. One advantage of the F1000Research format is that I can submit a revised version at any time. For now, I will keep the current structure, while I continue to think about how to improve the presentation. I welcome comments from readers."
}
]
},
{
"id": "17202",
"date": "26 Oct 2016",
"name": "Scott E. Page",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThis is a provocative paper that should have high impact. It makes both a scholarly and educational contribution. I expect it to be widely cited and taught. When indexed, it will appear on the syllabus of a graduate class that I teach.\n\nThat said, I have one major quibble with the paper. First some background. The standard way to teach distributions goes as follows: Normal Distributions arise from adding or averaging variation (as nicely explained here). Log Normal Distributions arise from multiplying shocks. Power Law Distributions have multiple causes: self-organized criticality, preferential attachment, random walk return times, etc.\n\nIn this paper, Frank argues that we can connect some power law and log normal distributions to normal distributions by using a different unit of analysis.\n\nLet’s take the standard story of why tree sizes have a log normal distribution. Trees grow by random rates each year. If rates of growth are proportional, then a tree of size S that has growth rates r(t) will be size\n\nS[1+r(1)][1+r(2)][1+r(3)][1+r(4)]...[1+r(10)]\n\nin 10 years. If I take the logarithm of that size, it will be additive in the shocks, and thus normally distributed.\n\nFrank makes an alternative argument, that there is a natural metric for size, T = d², and that this when transformed produces a normal distribution. If d is normally distributed, then d² will be log normally distributed. 
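The multiplicative "standard story" above can be sketched numerically (the annual growth rates below are hypothetical, not data from the paper):

```python
import numpy as np

# Sketch of the multiplicative "standard story" (hypothetical annual growth
# rates, not data): sizes S*(1+r(1))*...*(1+r(10)) are right-skewed, while
# log sizes, being sums of the log shocks, are much closer to symmetric.
rng = np.random.default_rng(2)
r = rng.uniform(0.0, 0.2, size=(100_000, 10))   # assumed annual growth rates
sizes = np.prod(1.0 + r, axis=1)                # multiplicative growth (S = 1)
log_sizes = np.log(sizes)                       # additive in the shocks

def skew(x):
    c = x - x.mean()
    return (c**3).mean() / (c**2).mean() ** 1.5

assert abs(skew(log_sizes)) < abs(skew(sizes))  # logs are nearer symmetric
```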
He then makes a similar argument for enzyme reactions.\n\n(The tree model is more complicated as he includes a linear term and a log term but this captures the main idea).\n\nMy quibble with the paper has to do with the difference between a log normal distribution or (exponential) and a power law distribution.\n\nPower law: y ~ x-a Exponential: y ~ex\n\nIf you plot a log normal distribution on a log log plot, you get the sort of curvature that Frank shows in many of his graphs. What he is calling power laws would be characterized by many as log normal or exponential.\n\nI realize that his more general point is not that the tree size is either log normal or a power law. However, the paper would be much stronger and much clearer if he would make the following changes.\nClarify the difference between power law and log normal (including mention of the curve on the log log plot) Explain that many of the “long tails” in biology such as tree size have “long tails” that can be explained using his method that separates the generic causes of the distribution from the particular. He might even separate out the generic and particular in the tree growth so we see why he cannot just use the log normal distribution. Ideally, he would also show how his approach can produce a true power law.\n\nI would be remiss to not add that the paper is a model of clarity of exposition and argumentation.",
"responses": [
{
"c_id": "2257",
"date": "31 Oct 2016",
"name": "Steven Frank",
"role": "Author Response F1000Research Advisory Board Member",
"response": "I appreciate Scott Page’s generous summary and helpful comments. With regard to his specific suggestions: I agree with Page’s main point that I should relate my approach to the widely used lognormal distribution and the common power law expressions. To address those issues, I have added an Appendix to the revised version. I show how the lognormal distribution arises as an approximate description of growth and size whenever one can reasonably consider the distribution of growth rates as approximately normally distributed. Because total growth can be thought of as the sum of sequential growth increments, total growth may often be approximately normally distributed as a consequence of the various central limit theorem type arguments. However, sums of random variables are often not normally distributed, so one must be cautious about the generality of lognormal distributions for size. In my original article, I did not mention the lognormal distribution. I find it useful to distinguish practical applications from approaches aimed at broad conceptual understanding of the relations between different distributions. In my opinion, the lognormal distribution provides a useful practical tool, but has some limitations with regard to the conceptual aims of my article. I discuss these points in a newly added section of the Appendix. The new Appendix also includes a section that shows how my approach leads to a true power law distribution. Finally, Page discussed how one might separate the generic factors that shape probability distributions from the specific factors that influence particular observable patterns. I agree that separating the generic from the particular is a key aspect of predicting and explaining patterns. 
In my article, the section on “Deductive: tree size example” attempts to show how one might evaluate particular generative processes of tree growth and size within my broader framework of the invariances that shape the generic form of probability distributions. I say at the end of that section: “I presented this derivation to encourage future study. The proper way to relate general growth processes to invariant probability patterns remains an open problem.” At present, I do not have anything sufficiently compelling to add, although I certainly agree that this is a key issue. A related article of mine, “Invariant death”, emphasizes the duality of the generic and particular aspects of pattern, and adds some analysis on this topic (see https://f1000research.com/articles/5-2076/v1)."
}
]
}
] | 1
|
https://f1000research.com/articles/5-2074
|
https://f1000research.com/articles/5-2764/v1
|
25 Nov 16
|
{
"type": "Antibody Validation Article",
"title": "Evaluation of recombinant monoclonal antibody SVmab1 binding to NaV1.7 target sequences and block of human NaV1.7 currents",
"authors": [
"Dong Liu",
"Mandy Tseng",
"Linda F. Epstein",
"Lydia Green",
"Brian Chan",
"Brian Soriano",
"Desiree Lim",
"Oscar Pan",
"Christopher M. Murawsky",
"Chadwick T. King",
"Bryan D. Moyer",
"Dong Liu",
"Mandy Tseng",
"Linda F. Epstein",
"Lydia Green",
"Brian Chan",
"Brian Soriano",
"Desiree Lim",
"Oscar Pan",
"Christopher M. Murawsky",
"Chadwick T. King"
],
"abstract": "Identification of small and large molecule pain therapeutics that target the genetically validated voltage-gated sodium channel NaV1.7 is a challenging endeavor under vigorous pursuit. The monoclonal antibody SVmab1 was recently published to bind the NaV1.7 DII voltage sensor domain and block human NaV1.7 sodium currents in heterologous cells. We produced purified SVmab1 protein based on publicly available sequence information, and evaluated its activity in a battery of binding and functional assays. Herein, we report that our recombinant SVmab1 does not bind peptide immunogen or purified NaV1.7 DII voltage sensor domain via ELISA, and does not bind NaV1.7 in live HEK293, U-2 OS, and CHO-K1 cells via FACS. Whole cell manual patch clamp electrophysiology protocols interrogating diverse NaV1.7 gating states in HEK293 cells revealed that recombinant SVmab1 does not block NaV1.7 currents to an extent greater than observed with an isotype matched control antibody. Collectively, our results show that recombinant SVmab1 monoclonal antibody does not bind NaV1.7 target sequences or specifically inhibit NaV1.7 current.",
"keywords": [
"NaV1.7",
"SVmab1",
"ion channel",
"antibody",
"electrophysiology"
],
"content": "Introduction\n\nIon channels are attractive drug targets and small molecule therapeutic drugs to this protein family generate worldwide sales of approximately $12 billion1. Despite this attraction and the demonstrated involvement of ion channel antibodies in diverse autoimmune diseases2, no antibody-based ion channel therapeutic has progressed to the clinic, due to challenges in developing both optimal immunogens and robust screening processes to identify channel modulators3.\n\nThe genetically validated pain target NaV1.7 functions as a voltage-gated sodium channel expressed in nociceptive neurons in the peripheral nervous system4. NaV1.7 is comprised of four domains (DI-DIV), each containing six transmembrane (TMD) helices, in which TMD helices S1–S4 contain the voltage sensor region and TMD helices S5–S6 contain the pore region. Upon membrane depolarization, the voltage sensor domains, in particular the voltage sensor paddle comprised of S3, the S3–S4 loop, and S4, move outward resulting in pore opening, influx of sodium into the cell, and action potential firing5. Recently, Lee et al. described a monoclonal antibody SVmab1 targeted to a peptide loop between DII S3-4 in the voltage sensor paddle region, which bound a NaV1.7 DII voltage-sensor domain protein by ELISA and blocked NaV1.7 function by electrophysiology6. In particular, SVmab1, purified from a hybridoma, was reported to block human NaV1.7 currents in a use-dependent manner, in which repeated channel opening events uncovered the epitope for antibody binding in the paddle region, akin to antibody blockade of potassium channels6,7. 
The antigen used to generate SVmab1 was peptide VELFLADVEG, located in the DII paddle region, and the sequence of this antibody was previously reported8.\n\nWe generated recombinant SVmab1 (rSVmab1) protein based on the publicly available sequence information and evaluated its ability to bind peptide VELFLADVEG, purified DII voltage sensor domain protein, and cells expressing NaV1.7, as well as block NaV1.7 sodium currents in heterologous cells.\n\n\nMethods\n\nThe amino acid sequences for the heavy and light chains of rSVmab1 were obtained from Table 2 of a publication8. The variable region heavy chain sequence corresponds to SEQ ID NO 4 and the variable region light chain sequence corresponds to SEQ ID NO 8 of this publication. Synthetic, human codon-optimized, reverse translated DNA was generated by Genewiz, and subcloned into pTT5 expression vectors (National Research Council Canada), containing murine IgG1 heavy chain or kappa light chain constant regions. The coding regions from the resulting constructs were confirmed by sequencing to match the published sequences8. Plasmids were purified (Endofree Quanta Mega Kit; MDI Healthcare Services India) and re-confirmed by both sequencing and diagnostic restriction digest prior to transfection. Heavy and light chain DNA constructs for rSVmab1 were transiently co-transfected into 1.6L of HEK293 6E cells in an Erlenmeyer shake flask.\n\nCells were grown in Freestyle F17 media supplemented with 4mM L-glutamine, 0.1% pluronic acid and 1x antibiotic solution (Freestyle F17: Invitrogen, #12338-026; L-glutamine: Himedia, #TC243-1Kg; Antibiotic-Antimycotic: Invitrogen, #15140-062; Pluronic F-68: Invitrogen, #24040032; Tryptone N1: TekniScience Inc, #19553). Transfections were performed using polyethylenimine (PEI; Polysciences, #23967), at a DNA–PEI MAX ratio of 1:2.88. At 24 hours post-transfection, the cells were supplemented with 0.5% Tryptone. 
Cells were harvested after 5 days of culture and the supernatant was used for antibody purification. Conditioned media was clarified and used for affinity chromatography using a MabSelect SuRe column (GE Healthcare Life Sciences, #17-5199-01). Fractions containing antibody were pooled and further purified by ion exchange chromatography using SP-Sepharose Fast Flow resin (GE Healthcare). Protein purification and integrity were monitored throughout by SDS-PAGE using 4–12% Bis-Tris gels (Invitrogen, #NP0322), MES SDS Running Buffer (20X; Invitrogen, #NP0002), LDS sample buffer (Invitrogen, #NP0007) and stained with Simply Blue Safe (Invitrogen, #LC6065). Purified antibody was buffer exchanged via dialysis into 10mM sodium acetate (pH5.2), containing 9% sucrose and concentrated (30kD Amicon Ultra centrifugal filter unit; Millipore, #UFC801096). The concentration of the purified antibody was determined by the A280 method on a Nanodrop 2000c (Thermo Fisher Scientific). The final antibody sample was verified by analytical size exclusion chromatography-high performance liquid chromatography (SEC-HPLC) using a YMC-Pack Diol-200, 300 × 8 mm column (YMC Co. Ltd., ID: 0830002871 P/No. DL20S05-3008WT) equilibrated with 20mM sodium phosphate, 400mM sodium chloride, at a pH 7.2, maintaining a flow rate of 0.75ml/min. Finally, the rSVmab1 preparation was assayed for endotoxin levels using the Kinetic Endotoxin Assay (Charles River PTS Assay; 1.0-0.01 EU/ml Sensitivity PTS Cartridge, #PTS2001F) and flash frozen in liquid nitrogen. The isotype-matched control antibody used for electrophysiology studies was a recombinant murine IgG1/kappa monoclonal derived from an unrelated immunization campaign. 
The positive control mouse monoclonal antibody, used for peptide and D2S domain binding ELISAs, was generated by Abmart against the DII voltage sensor peptide sequence VELFLADVEG, the exact sequence used to generate SVmab1.\n\nMass analysis of non-reduced rSVmab1 was performed on an Agilent TOF 6230 Mass Spectrometer coupled with an Agilent 1260 Infinity HPLC system. HPLC mobile phases A and B were 0.1% trifluoroacetic acid (TFA) and 90% n-propanol/0.1% TFA, respectively. The reverse-phase column was an Agilent Zorbax 300SB-C8, 3.5µm, 2.1 × 50mm column (#865750-906), heated to 75°C. A 20µg aliquot of rSVmab1 was injected into the system. The sample was chromatographed at 0.2 ml/min with an 11 min gradient as follows: 20%B for 1 min; 20–70%B over 8 min; 70–100%B over 1 min; held at 100%B for 1 min. Mass spectrometer ionization and transmission settings were set as follows: Vcap, 5900V; fragmentor voltage, 460V; nebulizer gas, 25 psig; skimmer voltage, 95V; Oct RF Vpp voltage, 800V; and drying gas, 13 l/min.\n\nDNA encoding human NaV1.7 amino acids 709–857 (GenScript; derived from sequence NM_002977.3; https://www.ncbi.nlm.nih.gov/nuccore/NM_002977.3; NCBI Nucleotide RRID: SCR_004860) was cloned N-terminal to a 6x histidine affinity tag [D2S(709-857)-His6] in the pFastBac vector (Thermo Fisher Scientific), and a recombinant baculovirus was generated (Bac-to-Bac; Thermo Fisher Scientific). In total, 12L of Sf9 insect cells (3 × 10^6 cells/ml; Expression Systems) were infected with 5% (v/v) virus, incubated at 27°C for 48 h in spinner flasks, harvested by centrifugation and stored at -80°C until use. The remainder of the purification was conducted at 4°C.
The frozen cell pellet (175 g wet weight) was resuspended in lysis buffer [25 mM Tris-HCl (pH 7.4), 200 mM NaCl (TBS), containing 1% v/v protease inhibitor cocktail (Sigma-Aldrich, Inc., #P8340)], stirred until thawed and disrupted by passing the suspension through a high pressure homogenizer at 10,000 psi (Microfluidizer M110EHI; Microfluidics, Corp.). The crude lysate was centrifuged at 10,000 × g for 15 min and the resulting supernatant collected and centrifuged at 100,000 × g for 1.5 h in a 70 Ti rotor. The supernatant was decanted and the 100,000 × g pellet was collected, resuspended in lysis buffer and homogenized prior to solubilization. N-dodecyl-β-D-maltoside (DDM; Anatrace, Inc.) was added to the resuspended membranes to a final concentration of 40 mM, incubated for 1h on a rocker, followed by centrifugation at 100,000 × g to pellet insoluble material. The DDM soluble fraction (100ml) was decanted and used for purification. Preparative chromatography steps were performed on an AKTA Purifier (GE Lifesciences, Inc.) in TBS containing 1 mM DDM, unless noted. SDS-PAGE with Coomassie Blue staining was used to monitor purification.\n\nAnalytical tryptophan fluorescence size exclusion chromatography (Trp FSEC) was used to monitor the oligomerization state of D2S(709-857)-His6 during purification. Trp FSEC was performed on a Superose 6 10/300 GL column (GE Healthcare Life Sciences) equilibrated with DDM buffer, using an Agilent HPLC system equipped with a fluorescence detector (272 nm excitation/327 nm emission). Absorbance at 280nm was used to determine the protein concentration of purified D2S(709-857)-His6. N-terminal amino acid sequencing confirmed the identity of purified D2S(709-857)-His6. The DDM soluble fraction was incubated with 10ml Talon Superflow resin (Clontech) for 14–16h on a rocker. 
The resin was collected into an XK 16 column (GE Healthcare Life Sciences) and washed with stepwise increases in imidazole concentration (10 c.v., 5mM; 10 c.v., 7.5mM; 5 c.v., 15mM; and 2 c.v., 25mM) in DDM buffer until the A280nm reached a stable minimum. Talon-bound protein was eluted with 200mM imidazole in DDM buffer. Fractions containing D2S(709-857)-His6 were pooled, concentrated in Ultracel-30K MWCO ultrafiltration units (Millipore Corp., Inc.) and chromatographed on a Superdex 200 10/300 column (GE Lifesciences, Inc.) to remove contaminating proteins and imidazole. The monodispersity of fractions containing D2S(709-857)-His6 was confirmed by Trp FSEC9. Monodisperse, micellar D2S(709-857)-His6 migrates at an apparent MW of 70kDa, which is similar in size to DDM micelles. Thus, the detergent concentrates during ultrafiltration and cannot be separated well using size exclusion chromatography (SEC), necessitating another Talon affinity step. SEC fractions containing monodisperse D2S(709-857)-His6 were pooled, and incubated with 0.5ml Talon resin for 2h. The resin was collected in a 2ml gravity column, washed, and protein was eluted with 200mM imidazole in DDM buffer. The eluate was loaded into a 0.5–3ml 10K MWCO Slide-a-Lyzer cassette (Thermo Fisher Scientific) and imidazole was removed by dialysis against DDM buffer. The dialyzed D2S(709-857)-His6 was collected, aliquoted, and frozen at -80°C.\n\nA recombinant BacMam baculovirus expressing human NaV1.7 was constructed as follows. A full-length cDNA clone of human NaV1.7 was obtained from Origene (pCMV6-XL4-NaV1.7) and codon optimized using synthetic DNAs (Thermo Fisher Scientific) to produce a cDNA that was stable during DNA propagation in E. coli strain HB101. The resulting cDNA was cloned into pENTR-D-Topo (Thermo Fisher Scientific) and the sequence was confirmed. pENTR-D-Topo-NaV1.7 was used in an LR Gateway reaction with pHTBV1.1 to produce pHTBV1.1-NaV1.7. 
After DNA sequence confirmation, pHTBV1.1-NaV1.7 was used in a transposition reaction to generate recombinant full-length baculoviral genomic DNA carrying NaV1.7, with transcription driven by the immediate early promoter from cytomegalovirus (Bac-to-Bac; Thermo Fisher Scientific). Transfection into Sf9 insect cells (Expression Systems) using FuGENE HD (Roche) allowed production of replication competent baculovirus, pseudotyped with VSV-G protein. The resulting transfection supernatant (P0 virus) was amplified twice, titered by endpoint dilution, as measured by gp64 expression (Expression Systems), and used in cell based assays.\n\nHuman NaV1.7 HEK293 stably transfected cells were purchased from Eurofins Pharma Bioanalytics Services US, Inc., and human NaV1.7 CHO-K1 stably transfected, inducible cells were purchased from ChanTest.\n\nHEK293 complete media contained D-MEM/F-12 (1X) with 10% fetal bovine serum (FBS; US origin), 1x non-essential amino acids (NEAA; 10mM, 100X), 1x penicillin-streptomycin-glutamine (100X), and 400ug/ml Geneticin® Selective Antibiotic (all Invitrogen; #11330-033, #16000-044, #11140-050, #10378-016 and #10131-027, respectively).\n\nCHO-K1 complete media contained F12 HAM (1X; Sigma-Aldrich, #N6658) with 10% FBS (US origin; Sigma-Aldrich, #F2442), 1x L-glutamine (Sigma-Aldrich, #G7513), 0.4mg/ml Zeocin (Invitrogen, #46-0509), and 0.01mg/ml blasticidin (Gibco, #A11139-03). CHO-K1 stable cells were seeded at 8×10^6 cells in 20ml media with 1ug/ml tetracycline (Sigma-Aldrich, #T7660) and 100uM sodium butyrate (Sigma-Aldrich, #303410) in a T-175 flask and incubated 18–24hr prior to FACS analysis.\n\nU-2 OS cells (ATCC; #HTB-96; RRID: CVCL_0042), cultured to 80% confluency, were rinsed with Ca and Mg-free DPBS (Gibco, #14190-144) and dissociated with Cell Dissociation Buffer (enzyme-free; Gibco, #13151-014) for 8–10 minutes in a 37°C incubator.
Following addition of 5.0ml of complete growth medium, cells were dislodged with gentle pipetting, pelleted, and resuspended to 3×10^6 cells/5ml growth medium. Cells and human NaV1.7 BacMam virus added at 200 MOI were combined in a T-75 flask and incubated 18–24hr prior to FACS analysis.\n\nU-2 OS complete media contained McCoy’s 5A with 10% FBS, 1x NEAA, 1x L-glutamine (200mM, 100X) and 1x penicillin-streptomycin (10,000U/ml, 100X) (all Gibco; #16600-082, #10099-141, #11140-050, #25030-081 and #15140-122, respectively).\n\nThe synthetic peptide VELFLADVEG (Abmart) was conjugated to maleimide-activated bovine serum albumin (BSA; Thermo Fisher Scientific, #PI-77116) through an N-terminal cysteine. The peptide was reconstituted to 10 mg/ml in DMSO and maleimide-activated BSA was made up to 10 mg/ml in dH2O. The BSA-conjugate was prepared by mixing 100ug of maleimide-activated BSA in 200uL PBS, 100ug synthetic peptide and 5mM TCEP (Thermo Fisher Scientific, #PI-77720), and the reaction was incubated at room temperature overnight. BSA-conjugated synthetic peptide (VELFLADVEG) was coated at 1μg/ml on a Costar 384-well medium binding plate (#3702) using 40μL/well, in 1X PBS and incubated at 37°C for 1hr. The plate was washed three times with 90μL/well 1X PBS using a Biotek plate washer (ELx 405), blocked with 1% milk/1X PBS (90μl/well), and incubated at room temperature for 30 min. Blocking buffer was aspirated and rSVmab1 or positive control mouse monoclonal antibody against the DII sensor peptide VELFLADVEG was titrated from 200nM using 40μL/well in 1X PBS/1% milk and incubated at room temperature for 1hr. Plates were washed three times with 90μL/well 1X PBS. Polyclonal goat anti-mouse Fc HRP (Jackson ImmunoResearch Labs, #115-035-164; RRID: AB_2338510) was added at 100ng/mL in 1X PBS/1% milk (40μL/well) and incubated at room temperature for 1hr.
Plates were washed an additional four times and the HRP signal was detected with 1-Step TMB (40μL/well; Neogen, #308177) for 30min followed by quenching with 1N hydrochloric acid (40μL/well). Plates were read at OD450 (Thermo Multiskan Ascent).\n\nPurified DIIS was coated at 2μg/ml on a 96-well NiNTA plate pre-blocked by the manufacturer with bovine serum albumin (Thermo Fisher Scientific, #15442) (50μL/well), in 1X PBS/2mM n-dodecyl-β-D-maltoside (DDM) detergent (Calbiochem, #324355), and then incubated at 37°C for 1hr. Plates were washed twice with 200μL/well of 1X PBS/2mM DDM. rSVmab1 or positive control mouse monoclonal antibody against the DII sensor peptide VELFLADVEG was titrated 1:2 from 13nM in 1% milk/1X PBS/2mM DDM (50μL/well) and then incubated at room temperature for 1hr. Following two washes with 200μL/well of 1X PBS/2mM DDM, polyclonal goat anti-mouse Fc HRP (Jackson ImmunoResearch Labs, #115-035-164; RRID: AB_2338510) was added at 400ng/mL in 1% milk/1X PBS/2mM DDM (50μL/well), and incubated at room temperature for 1hr. Plates were washed an additional four times and the HRP signal was detected with 1-Step TMB (50μL/well), for 30min followed by quenching with 1N hydrochloric acid (50μL/well). Plates were read at OD450 (Thermo Multiskan Ascent).\n\nHuman NaV1.7 stably transfected HEK293 cells, human NaV1.7 stably transfected, inducible CHO-K1 cells, human NaV1.7 BacMam transduced U-2 OS and parental cells were treated with non-enzymatic dissociation buffer (Sigma-Aldrich, #C5914) to remove cells from the flask prior to FACS assays.
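The 1:2 antibody titration from 13nM described above is a standard serial dilution; a short sketch of the resulting concentration series (the number of dilution points is chosen here purely for illustration):

```python
def serial_dilution(start_nM, fold, n_points):
    """Concentration series produced by an n-point serial dilution (e.g. 1:2 from 13 nM)."""
    return [start_nM / fold ** i for i in range(n_points)]

print(serial_dilution(13.0, 2, 4))  # [13.0, 6.5, 3.25, 1.625]
```

The same helper covers the 200nM peptide-ELISA titration by changing the starting concentration.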
In 96-well V-bottom plates (Costar, #3897), 50,000 cells/well were incubated with 33nM rSVmab1 or isotype control (R&D Systems, #MAB002; RRID: AB_357344; monoclonal mouse IgG1 isotype control) or positive control antibodies (Millipore, #MABN41; RRID: AB_10808664; monoclonal mouse anti-human NaV1.7 antibody10) in 50ul of FACS buffer (1X PBS+2% FBS; PBS: Hyclone, #SH30256.02; FBS: Sigma-Aldrich, #F2442, 500mL), and then incubated at 4°C for 1hr. Cells were isolated by centrifugation at 2500 RPM (664 × g) for 2 min; the supernatant was removed, and the cells were washed twice with 200ul/well FACS buffer. Cells were resuspended in 50ul (5ug/ml) polyclonal goat-anti-mouse IgG Fc Alexa 647 (Jackson ImmunoResearch Labs, #115-605-071; RRID: AB_2338909) and 2.5ug/ml 7-aminoactinomycin D (7AAD; Sigma, #A9400) and incubated at 4°C for 15min. Cells were then washed once, resuspended in 50ul FACS buffer and read on a Becton Dickinson Accuri Flow Cytometer using the IntelliCyt HyperCyt Autosampler. Single cells were gated and geometric means (GeoMean) of 7AAD-negative cells were analyzed using the IntelliCyt ForeCyt 3.1 software (IntelliCyt; http://intellicyt.com/products/software/). A minimum of 350 live cell events were collected per well.\n\nHuman NaV1.7 stably transfected HEK293 cells, plated on glass coverslips (Warner Instruments, CS-8R, #64-0701) for 18–28 hr before recording, were voltage clamped using the whole cell patch clamp configuration at room temperature (21–24°C), using a MultiClamp 700B amplifier and DIGIDATA 1322A with pCLAMP 10.2 software (Molecular Devices; https://www.moleculardevices.com/systems/conventional-patch-clamp/pclamp-10-software; RRID: SCR_011323). Pipettes, pulled from borosilicate glass capillaries (World Precision Instruments), had resistances between 1.5 and 2.0MΩ. Whole cell capacitance was uncompensated and leak subtraction was not used. Currents were digitized at 50kHz and filtered (4-pole Bessel) at 10kHz using pClamp10.2.
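The FACS wash spin above is reported both in RPM and × g; the two are linked by the standard relation RCF = 1.118 × 10⁻⁵ × r(cm) × RPM², so the quoted pairing implies a rotor radius near 9.5 cm (a derived figure, not stated in the text):

```python
def rcf(rpm, radius_cm):
    """Relative centrifugal force (x g): RCF = 1.118e-5 x r(cm) x RPM^2."""
    return 1.118e-5 * radius_cm * rpm ** 2

# A 9.5 cm effective radius reproduces the text's 2500 RPM ~ 664 x g pairing
print(round(rcf(2500, 9.5)))  # 664
```

The conversion matters because the same RPM setting yields different g-forces on rotors of different radii.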
Cells were positioned directly in front of a micropipette connected to a solution exchange manifold for antibody perfusion. The external solution consisted of 140mM NaCl, 5.0mM KCl, 2.0mM CaCl2, 1.0mM MgCl2, 10mM HEPES, and 11mM glucose, adjusted to pH 7.4 with NaOH. The internal solution consisted of 62.5mM CsCl, 75mM CsF, 2.5mM MgCl2, 5mM EGTA, and 10mM HEPES, adjusted to pH 7.25 with CsOH. To record from closed/resting channels, cells were held at -120mV and pulsed to -10mV for 30msec at 0.1Hz. To record from partially inactivated channels, cells were held at -120mV initially and then switched to a voltage that yielded 20% channel inactivation. Pulses of 30msec duration to -10 mV were delivered every 10 sec, and peak inward currents were recorded before and after antibody addition. To record from slow inactivated NaV1.7 channels (P1) and following a train of depolarizing stimuli (P26), cells were voltage clamped to -110 mV for 3 sec and sodium currents were elicited by a train of 26 depolarizations of 150msec duration to -10 mV at a frequency of 5Hz. Cells were then clamped to -20mV while 500 nM rSVmab1, isotype-matched murine IgG1/kappa monoclonal antibody derived from an unrelated immunization campaign, or 0.3% BSA control was added. At the 5 and 15 minute time points post-antibody addition, cells were reclamped to -110 mV for 3sec and put through the same 26 pulse voltage protocol as above. Peak inward current during the 1st (slow inactivated) or 26th (use-dependent) pulse to -10 mV in the presence of antibody was divided by the peak inward current evoked by the 1st or 26th pulse to -10 mV in the absence of antibody to determine percent inhibition. A separate use-dependent protocol was also employed that replicated conditions used by Lee et al.6, where cells were held at -120mV and sodium currents were elicited by a train of depolarizations of 30msec duration to -10mV at a frequency of 10Hz.
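The percent-inhibition calculation defined above takes the ratio of the post-antibody to pre-antibody peak inward current (the fraction of current remaining); percent inhibition is its complement. A minimal sketch with hypothetical peak currents:

```python
def percent_inhibition(peak_before, peak_after):
    """Percent block computed from peak inward currents before and after antibody addition."""
    return 100.0 * (1.0 - peak_after / peak_before)

# Hypothetical peak inward currents (nA): -5.0 before antibody, -3.25 after
print(round(percent_inhibition(-5.0, -3.25), 1))  # 35.0
```

Because both peaks are inward (negative), the sign cancels in the ratio and the result reads directly as percent block.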
All testing solutions had 0.3% BSA (Sigma-Aldrich, #A2058) to prevent non-specific adhesion of proteins to tubing and recording chamber components, and solutions were perfused over cells at 1ml/min. The pore blocker tetrodotoxin (TTX; 500 nM; Alomone Labs, #T-550) was added at the end of experiments as a positive control for robust NaV1.7 inhibition. Data were analyzed with pCLAMP and all figures were plotted using Origin Pro8 (OriginLab Corp).\n\nElectrophysiology data are presented as mean ± SEM, and statistical significance was determined using two-tailed, paired or unpaired Student's t-test with Origin Pro 8 software, with p<0.05 denoting statistical significance.\n\n\nResults\n\nRecombinant SVmab1 (rSVmab1) was purified from transiently transfected HEK293 6E cells and analyzed by SDS-PAGE (Figure 1A) and SEC-HPLC (Figure 1B). rSVmab1 migrated at an observed molecular weight of ~150kDa in non-reducing SDS-PAGE, comprised distinct and appropriately sized heavy chain and light chain bands in reducing SDS-PAGE, and eluted as a single sharp peak in SEC-HPLC. Collectively, these findings are consistent with the production of an intact antibody. Mass spectrometry analysis of non-reduced rSVmab1 revealed the major peak mass to be 147,938Da, which closely matched the theoretical mass of 147,936Da for an agalactosylated/fucosylated bi-antennary glycoprotein (Figure 2).\n\n(A) SDS-PAGE of 0.5 and 5.0 ug non-reduced and reduced rSVmab1. (B) Size exclusion chromatography-high performance liquid chromatography elution profile of rSVmab1. The main peak comprised 97.7% of the area.\n\nThe major glycoform on non-reduced rSVmab1 is G0F (agalactosylated/fucosylated bi-antennary glycan) with a calculated mass of 147,938Da. Glycosylation of each heavy chain is denoted (G0F)2. 
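The glycoform mass assignments in the deconvoluted spectrum follow simple arithmetic: each additional galactose adds a nominal 162 Da hexose residue to the observed 147,938 Da (G0F)2 mass. A sketch of the expected peak ladder (nominal residue mass assumed; the text reports average masses):

```python
G0F2 = 147938   # observed mass (Da) of the intact (G0F)2 glycoform, from the text
HEXOSE = 162    # nominal mass (Da) added per galactose residue (an assumed round value)

ladder = {n: G0F2 + n * HEXOSE for n in range(3)}
for n, mass in ladder.items():
    print(f"+{n} galactose: {mass} Da")
```

Matching observed satellite peaks against such a ladder is how G1F (+1 galactose) and G2F (+2 galactose) species are identified.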
Additional peaks not matching the theoretical mass of 147,936Da are extended glycoforms of the intact molecule and correspond to addition of galactoses (G1F = +1 galactose; G2F = +2 galactose) or aglyco = no glycan.\n\nrSVmab1 binding to antigenic peptide was evaluated in an ELISA using peptide VELFLADVEG conjugated to BSA via an N-terminal cysteine residue. At 200nM rSVmab1, no peptide binding was observed, whereas binding of a positive control monoclonal antibody generated against this exact same peptide sequence was detected at a concentration as low as 2nM (Figure 3; Dataset 1). Next, purified DII voltage sensor domain protein, housing the SVmab1 epitope, was prepared as a detergent micelle in DDM and tested for rSVmab1 binding in an ELISA. At 13nM rSVmab1, no DIIS binding was observed, whereas binding of the positive control antibody, described above, was detected at concentrations <1nM (Figure 4; Dataset 2). Finally, FACS was used to assess rSVmab1 binding to HEK293, CHO-K1, and U-2 OS cells expressing human NaV1.7 protein. At 33nM rSVmab1, no cell binding was observed, whereas binding of a positive control NaV1.7 Ab was detected in all three cell lines (Figure 5; Dataset 3).\n\nPeptide ELISA of increasing concentrations of rSVmab1 (blue circles) or positive control antibody (red squares) binding to the BSA-conjugated peptide VELFLADVEG. Absorbance values after subtraction of non-specific binding to uncoated plates represent means ± standard deviation of the mean of at least two independent experiments.\n\nELISA analysis of increasing concentrations of rSVmab1 (blue circles) or positive control antibody (red squares) binding to purified, soluble NaV1.7 DII voltage sensor domain.
Absorbance values after subtraction of non-specific binding to uncoated plates represent means ± standard deviation of the mean of at least two independent experiments.\n\nFACS histograms of rSVmab1, positive control NaV1.7 antibody (Millipore, #MABN41), and an isotype control (R&D, #MAB002) (all at 33nM) binding to cell lines stably or transiently expressing human NaV1.7, or their respective parental cell lines.\n\nrSVmab1 was evaluated for functional inhibition of human NaV1.7 currents in HEK293 cells using whole cell manual patch clamp electrophysiology. Protocols that mimic conditions reported by Lee et al.6, as well as protocols that interrogate diverse NaV1.7 gating states, were employed. NaV channels exist in resting/closed states where the pore is shut, open states where sodium ions can permeate the pore, and one or more inactivated states where channels are recalcitrant to opening5. When 100nM rSVmab1 was applied to cells which were voltage clamped to a holding potential of -120mV with a 0.1Hz stimulation frequency, where NaV1.7 channels are in the closed/resting state, no reduction of sodium current was detected following 20min of antibody treatment (Figure 6; Dataset 4; p>0.05 comparing BSA control to rSVmab1). Notably, the pore blocker tetrodotoxin (TTX) robustly inhibited currents under these conditions. For comparison, 100 nM SVmab1 was reported to block closed/resting NaV1.7 by ~40% at 0.1Hz (Figure 3D of the study by Lee et al.6). Increasing the concentration of rSVmab1 to 500nM for 20min resulted in reductions of NaV1.7 current by 40% compared to reductions of 20% with an IgG1 isotype control (p=0.05 comparing rSVmab1 to IgG1 isotype control). rSVmab1 and IgG1 isotype control both yielded significantly larger current reductions compared to a BSA vehicle control group (Figure 7; Dataset 5; p<0.01 for BSA compared to IgG1 isotype control and p<0.01 for BSA compared to rSVmab1). 
Conductance-voltage relationships (Figure 7; Dataset 5) and steady-state fast inactivation curves (Figure 8; Dataset 6) demonstrated that rSVmab1 did not affect NaV1.7 gating properties. rSVmab1 was next evaluated in a use-dependent protocol using a 10Hz train of depolarizing stimuli (as per Lee et al.6) to repeatedly cycle NaV1.7 through open and inactive conformations in order to expose the SVmab1 epitope in the DII voltage sensor paddle region. Both 500nM rSVmab1 and an isotype control IgG1 antibody reduced tonic NaV1.7 current 30–35% in the first pulse of the train with nominal evidence of use-dependent block in later pulses of the train (Figure 9; Dataset 7; p>0.05 for all group comparisons). In all these studies, antibodies were incubated on cells for 20min with constant perfusion to accommodate a potentially slow on-rate. For comparison, 100nM SVmab1 was reported to block NaV1.7 current over 80% within 10sec (Figure 3C of the study by Lee et al.6), using this 10Hz protocol.\n\n(A) Exemplary raw traces showing sequential addition of 0.3% bovine serum albumin (BSA) control, 100 nM rSVmab1 (after 20min incubation), and 500nM tetrodotoxin (TTX) on NaV1.7 currents in the same HEK293 cell. (B) Summary of normalized NaV1.7 currents. rSVmab1 did not block NaV1.7 currents, whereas 500nM TTX robustly blocked NaV1.7 currents. Data are mean ± SEM (n=3/group). (C) Voltage protocol used, where channels were held at -120 mV in the closed/resting state.\n\n(A–F) Traces and I-V curves following control or 20min incubation with (A and D) 0.3% bovine serum albumin (BSA), (B and E) 500nM IgG, and (C and F) 500 nM rSVmab1. (G–I) Conductance-voltage relationships following control or 20 min incubation with (G) 0.3% BSA, (H) 500 nM IgG, and (I) 500 nM rSVmab1. (J) Voltage protocol used for panels A–F. 
(K) Summary of normalized peak NaV1.7 currents from cells incubated with 0.3% BSA, 500 nM IgG, or 500 nM rSVmab1 (after 20 min incubation) followed by 500nM tetrodotoxin (TTX), which blocked nearly all current. Data are mean ± SEM (n=5–8/group). ** p<0.01 for BSA compared to IgG and BSA compared to rSVmab1 at 20 min.\n\nSteady state fast inactivation curves following control or 20min incubation with (A) 0.3% bovine serum albumin (BSA), (B) 500nM IgG, and (C) 500nM rSVmab1. Data are mean ± SEM (n=4–5/group). (D) Voltage protocol used for panels A–C.\n\nNormalized current over 100 pulses at 10Hz following control or 20min incubation with (A) 0.3% bovine serum albumin (BSA), (B) 500nM IgG, and (C) 500nM rSVmab1. (D) Voltage protocol used for panels A–C. Exemplary raw traces at pulse 1 (P1) and pulse 100 (P100) following control or 20min of incubation with (E) 0.3% BSA, (F) 500nM IgG, and (G) 500nM rSVmab1. Summary of normalized currents at P1 (tonic block) and P100 (use-dependent block) following 20 min incubation with (H) 0.3% BSA, (I) 500 nM IgG, and (J) 500nM rSVmab1. Data are mean ± SEM (n=3–5/group).\n\nrSVmab1 was further evaluated using voltage protocols that place NaV1.7 channels in various inactivated states. When cells were voltage clamped at a potential that yielded 20% NaV1.7 inactivation, in which 20% of NaV1.7 channels are unavailable for opening and 80% of NaV1.7 channels are closed/resting, 500nM rSVmab1 and isotype control antibody decreased currents similarly around 30% after 15min of antibody treatment (p>0.05 for BSA, IgG1, and rSVmab1 comparisons), whereas TTX robustly blocked currents within seconds of application (Figure 10; Dataset 8). 
When cells were evaluated using a protocol that promotes transition of NaV1.7 into a slow inactivated state, by maintaining cells at a holding potential of -20mV during antibody addition and between voltage measurements, 500nM rSVmab1 and isotype control IgG1 Ab both decreased currents ~35% after 15 min, whereas TTX again robustly blocked currents (Figure 11, P1 tonic measurements; Dataset 9; p>0.05 for BSA, IgG1, and rSVmab1 group comparisons). Layering on a 5 Hz use-dependent protocol with 150msec depolarizing pulses following induction of slow inactivation resulted in current reduction by ~65% for rSVmab1 and isotype control IgG1 groups after 15min of antibody treatment (Figure 11, P26 use measurements; Dataset 9; p<0.01 for BSA compared to IgG1, p<0.05 for BSA compared to rSVmab1, p>0.05 for IgG1 compared to rSVmab1). In these experiments, effects of rSVmab1 were similar to those of the isotype control IgG1 antibody.\n\nExemplary raw traces following control or 15min incubation with (A) 0.3% bovine serum albumin (BSA), (B) 500 nM IgG, and (C) 500nM rSVmab1. (D) Voltage protocol used for panels A–C. Exemplary time courses following incubation with (E) 0.3% BSA, (F) 500nM IgG, and (G) 500nM rSVmab1. (H) Voltage protocol employed for panels E–G, where cells were held at a voltage yielding 20% channel inactivation during antibody addition. Summary of normalized currents following 15 min incubation with (I) 0.3% BSA, (J) 500nM IgG, and (K) 500nM rSVmab1. Data are mean ± SEM (n=3–5/group).\n\nExemplary raw traces following 5min or 15min incubation with (A) 0.3% bovine serum albumin (BSA), (B) 500nM IgG, and (C) 500nM rSVmab1. P1 = first pulse (tonic block); P26 = 26th pulse (use-dependent block). (D) Voltage protocol used for panels A–C. Cells were clamped to -20mV during addition of antibodies and between voltage measurements. Summary of normalized currents following 5 min or 15 min incubation with (E) 0.3% BSA, (F) 500nM IgG, and (G) 500nM rSVmab1.
Data are mean ± SEM (n=4–5/group). ** p<0.01 for BSA compared to IgG (15 min, P26); * p<0.05 for BSA compared to rSVmab1 (15 min, P26).\n\n\nConclusion\n\nAt the concentrations tested, recombinant monoclonal antibody SVmab1, generated from published sequence information8, did not bind to the following target sources: NaV1.7 peptide VELFLADVEG, NaV1.7 DII voltage sensor protein, and NaV1.7 expressing mammalian cells (HEK293, CHO-K1, U-2 OS). Recombinant SVmab1 also did not specifically block NaV1.7 currents in HEK293 cells, as assessed by whole cell manual patch clamp electrophysiology when channels were closed/resting, inactivated, or cycled through states to expose the voltage sensor paddle region using a train of depolarizing stimuli. Reductions in NaV1.7 current were comparable when using an isotype control IgG1 or recombinant SVmab1 at 500nM. It is unknown why both isotype control IgG1 and recombinant SVmab1 produced current reductions larger than BSA vehicle control in some voltage protocols. In the absence of positive binding data or specific NaV1.7 block, our results indicate that recombinant SVmab1 is not a robust large molecule NaV1.7 antagonist. It should be noted that Lee et al.6 utilized SVmab1 purified from a hybridoma, whereas the studies reported here employed recombinant SVmab1 purified from HEK293 6E cells. Differences in heavy and/or light chain antibody sequences from these sources could account for the observed differences in NaV1.7 binding and block. In addition, it is conceivable that differences in NaV1.7 glycosylation or beta subunit expression in HEK293 cells could impact epitope accessibility to SVmab1 in cell-based experiments; beta subunits have been reported to partially mask interactions between peptide toxins and NaV1.211,12. 
Other groups evaluating SVmab1 are encouraged to share their findings on NaV1.7 binding and block to inform the research community on the utility of this reagent.\n\n\nData availability\n\nOpen Science Framework: Dataset: Evaluation of recombinant monoclonal antibody SVmab1 binding to NaV1.7 target sequences and block of human NaV1.7 currents, doi 10.17605/osf.io/4jbz713.",
"appendix": "Author contributions\n\n\n\nBC, LE, LG, DLim, DLiu, CMM, OP, BS, and MT conducted all experiments. DLiu, BC, CMM, CK and BDM conceived the experimental design. BC, CMM and BDM wrote the article.\n\n\nCompeting interests\n\n\n\nAll authors were full-time employees at Amgen, Inc. at the time the experiments were conducted.\n\n\nGrant information\n\nThis research was funded by Amgen, Inc.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe would like to thank Carolyn Chu for assistance with antibody preparation, Emily Fogarty for assistance with ELISA screening, Joe Ligutti and Shanti Amagasu for assistance with cell preparation, Tina Meng, Paul Wang, Mukta Vazir and Fen-Fen Lin for BacMam NaV1.7 generation, and Zaven Kaprielian for critical review of the manuscript.\n\n\nReferences\n\nWickenden A, Priest B, Erdemli G: Ion channel drug discovery: challenges and future directions. Future Med Chem. 2012; 4(5): 661–79. PubMed Abstract | Publisher Full Text\n\nRamaKrishnan AM, Sankaranarayanan K: Understanding autoimmunity: The ion channel perspective. Autoimmun Rev. 2016; 15(7): 585–620. PubMed Abstract | Publisher Full Text\n\nWilkinson TC, Gardener MJ, Williams WA: Discovery of functional antibodies targeting ion channels. J Biomol Screen. 2015; 20(4): 454–67. PubMed Abstract | Publisher Full Text\n\nDib-Hajj SD, Yang Y, Black JA, et al.: The NaV1.7 sodium channel: from molecule to man. Nat Rev Neurosci. 2013; 14(1): 49–62. PubMed Abstract | Publisher Full Text\n\nAhern CA, Payandeh J, Bosmans F, et al.: The hitchhiker's guide to the voltage-gated sodium channel galaxy. J Gen Physiol. 2016; 147(1): 1–24. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLee JH, Park CK, Chen G, et al.: A monoclonal antibody that targets a NaV1.7 channel voltage sensor for pain and itch relief. Cell. 2014; 157(6): 1393–404. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nJiang Y, Ruta V, Chen J, et al.: The principle of gating charge movement in a voltage-dependent K+ channel. Nature. 2003; 423(6935): 42–8. PubMed Abstract | Publisher Full Text\n\nLee S-Y, Ji R-R: Nav1.7 antibodies and methods of using the same. PCT Int Appl. 2015. WO 2015/035173. Reference Source\n\nKawate T, Gouaux E: Fluorescence-detection size-exclusion chromatography for precrystallization screening of integral membrane proteins. Structure. 2006; 14(4): 673–81. PubMed Abstract | Publisher Full Text\n\nLaedermann CJ, Syam N, Pertin M, et al.: β1- and β3- voltage-gated sodium channel subunits modulate cell surface expression and glycosylation of Nav1.7 in HEK293 cells. Front Cell Neurosci. 2013; 7: 137. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGajewiak J, Azam L, Imperial J, et al.: A disulfide tether stabilizes the block of sodium channels by the conotoxin μO§-GVIIJ. Proc Natl Acad Sci U S A. 2014; 111(7): 2758–63. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDas S, Gilchrist J, Bosmans F, et al.: Binary architecture of the Nav1.2-β2 signaling complex. eLife. 2016; 5: pii: e10960. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLiu D, Tseng M, Epstein LF, et al.: Dataset: Evaluation of recombinant monoclonal antibody SVmab1 binding to NaV1.7 target sequences and block of human NaV1.7 currents. 2016. Data Source"
}
|
[
{
"id": "18379",
"date": "09 Dec 2016",
"name": "David Hackos",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis paper, by Dong Liu et al., represents a solid re-evaluation of the recombinant monoclonal antibody, named SVmab1, that was previously reported in the journal Cell (Lee et al., 2014) to selectively block human Nav1.7 channels in-vitro and inhibit pain behavior in-vivo. The experiments described in this paper demonstrate that SVmab1, when expressed and purified from HEK293 cells, is not able to block Nav1.7 channels using several distinct voltage-activation protocols. This antibody is also not able to bind to the same peptide antigen that it was reportedly raised against (Lee et al., 2014), nor does it bind to the purified Nav1.7 domain II voltage-sensing domain (VSD2), nor cells expressing Nav1.7. The experiments described in this paper were carried out with high accuracy and with the appropriate controls and described in sufficient detail in the paper. The patch clamp analysis was especially exhaustive, testing the antibody using multiple voltage protocols covering close-state binding, partially inactivated binding, slow-inactivated binding, and use-dependent binding. I am especially confident in these results since we have conducted a similar analysis of expressed and purified SVmab1 antibody at Genentech and also failed to detect binding or Nav1.7 blocking activity.\n\nOne potential concern is that the methods used to produce SVmab1 here (HEK293 cells) are different than those used by Lee et al., who used hybridoma expression techniques. 
Perhaps there are differences in post-translational modifications between HEK293-expressed and hybridoma-expressed SVmab1, such as glycosylation, that could account for the difference in the observed effects on Nav1.7 channels. Glycosylation differences have been previously observed when antibodies were expressed in different mammalian cell lines (Lifely et al., Glycobiology. 5:813, 1995) and could in principle alter pharmacology (though probably unlikely). Another possibility is that the sequences presented in the patent application that were used by Dong et al. to express SVmab1 in HEK293 cells are not correct. This uncertainty could be resolved if the authors of Lee et al. provided detailed mass spectrometry data on their hybridoma-expressed SVmab1 and/or deposited the hybridoma to the ATCC.\n\nIt is striking that SVmab1 lacks the ability to bind to the same peptide antigen used to generate the antibody (confirmed by Genentech). The sequence presented in the patent application is also unusual as it lacks somatic mutations in the heavy and light variable domains (VH and VL) that are normally present in potent and specific monoclonal antibodies. Only one non-germline residue difference is found in VL at Kabat position number 96 (W96L), which is at the VJ junction and likely due to VJ junctional diversity (not somatic mutation). Similarly, only three non-germline residues are found in VH (in CDR 3 at Kabat positions 95, 96, and 100), which are also likely the product of junctional diversity (not somatic mutation). Otherwise the VH segment is identical to the VhJ558.53 gene in the BALB/c mouse described by Haines et al. (Mol. Immunol. 38:9, 2001). The lack of affinity for the peptide antigen, and the lack of somatic mutations in VH and VL, again raises the possibility that an incorrect antibody sequence is presented in the patent application.\n\nMinor comments:\n\n1. Dong et al. 
identified a “positive control antibody” that they used as a control in binding experiments. This antibody was generated at Abmart (which also generated SVmab1 itself) and was expressed and purified from hybridomas (instead of HEK293 cells). However, only peptide binding and VSD2 binding data are shown (both positive). It would be interesting to also determine whether this antibody is able to bind to native Nav1.7 (by FACS) and/or modulate Nav1.7 (by patch clamp).\n\nIn summary, Dong et al. have nicely demonstrated the inability of SVmab1 to modulate Nav1.7 channels in-vitro. These results call into question the findings of Lee et al., which previously had suggested the potential for identification of monoclonal antibodies capable of selectively blocking Nav1.7 both in-vitro and in-vivo. If the antibody sequence presented in the patent application is in error or if there are essential post-translational differences between hybridoma-expressed SVmab1 and HEK293-expressed SVmab1, it is my hope that Lee et al. will clarify this so that their work can be appropriately replicated.",
"responses": []
},
{
"id": "18395",
"date": "12 Dec 2016",
"name": "Alan D. Wickenden",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe paper by Liu et al. describes the production and evaluation of a recombinant version of rSVmab1, a monoclonal antibody that was recently described to bind to the Nav1.7 DII voltage sensor domain, resulting in block of Nav1.7 sodium currents and reduction of pain and itch in rodent models (Lee et al., 2014). Interesting and surprisingly, Liu et al., show that recombinant SVmab1, produced with the aid of publically available sequence information, does not bind either peptide immunogen, purified Nav1.7 DII voltage sensor domain, or Nav1.7 in live HEK293, U-2 OS, and CHO-K1 cells. Consistent with the lack of binding, they further show that rSVmab has no specific effect on Nav1.7 currents measured in patch clamp studies employing protocols that interrogate a variety of Nav1.7 gating states. The study seems to have been conducted in a careful and thorough manner. In our opinion, the results conclusively show that recombinant SVmab, produced according to published sequence information, does not bind to, or inhibit Nav1.7, and therefore does not represent a valid tool for the exploration of Nav1.7 biology.\n\nThese findings raise the important question as to why results with the recombinant form of the antibody differ so drastically from published results with SVmab purified from hybridoma. One possibility, alluded to by Liu et al, is that the published sequence is different from the mAb purified from the hybridoma. 
If the authors have access to the original hybridoma-derived SVmab, it would be interesting to see side-by-side studies with the two forms of the antibody (alternatively, it would be interesting to hear from Lee et al. if they already have these comparative data at hand). Although unlikely, another possibility suggested by Liu et al. is that differences exist in Nav1.7 glycosylation or beta subunit expression in the HEK293 cell lines utilized in the two studies. Since Lee et al. demonstrated that SVmab could block sodium currents (presumably Nav1.7) in small DRG neurons, it might be informative to know whether the recombinant SVmab was similarly able to block these native Nav1.7-mediated currents.\n\nThe original findings by Lee et al. indicated that mAbs directed to the DII VSD might represent a viable strategy for inhibiting Nav1.7 function. Unfortunately the rSVmab described in the present study is not a useful tool for further examination of this strategy. Interestingly however, Liu et al. show that the “positive control” mAb from Abmart binds with high affinity to the peptide immunogen and the purified Nav1.7 DII voltage sensor domain. It would be very interesting to know if this control mAb could inhibit Nav1.7 function in patch-clamp studies.\n\nMinor points:\nrSVmab was tested at 200nM in the peptide ELISA, 13nM in the DII VSD ELISA, 33nM in the FACS assays and up to 500nM in the patch clamp studies. Why was the rSVmab tested at different concentrations in each study and why was binding not assessed at concentrations used in the functional studies?\n\nPage 4 & 5. Should 8x610 and 3x610 be 8x10^6 and 3x10^6?\n\nWhy was no capacitance or series resistance compensation applied in the patch clamp studies? Please provide an estimate of typical series resistance values and associated voltage-errors.\n\nPlease provide more information on “MABN41”. 
Does this mAb recognize the DII VSD or a different extracellular epitope?\n\nThe recombinantly produced rSVmab was flash frozen following purification and prior to testing. Although many proteins tolerate freezing and thawing, including antibodies, there are some that do not, and this is a relatively minor variable worth considering.",
"responses": []
},
{
"id": "18010",
"date": "12 Dec 2016",
"name": "Theodore Cummins",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is an important study that attempts to reproduce results obtained with a putative Nav1.7 blocking antibody. Nav1.7 sodium channels have become major targets for reducing nociceptive signal transmission. Lee et al. (2014) reported generation of a monoclonal antibody that binds to the S3-S4 linker of the Nav1.7 domain II voltage sensor, blocking channel activation and providing pain and itch relief in animals. Here Liu et al. used the publicly available sequence information to generate recombinant SVmab1 antibody. They rigorously characterized their antibody. Surprisingly, the antibody did not seem to bind the target immunogen, voltage-sensor or full length Nav1.7 channels, nor did it block Nav1.7 currents. This raises many important questions. The authors appropriately discuss several reasons that their results may differ from that obtained with the original SVmab1. The array of electrophysiological experiments carried out in order to detect rSVmab1 activity is impressive, but a few additions to the paper would have been helpful.\nThe lack of activity for recombinant SVmab1 raises concern about the original study and it would have been good to see if an aliquot of the original preparation from Lee et al. acted differently in their hands, but presumably aliquots of the original antibody are not available to the authors. 
Clarification of this point would have been helpful.\n\nMultiple studies have identified the S3-S4 linker of the Nav1.7 domain II voltage sensor as a critical molecular determinant of the action of peptide toxins that modulate Nav1.7 activity. Indeed, Huwentoxins I and IV and Protoxin II seem to target the very peptide region that SVmab1 reportedly targets (Schmalhofer et al., 2008; Sokolov et al., 2008; Xiao et al., 2008, 2010). Discussion of the mechanism of action of these toxins that inhibit Nav1.7 might have been helpful.\n\nThe authors generated a monoclonal antibody against the sequence putatively targeted by SVmab1. Because this control antibody did bind Nav1.7 in their cell lines and this peptide contains multiple major molecular determinants of HWTX-IV inhibition of Nav1.7 (Xiao et al., 2011), it is important to know if the control antibody inhibits Nav1.7 currents.\n\nOverall, this is a very interesting study. While it does not directly determine whether the voltage sensor of domain II of Nav1.7 is a good target for inhibiting Nav1.7 currents, it does raise questions about how the original SVmab1 antibody reported in Lee et al. (2014) was able to inhibit Nav1.7 currents and whether binding to Nav1.7 domain II was indeed important for reducing pain and itch behaviors in that study.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2764
|
https://f1000research.com/articles/5-2763/v1
|
25 Nov 16
|
{
"type": "Research Note",
"title": "Divergent pattern of genomic variation in Plasmodium falciparum and P. vivax",
"authors": [
"Preeti Goel",
"Gajinder Pal Singh",
"Preeti Goel"
],
"abstract": "The two main species causing malaria in humans, Plasmodium falciparum and P. vivax, differ significantly from each other in their evolutionary response to common drugs, but the reasons for this are not clear. Here we utilized the recently available large-scale genome sequencing data from these parasites and compared the pattern of single nucleotide polymorphisms, which may be related to these differences. We found that there was a five-fold higher preference for AT nucleotides compared to GC nucleotides at synonymous single nucleotide polymorphism sites in P. vivax. The preference for AT nucleotides was also present at non-synonymous sites, which lead to amino acid changes favouring those with codons of higher AT content. The substitution bias was also present at low and moderately conserved amino acid positions, but not at highly conserved positions. No marked bias was found at synonymous and non-synonymous sites in P. falciparum. The difference in the substitution bias between P. falciparum and P. vivax found in the present study may possibly contribute to their divergent evolutionary response to similar drug pressures.",
"keywords": [
"Plasmodium falciparum",
"Plasmodium vivax",
"non-synonymous polymorphism",
"resistance evolution"
],
"content": "Introduction\n\nPlasmodium falciparum and P. vivax are the two major species causing malaria in humans. These species differ greatly in their geographical distribution, mortality rates and resistance to anti-malarial drugs. P. falciparum is responsible for ~200 million malaria cases and ~440,000 deaths annually, of which ~90% occur in Africa, while P. vivax causes ~14 million malaria cases and 1400–15000 deaths annually, of which ~75% occur in South and South-East Asia1. Chloroquine was used as a frontline drug for both P. falciparum and P. vivax, but widespread resistance to chloroquine has only been observed in P. falciparum2. Therefore, chloroquine remains a frontline drug against P. vivax in most parts of the world, despite its usage for ~70 years2. The crt gene, which is involved in chloroquine resistance in P. falciparum, is not associated with chloroquine resistance in P. vivax3, despite significant conservation of the protein in the two species, and the mechanism of chloroquine resistance in P. vivax remains unknown. P. falciparum strains have developed resistance to almost all currently used drugs, including artemisinin, the most effective anti-malarial drug, and dealing with drug-resistant P. falciparum is one of the main contemporary public health challenges4,5. While P. vivax has been exposed to artemisinin, due to its frequent co-infection with P. falciparum6 and the usage of artemisinin against P. vivax in areas with chloroquine resistance7, artemisinin resistance in P. vivax has not yet been observed8–10. Thus, the evolutionary response of P. falciparum and P. vivax against anti-malarial drugs appears to be different11.\n\nA large amount of genome sequencing data has recently been generated from thousands of P. falciparum and hundreds of P. vivax samples12–14. This provides an unprecedented opportunity to compare the evolutionary patterns in the two species. 
The present study analysed this genomic data, and found a large substitution bias in P. vivax, even at non-synonymous sites, leading to biased amino acid changes. This may be related to the differential evolutionary response to the same anti-malarial drugs observed in the two parasites.\n\n\nMethods\n\nThe single nucleotide polymorphism (SNP) data of P. falciparum and P. vivax was obtained from the MalariaGen community webpage (https://www.malariagen.net/data/p-falciparum-community-project-jan-2016-data-release; https://www.malariagen.net/data/p-vivax-genome-variation-may-2016-data-release). The SNP data for P. falciparum consists of 939,687 filtered, high-quality exonic SNPs (631,715 non-synonymous and 307,972 synonymous) from 3,394 samples from 22 countries13. The SNP data for P. vivax consists of 303,616 filtered, high-quality SNPs from 228 samples14. Of these, 87,877 were non-synonymous, 62,862 synonymous and 152,877 non-coding. Proteome sequences of P. falciparum 3D7, P. berghei ANKA, P. chabaudi chabaudi, P. cynomolgi B, P. knowlesi H, P. reichenowi CDC, P. vivax Sal1, P. yoelii 17X were downloaded from the PlasmoDB database (http://plasmodb.org/common/downloads/release-27/). Orthologous sequences were identified using the best bidirectional hit algorithm15 and aligned using ClustalO (http://www.clustal.org/omega/)16. The conservation score for P. vivax residues was calculated as the average substitution score, using the BLOSUM62 matrix, across seven orthologs at non-gapped positions.\n\nAll statistical analyses were performed in R software v3.3.1 (https://www.r-project.org/). The R function cor.test was used for calculating the Spearman rank correlation coefficients.\n\n\nResults\n\nThere is a large difference in the genomic AT content of the two Plasmodium species. P. falciparum has a genomic AT content of 81% compared to 58% for P. vivax, thus the two species have diverged in their AT content from their common ancestor17. 
It has been proposed that the common ancestor of the two species was AT-rich17 and P. vivax has increased its genomic GC content since its divergence from the common ancestor. We tested whether this is true during the recent evolution of P. vivax by analysing the SNP data. We found highly biased substitution patterns in P. vivax, such that SNPs that change GC to AT nucleotides were approximately three times more common than those that change AT to GC nucleotides (Figure 1). This bias was present at synonymous, non-synonymous and non-coding sites (Figure 1) and indicates a recent opposite substitution bias in P. vivax compared to the general increase in its genomic GC content since its divergence from the common ancestor of P. falciparum and P. vivax. The biased substitution pattern at non-synonymous sites in P. vivax was reflected in the pattern of amino acid changes at the polymorphic sites, such that amino acids with GC-rich codons are reduced in abundance, while amino acids with AT-rich codons are increased in abundance (Figure 2A).\n\nThere are three times as many single nucleotide polymorphisms in P. vivax (Pv) that change GC to AT nucleotides compared to those that change AT to GC nucleotides. This bias becomes higher at synonymous sites (S). At non-synonymous sites (N) the bias becomes lower with a higher conservation score. The conservation score for each amino acid at non-synonymous sites was calculated as the average BLOSUM62 substitution score across seven Plasmodium orthologs at non-gapped positions. No such bias was observed in P. falciparum (Pf).\n\n(A) P. vivax. The average % GC content of codons for different amino acids is plotted on the Y-axis and the log2 fold change in the amino acid abundance at non-synonymous sites is plotted on the X-axis. A significant negative correlation is observed (Spearman correlation coefficient -0.69, p=0.0008). (B) P. falciparum. 
No correlation was observed (Spearman correlation coefficient -0.09, p=0.7).\n\nWe asked whether substitution bias in P. vivax might also influence amino acid changes at conserved positions. The substitution bias was present at amino acid positions that are low to moderately conserved, but not at highly conserved positions (Figure 1). As a control, there was no relationship between conservation and substitution bias at synonymous sites (Figure 1).\n\nWe next tested whether a similar bias might be present in P. falciparum. There was no marked substitution bias at synonymous or non-synonymous sites in P. falciparum (Figure 1). Consequently, there was no bias in amino acid changes at polymorphic sites according to the GC content of their codons (Figure 2B).\n\n\nDiscussion\n\nThe present study finds a sharp recent reversal in the substitution bias in P. vivax favouring AT nucleotides compared to the general increase in its GC content since its divergence from the common ancestor of P. vivax and P. falciparum. This substitution bias has consequences for the pattern of amino acid changes even at moderately conserved, and thus functionally important, sites (Figure 1). No such bias was observed in P. falciparum (Figure 1). The large difference in the substitution bias between P. vivax and P. falciparum may lead to different evolutionary solutions to similar drug pressure. It has been proposed that differences in the life cycle of the two Plasmodium species, specifically the early onset of the gametocyte stage in P. vivax, which allows transmission before the malaria symptoms and drug treatment, may impede the spread of drug resistance in P. vivax11. It is also possible that the strength of negative selection might be different between the two species. The ratio of non-synonymous to synonymous polymorphisms (N/S) is much higher in P. falciparum compared to P. vivax (2.1 and 1.4, respectively). 
We found that the difference in N/S was also present when considering amino acid sites conserved across Plasmodium species (0.64 and 0.37 for P. falciparum and P. vivax, respectively). A higher tolerance for non-synonymous changes at conserved amino acid positions in P. falciparum suggests that fitness reducing mutations might have a higher likelihood to be established in P. falciparum compared to P. vivax. Since drug resistance evolution often entails fitness cost18, it might be easier to acquire fitness reducing drug resistance mutations in P. falciparum compared to P. vivax. It is likely that a combination of these and other factors might contribute towards differences in the drug resistance evolution in the two species.\n\nIt has been proposed that the common ancestor of P. falciparum and P. vivax was AT-rich and P. vivax has subsequently been evolving towards higher GC content17. Here we find a recent reversal in the substitution bias in P. vivax, where it is now evolving towards increasing AT content. Interestingly, we observed a lower substitution bias in the non-coding regions in P. vivax compared to synonymous sites (Figure 1), which might suggest a higher functional constraint in the non-coding regions compared to synonymous sites. This observation may be utilized to identify non-coding regions in P. vivax genomes that are under higher functional constraint as more genomics data becomes available in the future.\n\n\nData availability\n\nThis publication uses data from the MalariaGEN Plasmodium falciparum Community Project, as described in ‘Genomic epidemiology of artemisinin resistant malaria’, eLife, 2016 (DOI: 10.7554/eLife.08714)13, and the MalariaGEN P. vivax Genome Variation project, as described by Pearson et al. in Nature Genetics, 2016 (DOI: 10.1038/ng.3599)14. 
This data is also available from the MalariaGEN website (https://www.malariagen.net/data/p-falciparum-community-project-jan-2016-data-release; https://www.malariagen.net/data/p-vivax-genome-variation-may-2016-data-release).",
"appendix": "Author contributions\n\n\n\nG.P.S. conceived and designed the study. G.P.S. and P.G. performed the research. G.P.S. wrote the manuscript. All authors reviewed the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe work is supported by an Early Career Fellowship to G.P.S. by the Wellcome Trust/DBT India Alliance (IA/E/15/1/502297) and a Junior Research Fellowship from University Grants Commission (UGC) of India to P.G.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nGenome sequencing was performed by the Wellcome Trust Sanger Institute and the Community Project is coordinated by the MalariaGEN Resource Centre with funding from the Wellcome Trust.\n\n\nReferences\n\nWorld Health Organization: The World Malaria Report 2015. Reference Source\n\nBaird JK: Chloroquine resistance in Plasmodium vivax. Antimicrob Agents Chemother. 2004; 48(11): 4075–4083. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNomura T, Carlton JM, Baird JK, et al.: Evidence for different mechanisms of chloroquine resistance in 2 Plasmodium species that cause human malaria. J Infect Dis. 2001; 183(11): 1653–1661. PubMed Abstract | Publisher Full Text\n\nWoodrow CJ, White NJ: The clinical impact of artemisinin resistance in Southeast Asia and the potential for future spread. FEMS Microbiol Rev. 2016; pii: fuw037. PubMed Abstract | Publisher Full Text\n\nLover AA, Gosling R, Feachem R, et al.: Eliminate now: seven critical actions required to accelerate elimination of Plasmodium falciparum malaria in the Greater Mekong Subregion. Malar J. 2016; 15(1): 518. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMayxay M, Pukrittayakamee S, Newton PN, et al.: Mixed-species malaria infections in humans. Trends Parasitol. 2004; 20(5): 233–240. 
PubMed Abstract | Publisher Full Text\n\nWorld Health Organization: Global report on antimalarial drug efficacy and drug resistance: 2000–2010. Geneva, World Health Organization (WHO). 2010. Reference Source\n\nWang M, Siddiqui FA, Fan Q, et al.: Limited genetic diversity in the PvK12 Kelch protein in Plasmodium vivax isolates from Southeast Asia. Malar J. 2016; 15(1): 537. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDeng S, Ruan Y, Bai Y, et al.: Genetic diversity of the Pvk12 gene in Plasmodium vivax from the China-Myanmar border area. Malar J. 2016; 15(1): 528. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBrazeau NF, Hathaway N, Parobek CM, et al.: Longitudinal Pooled Deep Sequencing of the Plasmodium vivax K12 Kelch Gene in Cambodia Reveals a Lack of Selection by Artemisinin. Am J Trop Med Hyg. 2016; pii: 16-0566. PubMed Abstract | Publisher Full Text\n\nSchneider KA, Escalante AA: Fitness components and natural selection: why are there different patterns on the emergence of drug resistance in Plasmodium falciparum and Plasmodium vivax? Malar J. 2013; 12: 15. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHupalo DN, Luo Z, Melnikov A, et al.: Population genomics studies identify signatures of global dispersal and drug resistance in Plasmodium vivax. Nat Genet. 2016; 48(8): 953–958. PubMed Abstract | Publisher Full Text\n\nMalariaGEN Plasmodium falciparum Community Project: Genomic epidemiology of artemisinin resistant malaria. eLife. 2016; 5: pii: e08714. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPearson RD, Amato R, Auburn S, et al.: Genomic analysis of local variation and recent evolution in Plasmodium vivax. Nat Genet. 2016; 48(8): 959–964. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWolf YI, Koonin EV: A tight link between orthologs and bidirectional best hits in bacterial and archaeal genomes. Genome Biol Evol. 2012; 4(12): 1286–1294. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nSievers F, Wilm A, Dineen D, et al.: Fast, scalable generation of high-quality protein multiple sequence alignments using Clustal Omega. Mol Syst Biol. 2011; 7(1): 539. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNikbakht H, Xia X, Hickey DA: The evolution of genomic GC content undergoes a rapid reversal within the genus Plasmodium. Genome. 2014; 57(9): 507–511. PubMed Abstract | Publisher Full Text\n\nRosenthal PJ: The interplay between drug resistance and fitness in malaria parasites. Mol Microbiol. 2013; 89(6): 1025–1038. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "18003",
"date": "20 Dec 2016",
"name": "Richard D. Pearson",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nOverview\nThis article reports a difference in the types of single nucleotide polymorphisms observed between the malaria parasites P. falciparum and P. vivax. Specifically, a higher rate of G/C to A/T substitutions is seen in P. vivax than in P. falciparum. The analysis is based on data produced by the Malaria Genomic Epidemiology Network (MalariaGEN, https://www.malariagen.net/).\nThe substitution bias reported in P. vivax is clearly present. This can be observed by using the MalariaGEN Data app at https://www.malariagen.net/apps/pvgv (select Variants tab, click Define query, click + and set Mutation Type Equals S, click OK, click Create plot then Bar graph, Group by: Reference Allele, Secondary group: Alternate Allele, click Create plot). It is clear that C->T and G->A mutations are much more common than other types of substitution. Using this app we can also see that the result continues to hold if we focus only on high frequency mutations (e.g. MAF All >= 0.25), though the difference is less dramatic for such variants. It can also be observed that the lack of such substitution bias is present in P. 
falciparum by using the MalariaGEN Data app for the Pf3k project at https://www.malariagen.net/apps/pf3k (select Variants 3.1 - Samtools Mpileup tab, click Define query, click + and set Is SNP Equals Yes, click + and set Is Coding Equals Yes, click OK, click Create plot then Bar graph, Group by: Reference Allele, Secondary group: Alternate Allele, click Create plot).\nThe substitution bias in P. vivax and lack of such bias in P. falciparum observed by the authors is an interesting result that warrants further explanation.\n\nMajor points\nThe first and last sentence of the abstract concern responses to drug pressures. This leaves the impression that the manuscript will describe an important result regarding drug pressure. However, there is no evidence provided in the manuscript to support such a result. The hypothesis that the difference in genomic variation seen is due to drug pressure appears to be pure speculation. Unless the authors can provide any evidence that the patterns of genomic variation seen can be attributed to drug pressure, such speculation should be removed from the abstract, and if mentioned at all, should only be in the discussion and made clear that this is an unsubstantiated hypothesis.\n\nThe title of the manuscript describes a \"divergent\" pattern of genomic variation in the two species. The word divergent suggests that the pattern of genomic variation might be growing apart over time between the two species in some way. However, if the main result holds, and other types of mutation are GC content-neutral, then over time it should be expected that the GC content of P. vivax will decrease, which in this sense would result in the P. vivax genome becoming more like that of P. falciparum, so if anything this might be described as a convergent rather than a divergent pattern. A more accurate description might be that there is a \"different\" pattern of genomic variation in the two species.\n\nThe authors describe \"similar drug pressure\" in P. 
falciparum and P. vivax, but I think it is well established that the drug pressures are quite different in the two species, so such phrases should be removed.\n\nThe authors describe a \"recent reversal in the substitution bias in P. vivax\". In order to make such a claim, the authors would first need to show that there has, until recent times, been an opposite substitution bias in P. vivax, but no such evidence is given, only evidence from others that the GC content of P. vivax might have increased since the common ancestor of P. vivax and P. falciparum, which is not the same as showing an historic substitution bias.\n\nSpecific points\nThe final sentence of the abstract mentions \"similar drug pressures\" in P. falciparum and P. vivax. However, I think it is well established that the drug pressures are quite different in the two species, for example due to differences in gametocytogenesis and longevity of gametocytes as discussed in ref 11 by Schneider and Escalante. As stated above, speculation regarding drug pressure should be removed from the abstract.\n\nIn the first paragraph of the Introduction, it is not stated that there is resistance to chloroquine in P. vivax in some parts of the world. A reader not aware of this might be confused to read the phrase \"the mechanism of chloroquine resistance in P. vivax remains unknown\", because the previous sentences give no indication that there is any such resistance. It might be helpful to the reader to mention that there is resistance to chloroquine in P. vivax in some parts of the world.\n\nThe final sentence of the Introduction states that the substitution bias observed may be related to differential evolutionary response to the same anti-malarial drugs. No evidence is given in the remainder of the manuscript to say why this particular hypothesis should be preferred to any other potential hypothesis we might want to imagine. 
Also, it seems highly unlikely that the substitution bias observed might be related to the differential evolutionary response to the same anti-malarial drugs, as to date only a small number of genomic loci have been shown to be associated with drug resistance, whereas the pattern of substitution bias observed is (presumably) genome-wide.\n\nIn the first paragraph of the results, the authors use the phrase \"We tested whether this is true during the recent evolution of P. vivax\". It is not entirely clear what \"this\" refers to here but presumably the authors are saying they have tested the hypothesis that P. vivax has increased its GC content during recent evolution. However, this is not what they have tested. Firstly, the authors have not apparently considered which is the ancestral and which is the derived allele. The SNPs used are differences from the 3D7 reference genome. The authors appear to have assumed that the substitutions have been from the reference to the alternative allele, but it might be the case that for many of these SNPs, the alternative allele is ancestral and the 3D7 allele is derived. Secondly, it is possible that the GC content could be increasing even if substitutions are biased to decrease GC content, for example if other types of mutation such as short indels, larger structural variants and/or gene conversion result in an increase in GC content. The authors should either make it very clear that their results do not give any definitive conclusions as to the change in GC content over recent evolution, or else should perform a more thorough analysis taking into account the ancestral status of alleles and analysing non-SNP types of genomic variation.\n\nThe first sentence of the discussion states that the present study finds a sharp recent reversal in the substitution bias in P. vivax. In order to make such a claim, the authors would first need to show that there has, until recent times, been an opposite substitution bias in P. vivax. 
The authors appear to be making the assumption that a) the conclusion from reference 17 holds (i.e. that the GC content of P. vivax has increased since the common ancestor of P. vivax and P. falciparum) and that b) this increase in GC content was due to substitutions. However, even if a) is true, there is no evidence to support b), and it could be that the GC content has increased due to other types of mutation such as short indels, larger structural variants and/or gene conversion. The authors should remove any reference to reversal in substitution bias unless they can provide evidence of such a reversal.\n\nThe fourth sentence of the discussion again mentions \"similar drug pressures\", but as above I think it is well established that the drug pressures are quite different between the species. This sentence should be removed.\n\nThe final sentence of the first paragraph of the discussion is somewhat unclear, for example what exactly is meant by \"these\" factors? Also, is it really \"likely\" that these might contribute towards differences in drug resistance or is this simply \"possible\".\n\nThe final paragraph again refers to a recent reversal in the substitution bias in P. vivax. Unless the authors can demonstrate that this is true, this sentence should be removed.\n\nIt was not clear to me how the observation of lower substitution bias in non-coding regions compared to synonymous sites could be utilized to identify non-coding regions that are under higher functional constraint (final sentence). This should either be explained, or the sentence dropped.",
"responses": []
},
{
"id": "18806",
"date": "16 Jan 2017",
"name": "Charles J. Woodrow",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThis is a secondary analysis of sequencing data produced by the MalariaGen community project. The authors describe a strong imbalance in SNPs at synonymous positions and less conserved non-synonymous positions, with GC to AT substitutions several times more common than AT to GC substitutions. If true, the core finding is of considerable interest.\n\nAs it stands, there is a substantial methodological issue relating to the assumption that the reference sequence is the ancestral one. For the result to be taken seriously by the wider community this central issue will need to be addressed by the authors and then tested in future work by others. The authors attempt to discuss the evolutionary significance of the finding but these sections are highly speculative. The authors also omit acknowledgment of possible weaknesses / limitations in their work.\n\nMajor issues:\nAncestral vs. derived alleles\nNo attention is given to the critical issue of which are the ancestral and derived alleles for each SNP, and until this issue is attended to, the findings should be treated with caution. The paper does not explicitly state how the direction of each mutation was determined, but presumably they used the SalI reference sequence available at the MalariaGen website as the ancestral allele and the alternative allele as the derived allele. 
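The presumption that the reference allele is ancestral could instead be tested by polarizing each SNP against an outgroup base from a related species. A minimal sketch of that logic, with purely hypothetical alleles (this is an illustration of the reviewer's point, not code from the paper under review):

```python
def polarize(ref, alt, outgroup):
    """Infer (ancestral, derived) alleles for a biallelic SNP from an
    outgroup base; return None when the outgroup carries neither allele
    and the SNP cannot be polarized."""
    if outgroup == ref:
        return ref, alt
    if outgroup == alt:
        return alt, ref
    return None

# Hypothetical alleles, for illustration only.
print(polarize("G", "A", "G"))  # ('G', 'A'): reference allele is ancestral
print(polarize("G", "A", "A"))  # ('A', 'G'): alternative allele is ancestral
print(polarize("G", "A", "T"))  # None: direction cannot be determined
```

SNPs falling in the last category would simply be excluded from any substitution-direction counts.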
SalI is of course not an ancestral sequence but simply the complete sequence of a single isolate and hence the direction of some SNPs is bound to be misclassified.\nThe authors need to consider and describe the extent to which this affects their overall results and conclusion. Then, all possible approaches to addressing and/or resolving the issue need to be explored – the onus is on the authors to subject their finding to the highest level of scrutiny and not simply assume it is correct because it falls out that way on first analysis. The paper describing the database (Pearson et al.) discusses the use of P. cynomolgi as an outgroup sequence and the definition of an ancestral sequence for around 30% of SNPs – the authors could focus on this subset of SNPs. Other approaches might include looking at SNPs of low frequency and / or private to one of the Asian populations since these are highly likely to be derived – however in theory this also increases the possibility of including SNPs that do not actually exist (false positives), a risk that needs to be made explicit. It might also be relevant to consider the subset of SNPs at SalI only (those with 100% frequency in the samples) – here the reference becomes the consensus sequence of the entire sample dataset and SalI becomes the derived sequence.\n\nUnits and interpretation\nThere is room for confusion in the way the term ‘bias’ is used. P. falciparum has a highly AT-biased genome but the AT to GC vs. GC to AT ratio is around 1 indicating that the P. falciparum SNPs are also heavily biased towards AT (put another way, the system is at equilibrium). In contrast, P. vivax has a relatively unbiased genome but SNPs that appear imbalanced towards AT. The SNP bias in the P. vivax population might therefore be roughly the same as for P. 
falciparum (so it is inappropriate to say that the patterns are divergent).\nIt would be interesting to calculate the underlying SNP rate for A to G (and G to A, C to T and T to C) where the denominator is the count of the starting nucleotide. In other words, what proportion of A nucleotides mutate to G, and vice versa? One approach would be to focus on 4-fold synonymous sites, calculate the underlying nucleotide content of these 4-fold synonymous sites across the genome, and then determine the mutation rates for each of the 12 possible mutation directions using the underlying nucleotide content as denominator. This would also test whether the AT-bias in SNPs (if it exists) applies to transversions (currently ignored).\n\nSecondary use of data\nThe set of SNPs described in Pearson et al. has clearly undergone extensive filtering and quality control and the possibility that a substantial number are incorrect (or have been missed out) seems remote. Furthermore, the imbalance seems to disappear in non-synonymous positions – evidence against a substantial number of artefactual SNPs. Nevertheless, any bias in terms of false positive or false negative SNPs could in theory generate the imbalance in SNPs observed and the authors, who did not generate the data, cannot simply assume that the SNPs are 100% accurate – since they cannot see the underlying sequence reads which produced them. At the very least the authors need to acknowledge this potential weakness and discuss explicitly their reasons for being confident in terms of the SNP calls.\n\nMinor issues:\nThe authors suggest that P. falciparum and P. vivax have evolved in distinct ways in response to antimalarials but the mutations associated with antifolate resistance in the two organisms are highly analogous.\n\nThe authors attempt to link their (possible) finding on SNP bias to wider areas of malaria biology (lifecycle, drug resistance, etc.) in the Introduction and Discussion. 
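The per-direction rate calculation proposed above (count of each ref-to-alt substitution class divided by the underlying content of the starting nucleotide at 4-fold synonymous sites) could be sketched as follows; the function name and all numbers are illustrative assumptions, not values from the Pf3k or P. vivax data:

```python
from collections import Counter

def substitution_rates(snps, site_composition):
    """Rate for each of the 12 mutation directions: the number of
    ref->alt SNPs divided by the number of (4-fold synonymous) sites
    whose underlying base is the starting nucleotide."""
    counts = Counter((ref, alt) for ref, alt in snps)
    return {f"{ref}>{alt}": counts[(ref, alt)] / site_composition[ref]
            for ref in "ACGT" for alt in "ACGT" if ref != alt}

# Illustrative toy inputs only.
snps = [("G", "A")] * 30 + [("C", "T")] * 28 + [("A", "G")] * 10 + [("T", "C")] * 9
composition = {"A": 400, "C": 150, "G": 150, "T": 400}  # base content of the sites
rates = substitution_rates(snps, composition)
print(rates["G>A"])  # 0.2  (30 SNPs / 150 G sites)
print(rates["A>G"])  # 0.025 (10 SNPs / 400 A sites)
```

With the denominator in place, an apparent count imbalance can be separated from a genuine per-site rate imbalance, and the same dictionary covers transversions.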
Currently these sections are highly speculative and not rooted in the extensive literature on this subject that has already been generated for other organisms. The starting point should be that codon bias is of no functional impact. If there are known examples where codon bias has a functional effect on an organism these need to be brought in – are there any examples where codon bias is thought to influence amino acid constitution (hydrophilicity / secondary structure) or generate repeat sequences (a particular feature of P. falciparum)? Differing codon biases might instead be a secondary consequence of the organism’s biology given the link between population size / genetic drift and fixation of new mutations. The authors need to place their finding in the context of wider literature.\n\nWhen discussing possible changes over time there are statements which lack basis. The authors suggest that the bias in P. vivax SNPs has changed recently; this is only one possible explanation for the possible findings. All the authors can say is that there is an imbalance in SNPs compared to the underlying genome.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2763
|
https://f1000research.com/articles/5-2758/v1
|
24 Nov 16
|
{
"type": "Research Article",
"title": "Dynamics of tRNA fragments and their targets in aging mammalian brain",
"authors": [
"Spyros Karaiskos",
"Andrey Grigoriev",
"Spyros Karaiskos"
],
"abstract": "Background: The progress of next-generation sequencing technologies has unveiled various non-coding RNAs that have previously been considered products of random degradation and attracted only minimal interest. Among small RNA families, microRNAs (miRNAs) have traditionally been considered key post-transcriptional regulators. However, recent studies have reported evidence for widespread presence of fragments of tRNA molecules (tRFs) across a range of organisms and tissues, and of tRF involvement in Argonaute complexes. Methods: To elucidate potential tRF functionality, we compared available RNA sequencing datasets derived from the brains of young, mid-aged and old rats. Using sliding 7-mer windows along a tRF, we searched for putative seed sequences with high numbers of conserved complementary sites within 3' UTRs of 23 vertebrate genomes. We analyzed Gene Ontology term enrichment of predicted tRF targets and compared their transcript levels with targets of miRNAs in the context of age. Results and Discussion: We detected tRFs originating from 3’- and 5’-ends of tRNAs in rat brains at significant levels. These fragments showed dynamic changes: 3’ tRFs monotonously increased with age, while 5’ tRFs displayed less consistent patterns. Furthermore, 3’ tRFs showed a narrow size range compared to 5’ tRFs, suggesting a difference in their biogenesis mechanisms. Similar to our earlier results in Drosophila and compatible with other experimental findings, we found “seed” sequence locations on both ends of different tRFs. Putative targets of these fragments were found to be enriched in neuronal and developmental functions. Comparison of tRFs and miRNAs increasing in abundance with age revealed small, but distinct changes in brain target transcript levels for these two types of small RNA, with a higher proportion of tRF targets decreasing with age. We also illustrated the utility of tRF analysis for annotating tRNA genes in sequenced genomes.",
"keywords": [
"transfer RNA",
"rat brain",
"rat cortex",
"tRNA fragments",
"aging",
"non-coding RNA"
],
"content": "Introduction\n\nSmall RNA molecules derived from fragmented tRNAs form a new class of short (~16–40 nt) RNA molecules. They arise from directed cleavage of cellular tRNAs, including both tRNA precursor species and mature, functional tRNA molecules, and have been associated with multiple infectious diseases, pathogen resistance and regulation1,2. Early reports described such fragments resulting from cleavage of tRNAs in Escherichia coli as a protective response to phage infection and as “biochemical warfare” directed against unrelated bacterial strains3,4. Subsequent studies have expanded the known domain of these fragments to archaea5, eukaryotes6–8, including their parasites9,10, and to human cells7,11–14. Broadly, the fragments are categorized into two types based on length and biogenesis: tRNA halves and tRNA-derived fragments (tRFs); this paper is focused on the latter. Studied and reviewed by several experimental groups15–18, tRFs are molecules of ~16–24 nt in length and can be classified into three types based on the tRNA region from which they derive: 5' tRF, 3'CCA and 3'U tRF. The last two types originate from the 3’ end of the tRNA, while the first is derived from the 5’ end. The 3'CCA type is generated from the 3' end of the mature tRNA and includes the CCA that is added to all tRNAs post-transcriptionally. The 3'U type is derived from the uracil-rich trailer sequence downstream of the 3' end of the mature tRNA within the precursor tRNA molecule and has multiple Us added to the 3’ end. There have been various attempts to determine the biogenesis and function of these different types of tRNA-derived small RNAs, but currently most of these questions are still open.\n\nHypothesized to function similarly to microRNAs (miRNAs), either by regulating mRNAs (like miRNAs) or by affecting miRNA loading and processing7,11,19, tRFs have also been shown to bind to Argonaute complexes in multiple species20,21, strengthening their likely role in RISC-mediated gene silencing. 
A meta-analysis of PAR-CLIP libraries found that both 5’ and 3’ CCA tRFs were loaded onto Ago1, Ago3, and Ago4, but 3’ U tRFs did not associate with Argonaute proteins in great numbers in human cells21. A recent study suggested traditional miRNA-like silencing based on complementarity of the 5' seed sequence of a tRF to a sub-sequence within a 3' UTR of a transcript19. Yet another study has shown that the last 8–10 nts on the 3’ end of the tRF are responsible for mRNA repression22. In our lab, using a computational approach similar to detection of miRNA seeds, we have found potential seed regions on both a 5'- and a 3' tRF end23. Adding to this similarity, we have also reported age-related changes of tRF abundance in Drosophila melanogaster23, comparable to those detected for miRNAs in the same organism20. Such changes with age were also detected in tRFs of Caenorhabditis elegans24.\n\nHere, we report on further support for such miRNA-likeness of tRFs in another experimental system, which shows that both of these types of small RNA may participate in the mechanisms of brain aging. Aging underlies cognitive decline and dementia, and is the greatest risk factor for the failure of brain functioning in adults. Analysis of the aging brain can shed light on the basic neurological mechanisms and their connections with age-related neurodegenerative conditions, such as Alzheimer's and Parkinson's disease25. Neurological research has used rats extensively over many years as models for mammalian behavioral and neurodegeneration studies. In the present study, we analyzed available RNA sequencing libraries produced from the brains of rats of different ages26 and identified numerous tRFs, which showed consistent changes in their abundance patterns with age. We also confirmed in rat brains our previous findings on the possible targeting mode of Drosophila tRFs and the functional enrichment of their targets in neuronal and developmental functions23. 
Potential targets of tRFs with clearly defined seeds showed higher levels of down-regulation with age compared to the rest of the brain transcriptome and to the targets of miRNAs upregulated with age. Our results strengthen the emerging consensus that tRFs are a novel class of non-coding RNA molecules; they target mRNAs in a manner similar to miRNAs and their abundance in the cell is dynamically regulated with regard to aging.\n\n\nMethods\n\nWe used small RNA sequencing libraries from brains of the rat Rattus norvegicus26, publicly available from the European Nucleotide Archive (accession number, ERA365111). Using the sra-toolkit 2.8.0 (https://trace.ncbi.nlm.nih.gov/Traces/sra/sra.cgi?view=software) we converted the sra files to fastq format. We used fastx toolkit 0.0.13 (http://hannonlab.cshl.edu/fastx_toolkit/) to clip the adapter sequences and collapse identical reads. Reads longer than 16 nt were used for downstream analysis. We collapsed and mapped the reads to the rat genome (rn6, http://hgdownload.soe.ucsc.edu/goldenPath/rn6/bigZips/), and also to the union of rat tRNAs from two independent databases (http://gtrnadb.ucsc.edu and http://trnadb.bioinf.uni-leipzig.de, also including mitochondrial tRNA genes from the latter) using Bowtie (version 1.1.1, released on 10/1/2014, http://bowtie-bio.sourceforge.net/index.shtml). Bowtie parameters were set to output only perfect matches to tRNA sequences, including the post-transcriptional CCA modification. Read counts in each experiment were normalized by the total number of reads detected, and averaged across three replicates for each of the three time points (ages of 6, 14 and 22 months). For further analysis, we selected only tRFs with read counts >0.1% of total reads in every replicate.\n\nWe generated 7-mer subsequences of tRFs by applying a 7-nt sliding window and shifting by one nt from the 5’ to the 3’ end. 
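The sliding-window generation of candidate seeds described above amounts to a few lines of Python; the tRF sequence below is a made-up example, not one of the paper's fragments:

```python
def candidate_seeds(trf, k=7):
    """All k-nt subsequences of a tRF, obtained by sliding a window
    one nt at a time from the 5' to the 3' end."""
    return [trf[i:i + k] for i in range(len(trf) - k + 1)]

# Hypothetical 18-nt tRF sequence, for illustration only.
trf = "ATCCCACCGCTGCCACCA"
windows = candidate_seeds(trf)
print(len(windows))   # 12 windows for an 18-nt fragment
print(windows[0])     # ATCCCAC
```

Each window would then be matched against the conserved 3' UTR alignments as described next.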
We then found the counts of exact matches for each of these subsequences to the 3’ UTR regions conserved in at least 15 species (always including human, mouse and rat) out of 23 (Table 1; alignments obtained from http://www.targetscan.org/). To estimate significance of the seed matches, we compared the observed match counts for each respective 7-mer in a tRF to (i) the expected number of matches by chance (estimated from 7-mer genomic frequency) and (ii) the average number of matches of all possible 7-mers with the same nucleotide composition in conserved 3’UTRs. Genes with exact matches of 7-mer and 7-mer_1a candidate seeds to the 3’UTR were considered potential targets.\n\nIn total, 23 species are listed with their taxonomy IDs. Matches in at least 15 of these species were required, always including human, mouse and rat (shaded rows).\n\nFor target expression analysis, we downloaded files with pre-computed transcript expression levels for the rat cerebral cortex transcriptome (GEO data series; accession number, GSE3427227). The expression levels in each experiment were normalized by the total number of reads detected and averaged across three replicates for each of the three time points (ages of 6, 12 and 28 months).\n\nFor each set of predicted targets of a tRF or miRNA, we compared its ratio of down-regulated/up-regulated target transcripts from young to old rats with the distribution of such ratios calculated for 1,000 randomly selected transcript sets (from the same transcriptome) of the same size as the target set (different for each tRF and miRNA). 
This process was repeated for three different thresholds (up-regulated by >5% / downregulated by >5%; up-regulated by >10% / downregulated by >10%; and up-regulated by >20% / downregulated by >20%), and the statistical significance of the differences observed was obtained using a two-tailed t-test in R (version 3.3.1; www.R-project.org).\n\nThe predicted targets for each tRF were used as input for GO enrichment analysis. Each set of targets was uploaded to the PANTHER website (http://pantherdb.org/; version 11.128) and results were obtained using the website's default parameters.\n\n\nResults and discussion\n\nWe analyzed available datasets of nine different small-RNA libraries corresponding to three replicates for three distinct time points throughout a rat lifespan. These libraries were originally produced to study miRNAs in the brains of young, middle-aged and old rats26. We will refer to the results associated with these three time points (6, 14 and 22 months) as Y, M and O, respectively. After mapping short RNA reads from these libraries to the union of tRNA sequences obtained from two independent databases (http://gtrnadb.ucsc.edu; http://trna.bioinf.uni-leipzig.de), we observed that the vast majority of alignments localized preferentially to a 5'- or a 3'-end of a tRNA molecule. Only 1–7% of the reads among the nine sequencing experiments aligned elsewhere on the tRNA sequence. In the datasets we analyzed, a negligible number of reads aligned to 3' U tRFs; therefore, we limited our focus to 5' and 3' tRFs, for which there was extensive evidence. The two dominant tRF classes appeared likely to be generated by different mechanisms of cleavage. 
For instance, there was a striking consistency regarding the cleavage site location in 3’ tRFs, compared to a wider distribution of those sites in 5' tRFs (Figure 2), supporting the notion that tRFs are not byproducts of random degradation, but have specific structure-dependent cleavage sites.\n\nTypical secondary structure representation for the PheGAA tRNA gene (from http://trnadb.bioinf.uni-leipzig.de). Blue arrows point to typical endpoints for a 5' tRF. Red arrows indicate the ends of the most frequent 3' tRF. The mature tRNA molecule also contains the post-transcriptional 3' CCA modification (as does the 3' tRF). A 3' U tRF would derive from the uracil-rich trailer sequence downstream of the end of the tRNA gene (not shown).\n\nLength distributions for 5' (blue) and 3' (red) tRFs. tRF length is shown on the x-axis and the frequency on the y-axis. Note the much broader variability for the 5’ tRFs.\n\nWe then analyzed age-related abundance of 3' and 5’ tRFs in the brain. Interestingly, we observed a very common trend of an overall monotonous increase in the 3' tRF levels with age, Y < M < O (Figure 3; Table 2). In striking contrast, the 5’ tRFs displayed a much less consistent picture (Figure 4; Table 2), with several cases of monotonous increase or decrease with age, but mostly with a visibly different pattern of change M < Y < O (Figure 5). This difference, together with the cleavage site distributions (Figure 2), suggests that distinct processes are likely responsible for the generation of 3’ and 5’ tRFs, which may be relevant for their function.\n\nAbundance of 3' tRFs in rat brains for 3 distinct time points (Y is shown in green, M in blue and O in red). An average of 3 replicates for each tRF is shown on the x-axis for each time point. Error bars indicate the range of read counts. The y-axis shows the normalized tRF abundance; the numbers on the x-axis correspond to tRNA genes listed in Table 2. 
Y, young (6 months); M, mid-age (14 months); O, old (22 months).\n\nThe tRNA anticodon is shown in column 1, the tRNA ID in column 2 and the corresponding numbers used as an x-axis label for Figure 3 and Figure 4 (when applicable) are shown in columns 3 and 4, respectively. The database from which each matching tRNA gene was downloaded is shown in column 5: *http://gtrnadb.ucsc.edu; **http://trnadb.bioinf.uni-leipzig.de. Additionally, a dollar sign ($) in column 5 indicates that a potential seed was detected in this tRF. Coordinates of tRFs on the matching tRNA genes are given in column 6.\n\nAbundance of 5' tRFs in rat brains for 3 distinct time points (Y is shown in green, M in blue and O in red). An average of 3 replicates for each tRF is shown on the x-axis for each time point. Error bars indicate the range of read counts. The y-axis shows the normalized tRF abundance; the numbers on the x-axis correspond to tRNA genes listed in Table 2. Y, young (6 months); M, mid-age (14 months); O, old (22 months).\n\nChange in total abundance levels with age for all 5' tRFs (blue) and 3' tRFs (red). Y, young (6 months); M, mid-age (14 months); O, old (22 months).\n\nGiven the significant levels of tRFs in rat brains and their dynamic changes with age, we aimed to investigate their possible effect on the brain transcriptome. Although the mechanism of tRF action is yet to be elucidated, there is recent evidence suggesting an animal miRNA-like pathway of action. Previous reports have detected tRNA fragments in the cytoplasmic fraction of various human cells, including B-cells and A549 cells22,29, as well as mouse ES cells, plant cells, fission yeast cells and carcinoma cell lines, including HepG2, LNCap and LNCap-derived C4-213,30–33. 
It has been proposed that tRFs are likely to function similarly to a traditional miRNA-like mode, using perfect complementarity of the 5' seed sequence of the tRF (typically, positions 2–8 in miRNAs) to target a subsequence within a 3' UTR of a transcript29. Contrary to the above, an alternative mode of action for tRFs has been suggested by a study22, which utilized luciferase reporter assays to demonstrate that a potential seed sequence resided in the 3' end of the tRNA fragment, ruling out a 5' and a middle segment seed binding.\n\nA search for a near-perfect complementarity of tRF sequences against transcripts yielded very few results, both in the 12 Drosophila genomes23 and in the present study, further suggesting a targeting mode similar to animal miRNAs. Assuming such an animal-like miRNA targeting mechanism for tRFs, we further investigated the targeting mechanism of tRFs and adjusted our computational pipeline, used previously to find targets in 12 Drosophila genomes23, to perform the tRF seed search in mammalian genomes. This pipeline functions similarly to the approach used to identify such seed sequences for miRNAs34. We used 7-nt sliding windows across the length of a tRF sequence and aligned them against conserved 3' UTR regions of 23 vertebrate species. The region was considered conserved if it was found in 15 genomes (always including rat, mouse and human) out of these 23 (Table 1). We took into consideration the following match types: 7-mer-m8 (full 7-mer match), 7-mer-1a (perfect match of the first 6 nts followed by an A in the 3' end of the targeted transcript) and 8-mer-1a (perfect 7-mer match followed by an A in the 3' end), which have been extensively confirmed for miRNAs in the past35. Our results in finding seeds (Table 2) demonstrate that such conserved matches can be located both on the 5' end and on the 3' end of the tRF (Figure 6), concordant with the existing experimentally validated results for tRF targeting mechanisms22,29. 
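The three match types above could be classified with a small helper; the seed and target sites below are hypothetical, and the logic is a sketch following the definitions quoted in the text rather than any specific target-prediction implementation:

```python
def match_type(seed7, utr_site):
    """Classify a putative 3' UTR site against a 7-nt candidate seed:
    7-mer-m8 (full 7-mer match), 7-mer-1a (first 6 nts match, followed
    by an A in the transcript) or 8-mer-1a (full 7-mer match followed
    by an A). `utr_site` is the 8-nt stretch of the 3' UTR examined."""
    site7, next_base = utr_site[:7], utr_site[7]
    if site7 == seed7:
        return "8-mer-1a" if next_base == "A" else "7-mer-m8"
    if site7[:6] == seed7[:6] and site7[6] == "A":
        return "7-mer-1a"
    return None

# Hypothetical candidate seed and sites, for illustration only.
seed = "GCTGCCT"
print(match_type(seed, "GCTGCCTA"))  # 8-mer-1a
print(match_type(seed, "GCTGCCTG"))  # 7-mer-m8
print(match_type(seed, "GCTGCCAG"))  # 7-mer-1a
```

Sites returning None would simply not be counted as conserved matches for that window.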
A similar arrangement of the seed regions on the 5' end and on the 3' end of the tRF has also been observed in Drosophila23.\n\nThe numbers of exact sequence matches in the 3’ UTR regions are plotted against the starting position of a 7-mer. Expected number of matches in 3' UTRs is shown in yellow, average number of conserved matches for all other 7-mers with the same nucleotide composition as the given window is shown in blue, and the observed number of matches in the conserved regions of 23 vertebrates is shown in red. The letters on the top left corners of each plot correspond to individual tRFs: (A) ProTGG, (B) ValAAC, (C) PheGAA, (D) AlaTGC, (E) SerAGA and (F) SerGCT.\n\nThe success in finding seeds (Table 2) was overwhelmingly in favor of shorter 3’ tRFs (6 out of 24, 25%) compared to longer 5’ tRFs (2 out of 30, 6.67%). Given the number of differences between these two tRF types, we chose to focus on 3’ tRFs for the remainder of this paper. For our meta-analysis, detailed in the sections below, we combined experimental results performed in different labs, with different brain material, and at different ages (e.g., 22 months in small RNA-seq series is quite far from both 12 and 28 months in RNA-seq series). Given the small changes in gene expression and to avoid the effects of non-monotonous changes in many 5’tRFs, we limited our subsequent analysis to six 3’ tRFs (Figure 6), which showed monotonous changes in their levels from Y to M to O and clearly defined seed sequences.\n\nFollowing our seed region identification for tRFs, we focused on their predicted targets with conserved seed matches within their 3' UTR (Supplementary File 1). We explored potential functions of targets of six tRFs that showed clearly defined seed sequences (Figure 6). Gene Ontology (GO) enrichment analysis of conserved predicted targets of these tRFs revealed >150 significantly enriched GO terms for biological process. 
“Nervous system development” was found to be consistently enriched for five of the six tRFs (all except ProTGG). Additional biological process GO terms, such as “central nervous system development”, “neurogenesis” and “axonogenesis”, were also enriched for multiple tRF targets (Supplementary File 2). Furthermore, the same tRF targets that showed an enrichment for nervous system functionality and development were also associated with significantly enriched neuron/axon-related cellular localization terms (Supplementary File 2). Overall, these results are in agreement with our previous work on D. melanogaster23, where we noted a similar enrichment for biological processes related to neuronal function and development for predicted targets of tRFs increasing with age from young to adult flies. However, in addition to these functions, ProTGG and other tRFs also appeared to target transcription and splicing regulators in rat brains (Supplementary File 1 and Supplementary File 2).\n\nWe compared our observations of tRF abundance changes with age to the measured expression levels of their targets. We compared the profiles of all mRNAs in the rat cerebral cortex transcriptome27 with those predicted to be targeted by miRNAs (using Targetscan34) and by tRFs (using perfect matching of the identified tRF seed sequence and a conserved target sequence located in the 3' UTR of a transcript). We calculated the ratios of down- to up-regulated transcripts for the whole rat cortex transcriptome and for the targets of six 3’ tRFs (in which seeds could be clearly seen, Figure 6) and five miRNAs (Table 3) that had >500 raw reads at the oldest age, and, similarly to 3' tRFs, showed a monotonous increase Y < M < O. We observed that both tRF and miRNA targets were significantly enriched for down-regulated transcripts at three different regulation thresholds (Table 3). 
Interestingly, the enrichment for down-regulation in the union set of all neuron-related tRF targets was also significant (p<0.05) for each of these three thresholds of regulation.\n\nRatios of down-regulated/up-regulated miRNA- and tRF-targeted transcripts for each of the change thresholds (>5%, >10%, >20%). Significant difference from the expected ratio is indicated by ***p<0.005; **p<0.01; *p<0.05; #p>0.05.\n\nComparing the distributions of de-regulation levels from young to old age for (i) all mRNAs detected in rat cortex, (ii) miRNA-targeted mRNAs and (iii) tRF targets (Figure 7), we observed a consistently higher proportion of down-regulated and lower proportion of up-regulated targets in both miRNA and tRF groups of targets compared to all mRNAs. Although these proportions for mRNAs and tRF targets were generally comparable, we noted a bimodal distribution for tRF targets, whereas such bimodality was much less pronounced for miRNA targets (Figure 7). Targets for both types of small RNAs show their most prominent peaks for low levels of down-regulation with age (these range from 0 to -5% and are possibly related to targeting that is relevant in other cellular contexts, or to false positives in target predictions). However, the proportion of tRF targets down-regulated in the range of 10.0–22.5%, and thus more likely to be relevant in the brain, is consistently higher compared to that of miRNA targets. Such a (relatively low) level of change is not surprising, given that miRNAs are considered to fine-tune transcriptional control by post-transcriptionally modulating target transcript levels36. The age-related decrease in the mRNA levels for tRF targets is generally more pronounced than that for miRNA targets.\n\nTranscript level changes from young to old rat brains. 
Distributions of changes for all detectable mRNAs in rat brains (blue), miRNA-targeted (green) and 3' tRF-targeted transcripts (red) are shown using 2.5% bins.\n\nIn our effort to identify every possible tRF present in rat brains, we considered the union of all annotated rat tRNAs from two databases (http://gtrnadb.ucsc.edu; http://trnadb.bioinf.uni-leipzig.de). Although the latter database is rather small compared to the former, we found that it contained a handful of rat tRNA genes (to which tRFs mapped perfectly) that were missing from the UCSC database at the time of our first analysis. Upon subsequent checking, we found that most of the missing tRNA genes had been added correctly to the most recent update of the UCSC database (not including mitochondrial tRNAs). However, one tRNA gene (tdbD00000658-GluCTC; Table 2), which aligns perfectly to the rat genome (chr17:45,642,771-45,642,843 of rn6), is still absent from the latest version of the UCSC database. In our analysis, we detected tRFs from all nine sequencing libraries mapping to the tdbD00000658-GluCTC sequence. Together with the fact that annotating tRNAs is not a typical priority in genome sequencing projects, our observations suggest that other tRNA genes potentially lack annotation in published genomes, yet such genes can still be sources of detectable tRFs. Hence, analysis of tRFs can have the added value of revealing unannotated tRNA genes in multiple species.\n\n\nDiscussion\n\nIn this study we characterized tRFs present in rat brains at three different time points, revealing that their abundance is dynamically regulated in the context of age. Previously, we reported age-related changes in D. melanogaster tRFs23. Although only two time points were considered in that study, it showed changes in tRF loading onto Argonaute proteins, and thus changes very likely related to the function of the RISC complex. 
Here we observed two typical patterns of change in tRF levels. One was a monotonic increase with age, primarily seen in 3' tRFs. The other was a lower abundance in mid-aged rat brains and a higher abundance in young and old animals, mostly observed in 5' tRFs. These patterns, together with the differences in fragment sizes, suggest distinct cleavage mechanisms for the two types of fragments, which can potentially be attributed to their different roles. In addition to the biogenesis pathways, tRFs originating from different ends of the tRNA molecule have also been shown to localize to different sub-cellular compartments. As pointed out by Kumar et al.21, 5' tRFs were equally abundant in the nuclear and whole-cell fractions of the HeLa cell line37, indicating primarily nuclear localization, which is consistent with the large numbers of 5' tRFs in HeLa cell nucleoli11. In contrast, 3' tRFs showed an enrichment in the whole-cell fraction, indicating cytoplasmic localization (in agreement with Haussecker et al.12). There is evidence that miRNAs are actively loaded onto Argonaute proteins in an age-dependent manner in D. melanogaster20, and a very similar age-related loading pattern was also observed for D. melanogaster tRFs23. This, along with extensive evidence that Argonaute proteins not only act in post-transcriptional silencing but are also localized/imported to the nucleus, could imply additional, as yet unknown, functions for tRFs within the nuclear compartments of the cell. Perhaps such functions are similar to those previously described for miRNAs, which have been shown to be associated with mRNA splicing and modulation of histone epigenetic modifications38,39. This is a focus of our ongoing research.\n\nAlthough the mode of action of tRFs is yet to be elucidated, our results support the hypothesis that mammalian tRFs (at least 3' tRFs) can act in a very similar way to miRNAs in post-transcriptional gene silencing. 
We show here that they contain 7-mers that match 3' UTR regions of transcripts at much higher rates than expected by chance, similar to the seed sequences of miRNAs. Searching for conserved matches across vertebrate genomes, we found such seeds at either end of the tRF molecules, as was the case across 12 Drosophila species23. Previous studies have also detected both 5' and 3' seeds in different tRFs, and changes in the seed sequence have been shown to affect the suppression of mRNA translation22,29. It is worth noting that in miRNAs, 3'-compensatory sites40 and central pairing sites41 have been reported in addition to the most prevalent 5' seeds34,35; thus, finding seeds at both ends of tRFs is not unexpected. Non-traditional seed region locations in miRNAs are also consistent with the extensive results of Helwak et al.42, who reported that more than half of the observed miRNA-mRNA interactions in HEK-293 cells do not show traditional seed binding properties. However, one cannot exclude other modes of action, for example, ribosome targeting5. Additionally, tRFs have been reported to bind the oncogenic RNA-binding protein YBX1, displacing pro-oncogenic transcripts and acting as tumor suppressors43.\n\nInterestingly, for tRFs with clearly defined seed-like regions, we observed a significant and consistent enrichment of targeted genes whose Gene Ontology terms were related to neuronal function and development. Again, this is in agreement with the functional enrichment seen for Drosophila tRF targets23. However, in addition to these functions, rat brain tRFs also appeared to target transcription and splicing regulators, paralleling earlier findings for rat brain miRNAs36. Some genes were predicted to be targeted by more than one tRF (Supplementary File 1), including well-known regulators of growth such as PTEN and MAP3K1 (both targeted by three tRFs). 
These genes were down-regulated with age in the present study, consistent with their involvement in the developing nervous system. The highest number of tRFs (four) targeted the QK gene, whose human homolog has been implicated in oligodendrocyte-related gene expression abnormalities in schizophrenia44.\n\nHaving identified potential targets of 3' tRFs, we compared age-related changes in their transcript levels with those of the targets of upregulated miRNAs, and observed small but significant down-regulation of targets for both groups of small RNAs. However, tRF targets appeared to be down-regulated to a greater extent with age than miRNA targets. Among the down-regulated tRF targets with a well-defined role in the nervous system is the netrin receptor UNC5C, which is involved in axon guidance and neural development. A mutation in this gene has been associated with predisposition to Alzheimer's disease and has been shown to cause increased neuronal cell death in rodents45. Cadherin genes, which are related to the development and maintenance of functional structures in the central nervous system (reviewed in 46), were also found to be targeted by tRFs in the present study (PCDH9). The fibroblast growth factor receptor-2 gene (FGFR2) was likewise among the targets, suggesting that tRFs may affect key proteins involved in neural development, given that fibroblast growth factors are potent modulators of proliferation in the developing nervous system47.\n\nAs is the case with miRNAs, different tRFs appeared to affect the down-regulation of their targets to differing extents with age (Table 3). The RISC pathway functions by repressing translation and by mRNA cleavage, and the exact balance of these mechanisms is not known. It has been speculated that degradation of repressed mRNAs by other mechanisms may be responsible for the observed decrease in their counts48. 
It is also unclear whether the miRNA and tRF levels determined by RNA-seq correlate with their actual functional levels in RISC complexes, or whether tRF entry into the RISC system in mammals is guarded against spurious entry, as observed in the fission yeast Schizosaccharomyces pombe31. Nevertheless, tRF targets appear to be more efficiently down-regulated than miRNA targets in aging rat brains (Figure 7). These findings await experimental validation and may be of relevance for human aging and neurodegeneration studies, given the comparable gross structure of the rat and human brains and the role of rat models in neurological research.\n\n\nData availability\n\nWe have provided an archive with the data that should allow others to reproduce the results and figures presented in this paper. This archive is available via the Open Science Framework at https://osf.io/hz8en/, DOI: 10.17605/OSF.IO/HZ8EN49.\n\nIt contains output .txt files produced in the course of this study, including the following: (1) a description of output files (readme.txt); (2) 9 *.tRNAs files, generated using Bowtie, which include all possible reads that mapped to tRNA genes; (3) 54 *.txt files, which are the output of the seed sequence identification pipeline.\n\nThe file format for the seed sequence identification pipeline output is as follows:\n\nColumn 1) Gene name\n\nColumn 2) 7-mer location on the tRF molecule\n\nColumn 3) Multiple sequence alignment starting position\n\nColumn 4) Multiple sequence alignment ending position\n\nColumn 5) 3' UTR starting position\n\nColumn 6) 3' UTR ending position\n\nColumn 7) Match type\n\nColumn 8) Species with this site type (Taxonomy ID, see Table 1)\n\nAdditional data are available from public repositories as follows: small RNA sequencing libraries, European Nucleotide Archive (accession number, ERA365111); transcript levels in the cortex transcriptome, GEO data series (accession number, GSE34272); miRNA targets, Targetscan 
(http://www.targetscan.org/cgi-bin/targetscan/data_download.cgi?db=mmu_71).",
"appendix": "Author contributions\n\n\n\nSK participated in the design of the study, analyzed the data, and drafted the manuscript. AG conceived the study, oversaw its design, execution and coordination, and drafted and finalized the manuscript. All authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported in part by the National Science Foundation [DBI-1458202, to AG].\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nWe would like to thank Merve Ozbas for excellent technical help, and Sean Smith and Joseph Kawash for critical reading of an earlier version of the manuscript.\n\n\nSupplementary material\n\nSupplementary File 1. List of predicted conserved tRF targets for the six tRFs shown in Figure 6.\n\nSupplementary File 2. Gene Ontology enrichment analysis for the six tRFs shown in Figure 6.\n\n\nReferences\n\nGarcia-Silva MR, Cabrera-Cabrera F, Güida MC, et al.: Hints of tRNA-Derived Small RNAs Role in RNA Silencing Mechanisms. Genes (Basel). 2012; 3(4): 603–14. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGarcia-Silva MR, Cabrera-Cabrera F, Güida MC, et al.: Novel aspects of tRNA-derived small RNAs with potential impact in infectious diseases. Adv Biosci Biotechnol. 2013; 4: 17–25. Publisher Full Text\n\nLevitz R, Chapman D, Amitsur M, et al.: The optional E. coli prr locus encodes a latent form of phage T4-induced anticodon nuclease. EMBO J. 1990; 9(5): 1383–1389. PubMed Abstract | Free Full Text\n\nOgawa T, Tomita K, Ueda T, et al.: A cytotoxic ribonuclease targeting specific transfer RNA anticodons. Science. 1999; 283(5410): 2097–100. 
PubMed Abstract | Publisher Full Text\n\nGebetsberger J, Zywicki M, Künzi A, et al.: tRNA-derived fragments target the ribosome and function as regulatory non-coding RNA in Haloferax volcanii. Archaea. 2012; 2012; 260909. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGong B, Lee YS, Lee I, et al.: Compartmentalized, functional role of angiogenin during spotted fever group rickettsia-induced endothelial barrier dysfunction: evidence of possible mediation by host tRNA-derived small noncoding RNAs. BMC Infect Dis. 2013; 13: 285. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLi Z, Ender C, Meister G, et al.: Extensive terminal and asymmetric processing of small RNAs from rRNAs, snoRNAs, snRNAs, and tRNAs. Nucleic Acids Res. 2012; 40(14): 6787–99. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWei C, Salichos L, Wittgrove CM, et al.: Transcriptome-wide analysis of small RNA expression in early zebrafish development. RNA. 2012; 18(5): 915–29. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLi Y, Luo J, Zhou H, et al.: Stress-induced tRNA-derived RNAs: a novel class of small RNAs in the primitive eukaryote Giardia lamblia. Nucleic Acids Res. 2008; 36(19): 6048–55. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMacchiaroli N, Cucher M, Zarowiecki M, et al.: microRNA profiling in the zoonotic parasite Echinococcus canadensis using a high-throughput approach. Parasit Vectors. 2015; 8(1): 83. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCole C, Sobala A, Lu C, et al.: Filtering of deep sequencing data reveals the existence of abundant Dicer-dependent small RNAs derived from tRNAs. RNA. 2009; 15(12): 2147–60. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHaussecker D, Huang Y, Lau A, et al.: Human tRNA-derived small RNAs in the global regulation of RNA silencing. RNA. 2010; 16(4): 673–95. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nLee YS, Shibata Y, Malhotra A, et al.: A novel class of small RNAs: tRNA-derived RNA fragments (tRFs). Genes Dev. 2009; 23(22): 2639–49. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYeung ML, Bennasser Y, Watashi K, et al.: Pyrosequencing of small non-coding RNAs in HIV-1 infected cells: evidence for the processing of a viral-cellular double-stranded RNA hybrid. Nucleic Acids Res. 2009; 37(19): 6575–86. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAnderson P, Ivanov P: tRNA fragments in human health and disease. FEBS Lett. 2014; 588(23): 4297–4304. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGebetsberger J, Polacek N: Slicing tRNAs to boost functional ncRNA diversity. RNA Biol. 2013; 10(12): 1798–806. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMartens-Uzunova ES, Olvedy M, Jenster G: Beyond microRNA--novel RNAs derived from small non-coding RNA and their implication in cancer. Cancer Lett. 2013; 340(2): 201–11. PubMed Abstract | Publisher Full Text\n\nSobala A, Hutvagner G: Transfer RNA-derived fragments: origins, processing, and functions. Wiley Interdiscip Rev RNA. 2011; 2(6): 853–62. PubMed Abstract | Publisher Full Text\n\nMiyoshi K, Miyoshi T, Siomi H: Many ways to generate microRNA-like small RNAs: non-canonical pathways for microRNA production. Mol Genet Genomics. 2010; 284(2): 95–103. PubMed Abstract | Publisher Full Text\n\nAbe M, Naqvi A, Hendriks GJ, et al.: Impact of age-associated increase in 2'-O-methylation of miRNAs on aging and neurodegeneration in Drosophila. Genes Dev. 2014; 28(1): 44–57. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKumar P, Anaya J, Mudunuri SB, et al.: Meta-analysis of tRNA derived RNA fragments reveals that they are evolutionarily conserved and associate with AGO proteins to recognize specific RNA targets. BMC Biol. 2014; 12(1): 78. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nWang Q, Lee I, Ren J, et al.: Identification and functional characterization of tRNA-derived RNA fragments (tRFs) in respiratory syncytial virus infection. Mol Ther. 2013; 21(2): 368–79. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKaraiskos S, Naqvi AS, Swanson KE, et al.: Age-driven modulation of tRNA-derived fragments in Drosophila and their potential targets. Biol Direct. 2015; 10(1): 51. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKato M, Chen X, Inukai S, et al.: Age-associated changes in expression of small, noncoding RNAs, including microRNAs, in C. elegans. RNA. 2011; 17(10): 1804–20. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKirkwood TB: A systematic look at an old problem. Nature. 2008; 451(7179): 644–647. PubMed Abstract | Publisher Full Text\n\nYin L, Sun Y, Wu J, et al.: Discovering novel microRNAs and age-related nonlinear changes in rat brains using deep sequencing. Neurobiol Aging. 2015; 36(2): 1037–1044. PubMed Abstract | Publisher Full Text\n\nWood SH, Craig T, Li Y, et al.: Whole transcriptome sequencing of the aging rat brain reveals dynamic RNA changes in the dark matter of the genome. Age (Dordr). 2013; 35(3): 763–776. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMi H, Muruganujan A, Casagrande JT, et al.: Large-scale gene function analysis with the PANTHER classification system. Nat Protoc. 2013; 8(8): 1551–1566. PubMed Abstract | Publisher Full Text\n\nMaute RL, Schneider C, Sumazin P, et al.: tRNA-derived microRNA modulates proliferation and the DNA damage response and is down-regulated in B cell lymphoma. Proc Natl Acad Sci U S A. 2013; 110(4): 1404–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBabiarz JE, Ruby JG, Wang Y, et al.: Mouse ES cells express endogenous shRNAs, siRNAs, and other Microprocessor-independent, Dicer-dependent small RNAs. Genes Dev. 2008; 22(20): 2773–85. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nBühler M, Spies N, Bartel DP, et al.: TRAMP-mediated RNA surveillance prevents spurious entry of RNAs into the Schizosaccharomyces pombe siRNA pathway. Nat Struct Mol Biol. 2008; 15(10): 1015–1023. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHsieh LC, Lin SI, Shih AC, et al.: Uncovering small RNA-mediated responses to phosphate deficiency in Arabidopsis by deep sequencing. Plant Physiol. 2009; 151(4): 2120–2132. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKawaji H, Nakamura M, Takahashi Y, et al.: Hidden layers of human small RNAs. BMC Genomics. 2008; 9(1): 157. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLewis BP, Shih IH, Jones-Rhoades MW, et al.: Prediction of mammalian microRNA targets. Cell. 2003; 115(7): 787–98. PubMed Abstract | Publisher Full Text\n\nGrimson A, Farh KK, Johnston WK, et al.: MicroRNA targeting specificity in mammals: determinants beyond seed pairing. Mol Cell. 2007; 27(1): 91–105. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNielsen JA, Lau P, Maric D, et al.: Integrating microRNA and mRNA expression profiles of neuronal progenitors to identify regulatory networks underlying the onset of cortical neurogenesis. BMC Neurosci. 2009; 10: 98. PubMed Abstract | Publisher Full Text | Free Full Text\n\nValen E, Preker P, Andersen PR, et al.: Biogenic mechanisms and utilization of small RNAs derived from human protein-coding genes. Nat Struct Mol Biol. 2011; 18(9): 1075–1082. PubMed Abstract | Publisher Full Text\n\nHuang V, Li LC: Demystifying the nuclear function of Argonaute proteins. RNA Biol. 2014; 11(1): 18–24. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchraivogel D, Meister G: Import routes and nuclear functions of Argonaute and other small RNA-silencing proteins. Trends Biochem Sci. 2014; 39(9): 420–431. PubMed Abstract | Publisher Full Text\n\nBartel DP: MicroRNAs: target recognition and regulatory functions. Cell. 
2009; 136(2): 215–233. PubMed Abstract | Publisher Full Text | Free Full Text\n\nShin C, Nam JW, Farh KK, et al.: Expanding the microRNA targeting code: functional sites with centered pairing. Mol Cell. 2010; 38(6): 789–802. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHelwak A, Kudla G, Dudnakova T, et al.: Mapping the human miRNA interactome by CLASH reveals frequent noncanonical binding. Cell. 2013; 153(3): 654–665. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGoodarzi H, Liu X, Nguyen HC, et al.: Endogenous tRNA-Derived Fragments Suppress Breast Cancer Progression via YBX1 Displacement. Cell. 2015; 161(4): 790–802. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHaroutunian V, Katsel P, Dracheva S, et al.: The human homolog of the QKI gene affected in the severe dysmyelination “quaking” mouse phenotype: downregulated in multiple brain regions in schizophrenia. Am J Psychiatry. 2006; 163(10): 1834–7. PubMed Abstract | Publisher Full Text\n\nWetzel-Smith MK, Hunkapiller J, Bhangale TR, et al.: A rare mutation in UNC5C predisposes to late-onset Alzheimer's disease and increases neuronal cell death. Nat Med. 2014; 20(12): 1452–1457. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRedies C: Cadherins in the central nervous system. Prog Neurobiol. 2000; 61(6): 611–648. PubMed Abstract | Publisher Full Text\n\nFord-Perriss M, Abud H, Murphy M: Fibroblast growth factors in the developing central nervous system. Clin Exp Pharmacol Physiol. 2001; 28(7): 493–503. PubMed Abstract | Publisher Full Text\n\nLim LP, Lau NC, Garrett-Engele P, et al.: Microarray analysis shows that some microRNAs downregulate large numbers of target mRNAs. Nature. 2005; 433(7027): 769–773. PubMed Abstract | Publisher Full Text\n\nKaraiskos S: Dynamics of tRNA Fragments and Their Targets in Aging Mammalian Brain. Open Science Framework. 2016. Data Source"
}
|
[
{
"id": "17976",
"date": "16 Dec 2016",
"name": "Tatiana V. Tatarinova",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe paper is well-written with appropriate title and adequate abstract. There are, however, several minor comments.\nThe authors mention in passing the results of Helwak et al. Would it be possible to validate the predicted seed regions using the data from the CLASH experiments?\n\nThe results appear to agree with their previous work in Drosophila, with the difference that Ago IP was used in the flies. This difference is not discussed in sufficient detail here and needs to be further elaborated.\n\nHave the authors considered a “target” of the tRNA itself, from which a fragment is produced? After all, there is base complementarity in the stem for both 5' and 3' tRFs.\n\nA recent paper in Science1 implicated possible tRF matches in promoter regions. Have the authors considered these targets?",
"responses": []
},
{
"id": "18880",
"date": "03 Jan 2017",
"name": "Laura Kamenetzky",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe manuscript adds important and relevant work on tRFs present in rat brains at three different time points. The methodology is well-written and able to be replicated. The results demonstrate that tRF abundance is dynamically regulated in the context of age and that tRFs contain 7-mers which match 3' UTR regions of relevant mRNAs. Also, the tRF analysis has the added value of revealing unannotated tRNA genes for multiple species.\nThere are minor issues (indicated below):\nIt would be interesting to add more detail on the criteria used to determine the expression level of the miRNAs that are compared with the tRFs. It could also be explained in more detail how the authors selected the 5 miRNAs for comparisons.\n\nIt would be interesting to show a table with the number of tRF sites (single or multiple) per mRNA target, since in the supplementary material it is difficult to see. Also targets regulated by distinct tRFs.\n\nThe legend of Figure 4 mentions the yellow color that is absent in the figure.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2758
|
https://f1000research.com/articles/5-2757/v1
|
24 Nov 16
|
{
"type": "Software Tool Article",
"title": "A guide and best practices for R/Bioconductor tool integration in Galaxy",
"authors": [
"Nitesh Turaga",
"Mallory A. Freeberg",
"Dannon Baker",
"John Chilton",
"Galaxy Team",
"Anton Nekrutenko",
"James Taylor",
"Nitesh Turaga",
"Dannon Baker",
"John Chilton",
"Anton Nekrutenko",
"James Taylor"
],
"abstract": "Galaxy provides a web-based platform for interactive, large-scale data analyses, which integrates bioinformatics tools written in a variety of languages. A substantial number of these tools are written in the R programming language, which enables powerful analysis and visualization of complex data. The Bioconductor Project provides access to these open source R tools and currently contains over 1200 R packages. While some R/Bioconductor tools are currently available in Galaxy, scientific research communities would benefit greatly if they were integrated on a larger scale. Tool development in Galaxy is an early entry point for Galaxy developers, biologists, and bioinformaticians, who want to make their work more accessible to a larger community of scientists. Here, we present a guide and best practices for R/Bioconductor tool integration into Galaxy. In addition, we introduce new functionalities to existing software that resolve dependency issues and semi-automate generation of tool integration components. With these improvements, novice and experienced developers can easily integrate R/Bioconductor tools into Galaxy to make their work more accessible to the scientific community.",
"keywords": [
"Interoperability",
"Bioconductor",
"R",
"Galaxy",
"Open Source",
"Bioinformatics"
],
"content": "Introduction\n\nThe Bioconductor Project (https://www.bioconductor.org/) provides one of the largest suites of open source bioinformatics tools for analyzing genomics and biomedical data from diverse high-throughput assays, including DNA microarrays, flow cytometry, and deep sequencing1,2. Bioconductor has an active user community of researchers from across a spectrum of biomedical professions, who routinely analyze complex genomics datasets. Bioconductor tools and packages are primarily based on R (https://www.r-project.org/), a programming language and integrated environment developed for statistical computing and data visualization3. By submitting R packages to Bioconductor, developers can easily distribute powerful statistical analysis and visualization tools, while supporting efforts towards reproducible scientific research.\n\nTo further increase their usability and distribution, some R/Bioconductor tools have been integrated into Galaxy (https://galaxyproject.org/), an open source, web-based platform for performing, reproducing, and sharing data analyses, which utilize a variety of bioinformatics tools4,5. R/Bioconductor tool integration into Galaxy is a multi-step process that, although straightforward, poses unique challenges for both novice and advanced tool developers. The first major challenge is identifying and installing all of the dependencies needed for an R/Bioconductor tool. Each dependency needs to be available to Galaxy through its dependency management system, called the Tool Shed (https://toolshed.g2.bx.psu.edu/). If a dependency is not available in the Tool Shed, then it must be installed manually, which can often be difficult. The second major challenge is generating the required files/code for tool integration, which is a time-consuming process. Ensuring that the correct files are generated with correct syntax can be a frustrating task and is often the biggest hurdle for less-experienced developers. 
Given these issues, tool integration remains a daunting task for some developers, especially for those less familiar with command-line processes. Ideally, the R/Bioconductor tool integration process would be easy and intuitive for both novice and advanced developers.\n\nRecognizing the need to improve the R/Bioconductor tool integration process, we outline here a guide and best practices (Table 1) for R/Bioconductor tool integration into Galaxy. Furthermore, we highlight the addition of new functionality to Planemo (http://planemo.readthedocs.io)6, a suite of command-line utilities that assists in building and publishing Galaxy tools. To simplify generation of required tool integration files, we introduce a new Planemo command that creates template files based on an R/Bioconductor tool developed by the user. This new command also addresses dependency management issues by leveraging Bioconda (https://bioconda.github.io), a channel of the Conda package manager, which provides bioinformatics software7. Bioconda locates specific versioned R/Bioconductor packages via BioaRchive (https://bioarchive.galaxyproject.org/)8 and CRAN (https://cran.r-project.org/)9 and includes these as requirements for using the R/Bioconductor tool. These new Planemo functionalities simplify R/Bioconductor tool integration and will enable developers to focus more attention on R/Bioconductor tool development and less on tool integration into Galaxy.\n\nDescribed in this table are eight key “best practices” for developing R/Bioconductor tools that will be integrated into Galaxy.\n\n\nMethods\n\nHere, we present a complete guide for developers to integrate an R/Bioconductor tool into Galaxy. We first describe the necessary components for wrapping an R/Bioconductor tool in Galaxy. Next, we outline the steps required for integrating the tool components into Galaxy, testing the tool, and executing the tool. 
Finally, we describe a simplified integration process using the new Planemo command, bioc_tool_init. The instructions provided in this guide assume that the developer has system access to Galaxy source code files (e.g. using a local instance or cloud-based instance of Galaxy), has an active Internet connection, and has installed the Planemo python package (v0.35.0 or later; https://pypi.python.org/pypi/planemo/)6. Furthermore, the user must have installed any packages that are dependencies for the R/Bioconductor tool being integrated.\n\nAn R/Bioconductor Galaxy tool is defined by four major components. The first component is a Tool definition file (Tool wrapper) in XML format, which provides the interface between Galaxy and the R/Bioconductor tool being integrated. The second component is a Custom R file, which calls the R/Bioconductor tool(s) to perform a particular analysis. The third component is a Tool dependency file, which tells Galaxy where to find the required tool dependencies. The fourth component is the Test data directory, which includes both input and output files that will be used to test the Custom R file. These four components should be organized using the following directory structure:\n\n\n\nTool definition file. The Tool definition file informs Galaxy how to handle parameters in the Custom R file. The value given to “name” in the file header appears in the Galaxy tool panel and should be set to a meaningful short description of what the tool does. The minimal structure of the Tool definition file contains seven key sections. The requirements section defines the tool dependencies needed to run the R script, and includes the version of R used to develop the tool. The command section defines the R command that is executed in Galaxy via the R interpreter. Importantly, input and output parameters are denoted as $input1, $input2, ... and $output1, $output2, ..., respectively, and the full path to the Custom R file is essential. 
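To make this layout concrete, a skeletal Tool definition file for a hypothetical R tool might look like the following. The tag names follow Galaxy tool XML conventions, but the tool id, package versions, file path and parameter names here are invented for illustration and are not taken from the Supplementary Files:

```xml
<tool id="my_r_tool" name="My R tool" version="0.1.0">
    <requirements>
        <requirement type="package" version="3.2.1">R</requirement>
    </requirements>
    <!-- The full path to the Custom R file is required here. -->
    <command>Rscript /path/to/my_r_tool.R --input $input1 --output $output1</command>
    <inputs>
        <param name="input1" type="data" format="fastq" label="Input fastq file"/>
    </inputs>
    <outputs>
        <data name="output1" format="tabular" label="Result table"/>
    </outputs>
    <tests>
        <test>
            <param name="input1" value="test_input.fastq"/>
            <output name="output1" file="test_output.tab"/>
        </test>
    </tests>
    <help>Describe here what the R/Bioconductor tool does; this text
    appears at the bottom of the Galaxy tool form.</help>
    <citations>
        <citation type="doi">10.1000/placeholder-doi</citation>
    </citations>
</tool>
```

Each of the seven sections named in the text (requirements, command, inputs, outputs, tests, help and citations) appears once in this sketch; the real example for the Kmer_enumerate tool is in Supplementary File 1.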
The inputs section establishes how input parameters given to the Custom R file appear in Galaxy, while the outputs section defines the name and format of output files generated by the Custom R file. Each input and output parameter requires its own entry in the Tool definition file, and the values assigned to “name” should match those in the command section. The tests section defines the input parameters needed to test the R command and what output to expect as a result. This section is important for tool testing and debugging. The help section should be used to describe the R/Bioconductor tool and will appear at the bottom of the Galaxy tool form. Finally, appropriate references for the tool can be provided using the citations section. References can be cited, for example, using a DOI or a BibTeX entry. An example Tool definition file for an R/Bioconductor tool that enumerates k-mers in a fastq file is available as Supplementary File 1. This tool will subsequently be referred to as “Kmer_enumerate” and will be referenced throughout the remaining sections of this guide.\n\nCustom R file. The Custom R file establishes the R environment and informs Galaxy what R command(s) to execute. The first section of this file contains a header of information that handles error messages, loads required R libraries, and parses options. These requirements are needed for every R/Bioconductor tool being integrated; however, the list of imported R libraries will be specific to each tool. The next section defines the list of parameters to pass to the R command, including input and output parameters. Each parameter, if using the getopt command line parsing library, requires a unique name, a unique single letter designation, a flag indicating whether the parameter is required (0=no argument; 1=required; 2=optional), and the parameter type (e.g. character, integer, float). 
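The getopt parameter matrix described above can be sketched as follows. This is a hypothetical Custom R file header; the parameter names, letters, and the library list are illustrative, not the exact contents of the supplementary files:

```r
# Hypothetical header sketch for a Custom R file using the getopt
# command-line parsing library; names and letters are illustrative.
library(getopt)

spec <- matrix(c(
  # long name   short  flag  type
  "input1",     "i",   1,    "character",   # required: input fastq path
  "input2",     "k",   1,    "integer",     # required: k-mer length
  "output1",    "o",   1,    "character"    # required: output file path
), byrow = TRUE, ncol = 4)

opt <- getopt(spec)

# Echo parsed values to stdout (not stderr) to aid debugging in Galaxy
cat("input1:", opt$input1, "\n")
cat("input2:", opt$input2, "\n")
```

Each row supplies the unique name, single-letter designation, requirement flag (0/1/2), and type that the text above describes.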
Optionally, variable names and values can be printed to standard output (stdout), which can be viewed in Galaxy when the tool executes. While not required, these printed statements can assist in debugging and inform whether the R/Bioconductor tool was executed correctly. The final section contains the R command(s) needed to execute the R/Bioconductor tool. A Custom R file for the Kmer_enumerate tool is available as Supplementary File 2. This tool uses the R/Bioconductor package seqTools10 to read in a fastq file of DNA sequences, count the number of k-mers in the sequences where the value k is supplied by the user, and output the k-mers and their counts.\n\nUnlike running standalone R scripts in the command line or using a graphical interface, it is not necessary to define the working directory (e.g. using setwd()) in the Custom R file. By default, Galaxy executes the R script in the same directory where the files are located. Similarly, Galaxy writes output files to the same directory, which enables the results to be displayed in the Galaxy history panel. Before attempting to integrate an R/Bioconductor tool, it is strongly recommended to test the Custom R file as a standalone script. For example, the Kmer_enumerate Custom R file can be executed in the command line using the following (test input and output files are available as Supplemental File 3 and Supplemental File 4, respectively):\n\n\n\nTool dependency file. The Tool dependency XML file informs Galaxy where to find the required tool dependencies and should explicitly reference each of the requirements listed in the Tool definition file. It is important to differentiate between the requirements tag in the Tool definition file and the Tool dependency file: the Tool definition file identifies what dependencies are needed, and the Tool dependency file identifies where to get the dependencies. For example, many of the required tools - R/Bioconductor or otherwise - are available in the Galaxy Tool Shed. 
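As a rough sketch, such a Tool dependency file might look like the following; the package version and the repository name and owner are illustrative placeholders, not the actual Tool Shed entries for seqTools:

```xml
<?xml version="1.0"?>
<tool_dependency>
    <!-- One entry per requirement named in the Tool definition file;
         name and owner must match the repository as listed in the Tool Shed. -->
    <package name="seqTools" version="1.6.0">
        <repository name="package_seqtools_1_6_0" owner="bioconductor" />
    </package>
</tool_dependency>
```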
In the Tool dependency file, each requirement is listed under its own repository tag with its “name” and “owner” parameters as they appear in the Tool Shed. Available Tool Shed tools can be viewed at https://toolshed.g2.bx.psu.edu/. A Tool dependency file for Kmer_enumerate is supplied as Supplementary File 5.\n\nTest data. The Test data directory includes data file(s) intended as input to test the R script and any expected output data file(s). During testing, Galaxy runs the R/Bioconductor tool with the input files in the test data directory and compares the output with the output file(s) in the same directory to ensure that the tool is producing expected results. As an added benefit, including testing data for the tool provides an example for other users of the data formats needed to run the tool.\n\nFor tools that output plots, generating test data becomes an issue. To test whether R figures and plots are generated correctly, they should be saved as PNG files instead of PDF. Saving plots and figures as PDF files is a common practice in R/Bioconductor packages, but when PDF files are generated, they are time-stamped. Galaxy will not consider two PDFs identical - even if they display the same image - if they were generated at different times and thus have different timestamps.\n\nThe following steps outline how to integrate the new R/Bioconductor tool into Galaxy after the tool files have been generated:\n\nStep 1: Assemble the Tool definition file, Custom R file, Tool dependency file, and Test data directory with test data files in a single directory. Update the Tool definition file to provide the full path where appropriate. Alternatively, if the tool directory is saved in the $GALAXY_ROOT/tools/ directory, a relative path is sufficient.\n\nStep 2: If it does not already exist, copy the Tool configuration file tool_conf.xml.sample and save it as tool_conf.xml.\n\n\n\nStep 3: Modify tool_conf.xml by adding a new section under which the integrated tool will exist. 
The value given to “name” in the Tool configuration file will appear in the tool panel, and the value given to “name” in the Tool definition file will appear under this new section. Provide the full path to the Tool definition file if the tool directory is not in $GALAXY_ROOT/tools/. Otherwise, the relative path is sufficient.\n\n\n\nStep 4: Restart Galaxy to integrate the modified tool_conf.xml file.\n\nIncluding test cases for newly integrated tools - while not strictly required - is highly recommended because it enables easier debugging and ensures that the tool is working as expected. To test, for example, the Kmer_enumerate tool, first upload the testing input file (test_input.fq.gz) to Galaxy. Choose the Kmer_enumerate tool from the tool panel, update the input file and k-mer parameters, and execute the tool. In this example, when Kmer_enumerate is executed, test_input.fq.gz is passed by the --input1 argument and the k-mer value is passed by the --input2 argument in the Tool definition file to the Custom R file. The Custom R file executes and sends the results back to the Tool definition file. The output of the tool is then available for viewing in the Galaxy history panel. Since Galaxy interprets any output written to standard error (stderr) as a failed job, it is important that developers ensure that any output (e.g. files, messages) generated by the R script is sent to stdout and not stderr.\n\nWhen the developer is satisfied that the integrated tool executes correctly, the tool is ready to be used for data analysis. At this point, the newly integrated R/Bioconductor tool can be published in the Galaxy Tool Shed, so that it will be available to the Galaxy community. 
Detailed instructions for how to submit tools for publishing in the Galaxy Tool Shed can be found online at http://planemo.readthedocs.io/en/latest/publishing.html.\n\n\nUse cases\n\nThe above guide outlines a straightforward approach for integrating a relatively simple R/Bioconductor tool into Galaxy. However, generating the Tool definition XML file and ensuring that all required dependencies are available to Galaxy remain difficult tasks. To simplify R/Bioconductor tool integration and address these remaining challenges, we have added a new command to Planemo v0.34.1, a suite of command-line utilities to assist in building and publishing Galaxy tools. We also take advantage of the package manager Conda v4.2.4 and, specifically, the Bioconda channel11 of Conda, which distributes bioinformatics-related software. The new Planemo command, bioc_tool_init, creates a Bioconda recipe of all dependencies for the given R/Bioconductor tool and writes the path to this recipe to the Tool definition file. This command eliminates the need for recursively parsing the R/Bioconductor dependency tree to create the Tool dependency file. Further, this approach has the added benefit of creating an artifact that, while created for use with a Galaxy tool, is potentially useful outside of Galaxy by anyone using Bioconda. Additional arguments to the bioc_tool_init command specifically address R/Bioconductor tool integration and are described in the two use cases below.\n\nThe bioc_tool_init command functions by invoking another new Planemo command, bioc_conda_recipe_init. In this command, Bioconda uses BioaRchive v2.35.0, a Bioconductor package version archive, to retrieve the correct package versions if they exist. The bioconductor_skeleton.py script has been modified to not only find missing R/Bioconductor package dependencies, but also create them in the local Bioconda repository specified by the user. 
BioaRchive improves reproducibility of the Bioconda recipes of different versions of the same Bioconductor package.\n\nHere, we describe how to generate a Tool definition file using bioc_tool_init for a Custom R script that uses the R/Bioconductor package affy v1.52.012. We assume that the user has installed Planemo (v0.35.0 or newer) along with the requirements for the software and has git configured with the appropriate ssh keys (https://help.github.com/articles/generating-an-ssh-key/). We first describe the simplest integration scheme (Case 1) requiring only one parameter, followed by more complex integration schemes (Case 2) with multiple parameters.\n\n\nCase 1: Generating a Tool definition file and Bioconda recipe with a single parameter\n\nThe following command is the simplest way to generate a Tool definition file and Bioconda recipe for integrating a Custom R file in Galaxy (Supplementary File 6–Supplementary File 9). This tool, subsequently referred to as “Extract_expression”, implements the affy package to extract probe expression levels from an Affymetrix CEL file generated by a microarray experiment.\n\n\n\nOnly the --command option is required here because all of the necessary information for building and running the tool is present in the Rscript call. The full path to the Custom R file should be given so that the Tool definition file can correctly locate it. In addition, extensions for all --input and --output files are also required, as they are needed to populate the format parameter in the inputs and outputs tags in the Tool definition file. The key to this usage of the bioc_tool_init command is that the R command given to --command successfully executes in the command line. The example input13 and output files used here are available as Supplementary File 10 and Supplementary File 11, respectively. 
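Based on the description above, a Case 1 invocation might be sketched as follows; the script path and file names are illustrative stand-ins, not the exact command from the supplementary files:

```shell
# Only --command is required; everything needed to build and run the
# tool is inferred from the Rscript call (paths are illustrative).
planemo bioc_tool_init \
    --command "Rscript /full/path/to/my_affy_tool.R --input1 input.CEL --output1 output.txt"
```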
In a similarly simple example, each parameter in the Rscript command can be given to bioc_tool_init using the following arguments:\n\n\n\nIn both examples, the bioc_tool_init command first clones the Bioconda repository in the home directory and creates a new recipe for each dependency in $HOME/bioconda-recipes/recipes/ if it does not exist. This allows the user to have a local copy of the recipe and all of the Bioconda package dependencies for that recipe. The bioc_tool_init command then creates a new Tool definition file, my_affy_tool.xml, in the current directory with the newly generated Bioconda recipe as a requirement. The bioc_tool_init command is used to create, not update, the dependency requirements and to provide a suitable blueprint for the Tool definition file. Semi-automated creation of the Tool definition XML file enables users to quickly generate usable code for R/Bioconductor tool integration.\n\n\nCase 2: Generating a Tool definition file and Bioconda recipe with multiple parameters\n\nAdditional options are available for bioc_tool_init and are strongly recommended for generating a tool that follows Galaxy tool development best practices. For example, the --name option sets the name that will appear in the Galaxy tool panel (defaults to the name of the Tool definition file) and should be a brief statement of what the tool does. The --description option provides additional information about the tool’s function and appears immediately after the tool name in the tool panel. The --help_text option populates a field in the Galaxy tool form that provides additional information about what the tool does. Finally, developers should utilize the --doi option to include citation information for the tool in the tool form. An exhaustive list of available arguments can be found by using planemo bioc_tool_init --help. 
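Combining the options just described, a Case 2 invocation might be sketched as follows; all values, including the script path and the DOI, are illustrative placeholders:

```shell
# Hypothetical Case 2 invocation; option values are illustrative.
planemo bioc_tool_init \
    --name "Extract_expression" \
    --description "Extract probe expression levels from a CEL file" \
    --help_text "Reads a CEL file with the affy package and writes probe expression levels to a text file." \
    --doi "10.1093/bioinformatics/btg405" \
    --command "Rscript /full/path/to/my_affy_tool.R --input1 input.CEL --output1 output.txt"
```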
The following is an example command to generate the Tool definition file and Bioconda recipe for the Extract_expression tool:\n\n\n\nUsing this command, the tool dependencies and requirements in my_affy_tool.R are automatically written to the Tool definition file (Supplementary File 12). As in Case 1, appropriate input and output data formats are inferred from the --input and --output arguments given to --command. It is important to note that the Tool definition file and the Bioconda recipe generated by the planemo bioc_tool_init command are meant to be working, usable code for tool integration. However, the files are not 100% complete in terms of following best practices for Galaxy tool development, and may require additional work to reach the standards expected of Galaxy tools published to the Tool Shed. Galaxy R/Bioconductor tool developers are strongly encouraged to meet best practice standards for any tool.\n\n\nDiscussion\n\nIntegrating R/Bioconductor tools into Galaxy can be challenging for both novice and advanced tool developers, but it is an important part of increasing the availability and reproducibility of research tools for the scientific community. We provide here a complete guide for R/Bioconductor tool integration that includes: (1) a description of the components needed to integrate the tool, (2) step-by-step instructions for incorporating the tool components into Galaxy, (3) examples of how to use the new bioc_tool_init Planemo command for easier tool integration, and (4) best practices for R/Bioconductor tool integration. A more detailed guide for R/Bioconductor tool integration into Galaxy is available on GitHub at https://github.com/nturaga/bioc-galaxy-integration/blob/master/README.md. 
By providing a way to semi-automate the integration process, we hope that R/Bioconductor tool developers can focus more on developing new and essential tools rather than on how to integrate them into Galaxy.\n\nA key feature of the simplified tool integration method described in this work is the addition of the bioc_tool_init command to Planemo. This new capability specifically improves tool integration for developers in two ways. First, the bioc_tool_init command generates nearly complete Tool dependency files and Bioconda recipes by directly parsing the Custom R script being integrated, eliminating the need for developers to manually update the correct tool names and versions in all the tool files. Second, the bioc_tool_init command alleviates dependency management issues by recursively identifying and installing all required tool dependencies using Bioconda. This ensures that tool dependencies are compatible with and accessible across different platforms and eliminates the need for developers to manually install all required dependencies. We hope that these improvements will encourage more R/Bioconductor tool developers to share and publish their tools on Galaxy.\n\nFuture work for improving ease of R/Bioconductor tool integration into Galaxy includes improving the bioc_tool_init command to automate more tasks. For example, we are currently working on functionality that automatically generates example test cases to include in the Tool definition XML file. We are also extending the bioc_tool_init command to handle integration of multiple R/Bioconductor functions by passing a formatted text file to bioc_tool_init, and developing a Planemo command that automatically submits a wrapped R/Bioconductor tool to the public Tool Shed. Finally, development of a Planemo command that can automatically wrap an entire R/Bioconductor package based on a published vignette would be ideal for quickly integrating and publishing Galaxy-wrapped tools. 
These and other improvements are currently under development.\n\n\nData and software availability\n\nAutomated build available from: https://hub.docker.com/r/nitesh1989/bioc-galaxy-integration/\n\nLatest source code: https://github.com/nturaga/bioc-galaxy-integration\n\nArchived source code as at the time of publication: DOI, 10.5281/zenodo.16655114\n\nLicense: Academic Free License version 3.0\n\nMore information on tool building in Galaxy and additional best practices are available at http://planemo.readthedocs.io/en/latest/writing.html. Planemo documentation can be found at https://github.com/galaxyproject/planemo.",
"appendix": "Author contributions\n\n\n\nNT and MAF designed and implemented the features presented with advice from DB, JC, AN, and JT. JC implemented the Planemo tool on which this work is built. NT, MAF, and DB wrote the paper. Members of the Galaxy Team developed the Galaxy framework on which this work relies and provided advice on the project. All authors have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis project is funded in part by the National Institutes of Health [U41 HG006620].\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nWe are grateful to the Galaxy Team (https://wiki.galaxyproject.org/GalaxyTeam) and the developer community for the Galaxy Framework for their guidance on this project. We would also like to thank the developers of Bioconda for providing a great service to the community. We give special mention to Ryan Dale for developing the script to create Bioconda recipes for Bioconductor packages. We also thank the contributors of BioaRchive.\n\n\nSupplementary files\n\nSupplementary File 1: Tool definition file for Kmer_enumerate. An example Tool definition file for the Kmer_enumerate tool, which enumerates k-mers in a fastq file. This file can be opened with any text editor.\n\nSupplementary File 2: Custom R file for Kmer_enumerate. An example Custom R file for the Kmer_enumerate tool. This file can be opened with any text editor.\n\nSupplementary File 3: Input for Kmer_enumerate. An example input fastq file for the Kmer_enumerate tool. This fastq formatted file has been compressed with gzip. The file must be uncompressed (using gunzip) before it can be opened with any text editor.\n\nSupplementary File 4: Output for Kmer_enumerate. 
An example output text file for the Kmer_enumerate tool. This file can be opened with any text editor.\n\nSupplementary File 5: Tool dependencies for Kmer_enumerate. An example Tool dependencies file for the Kmer_enumerate tool. This file can be opened with any text editor.\n\nSupplementary File 6: Tool definition file for Extract_expression Case 1. An example Tool definition file for the Extract_expression tool created using Case 1. This file can be opened with any text editor.\n\nSupplementary File 7: Yaml file for Extract_expression. An example Bioconda recipe yaml file for the Extract_expression tool created using Case 1. This file can be opened with any text editor.\n\nSupplementary File 8: Bash file for Extract_expression. An example Bioconda recipe bash file for the Extract_expression tool created using Case 1. This file can be opened with any text editor.\n\nSupplementary File 9: Custom R file for Extract_expression. An example Custom R file for the Extract_expression tool created using Case 1. This file can be opened with any text editor.\n\nSupplementary File 10: Input CEL file for Extract_expression. An example input CEL file for the Extract_expression tool13. This file is a binary data file created by Affymetrix DNA microarray image analysis software, so it cannot be viewed with a text editor. The contents of this CEL file can be accessed in R via the ReadAffy() function of the affy Bioconductor package12.\n\nSupplementary File 11: Output for Extract_expression. An example output text file for the Extract_expression tool. This file can be opened with any text editor.\n\nSupplementary File 12: Tool definition file Extract_expression Case 2. 
An example Tool definition file for the Extract_expression tool created using Case 2. This file can be opened with any text editor.\n\n\nReferences\n\nHuber W, Carey VJ, Gentleman R, et al.: Orchestrating high-throughput genomic analysis with Bioconductor. Nat Methods. 2015; 12(2): 115–121.\n\nGentleman RC, Carey VJ, Bates DM, et al.: Bioconductor: open software development for computational biology and bioinformatics. Genome Biol. 2004; 5(10): R80.\n\nR Core Team: R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. 2015.\n\nGiardine B, Riemer C, Hardison RC, et al.: Galaxy: a platform for interactive large-scale genome analysis. Genome Res. 2005; 15(10): 1451–1455.\n\nGoecks J, Nekrutenko A, Taylor J, et al.: Galaxy: a comprehensive approach for supporting accessible, reproducible, and transparent computational research in the life sciences. Genome Biol. 2010; 11(8): R86.\n\nPlanemo. GitHub. Cited 12 Oct 2016.\n\nConda. GitHub. Cited 12 Oct 2016.\n\nTuraga N: bioaRchive: enabling reproducibility of Bioconductor based analyses [v1; not peer reviewed]. F1000Res. 2015; 4(ISCB Comm J): 370(poster).\n\nCRAN: The Comprehensive R Archive Network. Cited 12 Oct 2016.\n\nKaisers W: seqTools: Analysis of nucleotide, sequence and quality content on fastq files. R package version 1.6.0. 2013.\n\nBioconda. GitHub. Cited 12 Oct 2016.\n\nGautier L, Cope L, Bolstad BM, et al.: affy--analysis of Affymetrix GeneChip data at the probe level. Bioinformatics. 2004; 20(3): 307–315.
Havis E, Bonnin MA, Olivera-Martinez I, et al.: Transcriptomic analysis of mouse limb tendon cells during development. Development. 2014; 141(19): 3683–3696.\n\nChilton J, Cock P, Rasche E, et al.: Galaxy Planemo 0.35.0 [Data set]. Zenodo. 2016."
}
|
[
{
"id": "18535",
"date": "21 Dec 2016",
"name": "Paul A. Stewart",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThis manuscript, as the title implies, presents a guide and best practices for Bioconductor tool integration into the Galaxy web environment. Some Bioconductor tools have been integrated into Galaxy, but there are still a number of tools that are not supported and need to be installed by a Galaxy administrator in a multi-step process. The two biggest challenges for any type of tool integration are introduced: installation of all tool dependencies and correctly generating tool component files. The authors provide a table of best practices to help address these challenges, but more importantly the authors introduce Planemo, a wonderful command-line utility suite, which greatly aids in the creation and publishing of Galaxy tools.\nThe Methods section does an excellent job at providing a guide for integrating Bioconductor tools, and the Use cases section, together with sample commands and provided supplementary files, shows the reader the benefits and ease of use of the Planemo suite. We have tested the provided sample files and code, and it behaves as described in our hands.\nThe manuscript is very well written and will serve as a great reference for seasoned developers wishing to integrate R/Bioconductor tools into Galaxy, and the clearly written explanations as well as use cases will help ease newcomers into working with Galaxy. 
All examples provided are with R/Bioconductor, but this content could easily be used as a manual for integrating other tools or command-line scripts outside of Bioconductor. This is a great development for Galaxy documentation, and we are happy to recommend the article for approval.",
"responses": [
{
"c_id": "2419",
"date": "10 Jan 2017",
"name": "Mallory Freeberg",
"role": "Author Response",
"response": "Thank you for your insightful and useful feedback! We are thrilled that our use cases and sample files worked for you. We are planning some improvements to the manuscript and will release a version 2 shortly. Cheers, Mallory"
}
]
},
{
"id": "18534",
"date": "28 Dec 2016",
"name": "Houtan Noushmehr",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe manuscript by Turaga et al. is a useful guide for novice and advanced users of R/Bioconductor and Galaxy to incorporate any R/Bioconductor packages within the popular Galaxy software. We believe that allowing users to integrate any Bioconductor package within Galaxy will add enormous utility and advancement for any analyst. We envision that users with little experience with R/Bioconductor would eventually be able to seek support to integrate any new Bioconductor package and thus incorporate the workflow within their data analysis pipeline within Galaxy. The authors have done an excellent job of highlighting several key best practices to achieve a reliable integration, as well as the necessary structure to integrate a new BioC package, with example files highlighted within their supplemental data. This integration is explained through two distinct processes: a manual version and a semi-automated process that utilizes the tool Planemo (https://github.com/galaxyproject/planemo), a command-line suite of tools to assist in developing tools for the Galaxy Project. The authors envision a more streamlined version of their tool with subsequent improvements, which they expect will eventually lead to a larger base of users (from those with little or no experience in R to advanced users).\nThe text is well-written and well structured, and we were able to follow the manual integration. 
However, due in part to a non-working version of the latest build from the GitHub repository, we were unable to implement the planemo tool `bioc_tool_init` and thus we cannot provide a thorough evaluation of the tool. We expect that once the tool is available, we can provide a proper evaluation by integrating a random BioC package.\nSome minor points:\nIn the tool dependency file description, we were unable to understand how and where one would obtain the name and owner of each dependency. For example, we were unable to find the name and owner of seqTools in https://toolshed.g2.bx.psu.edu/.\n\nIn the tool integration section, the code has the full path and the relative path for the file in the section. We feel this is redundant. Please consider modifying the code. https://gist.github.com/tiagochst/a7b0ff56a864ca1ae2d5eaeaee82db9b\n\nThis issue is probably not the authors' fault, but the supplementary files do not possess the same name as the example. For this reason, the user has to download and rename them to execute the guide. Maybe adding a compressed zip with the correct names would help.\n\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=--=-=\nMajor problems:\nPlanemo: `bioc_tool_init` unavailable. As explained in the methods section, we were able to install the latest version of Planemo (0.36.1). However, the command `bioc_tool_init` was unavailable and thus we were unable to evaluate the command by testing it with our packages. We followed the instructions posted here: https://github.com/galaxyproject/planemo (here is a screenshot of the version and the problem we experienced during install: https://goo.gl/kyh33j). If further steps are required to install the `bioc_tool_init` function, we feel it should be well documented either in git or this manuscript. Due to its unavailability to test and confirm its practicality, we are unable to accept this MS in its current form for publication. Once the tool is available, we can provide a complete evaluation.",
"responses": [
{
"c_id": "2418",
"date": "10 Jan 2017",
"name": "Mallory Freeberg",
"role": "Author Response",
"response": "Thank you for your insightful and useful feedback! We are in the process of addressing the minor concerns and issues raised here and will be releasing a version 2 of the manuscript shortly. We look forward to hearing about your experience wrapping an R/BioC tool in Galaxy! In the meantime, we have addressed your major concern of the bioc_tool_init command not being installed with Planemo. You should be able to see this command now by installing Planemo via pip or GitHub. Again, thank you for your feedback! Best, Mallory"
}
]
}
] | 1
|
https://f1000research.com/articles/5-2757
|
https://f1000research.com/articles/5-2742/v1
|
22 Nov 16
|
{
"type": "Research Article",
"title": "Peripheral refraction with different designs of progressive soft contact lenses in myopes",
"authors": [
"Kareem Allinjawi",
"Sharanjeet-Kaur Sharanjeet-Kaur",
"Saadah Mohamed Akhir",
"Haliza Abdul Mutalib",
"Kareem Allinjawi",
"Saadah Mohamed Akhir",
"Haliza Abdul Mutalib"
],
"abstract": "Aim: The purpose of this study was to compare the changes in relative peripheral refractive error produced by two different designs of progressive soft contact lenses in myopic schoolchildren. Methods: Twenty-seven myopic schoolchildren aged between 13 and 15 years were included in this study. The measurements of central and peripheral refraction were made using a Grand-Seiko WR-5100K open-field autorefractometer without correction (baseline), and with two different designs of progressive contact lenses (PCLs) (Multistage from SEED & Proclear from Cooper Vision) with an addition power of +1.50 D. Refractive power was measured at the center and at eccentricities between 35º temporal and 35º nasal visual field (in 5º steps). Results: Both PCLs showed a reduction in hyperopic defocus at the periphery. However, this reduction was only significant for the Multistage PCL (p = 0.015), not for the Proclear PCL (p = 0.830). Conclusion: Multistage PCLs showed greater reduction in peripheral retinal hyperopic defocus among myopic schoolchildren in comparison to Proclear PCLs.",
"keywords": [
"Progressive contact lens",
"myopia",
"hyperopic defocus",
"peripheral retina"
],
"content": "Introduction\n\nMyopia is the most common type of refractive error and is considered a global health problem1. At the beginning of the 21st century, Atchison et al.2 (2006) and Mutti et al.3 (2007) observed that myopic eyes have more hyperopic peripheral refraction than emmetropic eyes in the horizontal visual field. Studies conducted by Smith and colleagues in monkeys have shown that not only the fovea, but also the peripheral retina, is capable of regulating the emmetropization process4–6. This indicates that the peripheral retina is important in determining ocular development and refractive error.\n\nStudies have shown that conventional correction of myopia using spectacle lenses may increase hyperopic defocus in the periphery7,8. Hyperopic defocus worsens with a higher degree of myopia and eccentricity9. In 2009, Tabernero et al.8 suggested that by changing the peripheral optics of corrective devices, relative hyperopic defocus in myopic eyes could be inverted into peripheral relative myopia. This could be a possible strategy to counterbalance the unknown stimulus that triggers eye elongation and the subsequent progression of myopia.\n\nSpecially designed spectacle lenses10 and contact lenses11 have exploited such changes in peripheral optics. Some commercially available progressive contact lenses (PCL) of dominant design intended for presbyopic patients might render a similar effect. The peripheral add power area, which was primarily intended to increase spherical aberration and depth of focus in presbyopic patients, has been shown to induce significant changes in the peripheral refractive error profile of the eye. Lopes-Ferreira et al.12 found that a +3.00 D add dominant-design Proclear progressive contact lens in 20 emmetropic and 28 myopic eyes inverted the hyperopic defocus to myopic defocus in the periphery. 
In another study, Rosén et al.13 induced approximately 0.50 D of myopic defocus at 30° using a +2.00 D lens in one myopic and three emmetropic patients. These studies were done in adults, and it is unclear whether hyperopic defocus can be inverted to myopic defocus in myopic children. This knowledge is important because myopia progression occurs mainly in children. The aim of this study was to compare the changes in relative peripheral refractive error produced by using two different designs of commercially available progressive soft contact lenses in myopic schoolchildren.\n\n\nMethods\n\nTwenty-seven myopic schoolchildren (24 females, 3 males) aged between 13 and 15 years were recruited for this study. The purpose and procedure of the study were explained to all participants and their parents, and written informed consent was obtained before enrolment into the study. The study was conducted at the Universiti Kebangsaan Malaysia (UKM) Optometry Clinic and Vision Science Lab. This research was approved by the Ethics Committee of the Universiti Kebangsaan Malaysia (UKM 1.5.3.5/244/NN-144-2013) and followed the tenets of the Declaration of Helsinki for research involving human subjects.\n\nThe inclusion criteria for this study were visual acuity of 6/9 or better in both eyes, a normal ocular condition with a spherical refractive error between -3.00 and -6.00 D, astigmatism of not more than -1.00 D and anisometropia of less than 1.50 D between the two eyes. Children with manifest strabismus, amblyopia, any ocular conditions associated with myopia, a history of bifocal or progressive spectacle wear, orthokeratology contact lens wear, or current soft contact lens wear were excluded from participation in this study.\n\nA comprehensive ocular examination, which included fundus evaluation, anterior segment assessment, and axial length measurement, was conducted by an experienced optometrist to select the candidates. 
The spherical equivalent refractive error (M) for each subject was determined using non-cycloplegic objective and subjective refraction. An ultrasound A-scan (Tomey AL-2000) was used to measure axial length with a handheld probe; the final outcome was calculated as the mean of 5 measurements.\n\nCentral and peripheral refraction were measured using an open-view autorefractometer (Grand-Seiko WR-5100K, Grand Seiko Co., Ltd., Hiroshima, Japan). The examination room illumination was dimmed (mean of three measurements: 9.91 ± 1.73 lux, measured using a Topcon luxmeter) in order to obtain a pupil size sufficiently large to measure the peripheral retina without using dilating drops. The measurement was obtained initially without contact lenses (WL), then repeated with Multistage progressive contact lenses (Multistage PCL, from SEED Co., Japan) and Proclear progressive contact lenses (Proclear PCL, from CooperVision) in random order. Subjects were masked to the type of each lens, while the practitioner was unmasked. The subjects were instructed to fixate on targets (green laser light) located at 4 metres, arranged horizontally in positions corresponding to eccentricities from 35° temporal to 35° nasal, in 5° steps. The straight-ahead viewing technique was used, in which the subjects kept the head still and rotated their eyes to view the series of fixation targets. Five refraction measurements were taken at each target fixation for the right eye only, while the left eye was occluded. For statistical analysis, the sphero-cylindrical refractive error measurements were converted into the power-vector components of refraction M, J0 and J45 using the Fourier-analysis equations recommended by Thibos et al.14:\n\nM = sph + (cyl/2), J0 = (-cyl/2) cos (2α), J45 = (-cyl/2) sin (2α),\n\nwhere sph, cyl and α represent sphere, cylinder and axis, respectively. 
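These power-vector equations translate directly into code. The following Python sketch (the function and variable names are ours, not from the paper) applies the conversion of Thibos et al.; RPRE at an eccentricity is then simply the peripheral M minus the central M:

```python
import math

def power_vectors(sph, cyl, axis_deg):
    """Convert a sphero-cylindrical refraction (sphere, cylinder, axis)
    into the Fourier power-vector components M, J0 and J45."""
    a = math.radians(axis_deg)
    M = sph + cyl / 2.0                   # spherical equivalent
    J0 = (-cyl / 2.0) * math.cos(2 * a)   # with/against-the-rule astigmatism
    J45 = (-cyl / 2.0) * math.sin(2 * a)  # oblique astigmatism
    return M, J0, J45

# Hypothetical example refraction: -4.00 DS / -1.00 DC x 180
M, J0, J45 = power_vectors(-4.00, -1.00, 180)
```

Because M, J0 and J45 are orthogonal components of a vector, they can be averaged across repeated readings and compared statistically, which is why the conversion precedes the analysis.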
Relative peripheral refractive error (RPRE) was calculated as the difference between eccentric peripheral refraction and central refraction. A one-way repeated measures ANOVA with Bonferroni’s post-hoc test was conducted to determine the changes in RPRE values for the mean spherical equivalent M, J0 and J45 components between the groups.\n\nAll subjects were fitted with the Multistage PCL and the Proclear PCL on their right eyes in random order on the same day. Lens powers fully corrected the central refractive error. The Multistage PCL used in the study was a biweekly soft contact lens made of an ionic, high-water-content (Group IV) material (42% polymer, 58% water content), with a diameter of 14.2 mm and a base curve of 8.6 mm. The B-Design used in this study has a spherical distance-power centre zone (2.5 mm), a junction zone (2.5 mm to 3.5 mm) and a near zone (3.5 mm to 8.0 mm) with a maximum addition power of +1.50 D in the periphery.\n\nThe Proclear progressive D® design contact lens was a monthly disposable lens made from omafilcon A, with a water content of 62%, an overall diameter of 14.4 mm and a base curve of 8.7 mm. The lens design has a 2.3-mm inner distance central spherical area, surrounded by an annular aspheric zone where the addition power increases gradually to reach its maximum of +1.50 D at 5 mm. There is a second spherical zone with the maximum addition power of +1.50 D from 5 mm to 8 mm diameter. Table 1 illustrates the parameters of the contact lenses used in this study.\n\nAnalysis was performed using SPSS statistical software version 20 (SPSS Inc., IL, USA). Only data from the right eye were analysed. A Shapiro–Wilk test was used to evaluate the normality of the data distribution. A paired t-test was used for paired comparisons of RPRE within each group at the different eccentricities with respect to the centre. When normality could not be assumed, the Wilcoxon signed-ranks test was used. 
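The normality-gated choice of test described above can be sketched as follows; the helper name and the use of the Shapiro–Wilk p-value at α = 0.05 as the gate are our assumptions about the exact workflow:

```python
import numpy as np
from scipy import stats

def compare_to_centre(peripheral, central, alpha=0.05):
    """Paired comparison of refraction at one eccentricity vs the centre.

    Uses a paired t-test when the paired differences pass the Shapiro-Wilk
    normality test, otherwise the Wilcoxon signed-ranks test (mirroring
    the Methods section)."""
    peripheral = np.asarray(peripheral, dtype=float)
    central = np.asarray(central, dtype=float)
    diff = peripheral - central
    if stats.shapiro(diff).pvalue > alpha:      # differences look normal
        name, res = "paired t-test", stats.ttest_rel(peripheral, central)
    else:                                       # fall back to non-parametric
        name, res = "wilcoxon", stats.wilcoxon(peripheral, central)
    return name, res.pvalue
```

Running this once per eccentricity (N35° … T35°) against the central reading reproduces the per-field significance pattern reported in the Results.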
The differences were considered statistically significant when the p value was lower than 0.05. Then, repeated measures analysis of variance (ANOVA) was performed to compare the RPRE between the different groups at the different eccentricities.\n\n\nResults\n\nTable footnotes: values are expressed in diopters (D); N, nasal visual field; T, temporal visual field; C, centre; p, statistical significance according to the paired t-test or Wilcoxon signed-ranks test; bold indicates a statistically significant power difference from the central point (95% confidence).\n\nA total of 27 myopic schoolchildren with a mean age of 14.18 ± 0.88 years (range: 13 years to 15 years) participated in this study. The mean central spherical equivalent refractive error was found to be -4.39 ± 0.95 D (range: -3.12 D to -5.93 D) without correction, with a mean axial length of 24.72 ± 0.92 mm (range: 23.51 mm to 26.39 mm). Table 2 presents the mean values of refractive error and standard deviations of eyes without contact lenses (WL), with Multistage PCL and with Proclear PCL.\n\nTable 3 shows the RPRE and standard deviations (SD) for mean spherical equivalent values (M), the horizontal astigmatism component (J0) and the oblique astigmatism component (J45) in WL conditions, with Multistage PCL and with Proclear PCL. A paired t-test showed that without contact lenses, there was a significant hyperopic defocus at and beyond 30° in the nasal visual field (N30° p= 0.001, N35° p< 0.001) and at and beyond 25° in the temporal visual field (T25° p= 0.018, T30° p= 0.001, and T35° p< 0.001). When the Multistage PCL was used, the peripheral defocus was only present at 35° in the nasal (p= 0.009) and temporal visual fields (p= 0.026). 
However, with the Proclear PCL there was significant hyperopic defocus at and beyond 30° nasally (N30° p= 0.004, N35° p< 0.001) and at and beyond 25° temporally (T25° p= 0.031, T30° p= 0.004, and T35° p= 0.001). The Multistage PCL showed a significant myopic defocus at nasal and temporal 35° for the J0 component, while for the J45 component, there was a significant hyperopic defocus at 20° but a significant myopic defocus at 15° nasal and 20° temporal.\n\nWith the Multistage PCL, the peripheral hyperopic defocus was decreased and only present at an eccentricity of 35° nasally and temporally. However, with the Proclear PCL, the hyperopic shift was still present at and beyond 30° in the nasal visual field and 25° in the temporal visual field. The hyperopic defocus was much smaller for the Multistage PCL (+0.67 ±1.23 D at 35° nasal and +0.52 ±1.14 D at 35° temporal) as compared to the Proclear PCL (+1.13 ±1.31 D at 35° nasal and +0.81 ±1.10 D at 35° temporal).\n\nFigure 1, Figure 2 and Figure 3 illustrate the RPRE without contact lenses (baseline), with Multistage PCL and with Proclear PCL for the spherical equivalent value M, the horizontal astigmatic component J0 and the oblique astigmatic component J45, respectively. The hyperopic defocus is evident in the nasal and temporal visual fields for the spherical equivalent value M. J0 and J45, however, showed little change in the peripheral field.\n\nProgressive contact lens design. (a) Proclear progressive contact lens from CooperVision; (b) Multistage progressive contact lens from SEED.\n\nA one-way repeated measures ANOVA was conducted to determine the changes in RPRE values for the mean spherical equivalent M, J0 and J45 components between the groups. The results of the ANOVA indicated a significant difference in mean spherical equivalent between groups with a Greenhouse–Geisser correction (F(7.218, 43.794) = 4.285, p= 0.032). 
A post-hoc test using Bonferroni’s correction indicated a statistically significant difference in mean spherical equivalent RPRE (M) between the baseline and the Multistage PCL (p= 0.015), while the Proclear PCL showed no statistically significant difference in comparison to the baseline (p= 0.830). The results showed no statistically significant difference between the baseline (no contact lenses) and either contact lens for J0 and J45 (F(1.772, 52.926) = 0.871, p= 0.425, and F(0.440, 67.258) = 0.172, p= 0.844, respectively). Therefore, it can be concluded that wearing a Multistage PCL can reduce hyperopic defocus in the retinal periphery.\n\n\nDiscussion\n\nWith their extensive range of powers, materials and designs, soft contact lenses have become one of the most popular modes of myopia correction, widely used by young adults. The present study compared the RPRE along the horizontal visual field between two different designs of progressive contact lens (Multistage PCL and Proclear PCL). Although both progressive contact lenses in this study are simultaneous vision lenses and had the same addition power (+1.50 D), the results showed a greater reduction in hyperopic defocus with the Multistage PCL in comparison with the Proclear PCL. The Multistage PCL decreased the mean hyperopic defocus along the horizontal visual field up to 30° nasally and temporally, which indicates possible control of myopia progression with prolonged wear. However, the Proclear PCL showed significant hyperopic defocus from 30° and 25° onwards in the nasal and temporal visual fields, respectively.\n\nThe difference in peripheral hyperopic defocus between the two PCLs could be due to the difference in lens design. 
The Proclear PCL has a distance-centre design, where the centre zone is 2.3 mm in diameter, the added power increases progressively in a wide annular aspheric zone (from 2.3 to 5 mm; 1.35 mm wide), and ends in a spherical near zone (from 5 to 8.5 mm; 1.75 mm wide) where the full addition power of +1.50 D exists. The Multistage PCL, however, differs in zone diameters and power progression. The centre distance zone is 2.5 mm in diameter, surrounded by a narrow aspheric multifocal zone, the “junction zone” (from 2.5 to 3.5 mm; 0.5 mm wide), followed by a large spherical near zone (from 3.5 to 8.0 mm; 2.25 mm wide). With the dim illumination used in this study, the subjects’ pupil size was approximately 4 to 5 mm. Hence, children were unable to view through the spherical near zone of the Proclear PCL, where the near zone starts at 5 mm in diameter and extends to 8 mm. However, with the Multistage PCL the pupil size was sufficient to view through the junction zone (2.5 mm to 3.5 mm) and part of the near spherical zone where the maximum addition power exists.\n\nPhillips and Anstice15 used dual-focus soft contact lenses on children aged 11 to 14 years. The lens had a central distance correction zone followed by a concentric treatment zone producing +2.00 D of peripheral retinal defocus. They reported a 36% reduction of myopia progression (-0.44 D versus -0.69 D) over 10 months of treatment as compared to a single vision contact lens. Similarly, Sankaridurg et al.11 (2011) found a reduction of myopia progression of 34% (-0.57 D versus -0.87 D) over one year of using multifocal contact lenses with a distance centre zone; the design had a progressive increase of addition power up to +2.00 D, compared to the control group. In 2013, Walline et al. slowed myopia progression by 51% over 2 years of treatment by using the Proclear PCL with +2.00 D additional power; however, axial length elongation was slowed by only approximately 29% over this 2-year period16. 
The authors could not explain why myopia progression was slowed almost twice as much as axial elongation. This mismatch between myopia progression and axial elongation in the Walline study might be explained by the facts that subjects were not randomly allocated to treatment groups, that there was a high drop-out rate (32.5%) with unreported reasons for withdrawal, and that data collection for the treatment group (June 2007 to May 2009) and the control group (September 2003 to October 2004) was separated by several years.\n\nAlthough neither PCL used in this study is designed for myopia control, both being marketed for presbyopic patients, the results of the present study show no significant reduction of relative peripheral hyperopic defocus with the +1.50 D addition Proclear PCL. Indeed, in 2013, Lopes-Ferreira et al. reported that a minimum addition of +2.00 D with the Proclear PCL D-design was necessary to induce a significant effect on peripheral refractive error, which explains why no statistical difference was found with the +1.50 D Proclear PCL in the present study17.\n\nSince the refractive error was fully corrected with the contact lenses, the measured central refractive error was expected to be zero; instead, the mean central refractive error was -1.08 ±0.29 D and -1.11 ±0.36 D with the Multistage PCL and the Proclear PCL, respectively. This could be due to the infrared light beam used to measure the refractive error in the open-view Grand-Seiko WR-5100K autorefractometer. The size of the infrared light beam is about 2.3 mm in diameter, which is similar in size to the central zone of the PCLs used in this study. Therefore, a small decentration of the lens (<0.5 mm) could have made the instrument read part of the addition power zone. 
However, by using the same procedure to measure all points of peripheral refraction with the same light beam, the relative peripheral refractive error would give the same myopic shift of readings, and therefore, the measurements were still valid and reliable along the 70° of the horizontal visual field.\n\n\nConclusion\n\nIt was possible to decrease the peripheral retinal hyperopic defocus by using soft progressive contact lenses with a distance centre design. This study suggested that PCL designed with a narrow junction zone and wider spherical near zone had a greater effect on the pattern of peripheral refractive error, which may show better control of myopia in comparison to PCLs designed with a progressive increase of addition power.\n\n\nData availability\n\nF1000Research: Raw data for ‘Peripheral refraction with different designs of progressive soft contact lenses in myopes’, 2016, 10.5256/f1000research.9971.d14367718",
"appendix": "Author contributions\n\n\n\nSK = designed the experiment and conceived the study. HAM: Data analysis.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis study was supported by a grant from Universiti Kebangsaan Malaysia (DPK-2014-002) to Prof. Sharanjeet.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgment\n\nWe thank SEED Co. Japan for supplying the contact lenses and solutions used in this study.\n\n\nReferences\n\nPan CW, Ramamurthy D, Saw SM: Worldwide prevalence and risk factors for myopia. Ophthalmic Physiol Opt. 2012; 32(1): 3–16. PubMed Abstract | Publisher Full Text\n\nAtchison DA, Pritchard N, Schmid KL: Peripheral refraction along the horizontal and vertical visual fields in myopia. Vision Res. 2006; 46(8–9): 1450–1458. PubMed Abstract | Publisher Full Text\n\nMutti DO, Hayes JR, Mitchell GL, et al.: Refractive error, axial length, and relative peripheral refractive error before and after the onset of myopia. Invest Ophthalmol Vis Sci. 2007; 48(6): 2510–19. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSmith EL 3rd, Kee CS, Ramamirtham R, et al.: Peripheral vision can influence eye growth and refractive development in infant monkeys. Invest Ophthalmol Vis Sci. 2005; 46(11): 3965–3972. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSmith EL 3rd, Ramamirtham R, Qiao-Grider Y, et al.: Effects of foveal ablation on emmetropization and form-deprivation myopia. Invest Ophthalmol Vis Sci. 2007; 48(9): 3914–3922. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSmith EL 3rd, Hung LF, Huang J: Relative peripheral hyperopic defocus alters central refractive development in infant monkeys. Vision Res. 2009; 49(19): 2386–2392. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nBakaraju RC, Ehmann K, Papas EB, et al.: Do peripheral refraction and aberration profiles vary with the type of myopia? - An illustration using a ray-tracing approach. J Optom. 2009; 2(1): 29–38. Publisher Full Text | Free Full Text\n\nTabernero J, Vazquez D, Seidemann A, et al.: Effects of myopic spectacle correction and radial refractive gradient spectacles on peripheral refraction. Vision Res. 2009; 49(17): 2176–2186. PubMed Abstract | Publisher Full Text\n\nLin Z, Martinez A, Chen X, et al.: Peripheral defocus with single-vision spectacle lenses in myopic. Optom Vis Sci. 2010; 87(1): 4–9. PubMed Abstract | Publisher Full Text\n\nSankaridurg P, Donovan L, Varnas S, et al.: Spectacle lenses designed to reduce progression of myopia: 12-month results. Optom Vis Sci. 2010; 87(9): 631–641. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSankaridurg P, Holden B, Smith E 3rd, et al.: Decrease in rate of myopia progression with a contact lens designed to reduce relative peripheral hyperopia: one-year results. Invest Ophthalmol Vis Sci. 2011; 52(13): 9362–7. PubMed Abstract | Publisher Full Text\n\nLopes-Ferreira D, Ribeiro C, Maia R, et al.: Peripheral myopization using a dominant design multifocal contact lens. J Optom. 2011; 4(1): 14–21. Publisher Full Text | Free Full Text\n\nRosén R, Jaeken B, Lindskoog Petterson A, et al.: Evaluating the peripheral optical effect of multifocal contact lenses. Ophthalmic Physiol Opt. 2012; 32(6): 527–534. PubMed Abstract | Publisher Full Text\n\nThibos LN, Wheeler W, Horner D: Power vectors: an application of Fourier analysis to the description and statistical analysis of refractive error. Optom Vis Sci. 1997; 74(6): 367–375. PubMed Abstract | Publisher Full Text\n\nAnstice NS, Phillips JR: Effect of dual-focus soft contact lens wear on axial myopia progression in children. Ophthalmology. 2011; 118(6): 1152–1161. 
PubMed Abstract | Publisher Full Text\n\nWalline JJ, Greiner KL, McVey ME, et al.: Multifocal contact lens myopia control. Optom Vis Sci. 2013; 90(11): 1207–1214. PubMed Abstract | Publisher Full Text\n\nLopes-Ferreira D, Ribeiro C, Neves H, et al.: Peripheral refraction with dominant design multifocal contact lenses in young myopes. J Optom. 2013; 6(2): 85–94. Publisher Full Text | Free Full Text\n\nAllinjawi K, Kaur S, Akhir SM, et al.: Dataset 1 in: Peripheral refraction with different designs of progressive soft contact lenses in myopes. F1000Research. 2016. Data Source"
}
|
[
{
"id": "18869",
"date": "13 Jan 2017",
"name": "Mustafa Abdu",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nMyopia is the most common type of refractive error and has become more prevalent recently. It has been reported that normal myopic eyes show more hypermetropic (less myopic) values toward the peripheral retina. In this paper, the authors attempt to determine the effect of different new designs of progressive soft contact lenses on controlling peripheral refraction. Studying two different lens designs, the authors found that Multistage PCLs had a greater effect in reducing the peripheral retinal hypermetropic defocus than Proclear PCLs, which most likely is due to the difference in design between the two contact lenses.\n\nStrengths of the study:\n- Originality\n- Proper statistics and analysis\n\nTechnical comments:\n- Authors must highlight some additional findings in the results part of the abstract.\n- Authors should write 'Lens specifications' instead of 'lens design and materials' in Table 1.\n- Figure 4 should come earlier in the text body.\n\nDr. Mustafa Abdu, Assistant Professor of Optometry, Editor-in-Chief of the Sudanese Journal of Ophthalmology (SJO)",
"responses": []
},
{
"id": "18870",
"date": "16 Jan 2017",
"name": "Ahmed Elmassry",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe title of the paper is appropriate for the contents of the article, as myopia is an error of refraction that is still a “hot topic” and worth discussing.\n\nThe abstract of the paper is short and should include more data in the Methods, Results and Conclusions.\n\nMeasuring central and peripheral refraction with the open-field autorefractometer requires only a short learning curve and is an easy, non-invasive and fast method of measuring refraction; these points argue for a larger sample size and wider age range in the study. The results are supported by good statistical analysis of the data, leading to the conclusion that the Multistage CL is better than the Proclear CL.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2742
|
https://f1000research.com/articles/5-2740/v1
|
22 Nov 16
|
{
"type": "Research Article",
"title": "Differentially correlated genes in co-expression networks control phenotype transitions",
"authors": [
"Lina D. Thomas",
"Dariia Vyshenska",
"Natalia Shulzhenko",
"Anatoly Yambartsev",
"Andrey Morgun",
"Lina D. Thomas",
"Dariia Vyshenska",
"Natalia Shulzhenko"
],
"abstract": "Background: Co-expression networks are a tool widely used for analysis of “Big Data” in biology that can range from transcriptomes to proteomes, metabolomes and more recently even microbiomes. Several methods have been proposed to answer biological questions interrogating these networks. Differential co-expression analysis is a recent approach that measures how gene interactions change when a biological system transitions from one state to another. Although the importance of differentially co-expressed genes for identifying dysregulated pathways has been noted, their role in gene regulation is not well studied. Herein we investigated differentially co-expressed genes in a relatively simple mono-causal process (B lymphocyte deficiency) and in a complex multi-causal system (cervical cancer). Methods: Co-expression networks of B cell deficiency (Control and BcKO) were reconstructed using the Pearson correlation coefficient for two Mus musculus datasets: B10.A strain (12 normal, 12 BcKO) and BALB/c strain (10 normal, 10 BcKO). Co-expression networks of cervical cancer (normal and cancer) were reconstructed using the local partial correlation method for five datasets (total of 64 normal, 148 cancer). Differentially correlated pairs were identified along with the location of their genes in the BcKO and cancer networks. Minimum Shortest Path and Bi-partite Betweenness Centrality were statistically evaluated for differentially co-expressed genes in the corresponding networks.\n\nResults: We show that in B cell deficiency the differentially co-expressed genes are highly enriched with immunoglobulin genes (causal genes). In cancer we found that differentially co-expressed genes act as “bottlenecks” rather than causal drivers, with most flows that come from the key driver genes to the peripheral genes passing through differentially co-expressed genes. 
Using in vitro knockdown experiments for two out of 14 differentially co-expressed genes found in cervical cancer (FGFR2 and CACYBP), we showed that they play regulatory roles in cancer cell growth. Conclusion: Identifying differentially co-expressed genes in co-expression networks is an important tool in detecting regulatory genes involved in alterations of phenotype.",
"keywords": [
"co-expression networks",
"differential co-expression analysis",
"biological state transition"
],
"content": "Introduction\n\nRecent technological advances have moved the focus of biologists from how to measure biological parameters to how to analyze and interpret tens of thousands of measurements, frequently called omics data. The first solutions for such a problem were limited to hierarchical clustering1–3 and simple comparisons between classes of data through the identification of differentially expressed genes (DEGs)4,5. Nowadays, reconstruction and interrogation of biological networks have become a widely used approach to get insights from different types of omics data6,7.\n\nAfter establishing co-expression networks for different states of one biological system, differential co-expression analysis investigates their structural changes when a system goes through a state transition. This analysis, first proposed more than a decade ago8,9, identifies the pairs of genes that have their interaction changed during such transition. Several later publications have suggested different algorithms and statistics to determine differential gene co-expression10–27. Fewer studies, however, attempted to evaluate the biological significance of these changes18,21. Also, to the best of our knowledge, there have been no studies that would investigate how this approach performs depending on the type and complexity of the biological system analyzed.\n\nCommonly, a state transition of a biological system is related to perturbation of a set of genes, which propagates through network interactions and affects other genes. Thus, there is a possibility that differentially co-expressed (DC) genes (directly or indirectly) contribute to the propagation of perturbations. In order to investigate the role of DC genes in a state transition of a biological system, we considered two biological processes28,29 previously analyzed by our group. 
The first one (B cell deficiency in mice) is a homogeneous, one-causal-factor process, while the second one (cervical cancer) represents a heterogeneous multi-causal system.\n\nIn this work, a co-expression network is an undirected graph, where the set of nodes consists of a set of DEGs, and a pair of nodes is connected if there is a significant correlation between them. Differential co-expression analysis is done by identifying the pairs of genes that undergo significant changes in correlation between two states. Throughout this paper such pairs are called differentially correlated pairs (DCPs) and the genes forming these pairs are considered DC genes.\n\n\nResults\n\nWe started by analyzing the B cell knockout (BcKO) data28, which represents a relatively simple experimental model with only one causal factor (B lymphocytes) and homogeneous subject groups, since this experiment was performed in highly inbred strains of mice.\n\nIn order to select the nodes to reconstruct the co-expression networks (BcKO and Control) we compared gene expression in jejunum between BcKO and control mice and found 509 DEGs (Dataset 1). Next, the edges for each network were determined using significantly correlated pairs of DEGs (Figure 1). To identify DCPs we used the method introduced in 21, which compares correlations in the BcKO group and in the Control group. Eighty DCPs were found (Dataset 2), of which 56 represent correlation gains (edges which were not present in the Control network but showed up in BcKO) and 24 represent losses.\n\nWe next investigated whether network structural changes, herein represented by DCPs, are related to the actual causes of global change in gene expression. In the previous study28, it was shown that intestinal gene expression alterations in BcKO mice are mostly dependent on the ability of B lymphocytes to produce antibodies. 
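As an aside on methodology, a generic differential-correlation test between two groups can be written with Fisher's z-transformation; this is an illustrative stand-in, not necessarily the exact statistic of the method cited as ref. 21:

```python
import numpy as np
from scipy import stats

def differential_correlation(x1, y1, x2, y2):
    """Test whether the correlation of a gene pair differs between two
    states (e.g. Control vs BcKO) using Fisher's z-transformation.

    A generic sketch: z-transform each Pearson correlation, compare the
    difference to its standard error, and return a two-sided p-value."""
    r1 = np.corrcoef(x1, y1)[0, 1]
    r2 = np.corrcoef(x2, y2)[0, 1]
    n1, n2 = len(x1), len(x2)
    z = (np.arctanh(r1) - np.arctanh(r2)) / np.sqrt(1/(n1-3) + 1/(n2-3))
    p = 2 * stats.norm.sf(abs(z))   # two-sided p-value
    return r1, r2, p
```

Applied to every significantly correlated pair of DEGs, such a test yields candidate DCPs, which are then split into correlation gains and losses by the sign of the change.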
Therefore, we analyzed the presence of immunoglobulin coding genes (Ig genes, see Dataset 3) among differentially expressed genes (26 Ig genes among 509 DEGs) in DCPs. We observed that 72% (39 out of 54) of correlation gain DCPs are formed by at least one Ig gene (Figure 2A). Moreover, we found strong enrichment for Ig genes among DC genes in correlation gain (24% (15 out of 63) of DC genes are Ig genes vs 2.7% (11 out of 415) of other DEGs), while no enrichment was observed for correlation loss as a result of B cell deficiency (Figure 2B). Thus, these results support the idea that differentially expressed genes that acquire correlations during the transition from one biological state to another have a high chance of playing causal roles in such a transition.\n\nThe nodes are composed of DEGs and the edges represent significant correlations between nodes. The causal genes (immunoglobulin genes) and the DCP edges are concentrated in the high connectivity region, with several causal genes forming DCPs.\n\nA) 78 Differentially Correlated Pairs (DCPs) were found, of which 54 represent correlation gains (edges which were not present in the Control network but showed up in BcKO) and 24 represent correlation losses. The table stratifies the set of pairs representing correlation gains and losses according to the number of Ig genes (0, 1 or 2) present in a pair. Note that 39 out of 54 correlation gain DCPs are formed by at least one Ig gene while only 2 out of 22 correlation losses have at least one Ig gene. B) The 78 DCPs are formed by a total of 94 Differentially Co-expressed genes (DC genes). 58 DC genes participate only in correlation gain DCPs, 31 only in correlation loss DCPs and 5 of them participate in both correlation gain and loss DCPs. The results show enrichment for Ig genes among DC genes in correlation gain: 24% (15 out of 63 (=58+5)) of DC genes are Ig genes vs 2.7% (11 out of 415) of other DEGs are Ig genes (p value < 0.001). 
Meanwhile, no enrichment was observed for correlation loss as a result of B cell deficiency: 3% (1 out of 36 (=31+5)) of DC genes are Ig genes vs 2.7% (11 out of 415) of other DEGs.\n\nAnalysis of gene expression data. In order to study differentially co-expressed genes in a more complex biological model we turned to cancer. It is well known that cancers of the same clinical/morphological type can be very different at the molecular level. One of the most studied causes of such diversity is the different sets of chromosomal aberrations and mutations harbored by tumors otherwise defined as the same cancer. In a previous study29, we found 36 cervical cancer driver genes located in multiple chromosomal aberrations (Dataset 4). Thus, we decided to use the cervical cancer data from 29 to investigate the role of DCPs in complex biological processes, given its heterogeneity and the previously acquired knowledge of essential causal genes.\n\nWe used the DEGs between tumor and normal tissue as the nodes of the co-expression networks. Since the number of samples (five datasets, 148 tumor samples and 67 normal samples) was larger than in the BcKO study (two datasets, 22 paired samples), we used the partial correlation coefficient as a measure of co-expression (Figure 3). The potential advantage of using partial correlation is that it aims to infer edges that are a result of direct regulatory relations6. Partial correlations were calculated through the Local Partial Correlation (LCP) method30 (Material and Methods).\n\nThe nodes are DEGs and the edges represent significant local partial correlations between nodes. A few causal genes (key drivers) and DCP edges are located in the high-connectivity region, but they are scattered throughout the network. Only one key driver is among the genes in DCPs.\n\nIn this system, seven DCPs composed of 14 DC genes were found. Interestingly, all DCPs represented correlations gained in tumors (Table 1). 
Only one of the 36 key drivers (CEP70) was identified among the 14 DC genes. Accordingly, no enrichment of key driver genes among DC genes was detected in this analysis.\n\nEven though we observed that DCPs are not necessarily formed by key drivers, it is known from the literature that most of the DC genes found play regulatory roles in other types of cancer31–48. Thus, we hypothesized that DCPs are located downstream of key drivers and can be responsible for the chains of regulatory events coming from the key drivers and spreading throughout the network. To verify this hypothesis, we investigated how close DC genes are to key drivers and whether their “signal flow”49 in the tumor co-expression network is stronger than that of the other genes, using two network measures: minimum shortest path and bi-partite betweenness centrality.\n\nFirst, we compared the shortest paths from key driver genes to DC genes and to all other DEGs in the network. We found that DC genes are located significantly closer to key drivers than the rest of the genes in the network (Figure 4A; Wilcoxon test p < 0.014, permutation test p < 0.021). Then we used bi-partite betweenness centrality6 as a measure of the signal flow from key drivers to peripheral genes (genes with only one edge). We evaluated this measure for DC genes and the remaining DEGs and observed that DC genes had much higher values than the other genes in the network. Figure 4B compares boxplots of bi-partite betweenness centrality between these two groups: the DC genes and the rest (non-DCP, non-key-driver, non-peripheral genes); the values for DC genes are concentrated at higher values. A Mann-Whitney test gave a p-value of 7.868 × 10⁻⁵, evidence that the distribution of bi-partite betweenness centrality is shifted higher for DC genes. For more details see Figure S2. 
Thus, altogether these results suggest that DC genes might be “bottlenecks”, that is, required to transmit a signal from key drivers to other genes in the network, thereby supporting the hypothesis of a regulatory role for DC genes (Figure S1).\n\nA) Barplot of the shortest paths to the causal genes originating from either the genes in DCPs (in orange) or the non-DCP genes (in blue). The orange distribution is concentrated at lower values. B) Boxplot comparing the values of bi-partite betweenness centrality of the genes in DCPs (in orange) and the non-DCP genes (in blue). The boxplot on the left is concentrated at higher values.\n\nKnockdown experiments. In addition, data from other cancers provide indirect support for the idea of a regulatory role of DC genes in cervical cancer31–48. However, since the role of these DC genes in carcinogenesis was not as straightforward as that of immunoglobulin genes in B cell deficiency, we decided to perform experimental tests. Among the DC genes found for cervical cancer, seven were up-regulated and seven down-regulated in cancer. Therefore, for validation experiments we chose one down-regulated (FGFR2) and one up-regulated (CACYBP) gene that had not been previously studied in cervical cancer for regulatory properties, but have a potential connection with cell death or proliferation based on their Gene Ontology annotations. In order to test whether FGFR2 and CACYBP play critical regulatory roles in cancer pathogenesis, we evaluated the effect of in vitro knockdown of these genes on cell proliferation in a cervical carcinoma cell line.\n\nFirst, we tested two cervical cancer cell lines (HeLa and ME180) and found that only ME180 had detectable expression levels of both genes. In order to perform these tests, we evaluated siRNAs and observed that they were able to knock down expression of both genes by at least 70% (Figure 5A). CACYBP is up-regulated in tumor tissue, as compared to normal tissue (Figure 5B). 
Consequently, if CACYBP has regulatory potential, as predicted by our analysis, it should function as an oncogene promoting cell proliferation. Therefore, the knockdown of this gene should result in a decrease of cell growth/survival. Since FGFR2 was found down-regulated in cervical carcinomas (Figure 5B), its potential regulatory role would be as a tumor suppressor. Therefore, the knockdown of this gene is expected to increase cell growth. The subsequent analysis of cell proliferation confirmed our predictions for both genes: knockdown of CACYBP led to a decrease of cell growth, while knockdown of FGFR2 induced higher cell proliferation (Figure 5C). Thus, these results provide additional support to our in silico prediction that DC genes may play a regulatory role in cell proliferation related to tumor growth.\n\nA) Efficacy of FGFR2 and CACYBP siRNA knockdown. qRT-PCR with primers for GAPDH as the internal control was used to determine expression and efficacy of FGFR2- and CACYBP-specific siRNA knockdown in ME180 cervical carcinoma cells. ME180 cells were harvested 72 h after transfection with vehicle (Lipofectamine) and either scrambled control or targeting siRNA. B) Gene expression of FGFR2 and CACYBP (mean ± standard deviation) for tumor and normal samples from the five datasets used in the analysis. Since FGFR2 was found down-regulated in tumor tissue, its potential regulatory role would be as a tumor suppressor. CACYBP, in contrast, is up-regulated and should therefore function as an oncogene promoting cell proliferation. C) Evaluation of cell proliferation inhibition using the xCELLigence system. Proliferation data (cell index) were obtained at 72 h after transfection with Lipofectamine and either scrambled control or targeting siRNA. 
The inhibition index was calculated by a two-step normalization of the cell index: inhibition index > 0 – cells transfected with targeting siRNA showed a decrease in proliferation; < 0 – an increase in proliferation; = 0 – no difference from control. A one-sided t-test for the mean (< 0 for FGFR2 and > 0 for CACYBP) was applied and returned statistically significant p-values for both genes (0.0258 for FGFR2 and 0.01978 for CACYBP).\n\n\nDiscussion\n\nIn the current study, differential co-expression analysis21 was applied to two relatively well-investigated biological systems28,29 in order to evaluate the potential importance of the genes found by differential correlation analyses. Overall, the obtained results support the idea that DC genes play a regulatory role. While in B cell deficiency DCPs were found to be highly enriched with immunoglobulin genes (i.e. causal genes for alterations in the gut), we did not observe enrichment for key driver genes in cervical cancers. Rather, the DCPs of cervical cancer seem to be located downstream of causal genes. Indeed, those DCPs were found closer to key regulators than other genes in the network, actually representing “bottlenecks” for communication between the driver genes previously published in 29 and the rest of the network (Figure 4). Furthermore, some differentially co-expressed genes in cervical cancer have been previously implicated in processes such as metastasis, oncogenic autophagy and apoptosis. For example, CACYBP has been shown to promote colorectal cancer metastasis31, TRPM3 was observed to play a role in oncogenic autophagy in clear cell renal cell carcinoma32,33, and AK2 was reported to activate an apoptotic pathway34. Several of these genes have been investigated for prognostic value in cancers such as myeloma35, lymphoma36, breast37–41 and gastrointestinal42,43 cancers. At least two genes were previously proposed as targets for anti-cancer agents: DHFR44 and FGFR245. 
Moreover, CACYBP and ZSCAN18 were also reported as putative tumor suppressor genes in renal cell carcinoma30,46,47. In addition, we tested two DC genes and confirmed their regulatory roles (FGFR2 as a tumor suppressor and CACYBP as a potential oncogene in cervical cancer) by manipulating their expression in vitro. Altogether, published observations and our experimental validation for these two genes support the idea that the DC genes revealed in the current study play a regulatory role and can be candidate targets for cervical cancer treatment.\n\nInterestingly, while in the model of B cell deficiency the DC genes are highly enriched with causal regulatory genes, there was only one key driver (CEP70) among the DC genes in cervical cancer, even though the DC genes in this system still seem to play a regulatory role overall. Such a difference is potentially related to the fact that the mouse system studied in 28 is highly homogeneous (inbred mice) with only one cause of alterations (i.e. absence of B lymphocytes). Cervical cancer, however, is a heterogeneous system with different chromosomal aberrations and, consequently, different driver genes turned on in different patients. Therefore, we can speculate that differential correlations point to regulatory genes that are shared by the majority of samples. This hypothesis warrants further investigation, especially considering that DCPs could represent common therapeutic targets for tumors that originated as a result of different genomic or epigenomic events.\n\nIn conclusion, this study provided additional evidence for the previously suggested idea8–27 that genes presenting alterations in correlation patterns between different phenotypes (i.e. states of a biological system) play a critical regulatory role in transitions from one state to another. 
Furthermore, although our results do not allow for full generalization, they indicate that gain, and not loss, of correlations connects critical genes involved in transitions to new phenotypes. However, further studies are required to understand how changes in correlation patterns can point to genes with the critical capacity to guide a biological system into a certain state/phenotype.\n\n\nMaterial and methods\n\nBcKO. All microarray data were analyzed using BRB Array-Tools developed by the Biometric Research Branch of the National Cancer Institute under the direction of R. Simon (http://linus.nci.nih.gov/BRB-ArrayTools.html). Array data were filtered to limit analysis to probes with greater than 50% of samples showing spot intensities of >10 and spot sizes >10 pixels, and a median normalization was applied.\n\nCervical cancer. Processing was the same as in the cervical cancer study29. The data were analyzed using BRB Array-Tools with the original normalization used in three studies50–52 and median normalization over the entire array for the fourth study53. For all studies, we only considered genes found in at least 70% of arrays.\n\nIn every analysis (DEGs, DCPs and networks), a direction filter (same sign of the corresponding parameter – difference of means, difference of correlations, correlation or partial correlation) was required in a fixed number of datasets (2 out of 2 in BcKO and 3 out of 5 in cervical cancer). Then, meta-analysis was performed through Fisher's combined probability test54. Next, the pairs with a false discovery rate (fdr)55 lower than a threshold were chosen. Finally, only the pairs that pass the PUC criterion56 were considered correlated and therefore represent edges in the network.\n\nBcKO. DEGs between groups of samples were identified by a random-variance paired t-test p-value lower than 5%, with adjustment for multiple hypotheses by setting the fdr below 10% in BRB Array-Tools. 
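The meta-analysis chain described above (direction filter across datasets, Fisher's combined probability test54, then fdr thresholding55) can be sketched in a few lines. The actual analysis was run in BRB Array-Tools, so this is only an illustration of the statistics involved, with toy numbers:

```python
import numpy as np
from scipy import stats

def fisher_combined(pvals):
    """Fisher's combined probability test: merge per-dataset p-values
    for one gene pair into a single meta-analysis p-value."""
    pvals = np.asarray(pvals, dtype=float)
    chi2 = -2.0 * np.sum(np.log(pvals))
    return stats.chi2.sf(chi2, df=2 * len(pvals))

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values (fdr)."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    ranked = p[order] * len(p) / (np.arange(len(p)) + 1)
    # Enforce monotonicity from the largest p-value down
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]
    adj = np.empty_like(p)
    adj[order] = np.minimum(ranked, 1.0)
    return adj

# Direction filter: keep a pair only if the sign of the parameter
# (e.g. the correlation difference) agrees in enough datasets.
signs = np.sign([[1, 1], [1, -1], [-1, -1]])          # 3 pairs x 2 datasets
passes = np.abs(signs.sum(axis=1)) == signs.shape[1]  # require 2 out of 2

meta_p = fisher_combined([0.04, 0.01])  # combine two per-dataset p-values
print(passes, meta_p)
```

Pairs that pass the direction filter would have their combined p-values fed to `benjamini_hochberg`, and only pairs below the fdr threshold kept as candidate edges.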
Co-expression networks (BcKO and Control) were inferred through Pearson correlation with p-value < 20% and fdr adjustment below 2.5%. DCPs were calculated for pairs that were initially correlated (p-value < 20%) in at least one state. Then, differences of Pearson correlation were tested following 21, with a p-value below 10% and fdr < 2%. Finally, only the DCPs that appeared in one of the networks were selected.\n\nCervical cancer. DEGs were retrieved from the cervical cancer paper29. Correlation networks and DCPs followed the same procedure as in BcKO but with different thresholds (correlation p-value < 10% with fdr < 10⁻⁸ and difference of correlation p-value < 10% with fdr < 0.25%). Partial correlation was computed using the local partial correlation method30. The initial significance threshold was a p-value below 40%, followed by fdr < 5%.\n\nFor more details about the thresholds used, see Table S3 and Table S4.\n\nTwo aspects of the cervical cancer data motivated us to use local partial correlation for this system. First, we had more samples across five datasets (see Supplementary Table S1 and Supplementary Table S2), which gives us more confidence in the results; second, we already knew that tumors in general present heterogeneous causal factors. The partial correlation approach offers the alternative of considering only edges that represent direct regulatory relations.\n\nIn this paper we used the new approach developed in 30 called local partial correlation. This approach was developed specifically for cases where there are more variables than samples, which happens regularly in genetics and is a serious problem for classical statistics. First we calculate the correlation network. Then, for each significantly correlated pair, the inverse method is applied exclusively to the correlation sub-matrix formed by the closest neighbors of the pair along with the genes forming the pair (Figure 6). 
If the number of closest neighbors is still higher than the number of samples n, we rank the neighbors by their correlation to either gene in the pair, in decreasing order, and select the first n/2 neighbors. For each sub-matrix, we keep only the partial correlation value for the pair that formed that sub-matrix, and then calculate its p-value, also based on the sub-matrix. An R script for the calculation is available in the Supplementary Material.\n\nThe neighborhood of this pair is the set of nodes X3, X6, X8, X9 (black nodes/edges). X1, X4, X7 (blue nodes) are significantly correlated with the black nodes (blue edges), but not with the red nodes. Thus, the inverse method is applied exclusively to the correlation sub-matrix formed only by the genes X2, X5, X3, X6, X8, X9. In the correlation matrices, the gray entries are statistically non-significant empirical correlations.\n\nPartial correlations were estimated only for the significant (Pearson) correlations in the co-expression network. Thus, the same definition of DCPs (by Pearson correlation) can still represent structural changes as long as the pair remains present in one of the two networks.\n\nFigure 3 illustrates the local partial correlation network for cervical cancer using only tumor data. It has 578 connected nodes and 824 edges.\n\nThe shortest path between two nodes in a network is the minimum number of edges connecting them. In this case we want to know the minimum number of edges connecting one node, whether a DC gene or not, to a group of nodes: the key drivers (Figure 7). For each gene we calculate the shortest path to every key driver and take the minimum value. Then we compare the minimum shortest paths to key drivers from DC genes with those from the remaining genes. 
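Both network measures used here, the minimum shortest path to key drivers and the bi-partite betweenness centrality6, can be sketched in plain Python on a toy graph (the adjacency structure and gene names below are illustrative only):

```python
from collections import deque

def bfs_distances(adj, start):
    """Breadth-first search distances from `start` in an unweighted
    graph given as an adjacency dict {node: set(neighbors)}."""
    dist = {start: 0}
    q = deque([start])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def min_shortest_path_to_drivers(adj, gene, drivers):
    """Minimum shortest-path length from `gene` to any key driver."""
    dist = bfs_distances(adj, gene)
    reachable = [dist[d] for d in drivers if d in dist]
    return min(reachable) if reachable else float("inf")

def shortest_paths(adj, s, t):
    """Enumerate all shortest paths from s to t as lists of nodes."""
    dist = bfs_distances(adj, s)
    if t not in dist:
        return []
    paths = []
    def walk(node, tail):
        if node == s:
            paths.append([s] + tail)
            return
        for u in adj[node]:
            if u in dist and dist[u] == dist[node] - 1:
                walk(u, [node] + tail)
    walk(t, [])
    return paths

def bipartite_betweenness(adj, sources, targets):
    """For every node, count the shortest paths running from the source
    group (key drivers) to the target group (peripheral genes) that pass
    through that node -- a simple version of the measure from ref 6."""
    counts = {v: 0 for v in adj}
    for s in sources:
        for t in targets:
            for path in shortest_paths(adj, s, t):
                for v in path[1:-1]:   # interior nodes only
                    counts[v] += 1
    return counts

# Toy network: one key driver feeding two peripheral genes via a bottleneck
adj = {"driver": {"bottleneck"},
       "bottleneck": {"driver", "p1", "p2"},
       "p1": {"bottleneck"}, "p2": {"bottleneck"}}
drivers = ["driver"]
peripheral = [v for v in adj if len(adj[v]) == 1 and v not in drivers]
print(min_shortest_path_to_drivers(adj, "p1", drivers))   # 2
print(bipartite_betweenness(adj, drivers, peripheral))
```

On this toy graph the "bottleneck" node carries every shortest path from the driver to the peripheral genes, which is exactly the behavior proposed for DC genes.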
Figure 4A shows that the minimum shortest path to key drivers tends to be smaller when originating from DC genes.\n\nBetweenness centrality measures a node’s centrality in a network by counting the number of shortest paths from all vertices to all other vertices that pass through that node. A gene with high betweenness centrality has a great influence on the transfer of signal through the network (Figure 8).\n\nNote that node D has a higher betweenness centrality because all shortest paths connecting nodes in group A to nodes in group B pass through it.\n\nHere, however, we are interested in the signal passing from the key drivers throughout the network. For this reason we applied a measure previously developed by our lab6, called bi-partite betweenness centrality. It counts the shortest paths going from all genes in one group of vertices to all genes in a different group of vertices. In our case, the groups of genes are the key drivers and the peripheral genes (genes connected by only one edge).\n\n\nExperimental design\n\nME180 cells were transfected with FGFR2-specific, CACYBP-specific or control siRNA using Lipofectamine RNAiMAX Transfection Reagent. The cell growth rate during the 72 h after siRNA transfection was measured using the xCELLigence system as described below.\n\nEvaluation of siRNA efficacy in knocking down the gene targets. The ME180 cell line was obtained from Dr. Pulivarthi H. Rao and cultured in RPMI medium with 10% FBS and 1% penicillin-streptomycin. The cells were seeded at a density of 4000 cells per well in 96-well F-bottom plates (seeding was performed according to the ATCC protocol for the ME180 cell line) with 200 µL of cell culture medium per well. 24 hours after seeding, cells were transfected with one of three siRNAs (Table 2).\n\nBefore transfection, 100 µL of medium was removed from each well. Transfection was performed according to the Lipofectamine RNAiMAX Reagent protocol (Protocol Pub. No. MAN0007825 Rev. 1.0). 
3 pM of siRNA and 0.6 µL of Lipofectamine per well were delivered in 20 µL; 80 µL of fresh cell culture medium was added to each well.\n\nCells were collected 72 h after transfection using lysis buffer from the RNeasy Mini Kit (QIAGEN). RNA extraction was performed using the RNeasy Mini Kit (QIAGEN) according to the manufacturer’s protocol (no DNase treatment step was performed). RNA concentrations were measured with the Qubit RNA BR Assay Kit. cDNA synthesis was performed using the Bio-Rad iScript cDNA Kit according to the manufacturer’s protocol.\n\nQuantitative real-time PCR was performed on the samples using the QuantiFast SYBR Green PCR Kit with GAPDH as the control gene. Primers for the targets are listed in Table 3.\n\nqRT-PCR setup: samples were heated to 95°C, followed by 40 cycles of 95°C for 10 s and 60°C for 30 s.\n\nEvaluation of cell growth after knockdown of gene targets. As explained in the Results, CACYBP is up-regulated in tumor tissue and, if it has regulatory potential, should function as an oncogene, so its knockdown should decrease cell growth/survival; FGFR2 is down-regulated in cervical carcinomas and its knockdown is expected to increase cell growth (Figure 5B).\n\nCell growth was evaluated using the xCELLigence system (RTCA DP Instrument) following the manufacturer’s protocol. ME180 cells were cultured in RPMI medium with 10% FBS and 1% penicillin-streptomycin. The cells were seeded at a density of 4000 cells per well (E-Plate 16) in 200 µL of cell culture medium.\n\n24 hours after seeding, the experiment was paused for transfection. Before transfection, 100 µL of medium was removed from each well. Transfection was performed according to the Lipofectamine RNAiMAX Reagent protocol (Protocol Pub. No. MAN0007825 Rev. 1.0). 
3 pM of siRNA and 0.6 µL of Lipofectamine per well were delivered in 20 µL; 80 µL of fresh cell culture medium was added to each well. The plate was placed back in the slot and cell growth was monitored for another 72 h.\n\nCell index normalization. To evaluate the cell growth rate, the cell index was transformed into an inhibition index in two steps:\n\n1. Cell indexes for all wells were exported to an Excel file. For each treatment (including wells transfected with non-targeting siRNA) we extracted the average cell index over all wells at 20 h after seeding (Cell Index Before Treatment) and at 96 h after seeding (Cell Index After Treatment). To normalize the cell index to initial differences in cell number, we used the following formula for each treatment:\n\n\n\n2. Next, each treatment with targeting siRNA was normalized to the treatment with non-targeting siRNA. For this purpose, in each experiment the A/B Index from the treatment (siRNA targeting either FGFR2 or CACYBP) was normalized to the A/B Index from the control treatment using the following formula:\n\n\n\nFinal evaluation of growth was done according to the value of the Inhibition Index:\n\n>0 – there is a decrease in growth;\n\n0 – no difference between targeting and non-targeting siRNA treatments;\n\n<0 – there is an increase in growth after treatment with targeting siRNA.\n\n\nData availability\n\nBcKO: Gene expression files containing array data from 28 are available under the GSE23934 superseries in the Gene Expression Omnibus (GEO) data repository. We worked with two groups of samples: B10.A littermates and BALB/C (Table S1).\n\nCervical cancer: We used the same datasets as in the previous study29, available at GEO: GSE7410 (ref. 50), GSE6791 (ref. 51), GSE7803 (ref. 52), GSE9750 (ref. 53) and GSE26342 (ref. 29) (Table S2).\n\nF1000Research: Dataset 1. Differentially expressed genes from BcKO study, 10.5256/f1000research.9708.d142100 (ref. 57)\n\nF1000Research: Dataset 2. Differentially correlated pairs from BcKO study, 10.5256/f1000research.9708.d142099 (ref. 58)\n\nF1000Research: Dataset 3. 
Causal genes from BcKO study, 10.5256/f1000research.9708.d142097 (ref. 59)\n\nF1000Research: Dataset 4. Causal genes from cervical cancer study, 10.5256/f1000research.9708.d142098 (ref. 60)\n\nF1000Research: Dataset 5. Cytoscape Edges and Nodes tables for the network in Figure 1, 10.5256/f1000research.9708.d142101 (ref. 61)\n\nF1000Research: Dataset 6. Cytoscape Edges and Nodes tables for the network in Figure 3, 10.5256/f1000research.9708.d142102 (ref. 62)\n\nF1000Research: Dataset 7. Raw data for Figure 5A, C, 10.5256/f1000research.9708.d142103 (ref. 63)
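Returning to the cell index normalization described in Material and methods: the two formulas are not reproduced in this text, so the ratio forms below are assumptions, consistent only with the "A/B" (After/Before) naming and the stated sign convention of the inhibition index:

```python
def ab_index(cell_index_after, cell_index_before):
    """Assumed form of step 1: growth over the treatment window,
    normalized to the initial cell number ("A/B" = After/Before)."""
    return cell_index_after / cell_index_before

def inhibition_index(ab_target, ab_control):
    """Assumed form of step 2: > 0 when the targeting siRNA slowed
    growth relative to the non-targeting control, < 0 when it
    accelerated growth, and 0 when there is no difference."""
    return 1.0 - ab_target / ab_control

# Hypothetical numbers: targeting-siRNA wells grew 3x, control wells 5x
ii = inhibition_index(ab_index(3.0, 1.0), ab_index(5.0, 1.0))
print(ii)  # 0.4 -> decrease in growth
```

Any monotone variant of these ratios would preserve the reported sign convention; the exact formulas are in the published figures.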
"appendix": "Author contributions\n\n\n\nLDT ran the data analysis and DV ran the experimental analysis. LDT, DV, AY and AM conceived the analysis and wrote the paper. NS helped with discussions and revised the first manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP), grants 2013/06223-1, 2013/14722-8 and 2013/24516-6 and by NSF grant 1412557.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe thank Mark Ganon and Andre Belejo for the English review and editing, CGRB at OSU for the computational support. AY and LT thank AM's lab at College of Pharmacy - OSU for their hospitality.\n\n\nSupplementary material\n\nExtra details, figures and tables\n\nClick here to access the data..\n\n\nReferences\n\nKaufman L, Rousseeuw PJ: Finding Groups in Data: An Introduction to Cluster Analysis. (1 ed.). New York: John Wiley. ISBN 0-471-87876-6, 1990. Reference Source\n\nPress WH, Teukolsky SA, Vetterling WT, et al.: Section 16.4. Hierarchical Clustering by Phylogenetic Trees. Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press, ISBN 978-0-521-88068-8, 2007. Reference Source\n\nHastie T, Tibshirani R, Friedman J: 14.3.12 Hierarchical clustering. The Elements of Statistical Learning (PDF) (2nd ed.). New York: Springer. 520–528, ISBN 0-387-84857-6, 2009.\n\nDudoit S, Yang YH, Callow MJ, et al.: Statistical methods for identifying differentially expressed genes in replicated cDNA microarray experiments. Stat Sin. 2002; 12(1): 111–139. Reference Source\n\nReiner A, Yekutieli D, Benjamini Y: Identifying differentially expressed genes using false discovery rate controlling procedures. Bioinformatics. 2003; 19(3): 368–375. 
Dong X, Yambartsev A, Ramsey SA, et al.: Reverse enGENEering of Regulatory Networks from Big Data: A Roadmap for Biologists. Bioinform Biol Insights. 2015; 9: 61–74.\n\nMorgun A, Dzutsev A, Dong X, et al.: Uncovering effects of antibiotics on the host and microbiota using transkingdom gene networks. Gut. 2015; 64(11): 1732–43.\n\nKostka D, Spang R: Finding disease specific alterations in the co-expression of genes. Bioinformatics. 2004; 20(Suppl 1): i194–9.\n\nXiao Y, Frisina R, Gordon A, et al.: Multivariate search for differentially expressed gene combinations. BMC Bioinformatics. 2004; 5(1): 164.\n\nShin E, Yoon Y, Ahn J, et al.: TC-VGC: a tumor classification system using variations in genes’ correlation. Comput Methods Programs Biomed. 2011; 104(3): e87–e101.\n\nNeedham M, Hu R, Dwarkadas S, et al.: Hierarchical parallelization of gene differential association analysis. BMC Bioinformatics. 2011; 12: 374.\n\nAmar D, Safer H, Shamir R: Dissection of regulatory networks that are altered in disease via differential co-expression. PLoS Comput Biol. 2013; 9(3): e1002955.\n\nde la Fuente A: From ‘differential expression’ to ‘differential networking’ – identification of dysfunctional regulatory networks in diseases. Trends Genet. 2010; 26(7): 326–333.\n\nLai Y, Wu B, Chen L, et al.: A statistical method for identifying differential gene-gene co-expression patterns. Bioinformatics. 2004; 20(17): 3146–3155.\n\nLi KC: Genome-wide coexpression dynamics: theory and application. Proc Natl Acad Sci U S A. 2002; 99(26): 16875–16880. 
Dettling M, Gabrielson E, Parmigiani G: Searching for differentially expressed gene combinations. Genome Biol. 2005; 6(10): R88.\n\nWatson M: CoXpress: differential co-expression in gene expression data. BMC Bioinformatics. 2006; 7: 509.\n\nMani KM, Lefebvre C, Wang K, et al.: A systems biology approach to prediction of oncogenes and molecular perturbation targets in B-cell lymphomas. Mol Syst Biol. 2008; 4: 169.\n\nHu R, Qiu X, Glazko G, et al.: Detecting intergene correlation changes in microarray analysis: a new approach to gene selection. BMC Bioinformatics. 2009; 10: 20.\n\nCho SB, Kim J, Kim JH: Identifying set-wise differential co-expression in gene expression microarray data. BMC Bioinformatics. 2009; 10: 109.\n\nSkinner J, Kotliarov Y, Varma S, et al.: Construct and Compare Gene Coexpression Networks with DAPfinder and DAPview. BMC Bioinformatics. 2011; 12: 286.\n\nDawson JA, Ye S, Kendziorski C: R/EBcoexpress: an empirical Bayesian framework for discovering differential co-expression. Bioinformatics. 2012; 28(14): 1939–40.\n\nFukushima A: DiffCorr: an R package to analyze and visualize differential correlations in biological networks. Gene. 2013; 518(1): 209–214.\n\nJacob L, Neuvial P, Dudoit S, et al.: Package ‘DEGraph’. 2012.\n\nChoi JK, Yu U, Yoo OJ, et al.: Differential coexpression analysis using microarray data and its application to human cancer. Bioinformatics. 2005; 21(24): 4348–4355. 
Pronk TE, van Someren EP, Stierum RH, et al.: Unraveling toxicological mechanisms and predicting toxicity classes with gene dysregulation networks. J Appl Toxicol. 2013; 33(12): 1407–1415.\n\nCho DY, Kim YA, Przytycka TM: Chapter 5: Network biology approach to complex diseases. PLoS Comput Biol. 2012; 8(12): e1002820.\n\nShulzhenko N, Morgun A, Hsiao W, et al.: Crosstalk between B lymphocytes, microbiota and the intestinal epithelium governs immunity versus metabolism in the gut. Nat Med. 2011; 17(12): 1585–1593.\n\nMine KL, Shulzhenko N, Yambartsev A, et al.: Gene network reconstruction reveals cell cycle and antiviral genes as major drivers of cervical cancer. Nat Commun. 2013; 4: 1806.\n\nThomas LD, Fossaluza V, Yambartsev A: Building complex networks through classical and Bayesian statistics – a comparison. In: XI Brazilian Meeting on Bayesian Statistics. AIP Conf Proc. 2012; 1490: 323–331.\n\nGhosh D, Li Z, Tan XF, et al.: iTRAQ Based Quantitative Proteomics Approach Validated the Role of Calcyclin Binding Protein (CacyBP) in Promoting Colorectal Cancer Metastasis. Mol Cell Proteomics. 2013; 12(7): 1865–1880.\n\nCost NG, Czyzyk-Krzeska MF: Regulation of autophagy by two products of one gene: TRPM3 and miR-204. Mol Cell Oncol. 2015; 2(4): e1002712, in press.\n\nHall DP, Cost NG, Hegde S, et al.: TRPM3 and miR-204 establish a regulatory circuit that controls oncogenic autophagy in clear cell renal cell carcinoma. Cancer Cell. 2014; 26(5): 738–753. 
Lee HJ, Pyo JO, Oh Y, et al.: AK2 activates a novel apoptotic pathway through formation of a complex with FADD and caspase-10. Nat Cell Biol. 2007; 9(11): 1303–1310.\n\nWalker BA, Leone PE, Chiecchio L, et al.: A compendium of myeloma-associated chromosomal copy number abnormalities and their prognostic value. Blood. 2010; 116(15): e56–e65.\n\nBjörck E, Ek S, Landgren O, et al.: High expression of cyclin B1 predicts a favorable outcome in patients with follicular lymphoma. Blood. 2005; 105(7): 2908–2915.\n\nLandemaine T, Jackson A, Bellahcène A, et al.: A six-gene signature predicting breast cancer lung metastasis. Cancer Res. 2008; 68(15): 6092–6099.\n\nNie F, Yu XL, Wang XG, et al.: Down-regulation of CacyBP is associated with poor prognosis and the effects on COX-2 expression in breast cancer. Int J Oncol. 2010; 37(5): 1261–1269.\n\nHorlings HM, Lai C, Nuyten DS, et al.: Integration of DNA copy number alterations and prognostic gene expression signatures in breast cancer patients. Clin Cancer Res. 2010; 16(2): 651–663.\n\nHunter DJ, Kraft P, Jacobs KB, et al.: A genome-wide association study identifies alleles in FGFR2 associated with risk of sporadic postmenopausal breast cancer. Nat Genet. 2007; 39(7): 870–874.\n\nKatoh M: Cancer genomics and genetics of FGFR2 (Review). Int J Oncol. 2008; 33(2): 233–237.\n\nJang JH, Shin KH, Park JG: Mutations in fibroblast growth factor receptor 2 and fibroblast growth factor receptor 3 genes associated with human gastric and colorectal cancers. Cancer Res. 2001; 61(9): 3541–3543. 
PubMed Abstract\n\nVedeld HM, Andresen K, Eilertsen IA, et al.: The novel colorectal cancer biomarkers CDO1, ZSCAN18 and ZNF331 are frequently methylated across gastrointestinal cancers. Int J Cancer. 2015; 136(4): 844–853. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBerman EM, Werbel LM: The renewed potential for folate antagonists in contemporary cancer chemotherapy. J Med Chem. 1991; 34(2): 479–485. PubMed Abstract | Publisher Full Text\n\nKwak Y, Cho H, Hur W, et al.: Antitumor effects and mechanisms of AZD4547 on FGFR2-deregulated endometrial cancer cells. Mol Cancer Ther. 2015; 14(10): 2292–2302. PubMed Abstract | Publisher Full Text\n\nNing X, Sun S, Hong L, et al.: Calcyclin-binding protein inhibits proliferation, tumorigenicity, and invasion of gastric cancer. Mol Cancer Res. 2007; 5(12): 1254–1262. PubMed Abstract | Publisher Full Text\n\nSun S, Ning X, Liu J, et al.: Overexpressed CacyBP/SIP leads to the suppression of growth in renal cell carcinoma. Biochem Biophys Res Commun. 2007; 356(4): 864–871. PubMed Abstract | Publisher Full Text\n\nMorris MR, Ricketts CJ, Gentle D, et al.: Genome-wide methylation analysis identifies epigenetically inactivated candidate tumour suppressor genes in renal cell carcinoma. Oncogene. 2011; 30(12): 1390–1401. PubMed Abstract | Publisher Full Text\n\nWeng G, Bhalla US, Iyengar R: Complexity in biological signaling systems. Science. 1999; 284(5411): 92–96. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBiewenga P, Buist MR, Moerland PD, et al.: Gene expression in early stage cervical cancer. Gynecol Oncol. 2008; 108(3): 520–526. PubMed Abstract | Publisher Full Text\n\nPyeon D, Newton MA, Lambert PF, et al.: Fundamental differences in cell cycle deregulation in human papillomavirus-positive and human papillomavirus-negative head/neck and cervical cancers. Cancer Res. 2007; 67(10): 4605–4619. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nZhai Y, Kuick R, Nan B, et al.: Gene expression analysis of preinvasive and invasive cervical squamous cell carcinomas identifies HOXC10 as a key mediator of invasion. Cancer Res. 2007; 67(21): 10163–10172. PubMed Abstract | Publisher Full Text\n\nScotto L, Narayan G, Nandula SV, et al.: Identification of copy number gain and overexpressed genes on chromosome arm 20q by an integrative genomic approach in cervical cancer: potential role in progression. Genes Chromosomes Cancer. 2008; 47(9): 755–765. PubMed Abstract | Publisher Full Text\n\nFisher RA: Statistical Methods for Research Workers. Oliver and Boyd (Edinburgh); 1925. Reference Source\n\nBenjamini Y, Hochberg Y: Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc B. 1995; 57(1): 289–300. Reference Source\n\nYambartsev A, Perlin M, Kovchegov Y, et al.: Unexpected links reflect the noise in networks. Biol Direct. 2016; 11(1): 52. PubMed Abstract | Publisher Full Text\n\nThomas LD, Vyshenska D, Shulzhenko N, et al.: Dataset 1 in: Differentially correlated genes in co-expression networks control phenotype transitions. F1000Research. 2016. Data Source\n\nThomas LD, Vyshenska D, Shulzhenko N, et al.: Dataset 2 in: Differentially correlated genes in co-expression networks control phenotype transitions. F1000Research. 2016. Data Source\n\nThomas LD, Vyshenska D, Shulzhenko N, et al.: Dataset 3 in: Differentially correlated genes in co-expression networks control phenotype transitions. F1000Research. 2016. Data Source\n\nThomas LD, Vyshenska D, Shulzhenko N, et al.: Dataset 4 in: Differentially correlated genes in co-expression networks control phenotype transitions. F1000Research. 2016. Data Source\n\nThomas LD, Vyshenska D, Shulzhenko N, et al.: Dataset 5 in: Differentially correlated genes in co-expression networks control phenotype transitions. F1000Research. 2016. 
Data Source\n\nThomas LD, Vyshenska D, Shulzhenko N, et al.: Dataset 6 in: Differentially correlated genes in co-expression networks control phenotype transitions. F1000Research. 2016. Data Source\n\nThomas LD, Vyshenska D, Shulzhenko N, et al.: Dataset 7 in: Differentially correlated genes in co-expression networks control phenotype transitions. F1000Research. 2016. Data Source"
}
|
[
{
"id": "17874",
"date": "20 Dec 2016",
"name": "Thiago M. Venancio",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn the present work Thomas et al. employed method to find differentially co-expressed genes in co-expression networks of a B lymphocyte deficiency (largely mono-causal) and a complex cervical cancer (multi-causal) dataset. They used different graph-theoretical approaches to find relevant genes in this context.\nInterestingly, the authors found that 72% (39/54) of the correlation gains involve at least one Ig gene, which is in agreement with the previously shown association between intestinal gene expression and B cells ability to produce antibodies. Is it possible that this \"correlation gains\" are merely a consequence of the general enrichment of Ig genes in the DC list?\nIn the cervical cancer analysis, the authors used shortest-path and betweenness centrality to argue for the regulatory relevance of DC genes. I think it would be great to supplement this finding with more biochemical information. For example, how many of these genes are transcription factors or protein kinases?\nOverall, I think this study is technically sound and properly executed.\n\n== Minor corrections In the abstract, \"mus musculus\" should read \"Mus musculus\".\nIn Figure 2, Ig is underlined as if marked by a spellchecker.",
"responses": []
},
{
"id": "17872",
"date": "23 Dec 2016",
"name": "Andrei Zinovyev",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe manuscript \"Differentially correlated genes in co-expression networks control phenotype transitions\" by Lina Thomas et al, is devoted to describing a case study of two transcriptomic datasets with the focus on characterizing pairs of differentially correlated genes, with a limited experimental validation of the conclusions of the statistical analysis.\nThe manuscript is clear, technically sound and exploits an interesting approach for the analysis of expression data. The conclusions of the statistical analysis are sufficiently justified. I like the detailed and illustrated description of the novel methods exploited the paper. Experimental validation of several findings is a big plus. Therefore, I think the article deserves to be indexed.\nI have several remarks for the manuscript which I think should be addressed before approval:\nI am not completely comfortable with the title of the manuscript, which is quite conceptual, while the content of the paper remains descriptive and does not provide mechanistic insight on how DCG pairs can control the phenotype. I suggest to the authors to have some reflexion on how to make it more adequate.\n\nRelated to 1), in Introduction the desciption of the mechanisms by which DCG pairs can \"contribute to propagation of perturbation\" remain very illusive. I suggest to the authors to formulate more clearly at least several hypotheses or scenario by which DCG pairs might appear and play an important role. 
A figure illustrating such hypotheses would clarify what the authors mean.\n\nFigures 1 and 3 are not very informative. Can the authors make an effort to improve this aspect (at least, visualize some DC gene names)?\n\nThe authors do not discuss the possibility that the appearance of DC pairs could be a result of differential sample tissue composition from several cell types (i.e., immune cells in the jejunum or in tumor tissue). Discussing this point would be an advantage.\n\nIn several places, the authors apply the terms \"upstream\" and \"downstream\" with respect to the network, which is undirected by its nature. I suggest underlining that the nature of correlation networks does not allow distinguishing the direction of causality, and that considering genes \"downstream\" of the key drivers is only a hypothesis which cannot be assessed from the data.\n\nThe description of the transcriptomic datasets in Materials and Methods is too brief, especially for the cervix dataset, which seems to be quite composite. It would be appropriate to specify the datasets' composition more clearly (not simply referring to the original publications) directly in the paper text.\n\nOne thing which is confusing to me is that the correlation networks are constructed differently for the two case studies (direct vs partial). I understood the reason why partial correlation was preferred for the cervical cancer study; however, the question is: can the conclusion that DC pairs do not contain key drivers in the case of cervical cancer be affected by the difference in the methodology of correlation graph computation? It would be useful to clarify this aspect.\n\nThe section \"Filtering and meta-analysis of microarray data\" was not clear to me. I suggest re-writing it.\n\nMinor remarks:\nPage 4: \"correlation lost\" -> \"correlation loss\"\nThe description of the partial correlation method refers to a paper (30) which cannot be easily accessed. A direct reference to the arXiv preprint would be more appropriate in this case",
"responses": []
},
{
"id": "17875",
"date": "16 Jan 2017",
"name": "Fabrício Martins Lopes",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe manuscript “Differentially correlated genes in co-expression networks control phenotype transitions” investigates the differentially co-expressed genes in two biological processes, a homogeneous one-causal-factor process (B cell deficiency) and a heterogeneous multi-causal system (cervical cancer). The authors have adopted the Pearson correlation and partial correlation for the inference of networks.\nMajor revision:\nThe networks were inferred from local partial correlation method, which is able to identify a linear relationship between two variables X and Y (genes), and this relationship may or may not be mediate by another gene Z. It is not clear why the authors have adopted the Pearson correlation for B cell deficiency analysis and the partial correlation for cervical cancer analysis. Moreover, it would be interesting to highlight the gain obtained by adopting the partial correlation. For instance, what were the relationships inferred with the partial correlation that would not be inferred using Pearson correlation?\n\nAnother important issue is that even with partial correlation, only pairwise of relationships are identified. In the study presented at http://dx.doi.org/10.1109/JSTSP.2008.923841, it presents the Intrinsically Multivariate Predictive (IMP) Genes, which are genes that depend on a subset of predictors. How did the authors deal with these IMP genes?\n\nIt is not clear how and why the microarray data was filtered. 
The authors could better describe how the data was filtered and how the parameters were chosen.\n\nThe title “Differentially correlated genes in co-expression networks control phenotype transitions” is too rigid, leading to the understanding that all correlated genes control the phenotype transitions. I believe that is not true. The authors could provide a more appropriate title.\n\nMinor revisions:\nPage 3: “homogenous” → homogeneous\n\nPage 4: \"correlation lost\" → correlation loss;",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2740
|
https://f1000research.com/articles/5-1681/v1
|
13 Jul 16
|
{
"type": "Clinical Practice Article",
"title": "Anterior single odontoid screw placement for type II odontoid fractures: our modified surgical technique and initial results in a cohort study of 15 patients",
"authors": [
"Binod Bhattarai",
"Sunil Munakomi",
"Karuna Tamrakar",
"Pramod Kumar Chaudhary",
"Binod Bhattarai",
"Karuna Tamrakar",
"Pramod Kumar Chaudhary"
],
"abstract": "Objective: Anterior odontoid screw fixation for type II odontoid fracture is the ideal management option. However in the context of unavailability of an O-arm or neuro-navigation and poor images from the available C-arm may be an obstacle to ideal trajectory and placement of the odontoid screw. We herein detail our surgical technique so as to ensure a correct trajectory and subsequent good fusion in Type II odontoid fractures. This may be advantageous in clinical set ups lacking state of the art facilities. Methods and Results: In this cohort study we included 15 consecutive patients who underwent anterior odontoid screw placement. We routinely dissect the longus colli to completely visualize the entire width of C3 body. We then perform a median C2-C3 disectomy followed by creating a gutter in the superior end of C3 body. We then guide the Kirchsner (K) wire purchasing adequate anterior cortex of C2. Rest of the procedure follows the similar steps as described for odontoid screw placement. We achieved 100% correct trajectory and screw placement in our study. There were no instances of screw break out, pull out or nonunion. There was one patient mortality following myocardial infarction in our study. Conclusion: Preoperative imaging details, proper patient positioning, meticulous dissection, thorough anatomical knowledge and few added surgical nuances are the cornerstones in ideal odontoid screw placement. This may be pivotal in managing patients in developing nations having rudimentary neurosurgical set up.",
"keywords": [
"Odontoid fracture",
"screw placement",
"technique",
"outcome"
],
"content": "Introduction\n\nManagement of type II odontoid fractures has been long debated1. Conservative management, a regular practice in earlier days, was later followed by prolonged application of halo vest. These techniques invariably lead to non union of the fracture and furthermore caused major discomfort to the patients1,2. It was Nakanishi and Bohler who initially described odontoid screw placement for type II odontoid fractures3. With recent advancements in neurosurgery and additions to its armamentarium with tools like Neuro-navigation and O arm, odontoid screws can now be placed with high accuracy, ease and low morbidity4–6.\n\nHowever, in major developing countries like Nepal we still invariably lack these tools, and therefore free hand technique is still the only viable option for the management of such cases. Herein we discuss a simple technique for anterior odontoid screw placement which is comparable to placement of the same under guidance of an 'O' arm or neuro-navigation, in terms of accuracy of the placement, associated complications and peri-operative morbidity to the patients.\n\n\nMaterials and methods\n\nWe included 15 patients from a cohort group in our study who were managed with anterior single odontoid screw placement from 2011–2015 in the Department of Neurosurgery, College of Medical Sciences, Nepal. All the patients were first evaluated with the help of X-ray, computerised tomography (CT) and magnetic resonance imaging (MRI) of the spine. CT was performed to diagnose the type and pattern of the fracture and also to rule out other associated bony injuries. MRI was performed to determine the integrity of the transverse ligament, associated soft tissue injuries and to rule out cord contusions. The disease process was explained, the procedure and the alternate methods of management were thoroughly detailed to all the patients and their family members. 
Written consent for the management was obtained from all the patients in the inclusion cohort. The study was approved by the ethical board of the College of Medical Sciences, Chitwan, Nepal. Patient details, including age, sex, mode of injury, neurological grade at presentation (Frankel grading), associated injuries, any peri-operative untoward events and complications related to the procedure, were recorded. We used cannulated and partially threaded lag screws from the Medtronic implant system.\n\nFollowing the procedure we encouraged early mobilization of the patients in a cervical collar from post-operative day 2, after performing a CT spine scan to assess the trajectory and location of the screw. We advocated performing a dynamic cervical spine X-ray (lateral view) 4 weeks after the surgery to rule out any evidence of pseudo-arthrosis (anterior translation or angulation at the fracture site) or any instances of implant failure. Patients were then advised to attend follow-up visits at the 3rd, 6th and 12th months in our spine clinic. Inclusion and exclusion criteria are outlined in Box 1 and Box 2, respectively.\n\n1. Type II transverse fracture.\n\n2. Posterior oblique fracture.\n\n3. Informed consent.\n\n1. Disrupted transverse ligament.\n\n2. Concomitant C1–C2 instability (associated Jefferson’s fracture, overhang of lateral masses of C1 on C2 >7mm, anterior dens interval (ADI) >4mm).\n\n3. Oblique anterior fracture.\n\n4. Severe osteopenia.\n\n5. Old fractures.\n\n6. Short neck, excessive cervical kyphosis, concomitant thoracic kyphosis and barrel-shaped chest.\n\n7. Failure to obtain consent for the procedure.\n\n\nSurgical modifications for the procedure\n\nWe followed a few modifications to the routine surgical steps in the placement of the odontoid screw. The most common complication of the procedure is a wrong screw trajectory, which predisposes the patient to early implant break-out or pull-out and fracture pseudo-arthrosis. 
To ensure this is avoided even in the context of rural set-ups lacking an O-arm and navigation facilities, we followed these additional steps during the procedure:\n\n1. Midline trajectory of the screw – For correct positioning of the patient to ensure a correct trajectory of the screw in the midline, we ensured that the tip of the nose, supra-sternal notch and the xiphisternum were in the same anatomical line. The head of the patient was then securely fixed to the table with adhesive tape. We then routinely exposed the entire breadth of the C3 body by dissecting off the longus colli muscles on either side and marked the midpoint as an anatomical landmark to ensure the midline trajectory. The C-arm images in the antero-posterior (AP) view usually confirm the correct location of the dens. However, the quality of the C-arm and the body habitus of the patient may be a major limiting issue in obtaining quality images. This method also obviates the continuous use of a C-arm to take the AP view to confirm the midline trajectory. Ideally, biplanar fluoroscopy is required to obtain images in sagittal and coronal views. After initial confirmation of the correct pathway, the C-arm can be used for lateral images to ensure correct crossover of the fracture line, with all threads migrating beyond the fracture line and ideal placement of the tip just beneath the cortex of the odontoid tip. This minimizes the operating time without compromising on the screw trajectory.\n\n2. Adequate banking of anterior C2 cortical bone support – To limit the issue of early implant break-out, we created a small gutter in the superior aspect of the C3 body following a median C2–C3 discectomy. In doing so, the endplate of C2 can be breached from a more posterior aspect, thereby ensuring good anterior cortical support from C2 for the screw.\n\n3. 
Normal alignment of the fracture segments – In order to prevent non-anatomic fusion, we have classified type II odontoid fractures into anterior, neutral and posterior variants depending upon the anatomical position of the distal odontoid segment. We then performed controlled neck movements to either flex or extend the neck to restore the normal alignment between the fracture segments. The use of neuro-physiological studies like somatosensory evoked potentials (SSEP) may help in this process to minimize any inadvertent neurological compromise during the neck manipulation.\n\n4. Post-operative morbidity due to screw head positioning – There can be discomfort and sometimes dysphagia owing to the presence of the screw head at the C2–C3 inter-space. The gutter we create at C3 will ideally act as a station for lodgment of the screw head during neck movements, thereby limiting its pressure effect on the anteriorly located tracheo-esophageal complex.\n\n\nOperative technique\n\nWe lack an O-arm and a navigation system to aid us in ideal placement of odontoid screws. However, we believe that detailed analysis of pre-operative radio-images, proper patient positioning and correct operative exposure of anatomical details, followed by controlled intra-operative manipulation of the neck, help us ensure ideal placement of the odontoid screw.\n\nWe routinely placed the patient in the supine position with a pad beneath the inter-scapular region to extend the neck so as to maintain the normal cervical lordosis.\n\nWe used a transverse incision from the medial border of the sternocleidomastoid muscle to the midline, based at the C5–C6 level, on the right side. Dissection then proceeds in a similar fashion to the anterior cervical discectomy procedure7.\n\nWe routinely exposed the entire breadth of the C3 vertebral body. The longus colli on both sides was dissected off the C3 vertebral body until a clear view of the lateral border of C3 was obtained on both sides. 
This is very important as the screw must be placed exactly on the midline. Then, with the help of a curette, we carried out a C2–3 discectomy in the midline. After partial discectomy, we drilled (or curetted) so as to make a gutter on the superior aspect of the C3 body with the depth facing upward. This is very helpful for accurate placement of the odontoid screw behind the anterior cortex of the C2 body without deviation from the midline. The groove also provides proper shelter for the screw head.\n\nFor simplicity we classified odontoid type II fractures into three types:\n\n1. Type A – anterior displacement of the dens\n\n2. Type B – neutral\n\n3. Type C – posterior displacement of the dens\n\nThis is illustrated in Figure 1.\n\nFor type A fractures, we hyperextend the neck as the screw is about to pass the fracture line.\n\nFor type B fractures, no neck manipulation is required.\n\nFor type C fractures, we flex the neck as the screw is about to enter the fracture line.\n\nWe routinely obtain lateral and AP views of the upper cervical spine after positioning the patient to confirm normal cervical lordosis, and fix the head with adhesive plaster. The lateral view is required initially as we place the 'K' wire on the C2 base. One should ensure that the projection of the 'K' wire is posterior to the anterior cortical layer of C2 to avoid screw break-out.\n\nAfter the 'K' wire penetrates the endplate of C2, the C-arm is switched to AP views to confirm midline entry of the 'K' wire into the body of C2 and the dens. The ideal trajectory and the final position of the screw following the procedure are detailed in Figure 2 and Figure 3.\n\n\nResults\n\nIn our cohort study, there was a male preponderance (male:female ratio of 6.5:1). 
The age of the patients ranged from 15 to 60 years.\n\nRoad traffic accidents were the most common mode of injury, seen in 9 patients (60%), followed by falls in 4 (26.67%).\n\nAssociated polytrauma was present in 9 patients (60%). Associated cord contusion was evident in 4 of them (26.67%).\n\nMost of the patients were in Frankel grade E status at presentation (80%). Two patients (13.33%) were in Frankel grade C status and 1 (6.67%) was in Frankel grade A status. The clinical profile of all the patients in the study is summarized in Table 1.\n\n*RTA-Road traffic accident/UL-Upper limb\n\nIn our cohort study, 14 out of 15 cases had excellent post-operative outcomes. Two of the cases who initially presented with Frankel grade C status on admission had associated cord contusion, with no other evidence of fracture or associated instability. Post-operatively, both of them improved to Frankel grade E.\n\nThere were no instances of wrong trajectory or false location of the screw head in our study. During follow-up visits, we found good union of all the fractures without any reports of screw malfunction. None of our cases had to be re-operated on because of screw-related problems or pseudo-arthrosis.\n\nMild discomfort during swallowing was present in 2 cases (13.33%), which improved within a few days of the procedure.\n\nWe did not have any wound-related complications.\n\nIn our study group, we had a single mortality following inferior wall myocardial infarction in a 60-year-old male who presented with Frankel grade A neurological status and had an associated high cord contusion.\n\n\nDiscussion\n\nOdontoid type II fractures warrant surgical fixation. 
Though conservative management with a halo vest is an option and is still used in some centers, surgical management is far superior with regard to union at the fracture site8–9.\n\nThe neurosurgical field has evolved tremendously in recent years. Newer tools like neuro-navigation and O-arm techniques have revolutionized complicated surgeries that require a high degree of accuracy and precision4–6. In developing countries like ours, even in the absence of these intra-operative aids, the procedure can still be performed using pre-operative images and basic anatomical knowledge. Our results are comparable to previously published studies10.\n\nThe major advantage of anterior screw fixation is immediate spinal stability with preserved C1–C2 rotation. It also provides a high union rate11. The threads at the end of the screw help to couple the fractured segments together (the theory behind lag compression), thereby promoting early fusion. There is also no need for autologous bone graft harvesting.\n\nMajor limitations of the procedure are the need for an intact transverse ligament and the prerequisite of attaining normal alignment of the spine before screw placement.\n\nIn all of our cases we used only one screw but still attained satisfactory union of the fracture. With our method of complete exposure of the C3 vertebral body, we are able to drill a midline groove on the C3 body, which helps us to project the 'K' wire into the dens with good C2 cortical purchase, thereby minimizing the risk of screw break-out. Our next technical nuance is the concept of controlled neck manipulation just prior to 'K' wire entry into the fracture site. This maintains cervical lordosis and decreases the chance of dislodgement of the fractured segments and subsequent non-anatomic fusion. 
With our surgical technique we have not encountered any instances of displacement of the fractured segment or need for multiple screws.\n\nWe believe multiple screws increase the risk of displacement of the fracture segments. Double screws also increase the odds of intra-operative failure and surgical difficulty. Moreover, there are no differences in terms of the load-bearing capacity of the screws or the subsequent fusion rate between single and double odontoid screws12–14.\n\nAnterior odontoid screw placement is a demanding procedure which can lead to major complications. Most of these are related to implant malpositioning and failure. In one study, the procedure had to be abandoned in two cases and there was screw loosening in two patients15. There are also reports of critical neurovascular compromise and severe dysphagia following the same procedure16–18. We did not have such complications in our cohort study.\n\nWe achieved a 100% fusion rate. The union rate following odontoid screw fixation ranges from 81–100% in the literature16.\n\nRoad traffic accidents were the major cause of injury in our study group (60%), comparable to 80% of cases in one recent study16.\n\nThe major advantages of our technique are a short learning curve and the ability to execute the steps even with the use of a single C-arm during the procedure. Another benefit is the decreased operative time with reduced radiation exposure, owing to reduced use of the C-arm for obtaining coronal images.\n\nA major limitation of the study is the small size of our cohort. Whether similar results can be extrapolated to larger patient populations remains to be answered. 
Learning time can be minimized by mastering the technique through cadaveric courses.\n\nWe believe that our surgical technique will certainly be a boon in managing patients with odontoid fractures with high therapeutic success and minimal morbidity, especially in developing regions.\n\n\nConclusion\n\nMost odontoid type II fractures warrant surgical fixation, and with proper utilization of our technique such challenging cases can be managed with great success. This is even more valid in the context of developing nations where newer tools to aid the procedure are not always available. The benefits of our technique can be summarized as:\n\n1. Alignment of the anatomical landmarks during positioning of the patient and liberal exposure of the width of the C3 body help us to mark the midline trajectory. This minimizes use of the C-arm for obtaining coronal images, thereby reducing radiation exposure as well as operative time.\n\n2. Controlled neck manipulation restores the cervical lordosis and realigns the fracture segments, thereby promoting anatomic fusion.\n\n3. The gutter on the C3 body, created following the C2–C3 median discectomy, provides a corridor for adequate purchase of the anterior cortex of C2, thereby minimizing the risk of early screw break-out. It also houses the head of the screw, minimizing pressure on the tracheo-esophageal complex.",
"appendix": "Author contributions\n\n\n\nDr Sunil and Dr Pramod reviewed the literature, collected data of the cohort group and formatted the paper. Dr Binod designed the study and edited the final manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nKonieczny MR, Gstrein A, Müller EJ: Treatment algorithm for dens fractures: non-halo immobilization, anterior screw fixation, or posterior transarticular C1–C2 fixation. J Bone Joint Surg Am. 2012; 94(19): e144(1–6). PubMed Abstract | Publisher Full Text\n\nKim SK, Shin JJ, Kim TH, et al.: Clinical outcomes of halo-vest immobilization and surgical fusion of odontoid fractures. J Korean Neurosurg Soc. 2011; 50(1): 17–22. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHenry AD, Bohly J, Grosse A: Fixation of odontoid fractures by an anterior screw. J Bone Joint Surg Br. 1999; 81(3): 472–477. PubMed Abstract\n\nJaiswal A, Shetty AP, Rajasekaran S: Role of intraoperative Iso-C based navigation in challenging spine trauma. Indian J Orthop. 2007; 41(4): 312–317. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKantelhardt SR, Keric N, Giese A: Management of C2 fractures using Iso-C3D guidance: a single institution’s experience. Acta Neurochir (Wien). 2012; 154(10): 1781–1787. PubMed Abstract | Publisher Full Text\n\nWu JC, Wadhwa R, Than K, et al.: Complication Avoidance and Management Using the O-arm for Odontoid Screw Fixation: Technical Note. Cureus. 2014; 6(11): e225. Publisher Full Text\n\nMatz PG, Ryken TC, Groff MW, et al.: Techniques for anterior cervical decompression for radiculopathy. J Neurosurg Spine. 2009; 11(2): 183–197. PubMed Abstract | Publisher Full Text\n\nShilpakar S, McLaughlin MR, Haid RW Jr, et al.: Management of Acute Odontoid Fractures: Operative Techniques and Complication Avoidance. Neurosurg Focus. 2000; 8(6): e3. 
1–7. PubMed Abstract | Publisher Full Text\n\nGraziano G, Jaggers C, Lee M, et al.: A comparative study of fixation techniques for Type II fractures of the odontoid process. Spine (Phila Pa 1976). 1993; 18(16): 2383–2387. PubMed Abstract\n\nTian NF, Hu XQ, Wu LJ, et al.: Pooled Analysis of Non-Union, Re-Operation, Infection, and Approach Related Complications after Anterior Odontoid Screw Fixation. Shamji M ed. PLoS One. 2014; 9(7): e103065. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAebi M, Etter C, Coscia M: Fractures of the odontoid process. Treatment with anterior screw fixation. Spine (Phila Pa 1976). 1989; 14(10): 1065–1070. PubMed Abstract | Publisher Full Text\n\nSasso R, Doherty BJ, Crawford MJ, et al.: Biomechanics of odontoid fracture fixation. Comparison of the one- and two-screw technique. Spine (Phila Pa 1976). 1993; 18(14): 1950–1953. PubMed Abstract | Publisher Full Text\n\nSubach BR, Morone MA, Haid RW Jr, et al.: Management of acute odontoid fractures with single-screw anterior fixation. Neurosurgery. 1999; 45(4): 812–820; discussion 819–20. PubMed Abstract | Publisher Full Text\n\nJenkins JD, Coric D, Branch CL Jr: A clinical comparison of one- and two-screw odontoid fixation. J Neurosurg. 1998; 89(3): 366–370. PubMed Abstract | Publisher Full Text\n\nAndersson S, Rodrigues M, Olerud C: Odontoid fractures: high complication rate associated with anterior screw fixation in the elderly. Eur Spine J. 2000; 9(1): 56–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChi YL, Wang XY, Xu HZ, et al.: Management of odontoid fractures with percutaneous anterior odontoid screw fixation. Eur Spine J. 2007; 16(8): 1157–1164. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDaentzer D, Deinsberger W, Böker DK: Vertebral artery complications in anterior approaches to the cervical spine: Report of two cases and review of literature. Surg Neurol. 2003; 59(4): 300–9; discussion 309. 
PubMed Abstract\n\nWilson DA, Fusco DJ, Theodore N: Delayed subarachnoid hemorrhage following failed odontoid screw fixation. J Neurosurg Spine. 2011; 14(6): 715–8. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "15625",
"date": "01 Sep 2016",
"name": "Yad Ram Yadav",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.\n\nApproved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.\n\nAlthough there was no elderly patient in the present series, is this technique sufficient for geriatric patients (osteoporosis and atlantoaxial arthritis)? Does one have to use anything to augment screw fixation in osteoporotic fractures?\nThe authors should also mention risk factors (in whom it is likely to fail).",
"responses": []
},
{
"id": "16589",
"date": "26 Sep 2016",
"name": "Mohan Raj Sharma",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nWe congratulate the authors for their work on a relatively difficult surgery with modification in a resource-constrained setting such as Nepal. However, there are some points which warrant clarification:\n1. Exclusion criteria: Some redundancy is noted in points 1 and 2. A disrupted transverse ligament could mean either an anterior dens interval (ADI) >4 mm or, in the case of a Jefferson's fracture, overhanging of the lateral masses of C1 on C2 of >7 mm. Our suggestion is to divide them as:\nDisrupted transverse ligament (anterior dens interval (ADI) >4 mm); Associated Jefferson's fracture (overhang of lateral masses of C1 on C2 >7 mm)\n\n2. Surgical modifications for the procedure: Midline trajectory of the screw. C-arm images in the antero-posterior (AP) view usually ensure correct alignment of the K-wire with the dens. It is not clear whether the authors routinely used biplanar fluoroscopy or whether the AP view was sufficient.\n\n3. Clinical presentation: The Frankel grade of the last patient has not been mentioned. We would have used the ASIA grading, which is a better and universally accepted grading system for impairment following spinal cord injury.\n\nOverall, though this is a small series with one year of follow-up, this will definitely stimulate other neuro- and spine surgeons in Nepal to carry out such studies in future.\n\nDr. Mohan R. Sharma, MS, Dr. Amit Pradhang, MS, MCh",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-1681
|
https://f1000research.com/articles/5-2701/v1
|
18 Nov 16
|
{
"type": "Research Note",
"title": "Dimension reduction of Malaria Box data allows efficient compound prioritization",
"authors": [
"Gajinder Pal Singh"
],
"abstract": "Background: New anti-malarial drugs are needed to meet the challenge of artemisinin resistance and to achieve malaria elimination and eradication. The new anti-malarial compounds are expected to have many desirable properties, such as activity against multiple stages of Plasmodium, low host cytotoxicity, and low propensity for resistance development, but whether and how these properties might be linked to each other is not clear. A better understanding of the relationship between activities of compounds against different stages of Plasmodium could help in the development of strategies to prioritize compounds with maximum potential for further development. Methods: We utilized the large amount of data that has recently been generated on 400 anti-malarial Malaria Box compounds and performed statistical analyses, such as rank correlation, hierarchical clustering, and principal-component analyses, to test associations between activities against different stages of Plasmodium, other pathogens, and human cells. Results: We found significant positive correlations between the activities of compounds against different stages of Plasmodium. Our results also show toxicity associated with assays conducted at higher compound concentrations. Principal-component analyses (PCA) of the data allowed differentiation of Plasmodium-specific activity from general toxicity and predicted success in in vitro evolution of resistance. We found that a single principal-component can capture most of the desirable properties of Malaria Box compounds and can be used to rank compounds from most desirable to least desirable activity-profile. Conclusions: Here, we provide a systematic strategy to prioritize Malaria Box compounds for further development. This approach may be applied for prioritization of anti-malarial compounds in general.",
"keywords": [
"Malaria Box",
"Plasmodium falciparum"
],
"content": "Introduction\n\nMalaria killed about half a million people in the year 2015, and 70% were children under the age of five1. The emergence and spread of resistance towards frontline anti-malarial drugs in South-East Asia has created an urgent need to discover new drugs. In addition, new drugs are needed to meet the objective of malaria elimination and global eradication, for which the currently available drugs are not adequate2. The desirable characteristics of new clinical candidates, also called Target Compound Profile (TCP), include high potency and fast killing of the asexual erythrocytic stage for quick relief of symptoms, high plasma half-life to reduce treatment duration, activity against the sexual stages to prevent transmission, activity against the liver-stage to avoid relapse and for prophylactic use, activity against multiple species of Plasmodium, and reduced propensity for the development of resistance3. New anti-malarial drugs must also be safe for mass administration, and for children and pregnant women, who are most vulnerable to malaria3.\n\nIt is currently not well understood how TCP properties are related to each other. This makes it difficult to assess whether it would be feasible for a single compound to have all TCP properties and what strategies could be adopted to find such candidates. With the discovery of thousands of active compounds from the high-throughput assays against the erythrocytic stage of P. falciparum4–8, it has become imperative to find a prioritization strategy that can identify the most promising candidates for further development. For a subset of antimalarial compounds identified from high-throughput screens in the so-called “Malaria Box”, many of the TCP properties have been assessed. The Malaria Box is a set of 400 compounds selected based on their potent activity against the erythrocytic stage of P. falciparum, chemical diversity and commercial availability9. 
These compounds were made available free of cost to researchers, thus catalysing a number of studies, including the screening of these compounds against multiple Plasmodium stages, eukaryotic pathogens and human cells10. Some of these compounds have also been tested for their propensity for resistance generation11. Here, we utilized the large amount of data generated on Malaria Box compounds and found significant associations between different TCP properties. Based on these observations, we propose a prioritization strategy for anti-malarial compounds for further development.\n\n\nMethods\n\nThe screening data on Malaria Box compounds was obtained from Van Voorhis et al,10 who compiled the previously published data on Malaria Box compounds (55 assays), as well as their own data (236 assays). We rank transformed all assay values, such that higher values represent higher inhibition.\n\nIn case multiple assays were available for a given stage or concentration, their median values were taken: there were nine assays reporting EC50 values against the asexual stage of P. falciparum, one assay against asexual stage at high compound concentration (10 µM), five gametocytocidal assays conducted at 0.5–1 µM compound concentrations, ten gametocytocidal assays conducted at 2.5–5 µM compound concentrations and six gametocytocidal assays conducted at 10–12.5 µM compound concentrations. There was one assay each at lower and higher compound concentrations against liver (5 µM and 50 µM, respectively) and ookinete stages (1 µM and 10 µM, respectively). Values were also similarly combined for parasites with multiple assays, such as Babesia sp. and Mycobacterium tuberculosis.\n\nThe data on the in vitro resistance evolution of Malaria Box compounds was obtained from Corey et al.11.\n\nAll statistical analyses were performed in the R software v3.3.1 (https://www.r-project.org/). R commands hclust and prcomp were used for hierarchical clustering and PCA analyses respectively. 
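The pipeline just described (rank-transforming assay values so that higher means more inhibition, combining replicate assays by their median, Spearman-correlation distances for hierarchical clustering, and PCA) was run by the author in R with hclust and prcomp. A minimal equivalent sketch in Python is shown below; the data and assay names are synthetic placeholders, not the actual Malaria Box measurements:

```python
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform
from scipy.stats import spearmanr
from sklearn.decomposition import PCA

# Hypothetical activity matrix: rows = 400 compounds, columns = assays.
rng = np.random.default_rng(0)
raw = pd.DataFrame(rng.random((400, 6)),
                   columns=["asexual_ec50_a", "asexual_ec50_b",
                            "gametocyte_1uM", "gametocyte_10uM",
                            "liver_5uM", "fibroblast_tox"])

# 1. Rank-transform each assay (for EC50-type assays the real analysis
#    would reverse the ranks so that higher rank = higher inhibition).
ranked = raw.rank()

# 2. Combine replicate assays for the same stage/concentration by median.
ranked["asexual_ec50"] = ranked[["asexual_ec50_a",
                                 "asexual_ec50_b"]].median(axis=1)
ranked = ranked.drop(columns=["asexual_ec50_a", "asexual_ec50_b"])

# 3. Spearman correlation between assays; 1 - rho as the distance matrix.
rho, _ = spearmanr(ranked.values)   # assay-by-assay correlation matrix
dist = 1.0 - rho
np.fill_diagonal(dist, 0.0)         # guard against floating-point drift

# 4. Hierarchical clustering of the assays (analogue of R's hclust).
tree = linkage(squareform(dist, checks=False), method="average")
clusters = fcluster(tree, t=3, criterion="maxclust")

# 5. PCA of the compounds' activity profiles (analogue of R's prcomp).
scores = PCA(n_components=3).fit_transform(ranked.values)
print(scores.shape)  # prints (400, 3): PC1-PC3 score per compound
```

With real data, the rows of `scores` would give each compound's PC1–PC3 values, which is the table released as Dataset 1.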
PCA analyses were performed on the activity data against different Plasmodium stages and human cells. Rank correlation values were used to create the distance matrix for the hierarchical clustering.\n\n\nResults\n\nVan Voorhis et al. have recently reported their large-scale screening data on Malaria Box compounds, which was compiled along with the previously published data10. We first reduced the dimensionality of these data by combining variables that describe activity against the same Plasmodium stage and pathogen. Assays conducted at higher compound concentrations may give different results from those performed at lower concentrations, so assays conducted at different concentrations were combined separately.\n\nFigure 1 provides an overview of the relationship between different properties of Malaria Box compounds in the form of a correlation matrix. Multiple observations can be made from this matrix. There is a moderate but significant correlation between activity against the asexual stage and the gametocyte stage of P. falciparum (Spearman rank correlation 0.43, between EC50 values and % inhibition of gametocytes at 1 µM compound concentration). The correlation between EC50 values against the asexual stage and the gametocyte stage is lower when gametocytocidal activity was screened at higher compound concentrations (Spearman rank correlation 0.17, at 10 µM). Gametocytocidal activity at higher concentrations shows a higher correlation with inhibitory activity against different pathogens, including M. tuberculosis, and human fibroblast cells (Figure 1). Assays conducted at higher concentrations against asexual, liver and ookinete stages also show a higher correlation with toxicity against human cells and other cell types (Figure 1). 
These observations suggest that assays conducted at high compound concentrations detect general toxicity against a wide variety of cells, including human cells; hits identified from these assays should therefore be treated with caution.\n\nSpearman rank correlations are shown between assays whose values were rank transformed, such that higher values indicate higher inhibition. Gray boxes indicate p values > 0.05. The assays performed at higher concentrations in Plasmodium show higher positive correlations across different assays, including activity against human cells, suggesting that assays conducted at high compound concentration detect general toxicity.\n\nTo further understand the relationship between activities against different Plasmodium stages, we performed hierarchical clustering of the data. Three major clusters were evident (Figure 2). Cluster 1 consists of asexual and gametocyte assays conducted at high compound concentrations. Cluster 2 consists of asexual and gametocyte assays conducted at low compound concentrations. Cluster 3 consists of assays conducted against liver and ookinete stages. Separate clustering of assays against the asexual and gametocyte stages at different compound concentrations again suggests general toxicity in assays conducted at higher compound concentrations. There are two possible reasons why the liver and ookinete stages cluster together: these two stages may be physiologically more similar to each other, or the clustering may reflect the fact that these assays were conducted against P. berghei, whereas the other assays were conducted against P. falciparum.\n\nThree major clusters are evident, corresponding to activity against P. falciparum at high concentrations (Cluster 1, leftmost), activity against P. falciparum at low concentrations (Cluster 2, middle), and possibly activity against P. berghei (Cluster 3, rightmost). Rank correlation values were used to create the distance matrix for the clustering. 
The color key shows the inhibitory activity of the compounds, with a higher number representing higher activity.\n\nGiven the possible confounding roles of compound concentration and the Plasmodium species used in the screening, prioritizing compounds that have pan-stage activity but low host cytotoxicity becomes difficult. We thus tested whether principal-component analysis (PCA) could be used to differentiate Plasmodium-specific activity from general toxicity. PCA of the data from different Plasmodium stages and human cells led to the identification of principal components with different properties with respect to general and specific activity. PC1 showed high correlation with assays conducted at higher compound concentrations and against a variety of cell types, including human cells (Figure 3), suggesting that PC1 is related to general toxicity. PC3, on the other hand, showed higher correlations with assays conducted at low compound concentrations, but negative or lower correlations with assays conducted at high compound concentrations in Plasmodium, against different pathogens and against human cells (Figure 3), suggesting that PC3 is related to specific activity against Plasmodium across different stages. PC2 showed high positive correlations with the liver and ookinete stage assays, but negative correlations with the asexual and gametocyte stage assays, suggesting that this component reflects activity against these two stages or against P. berghei, in which these assays were performed.\n\nThe three principal components PC1, PC2 and PC3 explained 30%, 16% and 13% of the variation in the data, respectively. PC1 showed positive correlations across different assays, suggesting that it reflects general toxicity. 
PC3 showed higher positive correlations only with assays conducted at lower compound concentrations, but lower or negative correlations with assays conducted at higher compound concentrations, against other parasites and against human cells, suggesting that it reflects pan-stage specific activity against Plasmodium.\n\nIn vitro resistance evolution has recently been attempted against 30 Malaria Box compounds, with three independent lines for each compound11. We next tested whether the Plasmodium-specific activity or general toxicity estimated from the principal components might predict in vitro resistance evolution. Compounds for which resistance could not be developed showed significantly higher PC1 values (Figure 4A). These compounds also showed higher human toxicity (Figure 4B) and enrichment of probe-like compounds, which have chemical properties associated with higher non-specific activity9 (Figure 4C). These results suggest that general toxicity of compounds may lead to lower success in in vitro resistance evolution. On the other hand, high PC3 values were associated with a higher likelihood of in vitro resistance generation (Wilcoxon test p = 0.02, not shown).\n\nIn vitro resistance evolution was attempted for 30 Malaria Box compounds and was successful for 13 compounds11. Compounds for which resistance evolution was not successful showed (A) higher PC1 values (Wilcoxon p = 0.004), (B) lower EC50 against human fibroblast cells (Wilcoxon p = 0.130), and (C) a higher proportion of probe-like compounds, as classified by Medicines for Malaria Venture9 (Fisher p = 0.100).\n\nIn general, our results suggest that compounds that show high PC3 values should be prioritized for further development, including target identification by in vitro resistance evolution. Table 1 lists the top 20 Malaria Box compounds with the highest PC3 values. These compounds show high activity against multiple stages at a low concentration, but low activity against human cells. 
In total, 11 of these compounds show favourable oral bioavailability values. Some of them are also active against other pathogens (Table 1). The values of the three principal components for all Malaria Box compounds are available in Dataset 114.\n\nThese compounds show high activity against multiple stages at a low concentration, but low activity against human cells. The mouse oral bioavailability was obtained by measuring the plasma concentration of the compounds after a single high oral dose (140 μM/kg)10. Compounds with a favourable plasma concentration (plasma Cmax > 1 μg/ml) are indicated. The activity of compounds against other parasites is also indicated. The oral bioavailability data and the compound activity data against other parasites were obtained from Van Voorhis et al.10. The PC3 values for all Malaria Box compounds are available in Dataset 114.\n\n\nDiscussion\n\nThe wide availability of the Malaria Box has catalysed a number of studies on these compounds10. Prioritization of compounds based on a large number of variables is not straightforward. Here, we analysed these data and found that a single variable (PC3) can capture most of the desirable compound properties: activity against multiple Plasmodium stages and low host cytotoxicity, thus greatly simplifying the task of compound prioritization. Our analyses suggest that screening at high compound concentrations can select for general toxicity and should thus be avoided. The idea that hits identified in multiple assays deserve more confidence10 therefore needs to be reconsidered when hits come from high-concentration assays: such a consensus approach might lead to the selection of compounds with general toxicity.\n\nWe found a significant correlation between activity against the asexual stage and the gametocyte stage of P. 
falciparum (Spearman rank correlation 0.43), which suggests that it might be easier to find compounds that have activity against both these stages, even though the two stages have different growth properties. The correlations of the asexual stage with the liver and ookinete stages were low (Figure 1 and Figure 2). This could reflect the different physiological states of the liver and ookinete stages relative to the asexual and gametocyte stages, but it might also reflect the fact that the liver and ookinete stage assays were performed in P. berghei rather than P. falciparum. Thus, the development of higher-throughput liver and ookinete stage assays in P. falciparum could be valuable. It is important to note that the correlation values we report should be considered underestimates, as inhibition values for assays against the same stage show large variability; e.g. the median rank correlation among the nine EC50 assays against the asexual stage of P. falciparum was 0.51. The possible reasons for this variability have previously been discussed10.\n\nThe difficulty of evolving resistance in vitro is considered a very desirable property of a compound12, given that a number of anti-malarial drugs are becoming less effective because of resistance generation13. Our observations suggest caution in interpreting the results of in vitro resistance evolution experiments. The failure to obtain resistance in vitro could be due to general toxicity of the compound towards the host erythrocytes. We therefore suggest that the host toxicity of compounds should be thoroughly evaluated before conducting labour-intensive in vitro resistance evolution experiments.\n\nWhile we have prioritized compounds according to their pan-stage activity and low human toxicity, we would like to stress that compounds that show activity across pathogens and human cells may also be potential leads, if their toxicity can be managed. 
One possibility for reducing the toxicity of a compound is to identify its target in the parasite and its human ortholog, and to use the three-dimensional structure of the compound in complex with the target to guide modifications that increase selectivity. However, target identification for such compounds might be more difficult using in vitro resistance development.\n\n\nData availability\n\nThis publication uses data on Malaria Box compounds, as reported and compiled by Van Voorhis et al.10 (DOI: 10.1371/journal.ppat.1005763.s002). The data on in vitro resistance evolution are reported by Corey et al.11.\n\nDataset 1: PC1, PC2 and PC3 values for 400 Malaria Box compounds, DOI: 10.5256/f1000research.10121.d14257014",
"appendix": "Author contributions\n\n\n\nG.P.S. conceived and designed the study, performed the research and wrote the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe work is supported by an Early Career Fellowship to G.P.S. by the Wellcome Trust/DBT India Alliance (IA/E/15/1/502297).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nSuggestions and criticism on the manuscript from Dr. Amit Sharma and Ms. Preeti Goel from the author’s lab are gratefully acknowledged.\n\n\nReferences\n\nWorld Health Organization: The World Malaria Report 2015. World Health Organization, Geneva; 2014b. Reference Source\n\nAlonso PL, Brown G, Arevalo-Herrera M, et al.: A research agenda to underpin malaria eradication. PLoS Med. 2011; 8(1): e1000406. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBurrows JN, van Huijsduijnen RH, Möhrle JJ, et al.: Designing the next generation of medicines for malaria control and eradication. Malar J. 2013; 12: 187. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHovlid ML, Winzeler EA: Phenotypic Screens in Antimalarial Drug Discovery. Trends Parasitol. 2016; 32(9): 697–707. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAvery VM, Bashyam S, Burrows JN, et al.: Screening and hit evaluation of a chemical library against blood-stage Plasmodium falciparum. Malar J. 2014; 13: 190. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMeister S, Plouffe DM, Kuhen KL, et al.: Imaging of Plasmodium liver stages to drive next-generation antimalarial drug discovery. Science. 2011; 334(6061): 1372–1377. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGuiguemde WA, Shelat AA, Bouck D, et al.: Chemical genetics of Plasmodium falciparum. Nature. 2010; 465(7296): 311–315. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nGamo FJ, Sanz LM, Vidal J, et al.: Thousands of chemical starting points for antimalarial lead identification. Nature. 2010; 465(7296): 305–310. PubMed Abstract | Publisher Full Text\n\nSpangenberg T, Burrows JN, Kowalczyk P, et al.: The open access malaria box: a drug discovery catalyst for neglected diseases. PLoS One. 2013; 8(6): e62906. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVan Voorhis WC, Adams JH, Adelfio R, et al.: Open Source Drug Discovery with the Malaria Box Compound Collection for Neglected Diseases and Beyond. PLoS Pathog. 2016; 12(7): e1005763. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCorey VC, Lukens AK, Istvan ES, et al.: A broad analysis of resistance development in the malaria parasite. Nat Commun. 2016; 7: 11901. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDing XC, Ubben D, Wells TN: A framework for assessing the risk of resistance for anti-malarials in development. Malar J. 2012; 11: 292. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWoodrow CJ, White NJ: The clinical impact of artemisinin resistance in Southeast Asia and the potential for future spread. FEMS Microbiol Rev. 2016; pii: fuw037. PubMed Abstract | Publisher Full Text\n\nSingh GP: Dataset 1 in: Dimension reduction of Malaria Box data allows efficient compound prioritization. F1000Research. 2016. Data Source"
}
|
[
{
"id": "18052",
"date": "29 Nov 2016",
"name": "Timothy N. C. Wells",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThis is an analysis of the Malaria Box set, and an attempt to understand whether there are any correlations between activity and safety signals or resistance development using principal-component analyses. The author defines a single principal component which can be used to capture most of the desirable properties of the Malaria Box compounds. The data used are from the Malaria Box and the recent summary in Van Voorhis et al., 2016 (of which this reviewer is one of the 150 co-authors).\n\nThe initial correlations show that there are some links between inhibition of the different life cycle stages of malaria, and certainly these are strong correlations compared to, say, the correlation between killing Plasmodium and killing distantly related worm pathogens.\n\nThe most useful insight from the paper is in Figure 4, where it is clear that:\nCompounds where no resistance could be generated have higher PC1 values.\n\nCompounds where resistance could not be generated are more significantly represented by fibroblast inhibition at <10 uM.\nGiven that compounds where no resistance can be made are deemed by some to be part of higher-value scaffolds (we all want drugs which don’t generate resistance), this observation warrants more exploration.\n\nSome comments for the author, which would add value to this study (or could form the basis of the next study):\nThe PCA based on cytotoxicity is built from the human fibroblast data. 
This is a good start, but the real value would be to use all the NCI59 cell data from the US National Cancer Institute and to see whether anything else emerges.\n\nAs a set of compounds, the Malaria Box was the fruit of a very small investment, and so the compounds were often selected based on availability and cost. It would be good to run the same PCA analysis on the Pathogen Box structures. For the Pathogen Box there was flexibility to make any compound that was required for the collection, and so the quality of the structures chosen was arguably higher.\n\nTwo other types of compound data could be analysed. First, it would be interesting to see the same analysis applied to the TCAMS set: does that help in some ways to reprioritize those structures? Second, what happens with PC1 if you look at, say, the 8000 compounds taken into development by the pharma industry (and not yet launched): do they appear to be higher quality?",
"responses": []
},
{
"id": "18780",
"date": "16 Jan 2017",
"name": "Francisco-Javier Gamo",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe article presents an analysis of the available biological data on the 400 compounds of the Malaria Box set, trying to understand whether there is any correlation between favorable parasitological properties and undesirable unspecific or toxicity aspects. Most of the data have been retrieved from Van Voorhis et al., 2016. Some of these data were produced by GSK, and this reviewer is one of the co-authors. Statistical analyses reveal some correlations that would allow prioritization of phenotypic hits based on the most desirable antimalarial compound profiles.\n\nThis reviewer finds the conclusions very interesting, especially those related to the potentially misleading information provided by malaria assays carried out at high compound concentrations. Principal-component analysis shows a strong correlation between toxic effects in human cells and antimalarial activities produced at the highest compound concentrations, suggesting that these antimalarial effects are not specific to the parasite. Identification of a single principal component capturing the most desirable properties of Malaria Box compounds is an important result and could be the basis for ranking compounds in larger datasets.\n\nThere are some additional points that could help to improve the current version or be considered for potential follow-ups.\nThe author describes the most desirable properties for novel antimalarial compounds but comments that it is not clear how these properties are linked to each other. 
However, most of them (speed of action, activity against different stages of Plasmodium, propensity to select for resistance, etc.) are strongly influenced by the antimalarial target. So, mode of action (MoA) should be a clear link among the different properties, and compounds displaying similar MoAs should show similar properties. The reviewer agrees that pharmacokinetic aspects of antimalarial compounds (e.g. a long half-life) are structure based and should be much less related to antimalarial MoA.\n\nAlthough there is still a lack of understanding of the antimalarial targets of phenotypic hits, there is already some information in this regard. I suggest the author include in the analysis current knowledge on mode of action, to identify those targets providing the most desirable antimalarial properties. This would be especially interesting if the analysis is expanded to larger datasets.\n\nConclusions based on resistance evolution should be put in the context of the small number of compounds with available results. Larger sets of data would be needed to confirm this trend. Nevertheless, the reviewer agrees with poly-pharmacology and general toxicity being two of the main properties that negatively influence the selection of resistance.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2701
|
https://f1000research.com/articles/5-2700/v1
|
18 Nov 16
|
{
"type": "Research Article",
"title": "Pay-for-performance in resource-constrained settings: Lessons learned from Thailand’s Quality and Outcomes Framework",
"authors": [
"Roongnapa Khampang",
"Sripen Tantivess",
"Yot Teerawattananon",
"Sarocha Chootipongchaivat",
"Juntana Pattanapesaj",
"Rukmanee Butchon",
"Natthida Malathong",
"Francoise Cluzeau",
"Rachel Foskett-Tharby",
"Paramjit Gill",
"Sripen Tantivess",
"Yot Teerawattananon",
"Sarocha Chootipongchaivat",
"Juntana Pattanapesaj",
"Rukmanee Butchon",
"Natthida Malathong",
"Francoise Cluzeau",
"Rachel Foskett-Tharby",
"Paramjit Gill"
],
"abstract": "Introduction. Many countries have introduced pay-for-performance (P4P) models to encourage health providers and institutions to provide good quality of care. In 2013, the National Health Security Office of Thailand introduced P4P, based on the UK Quality and Outcomes Framework (QOF), as a mandatory programme for primary care providers. This study aims to review the first year of the Thai QOF policy, focusing on the key features of its formulation and implementation stages. Methods. This study used a mix of data collection approaches, such as literature review, in-depth interviews with QOF managers, and focus-group discussions with health officers and practitioners. Data were collected between June and August 2015 and transcribed and analysed using qualitative content analysis (interpretation of the content of text data through the systematic classification process of identifying themes or patterns). Two consultation meetings were organised to verify the preliminary findings. Results. Based on the UK model, the Thai QOF was formulated without formal consultation with key stakeholders. Additionally, programme managers adopted a ‘learning by doing’ approach, since Thai authorities were already aware of health system limitations, such as insufficient human and financial resources and unreliable databases. There were also problems with QOF implementation, as follows: 1) deducting the budget from the annual payment for ambulatory care made the policy unjustified because providers did not receive full subsidisation of their service delivery; 2) lack of key stakeholder engagement resulted in miscommunication, and subsequently misunderstanding and inadequate coordination in the translation of QOF policy into action; and 3) the unreliability of the IT system led to inaccurately-reported data on service delivery, thereby adversely affecting performance. Conclusion. There is still room for improvement in formulating and implementing the Thai QOF programme. 
Policy makers and programme implementers at both the national and international levels can benefit from this study in ensuring effective policy transfer and implementation of future QOF programmes.",
"keywords": [
"Quality and Outcomes Framework",
"policy formulation",
"program implementation",
"Pay for Performance",
"Primary care quality"
],
"content": "Abbreviations\n\nUHC - Universal health coverage\n\nUCS - Universal coverage scheme\n\nNHSO - National Health Security Office\n\nMOPH - Ministry of Public Health\n\nCUP - Contracting unit for primary care\n\nPCU - Primary care unit\n\nP4P - Pay-for-Performance\n\nQOF - Quality and Outcomes Framework\n\nKPI - Key performance indicator\n\n\nIntroduction\n\nThailand achieved universal health coverage (UHC) in 2002 through the implementation of a universal coverage scheme (UCS) for the majority of the Thai population (75%) in addition to the existing government-funded health insurance schemes: the Civil Servant Medical Benefit Scheme for public employees and dependents, and Social Health Insurance for formal-sector private employees1. The Thai UHC focuses on promoting primary healthcare with an emphasis on disease prevention and health promotion, and these are also in line with new sustainable development goals2. Additionally, the facilities of the healthcare purchaser, the National Health Security Office (NHSO), and the provider, the Ministry of Public Health (MOPH), were separated following the introduction of the UHC. With this split, the NHSO holds more than half of the total health budget, while the MOPH owns more than 80% of government health facilities.\n\nThe UCS requires its beneficiaries to register in a catchment area at the contracting unit for primary care (CUP). In general, a CUP includes one district hospital and several health-promoting hospitals or primary care units (PCUs). District hospitals, which are staffed with physicians, nurses and other allied health professionals, offer both primary and secondary care services. On the other hand, health-promoting hospitals, which are staffed by nurse practitioners and public health officers, only provide primary care, community services, health promotion and disease prevention services1. The NHSO allocates the budget for ambulatory services on a prepaid capitation basis, i.e. 
a fixed rate per population registered to each CUP, which is equivalent to 90 USD per capita. According to the National Health Security Act, the NHSO is entitled to provide contracts to qualified CUPs3; however, the purchaser has limited choices, especially in rural areas where MOPH facilities are predominant.\n\nIn 2010, the NHSO introduced the first pay-for-performance (P4P) programme, called on-top payment, in order to reduce variations in quality and accessibility of care provision by encouraging CUPs to improve infrastructure and staffing4. Since this programme was criticised for not clearly contributing to the quality of services and health outcomes, it was replaced by the Quality and Outcomes Framework (QOF) in October 20135. As a P4P initiative, the QOF incentivises health providers to improve primary care quality in key predetermined areas, namely (i) health promotion and disease prevention; (ii) primary healthcare services; (iii) organisational development and management; and (iv) services targeted to local need (Figure 1). There are two types of QOF indicators: 1) core indicators, used at the CUP level throughout the country, and 2) local indicators, developed by regional health boards, consisting of NHSO and MOPH senior officers at Provincial Health Offices and regional, provincial, and district hospitals. The core indicators comprise nine quality measures in five key primary care services, including maternal and child health, cervical cancer screening, management of asthma, diabetes and hypertension, and the structure of primary care organisational development, e.g. the percentage of people who have access to a physician. Some of these indicators, such as the percentage of pregnant women that received antenatal care before 12 weeks of gestation, are also adopted by the MOPH as a key performance indicator (KPI) for monitoring and evaluating the service delivery in its health facilities. 
However, some of the QOF indicators, such as the percentage of diabetes patients admitted to the hospital due to short-term complications from diabetes, are not included in the MOPH’s KPI list. Besides, this initiative allows regional health boards to develop regional indicators with the aim of decentralising decision-making power, motivating participation of local actors, and addressing local health problems and health delivery factors. As a result, the numbers and sets of indicators differ across the 13 regions of the country.\n\nQOF = Quality and Outcomes Framework, OP = Out Patient, PCU = Primary Care Unit.\n\nThe Thai QOF was implemented in 1,293 CUPs country-wide. Achievement against each indicator is calculated on an annual basis using national patient care databases and given a point value. In principle, the total points achieved are then converted into financial value, which is allocated to CUPs. In the first year, the NHSO disseminated QOF details e.g. indicators and points through its regional offices, regional health boards, and CUPs. Then, the NHSO requested each regional health board to develop regional QOF guidelines together with local indicators. Once providers delivered services, they recorded such service provision in the existing MOPH database. Thereafter, the information was transferred to the provincial data centre, which manages the data and submits it monthly to the MOPH (Figure 2). Subsequently, the NHSO extracts selected fields from the database and analyses the data related to core QOF indicators, with the analysis details reported back to the regional health boards. Meanwhile, local indicators are collected and analysed locally at the regional level. To fund this, the NHSO allocates the QOF budget to the regional health boards according to the number of people registered to the health facilities in the region and requests the regional health boards to allocate the payment to the CUPs. 
The NHSO considers this arrangement reasonable, given that the QOF budget was part of the budget for ambulatory services.\n\nPCUs = Primary Care Units, MOPH = Ministry of Public Health, NHSO = National Health Security Office, CUPs = Contracting Units for Primary care, QOF = Quality and Outcomes Framework.\n\nOne year after the QOF implementation, key stakeholders at the national, regional, and peripheral levels raised concerns about the mismanagement of the scheme and highlighted the need for improvement in different aspects of the programme. This study was commissioned by the NHSO to review the first year of the Thai QOF programme, with a focus on key features of the policy formulation and implementation stages. These include the policy design, implementation gap, impeding factors, and health providers’ perceptions towards the policy. This paper describes the results of this evaluation as well as the lessons learned and implications for the QOF in Thailand.\n\n\nMethods\n\nA qualitative study was performed, including a review of literature, in-depth interviews, and focus-group discussions. Data were collected between June and August 2015.\n\nDocument review. Key international publications and unpublished research reports related to the UK QOF and relevant documents related to the Thai QOF, including published literature, research reports, and health policy documents, were reviewed. International publications were searched in PubMed (https://www.ncbi.nlm.nih.gov/pubmed/). The search was conducted using key words, including ‘QOF’ and ‘quality indicators’. Papers published between January 1, 2004 and July 2015 were considered. Unpublished research reports were identified by UK researchers (University of Birmingham). The NHSO and its regional offices offered relevant documents in Thai, including published literature, research reports, and health policy documents.\n\nKey informant interviews. 
In June 2015, in-depth interviews were conducted with 11 key informants using semi-structured interview guides (Supplementary File 1). Purposive sampling was performed; the informants were recruited according to the criteria that they were responsible for formulating QOF policy or managing the programme at the national and regional levels. Recruitment was performed under the supervision of the QOF programme managers in the NHSO. The informants included five executives and two programme managers at the national level, and three executives of NHSO offices and one programme manager at the regional level (Table 1). All QOF managers at the national level were interviewed. However, at the regional level, we categorised 13 NHSO regional offices into 5 groups. The first group, Bangkok, was selected due to differences in context, such as population density and mobility, patients’ help-seeking behaviours, and lifestyle, as well as its primary care system. Therefore, Bangkok has implemented different QOF indicators, payment criteria and management strategies. Regional offices in the south, north, north-eastern and central parts of Thailand were randomly selected (1 office per region) for interviews.\n\nNHSO = National Health Security Office, QOF = Quality and Outcomes Framework.\n\nFocus group discussions. Two separate focus group discussion sessions were convened in July 2015 (one with 7 representatives and the other with 8). A researcher (ST) led the discussion. The discussions were guided by sets of predefined questions covering the benefits of the QOF to health facilities, the relationship between QOF performance, scores and payment, QOF payment allocation, barriers, and suggestions to improve the QOF programme. Purposive sampling was performed. The recruitment of participants was carried out in two steps. First, two provinces in each region (12 regions, excluding Bangkok Metropolitan) with the highest and lowest QOF scores were selected. 
Second, different types of health facilities (e.g. district hospitals and health-promoting hospitals), as well as provincial and district health offices from the 24 provinces, were selected based on consultations with NHSO staff in the regions. As such, 24 representatives from these organisations, who had been involved in the QOF introduction either as supervisors or primary care providers, were invited to participate in the meetings. Fifteen informants agreed to participate (Table 2).\n\nInterviews and focus group discussions were audio-recorded, transcribed, and analysed using qualitative content analysis, a research method for the interpretation of the content of text data through the systematic classification process of identifying themes or patterns6. A single researcher read through the transcripts repeatedly to derive the key issues of the text data, which were then sorted into categories and themes based on how different issues were related and linked. After that, the research team discussed the emerging categories, made themes, and changed categories where appropriate.\n\nTwo consultation meetings were organised in July 2015. The aims of these meetings were for the research team to check that key information had been collected and to verify preliminary findings. The first meeting involved 27 key stakeholders at the national level, including policy makers from the MOPH, researchers from universities, as well as representatives from the NHSO and its regional offices. At the second meeting, 31 participants attended, including representatives from the NHSO, provincial and district health offices, health facilities, non-governmental organisations, and patient groups. Six informants in this study attended the consultation meetings.\n\n\nResults\n\nFour major themes emerged from our analysis of the data collected in this study: policy formulation, programme implementation, problems with QOF implementation, and provider perceptions on the QOF programme. 
These are described in detail below.\n\nAlthough P4P policies were implemented in many countries with all types of income levels (high, middle, low), the Thai QOF was based on the UK model, since the NHSO believes that the UK QOF is the most famous and is implemented on the largest scale. In addition, Thai policy makers are familiar with the UK UHC model due to several staff exchanges and study visits. Nevertheless, Thai authorities were aware of significant differences in the health delivery systems of the two countries, and also anticipated the poor performance of the Thai QOF as a result of health system limitations, such as insufficient human and financial resources, unreliable databases, and conflict in some policy areas between the MOPH and the NHSO. Importantly, the newly-established initiative had never been piloted. Despite this, the UCS executives maintained that they were confident in introducing this policy since the NHSO, health providers, and officers had gained experience in UCS management from the earlier stage of the P4P programme. Furthermore, QOF managers argued that they adopted a ‘learning by doing’ approach, meaning that different stakeholders could learn through their experiences and adapt to the programme accordingly. The QOF managers also perceived the policy implementation as a capacity strengthening exercise for NHSO staff and providers’ networks.\n\nRegarding the development of quality indicators, the NHSO formulated indicators and associated policy without consultation from key stakeholders, such as the MOPH, health providers, and professional organisations. However, personal consultation with respective experts in the Ministry was carried out for some indicators. 
Two broad criteria were used to select indicators: 1) an indicator would not increase workload on data entry and reporting because the performance could be measured by using data in the existing MOPH database; and 2) the focus of indicators was placed on the performance at CUP level, as opposed to the outputs and outcomes of each health facility.\n\nThe NHSO did not start to disseminate QOF information to its regional offices until October 2013, which was the planned start date for the programme. After this initial dissemination, regional health boards were required to develop local indicators and criteria for the QOF budget allocation. As pointed out by some informants, this resulted in a delay of approximately three to four months in the development of the regional guidelines and its dissemination to respective institutes, including primary care providers. As such, the QOF could not begin until January 2014 in some regions. Aiming to allocate the QOF budget to all CUPs by September 2014, the NHSO decided to use data that the health facilities submitted to the MOPH between April 2013 and March 2014 to analyse QOF performance for the core indicators. Hence, the first-year QOF was based on the information during that period because it took some time for preparation, and the programme began later than expected. This means that the measurement in the first year of the QOF was mainly based on the performance of approximately 9 to 10 months before the providers had been informed about the QOF indicators.\n\n“… QOF performance was measured based on the data of the 3rd and 4th trimester of last year [April to September 2013] and 1st and 2nd trimester of current year [October 2013 to March 2014]. The scores were not associated with current work performances. 
Additionally, the measurement did not align with the fiscal year, increasing difficulties in creating work plans.” (Health practitioner).\n\nAs mentioned above, two disparities between the NHSO’s QOF and MOPH’s KPI exist: 1) there are different indicators; and 2) the same indicators have different templates and timeframes for data entry and measurements. These two factors thus affected the planning and working process of data submission from health facilities.\n\nKey informants asserted that QOF monies were not consistently allocated according to provider achievement, and high scores might not be associated with high quality primary care delivery due to two reasons. First, some local indicators did not involve healthcare quality or the performance of health providers, e.g. the amount of research conducted annually in particular facilities, and the use of data from IT systems to inform health service planning and delivery. Second, the criteria for QOF payments in different regions were not standardised, as the regional health board had discretionary power to make decisions.\n\nAn analysis by the NHSO suggested that the QOF budget was allocated according to the QOF scores in only 6 out of 13 regions5. In the other regions, the QOF scores were taken into account together with other criteria, such as population size and deprivation level of the catchment area (e.g. hardship and conflict areas).\n\n“The allocation of the QOF budget was based on the financial status of the CUP. In practice, the amount depended on the performance of the health facilities in the CUP. 
I thought that explicit criteria or guidance for the CUP were necessary.” (Health practitioner).\n\n“In my region, the regional health board agreed to allocate the QOF budget based on [the] number of people registered in the area (40%), and QOF scores (60%).” (Regional NHSO officer).\n\nDuring the first year of the Thai QOF programme, the crucial impeding factors were the inherent conflicts between the NHSO and MOPH, and weaknesses in the existing IT system. Consequently, there was miscommunication and inadequate coordination between healthcare workers, MOPH and NHSO staff, as well as inaccurately-reported data, lack of capacity for data entry and management, and errors in data transfer from peripheral offices to the national authority.\n\nAs health workers pointed out, the QOF was perceived as an NHSO-owned initiative, while the MOPH had developed its own policies to deal with priority health problems and improve primary care services. The lack of engagement between the NHSO and MOPH in the policy formulation phase resulted in miscommunication, culminating in misinformation and inadequate coordination for the translation of QOF policy into action. The focus group discussions made it quite apparent that senior health officers in the MOPH and its central departments were not aware of the NHSO QOF. Although some MOPH general inspectors acknowledged the introduction of the QOF in their regions, they paid attention only to the performance of healthcare delivery according to the MOPH’s direction and associated indicators. Hence, technical supervision and administrative guidance provided by the inspectors for QOF-related activities in most regions were limited.\n\nThe other critical problem that surfaced in the QOF’s introduction was the unreliable IT system, resulting in inaccurately-reported data on service delivery. Interviewees asserted that the capacity for data entry and management at CUPs, PCUs, and district health offices was inadequate. 
Moreover, interviewees in different institutions argued that health workers in some settings intentionally made up data in order to gain high QOF scores and payments. The worst case, as discussed during a focus group meeting, involved attempts at data manipulation.\n\n“Some practitioners entered data accurately, while others intentionally made up data. So, they [the data] did not really reflect the true performance.” (Health practitioner).\n\n“I had a field visit at a province. In this province, they [provincial health officers] were not concerned about under-recording, but over-recording. This was really happening.” (Data manager of a CUP).\n\nInformants also reported technical errors in data transfer from peripheral offices to the national authority.\n\n“I did not realise what was going on. Our performance last year was zero for 5 indicators. Moreover, the OP visits were marked zero. I believe that the problems originated from the data linkage system between our hospital and the regional health data centre.” (Health practitioner).\n\n“Once we submitted our data to the health data centre, we found that our performance did not meet the indicator targets. Only 5 indicators, related to service provision, passed the assessment. When comparing with other health facilities, they also passed approximately 5–6 indicators, which I did not believe was representative of service delivery. I thought the issue was the database.” (Health practitioner).\n\nThe poor reliability of data has also been illustrated by comparing results from different databases and surveys on certain indicators. Table 3 shows that the coverage of some primary care services as QOF indicators significantly differed from the findings of a survey conducted by Mahidol University and a report from the MOPH during the same period, as suggested by a published document of the NHSO7. 
Owing to the limited capacity of the IT system, health personnel in PCUs could not obtain feedback on their performance from the provincial data centres. Without such information, it was difficult for PCUs to confirm the accuracy of their QOF scores and payments with CUP managers.\n\nTwo indicators were highlighted, due to the availability of the data from other sources to make comparisons.\n\nSources of information: (1) A Survey conducted by Mahidol University 2014, and (2) A Report of the MOPH, 2014, adapted from a published document by the NHSO7.\n\nMost interviewees in this study agreed with the QOF policy principle that monetary incentives would be effective in enhancing service quality in the Thai UHC context where financing and other resources are scarce. Some also pointed out that the policy helped strengthen teamwork among district hospitals and health-promoting hospitals in terms of patient transfer to receive proper care at the secondary level. They stated that in order to be high achievers, health facilities in a particular CUP would need to cooperate, set service delivery plans together, and improve their referral system. However, besides the problems arising from the MOPH-NHSO conflict and unreliability of data, many weaknesses and associated implementation gaps were observed. First, health workers perceived that the programme was managed in an unfair manner. Given that the QOF budget was deducted from the annual capitation payment for ambulatory care, the policy was not justified as primary care providers did not receive full subsidisation of their service delivery.\n\n“The budget for the QOF programme was deducted from the OP/PP budget [budget for out-patient/health promotion and disease prevention services]. It should have been from other sources. 
The OP/PP budget is actually aimed at subsidising service deliveries, so it isn’t fair for health providers [since they may receive less money than they had received previously and this may not be sufficient for providing ambulatory services].” (Health facility director).\n\n“…the calculation of the budget for ambulatory services was based on the number of people registered in the catchment areas. Allocation of this budget based on the performance of health providers isn’t fair. The allocation should have been based on the same principle [payments based on the number of registered population].” (Regional NHSO officer).\n\nAdditionally, providers perceived that allocating a budget based on performance was unfair because the NHSO collected a proportion of the budget for ambulatory services, and allocated this on the basis of each CUP performance instead of the registered population. While the total amount remained the same, CUPs with high QOF scores were able to receive higher QOF payments, thereby taking a proportion of other CUPs’ budgets in the same region.\n\n“It [QOF budget allocation] was unfair… [and] inappropriate because a province that achieved high QOF scores would receive a larger budget and [would be] taking from other provinces’ UC budget.” (Health practitioner).\n\nAs also perceived by hospital administrators and health workers, the unfairness was exacerbated when the allocation of the QOF budget did not rely on actual quality improvement in service delivery at the CUP and PCU levels. Furthermore, there was perceived prejudice in performance measurements owing to the fact that some QOF indicators and targets were relatively difficult to achieve in certain circumstances, e.g. 
providing screening tests for non-communicable diseases in an area with a large population, urbanised culture and lifestyle, and high rate of labour migration to other areas.\n\nSecond, inefficient transfer of information from policy makers to programme managers and practitioners was emphasised by interviewees as an important drawback of the Thai QOF. Traditional, bureaucratic communication approaches that involved lengthy official documents and formal, face-to-face meetings with executives crucially impeded the dissemination of messages. As maintained by some informants from PCUs, they did not have an in-depth understanding about the QOF because CUP representatives who attended the regional meetings did not relay the obtained information. It was also argued that the information deficit resulted in poor attitudes and non-compliance with the policy among service providers.\n\nThird, most QOF indicators, such as coverage of screening or occurrence of complications as a result of poor disease management, were designed to measure the performance at the CUP level, even though district hospitals and health-promoting hospitals are different entities with different financial management. As such, the QOF budget received by the CUP needs to be divided among health facilities. During the interviews and focus group discussions, it was evident that without guidance for QOF budget allocation within the CUP, staff at health-promoting hospitals found it difficult to negotiate a fair share of the budget with the directors of district hospitals and perceived that they were the worse off under this programme. Although some CUPs demonstrated the improved collaboration expected from the QOF implementation between district hospitals and health-promoting hospitals, other CUPs witnessed conflicts between the two entities on account of the QOF budget allocation.\n\n“It was not told how much each indicator would deliver. 
So it was not sure how the budget should be further allocated to the health-promoting hospitals. We had to find [a] consensus on explicit criteria for budget allocation in order to avoid conflicts.” (District health officer).\n\n“This method of allocation [allocation of the QOF budget through CUP] caused disputes among the health facilities because there were no explicit criteria. Each CUP board could create their own way to allocate the budget.” (Health practitioner from a health-promoting hospital).\n\n\nDiscussion\n\nAs with many health insurance schemes, there has been a strong political will and commitment within the NHSO to improve the quality of primary care under the Thai UCS8. P4P was chosen by the NHSO to improve service quality and reduce the variation in performance of primary care providers. Moreover, this initiative, if introduced properly, may be effective in improving the governance of resource allocation, as it monitors and evaluates the performance of CUPs in terms of inputs, processes, and outcomes. However, this study suggests that the processes of QOF policy formulation did not follow the principles of evidence-based and participatory policymaking, which has long been embedded in Thai health systems, including the NHSO’s policy decisions9. The selection of QOF indicators as a crucial policy instrument is an appropriate illustration. The initial selection of indicators by the NHSO without external stakeholder engagement is not unique, but other countries that have implemented P4P have evolved the indicator development and evaluation process to be more participatory. For example, the National Institute for Health and Care Excellence (NICE) has led the indicator development and evaluation process for the QOF on behalf of the NHS in the United Kingdom since 2008. 
They utilise a systematic process involving experts in respective disciplines, NHS managers, and practitioners in order to ensure that these indicators and measurements are technically robust, effective in quality improvement, and well-accepted by key stakeholders10. In contrast, the Thai QOF development did not follow an explicit process for indicator development; it was based on non-technical criteria and informal consultation with a limited number of stakeholders. Furthermore, it is not clear whether regional health boards took into account the aims of the QOF while developing local indicators, as some indicators were not relevant for measuring the quality of primary care.\n\nBesides relevant performance indicators and measurement methods, reliable databases on service delivery and civil registry are necessary for the P4P model. This analysis suggests that the health-related IT systems and databases were the weakest component of the Thai QOF programme. This was compounded by the lack of capacity and technical expertise of personnel in different cadres at the country and peripheral levels. An on-going study for the development of QOF indicators in Thailand reveals various types of inaccurate information contained in the MOPH databases11. Since health information and data are considered building blocks (http://www.wpro.who.int/health_services/health_systems_framework/en/) of the health system, it is likely that this weakness also affects a broader range of initiatives managed by the MOPH and NHSO. One example of an unsuccessful introduction of computerised information systems in Thailand is the abandoned Telemedicine Network launched in 1998; it failed because of the lack of IT skills among health professionals, the low level of system acceptance among users, and the rapid changes in the IT system12,13. In addition, a study on the QOF in Switzerland14 revealed that an incomplete database was unable to reflect the quality of healthcare delivery. 
Furthermore, different studies suggest that healthcare workers should be able to perform proper data entry, and these data should be used as feedback for improving the quality of healthcare delivery14–16.\n\nIn the views of QOF managers and some practitioners, the Thai QOF should be able to strengthen collaboration among district hospitals and health-promoting hospitals. District hospitals should also provide support to health-promoting hospitals in terms of health personnel, medicines, medical devices, technical support, and quality control of services. This is similar to the findings of a systematic review by Gillam et al., 201217, which indicated that the UK QOF strengthened teamwork among practitioners in the health facilities. The improved cooperation within the CUP asserted by some interviewees was not clearly evident in this evaluation. Ironically, however, some informants mentioned inadequate coordination between district hospitals as CUP leaders and PCUs, especially in terms of financial resource allocation. This may be explained by several reasons. First, it was the first year of programme implementation and the leaders and staff of health facilities within the same CUP were still learning and adapting to the programme; therefore, collaboration may improve in the future. Second, health-promoting hospital staff may have fewer incentives than staff in district hospitals to cooperate with district hospitals because the QOF budget is allocated directly to the district hospital (as CUP’s main contractor), while health-promoting hospitals have no negotiating power with the district hospital. 
Third, a lack of central guidance as to how the incentives should be allocated led to inconsistent QOF payment allocation from the CUPs to health-promoting hospitals.\n\nA key success mechanism of P4P implementation to improve quality of healthcare is that health providers are well aware of and appreciate the incentives offered, and have the ability to make behavioural changes or strengthen their service delivery capacity in order to achieve predetermined performance targets18. In the Thai QOF, however, due to the delay in the development of the indicators, inadequate policy communication, and IT issues, it was very difficult for providers to improve their performance in such a short period of time. The QOF score was based on performance delivered 8–9 months before the indicators were issued by the NHSO and the regional health boards. During this period, the providers were unaware of the policy, including its incentives, and thus it is unreasonable to expect any improvement. Arguably, performance measurements in the second year should be more justifiable than in the first year because providers already know what the core indicators are, whereas local indicators can be changed annually depending on the negotiations in the regional health boards. This case study reflects the importance of setting up and keeping an appropriate timeline for the implementation of P4P programmes. In other settings, such as the UK QOF, the programme can aim to increase providers’ performance by establishing a reporting system that provides on-going feedback about achievement in real time. The UK QOF has also sought to incorporate the quality improvement potential of reputation incentives in parallel with financial ones through the public reporting of QOF achievements19. 
However, both of these systems rely on a robust and reliable IT system, and it may not be possible to introduce them into the Thai QOF unless the NHSO overcomes its IT problems.\n\nThis study contributes to the public policy literature as it provides empirical evidence from a developing country’s health system for existing policy analysis models and theories. We learn that the UK QOF was transferred to the Thai setting with significant adaptation during the formulation stage, including indicator development, point value determination, budget, management, feedback, and the database for the QOF. This resulted in great differences from the prototype. As pointed out by Dolowitz and Marsh20, the transferred elements can be ideologies, interventions, or administrative arrangements. In Thailand’s QOF, the transfer of the P4P concept and principles was relatively effective compared to the learning of instrumental details and programme structure, since the latter elements were influenced by the political and health delivery context. Such contextual factors, as well as the capacity for programme management and primary care provision at the subnational level, played a key role in the QOF implementation, and possibly the outcomes too. According to the implementation model, in order to meet new policy goals, government service providers usually seek to obtain the necessary information to guide policy execution, as well as draw lessons from several sources21. As such, policy learning on the QOF introduction might also take place locally, by exchanging knowledge among practitioners and drawing lessons from past experiences, such as the introduction of the earlier phase of P4P for infrastructure and human resource development.\n\nThe findings of this study should be interpreted in light of certain limitations. The study relied heavily on stakeholders’ interpretations and perceptions of the QOF. 
Despite the efforts to triangulate the information, researchers were not able to interview most policy makers from the MOPH, including executives and General Inspectors. However, a small number of these policy makers were able to participate in a stakeholder consultation meeting, and their views are crucial because they play an important role in the policy formulation and implementation process. Further study to evaluate the health outcomes accrued as a result of the Thai QOF is recommended. It should be noted that such an evaluation may require a longer time frame and different study designs to detect an impact. Furthermore, studies on other aspects of the QOF are needed, such as costs, cost-benefit, value for money, and the contribution of the QOF to primary care improvement.\n\n\nConclusions\n\nAlthough there were impediments to introducing the Thai QOF programme, the programme benefited from strong political will and commitment within the NHSO to improve the quality of primary care under the Thai UCS. Lessons learned from the current Thai QOF will be useful for policy makers and programme implementers at both the national and international levels in ensuring effective policy transfer and implementation, not only for similar P4P programmes, but also for other public health initiatives.\n\n\nEthical approval and consent\n\nEthical approval for this study was obtained from the Institute for the Development of Human Research Protection (http://www.ihrp.or.th/), Thailand (document number, 970/2558). Written informed consent was obtained from the participants.\n\n\nData availability\n\nTranscriptions of recorded interviews (in Thai) with key informants, and focus groups are available from Open Science Framework (https://osf.io/wqvfd/) DOI: 10.17605/OSF.IO/WQVFD22.",
"appendix": "Author contributions\n\n\n\nST analysed and interpreted the findings, and provided comments and feedback on the manuscript. RK collected data, interpreted results and was a major contributor in writing the manuscript. YT supervised the study, provided guidance on the outline and provided feedback and suggestions on the manuscript. SC, JP, RB, NM collected data and provided support on the manuscript writing. FC, RF, and PG supervised the overall process of the study and provided feedback and suggestions on the manuscript. All authors read and approved the final manuscript.\n\n\nCompeting interests\n\n\n\nThe NHSO funded the project. The funder had no role in study design, interpretation of data, or writing the manuscript.\n\n\nGrant information\n\nThis project was funded by the National Health Security Office, Thailand (grant number, 59A00218). The Health Intervention and Technology Assessment Program (HITAP) is funded by the Thailand Research Fund (TRF) under a grant for Senior Research Scholar (RTA5980011). The findings, interpretations and conclusions expressed in this article do not necessarily reflect the views of the aforementioned funding agencies.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThe authors would like to express deep gratitude and sincere appreciation to the NHSO for the funding support for this project, Dr. Choochai Sornchumni, Dr. Jakkrit Ngowsiri, Dr. Anuwat Supachutikul and the Steering Committee for their supervision, Ms. Bumrung Chalodech, and the staff at the NHSO for their administrative assistance during data collection. 
Additionally, we would like to express our special thanks to all informants for their contribution to this study.\n\n\nSupplementary material\n\nSupplementary File 1: Interview guides for QOF managers at national level, QOF managers at regional level, provincial health officers, and health care providers.\n\n\nReferences\n\nTangcharoensathien V, Limwattananon S, Patcharanarumol W, et al.: Achieving universal health coverage goals in Thailand: the vital role of strategic purchasing. Health Policy Plan. 2015; 30(9): 1152–61. PubMed Abstract | Free Full Text\n\nPettigrew LM, De Maeseneer J, Anderson MI, et al.: Primary health care and the Sustainable Development Goals. Lancet. 2015; 386(10009): 2119–21. PubMed Abstract | Publisher Full Text\n\nNational Health Security Office: National Health Security Act B.E. 2545 (A.D. 2002). Nonthaburi; 2002. Reference Source\n\nNational Health Security Office: Guidance on the operation and management of the on-top payment policy. Nonthaburi; 2011.\n\nNational Health Security Office: Budget allocation for QOF in 2014. Nonthaburi; 2014.\n\nHsieh HF, Shannon SE: Three approaches to qualitative content analysis. Qual Health Res. 2005; 15(9): 1277–88. PubMed Abstract | Publisher Full Text\n\nNational Health Security Office: Guidance on QOF indicators in 2016. Nonthaburi; 2015.\n\nHealth Insurance System Research Office: Thailand’s Universal Coverage Scheme: Achievements and Challenges - an independent assessment of the first 10 years (2001–2010). Health Insurance System Research Office: Nonthaburi; 2012. Reference Source\n\nYoungkong S, Baltussen R, Tantivess S, et al.: Multicriteria decision analysis for including health interventions in the universal health coverage benefit package in Thailand. Value Health. 2012; 15(6): 961–70. 
PubMed Abstract | Publisher Full Text\n\nCampbell SM, Kontopantelis E, Hannon K, et al.: Framework and indicator testing protocol for developing and piloting quality indicators for the UK quality and outcomes framework. BMC Fam Pract. 2011; 12: 85. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKhampang R, Pattanaphesaj J, Butchon JR, et al.: Developing health care quality indicators and improving the QOF program for the Thai Universal Health Coverage – Part 2. Health Intervention and Technology Assessment Program. Nonthaburi; 2016. Reference Source\n\nKijsanayotin B, Speedie S: Are health centers in Thailand ready for health information technology?: a national survey. AMIA Annu Symp Proc. 2006; 424–8. PubMed Abstract | Free Full Text\n\nKasitipradith N: The Ministry of Public Health telemedicine network of Thailand. Int J Med Inform. 2001; 61(2–3): 113–6. PubMed Abstract | Publisher Full Text\n\nDjalali S, Frei A, Tandjung R, et al.: Swiss quality and outcomes framework: quality indicators for diabetes management in Swiss primary care based on electronic medical records. Gerontology. 2014; 60(3): 263–273. PubMed Abstract | Publisher Full Text\n\nLee JY, Lee SI, Jo MW: Lessons from healthcare providers’ attitudes toward pay-for-performance: what should purchasers consider in designing and implementing a successful program? J Prev Med Public Health. 2012; 45(3): 137–147. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOlafsdottir AE, Mayumana I, Mashasi I, et al.: Pay for performance: an analysis of the context of implementation in a pilot project in Tanzania. BMC Health Serv Res. 2014; 14: 392. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGillam SJ, Siriwardena AN, Steel N: Pay-for-performance in the United Kingdom: impact of the quality and outcomes framework: a systematic review. Ann Fam Med. 2012; 10(5): 461–8. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nEijkenaar F: Key issues in the design of pay for performance programs. Eur J Health Econ. 2013; 14(1): 117–31. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRoland M, Dudley RA: How Financial and Reputational Incentives Can Be Used to Improve Medical Care. Health Serv Res. 2015; 50(Suppl 2): 2090–115. PubMed Abstract | Publisher Full Text\n\nDolowitz DP, Marsh D: Learning from abroad: The role of policy transfer in contemporary policy-making. Governance. 2000; 13(1): 5–23. Publisher Full Text\n\nSchofield J: A model of learned implementation. Public Admin. 2004; 82(2): 283–308. Publisher Full Text\n\nRoongnapa: “QOF_Thailand.” Open Science Framework. 2016. Data Source"
}
|
[
{
"id": "17819",
"date": "13 Dec 2016",
"name": "David Hughes",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nWhile Thailand’s UHC reforms have attracted considerable international attention and praise, primary care has remained one of the more problematic aspects of the healthcare system because of concerns about staffing and quality. The English-language literature remains thin when it comes to the strategies being used to lever quality improvements in Thailand, especially the use of performance management techniques. The experience of both the P4P initiative and the introduction of QOF and the associated incentives described in this paper will therefore interest international policy actors and scholars, as well as satisfying the NHSO’s evaluation needs.\nOne of the strengths of this paper is the focus on implementation and the attempt made to empirically investigate the impediments to rolling out the scheme as designed. The specifics of this implementation story are interesting with past outcomes initially used as a proxy for real performance against targets, poor coordination between the MoPH and NHSO, weaknesses in IT systems and data capture, and limited centre-periphery communication among the noted problems. Tension between the MoPH and NHSO is a persistent theme in research studies of the Thai UHC reforms, and comes up here in the form of a mismatch between MoPH KPIs and NHSO QOF indicators. 
We do not have a large stock of such implementation case studies in developing countries, so this is an area where the paper makes a very useful contribution.\nIn general the paper is logically structured, using appropriate subheadings. The qualitative analysis remains at a fairly basic level, as I think is dictated by the constraints of the shorter papers published in this journal. The authors use the simple but effective strategy of illustrating propositions with a series of short data extracts. There are interesting data on what front-line actors think about the fairness of the QOF initiative and the unresponsiveness of the traditional Thai bureaucratic system (especially as manifest in the MoPH hierarchy).\nOverall I think the paper works well for this journal and would suggest only minor amendments.\n1. I felt that the Introduction could have been slightly clearer about one or two points and that a claim about staffing needed double checking.\n\nTo align with other work I suggest ‘Social Security Scheme’ rather than ‘Social Health Insurance’ for workers in formal employment. The NHSO ‘is entitled to provide contracts to qualified CUPs’ from both the public and private sectors (this seems the point the authors wanted to make here). It is my understanding that some sub-district health promoting hospitals are supported by doctors, usually working on a rotation basis. Is this incorrect?\n\n2. Under Methods it is worth clarifying what the two consultation meetings involved. What was done seems similar to what qualitative researchers term ‘respondent validation’ or ‘member checking’. The participants were not the same group as the focus group participants, but – as I understand it – were asked to assess the face validity or plausibility of the initial findings. This was then fed back into the final analysis. I’m not sure if my interpretation is right though, and an extra sentence or two would help clarify this.\n3. 
The meaning of the term ‘civil registry’ is not clear to me. Does it mean registration data used to determine the CUP’s beneficiary population, based on members enrolled using house registration documents (tabien baan)?",
"responses": []
},
{
"id": "19815",
"date": "31 Jan 2017",
"name": "Stephen J. Gillam",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nContext – The National Health Security Office (NHSO) in Thailand introduced a pay-for-performance programme in 2013 based on the UK’s Quality and Outcomes Framework (QOF) as a requirement of all primary care providers (via the MOPH). Presumably due to space constraints, only limited detail of the programme and the make-up of CUPs was provided. The criteria shown were process measures. Are intermediate outcomes (of greater interest to primary care teams) measured at local level? Unsurprisingly, introduction of the programme was attended by various managerial, technical and logistical problems.\n\nAim – ‘To review the first year of the Thai QOF policy, focussing on the key features of its formulation and implementation stages’.\n\nMethods – These are described in limited detail. Though purporting to involve a literature review, this is not described in detail and would anyway serve as background to what is a qualitative study based on eleven interviews and two focus groups. The UK QOF is the largest P4P scheme of its kind and has been extensively investigated. There does not seem to be any obvious link between the findings of the review which would have generated multiple criteria for further investigation (e.g. on what makes a ‘QOFable’ indicator, how to engage providers, IT requirements).\n\nIt is not clear who conducted the interviews. It appears that thematic analysis was largely undertaken by one author. 
Informants appear to have been limited to managers at regional and national level. Why were local providers not involved?\n\nA more comprehensive review that met the study’s aims might have asked after clinicians’ experiences and how these new financial incentives affected clinical practice. Were no meaningful before-and-after activity data available?\n\nResults – The QOF is a scheme of enormous complexity and it is hard to imagine that data collection was saturated after only 11 interviewees. The findings detailed a series of unfortunate programme failings: insufficient prior consultation, with providers and other stakeholders, lack of human and financial resources, unreliable databases and IT, poor design of financial incentives. These can be summarised in one phrase: inadequate preparation. No prior pilot is described and no quantitative data indicating coverage or attainments in relation to different indicators are provided.\n\nPresentation – The paper is clearly written and would be accessible for an international audience.\n\nImplications – As the authors might have stated (based on systematic reviews of P4P), there are many pre-requisites for the successful implementation of P4P in primary care. The Thai context appears to have met few of these requirements and the risk of system failure was (and presumably remains) high. Indeed, our own review concluded that further development of such schemes would better await further research. The UK scheme is being dismantled with little evidence of lasting health gain attributable to the programme.\n\nIn summary, I do not feel the paper is suitable in its present form. This reads rather as operational review, albeit detailed, than research. The findings are nevertheless important and the authors are to be commended for their thoroughness. The findings might better be submitted in the form of a more descriptive policy piece. 
Certainly this work could be of benefit to those designing and implementing policy in Thailand and comparable settings.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2700
|
https://f1000research.com/articles/5-2689/v1
|
17 Nov 16
|
{
"type": "Research Article",
"title": "YeATSAM analysis of the walnut and chickpea transcriptome reveals key genes undetected by current annotation tools",
"authors": [
"Sandeep Chakraborty",
"Pedro J. Martínez-García",
"Abhaya M. Dandekar",
"Pedro J. Martínez-García"
],
"abstract": "Background: The transcriptome, a treasure trove of gene space information, remains severely under-used by current genome annotation methods. Methods: Here, we present an annotation method in the YeATS suite (YeATSAM), based on information encoded by the transcriptome, that demonstrates artifacts of the assembler, which must be addressed to achieve proper annotation. Results and Discussion: YeATSAM was applied to the transcriptome obtained from twenty walnut tissues and compared to MAKER-P annotation of the recently published walnut genome sequence (WGS). MAKER-P and YeATSAM both failed to annotate several hundred proteins found by the other. Although many of these unannotated proteins have repetitive sequences (possibly transposable elements), other crucial proteins were excluded by each method. An egg cell-secreted protein and a homer protein were undetected by YeATSAM, as these did not produce any transcripts. Importantly, MAKER-P failed to classify key photosynthesis-related proteins, which we show emanated from Trinity assembly artifacts potentially not handled by MAKER-P. Also, no proteins from the large berberine bridge enzyme (BBE) family were annotated by MAKER-P. BBE is implicated in the biosynthesis of several alkaloid metabolites, like the anti-microbial berberine. As further validation, YeATSAM identified ~1000 genes that are not annotated in the NCBI database by Gnomon. YeATSAM used an RNA-seq-derived chickpea (Cicer arietinum L.) transcriptome assembled using Newbler v2.3. Conclusions: Since the current version of YeATSAM does not have an ab initio module, we suggest a combined annotation scheme using both MAKER-P and YeATSAM to comprehensively and accurately annotate the WGS.",
"keywords": [
"RNA-seq",
"transcriptome",
"MAKER-P",
"genome annotation",
"berberine bridge enzyme",
"Trinity",
"walnut genome sequence"
],
"content": "Introduction\n\nThe genome of a particular organism is static in all cells, unlike the dynamic transcriptome, which is the transcription of the gene space into RNA molecules in a fashion responsive to a variety of factors, such as developmental stage, tissue, and external stimuli. RNA-seq, a high-throughput RNA sequencing method, has radically transformed the identification of transcripts and quantification of transcriptional levels (Flintoft, 2008; Wang et al., 2009). It is supported by a diverse set of computational methods for analyzing the resulting data (Chakraborty et al., 2015; Chang et al., 2015; Chu et al., 2013; Fu et al., 2012; Grabherr et al., 2011; Lohse et al., 2012; Mbandi et al., 2015; Schulz et al., 2012; Simpson et al., 2009; Trapnell et al., 2009; Trapnell et al., 2012; Wang et al., 2010; Zerbino & Birney, 2008).\n\nRapid advances in genome sequencing technologies have generated sequences for a deluge of organisms and species. The task of annotating these sequences has been addressed by several flows. These pipelines are categorized in http://omictools.com/genome-annotation-category and http://genometools.org/ and reviewed in (Yandell & Ence, 2012). Here, we focus specifically on MAKER-P (Campbell et al., 2014; Holt & Yandell, 2011; Law et al., 2015; Neale et al., 2014), which was used to annotate the recently published walnut genome sequence (WGS) (Martínez-García et al., 2016).\n\nIn the current study, the YeATS suite (Chakraborty et al., 2015) was enhanced to include genome annotation capabilities using RNA-seq-derived transcriptomes (YeATS annotation module - YeATSAM). First, the Trinity-assembled transcriptome obtained from twenty different tissues was compared to the WGS, excluding transcripts emanating from extraneous sources. This step incidentally revealed both biodiversity and plant-microbe interactions in walnut tree(s) from Davis, California (Chakraborty et al., 2016a). 
The WGS-derived transcripts were split into three open reading frames (ORFs), which were subjected to BLAST analysis using a plant proteome database obtained from the Ensembl database (Kersey et al., 2016). Transcripts can contain more than one significant ORF and must be handled differently depending on whether they map to the same or a different protein. The resulting analysis provided the WGS annotation.\n\nBoth MAKER-P and YeATSAM failed to annotate several hundred proteins annotated by the other. Many of the proteins had repetitive sequences or domains that, although difficult to detect, do not represent critical proteins during annotation. An egg cell-secreted protein (Sprunck et al., 2012), a copper chaperone (Shin et al., 2012), and a clavata3/ESR-Related protein (Kinoshita et al., 2007) were among the proteins not detected through the YeATSAM flow. Some proteins undetected in the MAKER-P flow are more significant in the context of a plant genome: several photosynthesis-related proteins encoded by the chloroplast (Nelson & Yocum, 2006) and the large family of FAD-binding berberine bridge enzymes (BBE) involved in biosynthesis of antimicrobial benzophenanthridines (Cheney, 1963; Winkler et al., 2008). We posited possible reasons for such exclusions and recommend incorporating both flows for comprehensive enumeration of genes in the WGS.\n\nAs further validation, YeATSAM was applied to chickpea (Cicer arietinum L.), an important pulse crop with many nutritional and health benefits (Jukanti et al., 2012). The RNA-seq-derived transcriptome of chickpea has also been sequenced (Garg et al., 2011) and was processed through the YeATSAM pipeline to identify ~1000 proteins that are encoded by these transcripts, but are not annotated in the NCBI database, most of which were annotated using Gnomon (Souvorov et al., 2010).\n\n\nMethods\n\nThe input to YeATSAM is a set of post-assembly transcripts (∅TRS) and the walnut genome sequence (WGS) (Figure 1). 
Transcripts that do not align to the WGS were removed (Chakraborty et al., 2016a). A BLAST database of protein peptides (plantpep.fasta: 1M sequences) using ~30 organisms (list.plants) from the Ensembl genome was created (Kersey et al., 2016). The three longest open reading frames (ORFs), obtained using the ‘getorf’ utility in the EMBOSS suite (Rice et al., 2000), for each transcript in (∅TRS) underwent BLAST analysis (Camacho et al., 2013) to the ‘plantpep.fasta’. For cutoff E-value=1E-8, depending on the number of matches, the transcripts were clustered as:\n\n1. None - either a previously unknown gene or non-coding RNA.\n\n2. One - unique ORF.\n\n3. Multiple ORFs matching to the same gene - merge the ORFs if the Evalue of the combined ORF is significantly lower.\n\n4. Multiple ORFs matching to different genes - duplicate the transcripts, associating each transcript with a different ORF.\n\nFirst, transcripts from extraneous organisms are pruned. Next, the three longest open reading frames (ORFs) from each transcript undergo BLAST analysis to a database of plant peptides. Depending on the number of significant matches, the transcripts are clustered as: (a) None - either a previously unknown gene, or non-coding RNA. (b) One - unique ORF. (c) Multiple ORFs matching to the same gene - merge the ORFs if the Evalue of the combined ORF is significantly lower. (d) Multiple ORFs matching to different genes - duplicate the transcripts, associating each with a different ORF. Subsequently, the ORFs are merged based on overlapping amino acid sequences and exact substrings are removed.\n\nFifteen samples of walnut tissue were gathered from Chandler trees growing in the Stuke block at UC Davis between April and October 2008. Four additional samples were taken from Chandler plant material from the same orchard maintained in tissue culture. Several grams of leaf and root tissue from each plant were frozen in liquid nitrogen and then transferred to a -80 °C freezer. 
RNA was isolated from each sample using the hot borate method (Wilkins & Smart, 1996) followed by purification and DNAse treatment using an RNA/DNA Mini Kit (Qiagen, Valencia, CA) per the manufacturer’s protocol. High-quality RNA was confirmed by running an aliquot of each sample on an Experion Automated Electrophoresis System (Bio-Rad Laboratories, Hercules, CA). The cDNA libraries were constructed following the Illumina mRNA-sequencing sample preparation protocol (Illumina Inc., San Diego, CA). Final elution was performed with 16µL RNase-free water. The quality of each library was determined using a BioRad Experion (BioRad, Hercules, CA). Each library was run as an independent lane on a Genome Analyzer II (Illumina, San Diego, CA) to generate 85bp paired-end sequences from each cDNA library. Over a billion reads were obtained. Prior to assembly, all reads underwent quality control and trimming for paired-end reads using Sickle v1.33 (Joshi & Fass, 2011). The minimum read length was 45bp with a minimum Sanger quality score of 35. The quality-controlled reads were de novo assembled with Trinity v2.0.6 (Grabherr et al., 2011). Standard parameters were used and the minimum contig length was 300bp. Individual assemblies for each library and a combined assembly of all tissues were performed.\n\nThe walnut genome sequence has been released to the public domain (http://ucanr.edu/sites/wgig/). The Illumina (Genome Analyzer II) reads for all 20 tissues can be accessed at http://www.ncbi.nlm.nih.gov/sra/PRJNA232394.\n\nThe transcriptome of Cicer arietinum (transHybrid.fasta, ICC4958; Desi chickpea) was obtained from http://www.nipgr.res.in/ctdb.html (Garg et al., 2011). The dataset ‘represents optimized de novo hybrid assembly of 454 and short-read sequence data.’ About two million 454 reads were assembled using Newbler v2.3 followed by hybrid assembly with 53409 transcripts generated by optimized short-read data assembly using TGICL, as reported previously (Garg et al., 2011). 
The set of annotated proteins from chickpea was obtained from the NCBI database (chickpea.pep.fasta, N=34198).\n\nPHYML v3.0 was used to generate phylogenetic trees from alignments (Guindon et al., 2005). Multiple sequence alignment was done using ClustalW (Larkin et al., 2007) and figures were generated using the ENDscript server 2.0 (Robert & Gouet, 2014). The source code written in Perl is provided as Dataset 1 (YeATSAM.tgz). A README is provided inside the top-level directory for installation and running the programs.\n\n\nResults and discussion\n\nThe input to YeATSAM was ~111K Trinity-assembled transcripts (Combined TrinityFull.fasta) (Figure 1). Each transcript was aligned to the WGS (wgs.5d.scafSeq200+.trimmed) using BLAST (Camacho et al., 2013). Transcripts that did not align to the WGS (cutoff BLAST bitscore=75) were excluded (Chakraborty et al., 2016a). Those transcripts that aligned to the WGS (list.transcriptome.clean: 106K) were split into the three longest open reading frames (ORFs) (list.transcriptome.clean.ORFS: 320K).\n\nA BLAST database of protein peptides (plantpep.fasta: 1M sequences) using ~30 organisms (list.plants) from the Ensembl genome was created (Kersey et al., 2016). The availability of proteomes from related organisms accelerates the annotation. The BLAST results of list.transcriptome.clean.ORFS: 320K on ‘plantpep.fasta’ were processed using a cutoff: bitscore=60, Evalue~=1E-10.\n\nThere are two instances in which ORFs can be merged to create a longer amino acid sequence. The first scenario occurs when a particular transcript has multiple ORFs that match to the same protein with high significance, indicating that a sequencing or assembly error has broken a contiguous ORF (Chakraborty et al., 2015). 
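The merge rule for this first scenario can be sketched in Python (a minimal illustration, not the released Perl implementation; the function name and the `min_log10_gain` threshold are assumptions, since the text only requires the combined Evalue to be "significantly lower"):

```python
import math

def merge_same_protein(orf_a, eval_a, orf_b, eval_b, eval_combined,
                       min_log10_gain=10.0):
    """Merge two ORFs of one transcript that hit the same protein,
    if the combined ORF's E-value is markedly lower than the best
    single-ORF hit; otherwise keep the longer ORF as representative.

    min_log10_gain (orders of magnitude of improvement) is an assumed
    illustrative cutoff, not a documented YeATSAM parameter."""
    best_single = min(eval_a, eval_b)
    # compare on a log10 scale; BLAST E-values here are nonzero
    gain = math.log10(best_single) - math.log10(eval_combined)
    if gain >= min_log10_gain:
        # 'ZZZ' marks the unknown-length gap between the two ORFs
        return orf_a + "ZZZ" + orf_b
    # otherwise the longer ORF represents the transcript
    return orf_a if len(orf_a) >= len(orf_b) else orf_b
```

With the E-values reported for transcript C20727_G1_I1 (9E-70 and 6E-96 combining to 2E-175), the gain is roughly 80 orders of magnitude, so the two ORFs would be merged.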
In total, 5% of the present transcripts (5,000 of 106,000) had two or more ORFs matching the same protein with high significance, mirroring the ~5% error rate seen in transcripts from the tissue at the heartwood/sapwood transition zone in black walnut (Chakraborty et al., 2015). While most of these transcripts contain repetitive elements, some non-repetitive sequences also showed this problem. C20727_G1_I1 is one example: it has two ORFs, ORF_15 and ORF_36, that match a DNA repair metallo-β-lactamase family protein (Accession number: XP007043420.1) with Evalues=9E-70 and 6E-96, respectively (Figure 2a). The two ORFs were merged (inserting the spacer ‘ZZZ’, since the length of the missing fragment is not known) because the Evalue of the combined ORF drops to 2E-175, and the merged sequence was chosen as the representative for the transcript. ORFs were not merged when the combined ORF did not significantly decrease the Evalue; in that case, the longer ORF was selected to represent the transcript.\n\n(a) ORFs from the same transcript: C20727_G1_I1 has two ORFs (ORF_15 and ORF_36) matching a DNA repair metallo-β-lactamase family protein (Accession number: XP007043420.1) with high significance. We merged the two ORFs (inserting ‘ZZZ’) since the Evalue of the combined ORF is significantly reduced. (b) ORFs from different transcripts: We merged ORFs from two different transcripts (C53209_G8_I1 and C53209_G6_I1), since both transcripts map to the same scaffold (SUPER472), can be overlapped based on the sequence string ‘PNRSSLP’, and the merged ORF has a significantly reduced Evalue.\n\nThe other scenario occurs when the assembler fails to merge two transcripts into a single one. In this instance, two ORFs emanating from different transcripts with significant overlaps were merged. 
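The merge decision for the first scenario can be sketched as below. The Evalues would come from the BLAST step; the improvement threshold is an assumption for illustration, since the text states only that the combined Evalue must be significantly reduced:

```python
def choose_representative(orf_a, orf_b, evalue_a, evalue_b, evalue_combined):
    """Merge two peptide ORFs that hit the same protein, joining them with
    the spacer 'ZZZ' (the length of the missing fragment is unknown), but
    only if the combined BLAST Evalue improves substantially; otherwise
    keep the longer ORF as the transcript's representative."""
    best_single = min(evalue_a, evalue_b)
    # assumed criterion: combined hit at least 10 orders of magnitude better
    if evalue_combined < best_single * 1e-10:
        return orf_a + "ZZZ" + orf_b
    return orf_a if len(orf_a) >= len(orf_b) else orf_b
```

With the C20727_G1_I1 example above (individual Evalues 9E-70 and 6E-96 combining to 2E-175), this rule would choose the merged sequence.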
While the merging of two ORFs was described previously (Chakraborty et al., 2015), we introduced an additional filter to select mergeable ORFs based on whether the Evalue obtained by merging the two ORFs is significantly reduced. For example, transcripts C53209_G8_I1 and C53209_G6_I1 both map to the scaffold SUPER472 and their corresponding ORFs can be merged based on the sequence string ‘PNRSSLP’ (Figure 2b). The individual ORFs and the combined ORF align to an autophagy-related protein (TAIR ID: AT3G49590.2) with Evalues 2e-106, 8e-63, and 1e-180, respectively. The increased significance of the combined ORF, in addition to other checks, such as ensuring that both transcripts map to the same scaffold, further supports the conclusion that these transcripts should have been contiguous in the final assembled transcriptome.\n\nAbout 3% of transcripts have ORFs that map to different proteins. In some of these cases, distinct genes were erroneously fused into a single transcript. C1089_G1_I1 is an interesting example: a 4574 nt transcript that maps to the chloroplast and encodes two genes, one highly variable and the other conserved. The two ORFs, ORF_64 (fwd: 1117-2631) and ORF_108 (fwd: 3195-4271), map to maturase K (TAIR ID: ATCG00040.1) and a photosystem II reaction center protein (TAIR ID: ATCG00020.1) with very high significance. Maturase K is a good candidate for barcoding angiosperms because it has highly variable coding sequences (Yu et al., 2011), while the photosystem II reaction center protein is completely conserved (100% similarity with Arabidopsis). Another example is C19241_G1_I1 (4702 nt), split into ORF_68 (fwd: 176-3487) and ORF_115 (reverse: 4509-4096), encoding a damaged DNA binding protein (TAIR ID: AT4G05420.1) and photosystem I subunit K (TAIR ID: AT1G30380.1) with high significance, respectively. Such transcripts are split in the YeATSAM flow, resulting in one ORF per transcript. 
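The second merge scenario, joining ORFs from different transcripts via a shared peptide substring such as ‘PNRSSLP’ above, can be sketched as follows. This is a simplified illustration only; as described above, YeATSAM additionally requires that both transcripts map to the same scaffold and that the combined Evalue is significantly reduced. The minimum overlap length is an assumption:

```python
def merge_by_overlap(pep_a, pep_b, min_overlap=7):
    """Merge two peptide sequences when a suffix of one equals a prefix of
    the other (trying the longest possible overlap first). Returns the
    merged sequence, or None if no sufficiently long overlap exists."""
    for k in range(min(len(pep_a), len(pep_b)), min_overlap - 1, -1):
        if pep_a.endswith(pep_b[:k]):   # pep_a ends where pep_b begins
            return pep_a + pep_b[k:]
        if pep_b.endswith(pep_a[:k]):   # or the other way around
            return pep_b + pep_a[k:]
    return None
```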
Subsequently, this artifact of the Trinity assembly led to several unannotated proteins in the MAKER-P flow.\n\nWe compared the annotations of walnut by MAKER-P (walnut.wgs.5d.all.maker.proteins.fasta) and YeATSAM (DB.ORFBEST.60). MAKER-P and YeATSAM each failed to annotate proteins identified by the other (~4000 proteins missed by MAKER-P; ~700 missed by YeATSAM). Although most of these unannotated proteins have repetitive sequences (transposable elements), some vital, non-repetitive proteins were excluded by each method. For example, an egg cell-secreted protein (‘WALNUT 00001389-RA’) (Sprunck et al., 2012), a Clavata3/esr-related gene (‘WALNUT 00023705-RA’) (Kinoshita et al., 2007) and a copper chaperone (‘WALNUT 00006344-RA’) (Shin et al., 2012) were not annotated in the YeATSAM flow. These genes do not have transcripts in the twenty tissues analyzed in the current study and are most likely pseudogenes.\n\nMAKER-P failed to annotate many key photosystem-related proteins (Table 1). The transcript C59245_G1_I1 has ORF_43 (fwd: 176-1714) and ORF_70 (fwd: 2212-2496) mapping to photosystem II reaction center protein B (PSBB) and photosystem II reaction center protein H (PSBH), respectively. While MAKER-P does annotate PSBB, it failed to detect PSBH. These proteins map to transcripts encoding two significant ORFs (Evalue < 1E-10), indicating that a failure to handle multi-ORF transcripts might have excluded these proteins. Also, these proteins are encoded by the chloroplast. However, this limitation of MAKER-P is not confined to transcripts emanating from the chloroplast. For example, C48031_G3_I1 encodes a leucine-rich repeat transmembrane protein kinase (TAIR ID: AT5G48940.1) and a metallo-β-lactamase family protein (TAIR ID: AT4G33540.1) and is mapped to scaffold ‘SUPER374’. MAKER-P failed to annotate the β-lactamase family protein.\n\nThese transcripts have multiple open reading frames (ORFs) mapping to different proteins with high significance. 
For example, C59245_G1_I1 has another ORF (ORF_43) that maps to photosystem II reaction center protein B (PSBB). MAKER-P annotates PSBB, but not PSBH. These transcripts all emanate from the chloroplast, although not all genes that MAKER-P failed to annotate were from the chloroplast. Genes predicted by MAKER-P that are not identified by YeATSAM are listed with their homology to corresponding genes in the TAIR database.\n\nFurthermore, MAKER-P failed to annotate any FAD-binding berberine bridge enzymes (BBEs) in the WGS (Kutchan & Dittrich, 1995). These enigmatic enzymes are implicated in the transformation of (S)-reticuline to (S)-scoulerine during benzophenanthridine alkaloid biosynthesis in plants (Winkler et al., 2006). This pathway is over-expressed upon osmotic stress and pathogen attack (Attila et al., 2008; González-Candelas et al., 2010), provides resistance in lettuce, sunflower and transgenic tobacco by generating anti-microbial compounds (Custers et al., 2004), and has unknown functions at specific developmental stages in Arabidopsis (Irshad et al., 2008; Pagnussat et al., 2005). Moreover, BBEs are expressed in floral nectar (Nectarin V, NtBBE) (Carter & Thornburg, 2004) and roots of tobacco (Kajikawa et al., 2011), and in xylem sap of cabbage (Ligat et al., 2011) and grapevine (Chakraborty et al., 2016b). NtBBE was constitutively expressed in the Phytophthora infestans-resistant potato genotype SW93-1015 (Ali et al., 2012). Benzophenanthridines are antimicrobial, and the California poppy (Eschscholzia californica) is used as a traditional medicine (Cheney, 1963; Oldham et al., 2010). Oral administration of the alkaloid berberine isolated from a Chinese herb lowered cholesterol in 32 hypercholesterolemic patients over three months (Kong et al., 2004). 
Berberine has also been shown to possess antidiabetic properties (Lee et al., 2006).\n\nThe number of BBE genes in different plant species varies significantly, from one in moss (Physcomitrella patens) to 64 in western poplar (Populus trichocarpa) (Daniel et al., 2015). A. thaliana has 27 TAIR IDs assigned to BBE enzymes, with one splice variant (Supplementary Table 1) (Daniel et al., 2015). Based on the current transcriptome, there are four full-length BBE genes (JrBBE1 to 4) that map to different scaffolds in the WGS, in addition to other fragmented transcripts (Table 2 and Table 3). JrBBE1 (C54052_G1_I1) maps to the scaffold JCF7180001213852 and encodes a 564 aa ORF, which has significant matches to Uniprot:P30986. The closest match of Uniprot:P30986 (with a low significance of 1E-07) in the MAKER-P annotation is ‘WALNUT 00019959-RA’, a 476 aa cytokinin dehydrogenase. The sequence alignment of JrBBE genes to Uniprot (P30986) is shown (Figure 3a).\n\n(a) The JrBBE sequences were aligned to berberine bridge enzyme (BBE) genes from Eschscholzia californica (EcBBE; California poppy), Arabidopsis thaliana (AtBBE15) and Nicotiana tabacum (Nectarin V). Secondary structure information from the structure PDBid:3D2D (E. californica) was used to annotate the sequences. The signal peptides differ among these proteins, suggesting different localization of these proteins in walnut. (b) Phylogenetic tree generated from the multiple sequence alignment.\n\nThese oxidases are involved in benzophenanthridine alkaloid biosynthesis in plants. Arabidopsis has 27 loci for this family (and a splice variant) (Table 3). Here, there are four full-length berberine bridge enzyme (BBE) genes (named JrBBE1-4) identified using the transcriptome. Some of the proteins are truncated (like C54286_G1_I1), which might be an artifact of the Trinity assembler. 
Thus, this is not a complete enumeration of the JrBBE genes.\n\nThe genes have tissue-specific expression - JrBBE3 is highly expressed in the roots and transition zone. The tissue abbreviations are from Chakraborty et al., 2016a.\n\nAs with the walnut transcriptome, each transcript in the chickpea transcriptome (transHybrid.fasta: n=34760) (Garg et al., 2011) was split into its three longest ORFs, each of which was BLASTed against the subset of plant proteins created from the Ensembl database. Subsequently, the ORFs with significant homology to this database (n=29263) were BLASTed against the set of annotated chickpea proteins in the NCBI database (n=34198). Most of these annotations were done using Gnomon (Souvorov et al., 2010) (http://www.ncbi.nlm.nih.gov/bioproject/PRJNA190909), which analyzed ~35000 transcripts. There are ~1500 proteins identified by YeATSAM that are absent in the NCBI database (Evalue cutoff 1E-10). Some of these proteins and their corresponding genes in the TAIR database are shown (Table 4). TC00902 is an interesting example with two merged genes: a hydrogen ion-transporting ATP synthase (TAIR ID: ATMG00640.1) and a cytochrome C biogenesis protein (TAIR ID: ATMG00900.1). While Gnomon identified the cytochrome C biogenesis protein (Genbank: XP_004500083.1), it failed to identify the ATP synthase. Unlike MAKER-P, Gnomon generates transcripts through predictive algorithms and does not take the transcriptome as an input. Nevertheless, these chickpea genes remain unannotated despite the presence of a straightforward method to detect them from available transcripts.\n\nMost of the NCBI genes were predicted using Gnomon. YeATSAM used the publicly available transcriptome from chickpea to identify these genes. The corresponding genes from the TAIR database are shown. Several transcripts (like TC20962) encode multiple genes, while others (like TC01181) have only one significant ORF. 
TRid: transcript id; TAIRid: Arabidopsis thaliana id.\n\nAmong the ~700 genes not detected by YeATSAM, there are ~500 genes with no matches in the complete ‘nr’ database. Of these, ~300 have no transcripts (SetA), while the remaining ~200 have matches among the transcripts (SetB). Considering the sensitivity of RNA-seq and the wide coverage of twenty tissues, it is quite possible that the SetA genes are pseudogenes. Future work in YeATSAM will focus on methods to distinguish these two classes of genes.\n\nThe availability of an RNA-seq-derived transcriptome from a newly sequenced organism like walnut, for which there are related annotated genomes (Arabidopsis, Vitis, etc.), immensely simplifies annotation of the genome and influences the choice of annotation software. Here, we introduce a new annotation method in the YeATS suite (YeATS Annotation Module - YeATSAM), which was used to annotate the newly-sequenced walnut genome using a simple workstation. The key differentiating factor in YeATSAM is the splitting of the assembled transcriptome into multiple ORFs (Chakraborty et al., 2015). Transcripts often have more than one significant ORF, and these must be handled differently depending on whether they map to the same or different proteins. We show that YeATSAM failed to annotate ~700 genes identified by MAKER-P, while identifying ~4000 genes missed by MAKER-P. While most of these genes have repetitive stretches, both methods missed vital genes identified by the other. Since many of the additional genes identified by MAKER-P have no known transcripts, we posit that these were identified using ab initio methods. In the absence of such an ab initio module in YeATSAM, we propose a combined method using both MAKER-P and YeATSAM to annotate the WGS. YeATSAM was also applied to the chickpea transcriptome and identified ~1000 proteins that are not annotated in the NCBI database. 
This transcriptome was assembled using Newbler v2.3 (Garg et al., 2011) and most of the 34198 chickpea proteins in the NCBI database were annotated using Gnomon, the standard annotation tool (http://www.ncbi.nlm.nih.gov/genome/guide/gnomon.shtml).\n\n\nSoftware availability\n\nLatest source code: https://github.com/sanchak/YeATSAM\n\nArchived source code at time of publication: DOI: 10.5281/zenodo.165992 (Sanchak, 2016)\n\nLicense: GNU General Public License",
"appendix": "Author contributions\n\n\n\nAMD and SC were involved in the study design. SC developed the software and designed the pipeline that enabled the annotations; PJM-G was involved in the validation with the walnut genome sequence. SC wrote the first draft and the rest of the authors were involved in subsequent editing and modifications.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nAMD wishes to acknowledge grant support from the California Walnut Board.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThe authors wish to thank the California Walnut Board for their generous support. We appreciate the efforts of Mary Lou Mendum in editing this manuscript.\n\n\nSupplementary material\n\nSupplementary Table 1. FAD-binding (Berberine family) reticuline oxidases in Arabidopsis. These proteins are involved in the benzophenanthridine alkaloid biosynthesis in plants. Benzophenanthridine is an anti-microbial metabolite. There are 27 loci in chromosomes 1,2,4 and 5, with one splice variant (AT4G20830) marked with an asterisk.\n\nClick here to access the data.\n\n\nReferences\n\nAli A, Moushib LI, Lenman M, et al.: Paranoid potato: phytophthora-resistant genotype shows constitutively activated defense. Plant Signal Behav. 2012; 7(3): 400–408. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAttila C, Ueda A, Cirillo SL, et al.: Pseudomonas aeruginosa PAO1 virulence factors and poplar tree response in the rhizosphere. Microb Biotechnol. 2008; 1(1): 17–29. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCamacho C, Madden T, Ma N, et al.: BLAST Command Line Applications User Manual. 2013. Reference Source\n\nCampbell MS, Law M, Holt C, et al.: Maker-P: a tool kit for the rapid creation, management, and quality control of plant genome annotations. Plant Physiol. 2014; 164(3): 513–524. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nCarter CJ, Thornburg RW: Tobacco nectarin V is a flavin-containing berberine bridge enzyme-like protein with glucose oxidase activity. Plant Physiol. 2004; 134(1): 460–469. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChakraborty S, Britton M, Martínez-García PJ, et al.: Deep RNA-seq profile reveals biodiversity, plant-microbe interactions and a large family of NBS-LRR resistance genes in walnut (Juglans regia) tissues. AMB Express. 2016a; 6(1): 12. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChakraborty S, Britton MT, Wegrzyn JL, et al.: YeATS - a tool suite for analyzing RNA-seq derived transcriptome identifies a highly transcribed putative extensin in heartwood/sapwood transition zone in black walnut [version 2; referees: 3 approved]. F1000Res. 2015; 4: 155. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChakraborty S, Nascimento R, Zaini PA, et al.: Sequence/structural analysis of xylem proteome emphasizes pathogenesis-related proteins, chitinases and β-1, 3-glucanases as key players in grapevine defense against Xylella fastidiosa. PeerJ. 2016b; 4: e2007. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChang Z, Li G, Liu J, et al.: Bridger: a new framework for de novo transcriptome assembly using RNA-seq data. Genome Biol. 2015; 16: 30. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCheney RH: Therapeutic potential of Eschscholtziae californicae herb. Q J Crude Drug Res. 1963; 3(3): 413–416. Publisher Full Text\n\nChu HT, Hsiao WW, Chen JC, et al.: EBARDenovo: highly accurate de novo assembly of RNA-seq with efficient chimera-detection. Bioinformatics. 2013; 29(8): 1004–1010. PubMed Abstract | Publisher Full Text\n\nCusters JH, Harrison SJ, Sela-Buurlage MB, et al.: Isolation and characterisation of a class of carbohydrate oxidases from higher plants, with a role in active defence. Plant J. 2004; 39(2): 147–160. 
PubMed Abstract | Publisher Full Text\n\nDaniel B, Pavkov-Keller T, Steiner B, et al.: Oxidation of monolignols by members of the berberine bridge enzyme family suggests a role in plant cell wall metabolism. J Biol Chem. 2015; 290(30): 18770–18781. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFlintoft L: Transcriptomics: digging deep with RNA-seq. Nat Rev Genet. 2008; 9: 568. Publisher Full Text\n\nFu L, Niu B, Zhu Z, et al.: CD-HIT: accelerated for clustering the next-generation sequencing data. Bioinformatics. 2012; 28(23): 3150–3152. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGarg R, Patel RK, Tyagi AK, et al.: De novo assembly of chickpea transcriptome using short reads for gene discovery and marker identification. DNA Res. 2011; 18(1): 53–63. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGonzález-Candelas L, Alamar S, Sánchez-Torres P, et al.: A transcriptomic approach highlights induction of secondary metabolism in citrus fruit in response to Penicillium digitatum infection. BMC Plant Biol. 2010; 10: 194. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGrabherr MG, Haas BJ, Yassour M, et al.: Full-length transcriptome assembly from RNA-seq data without a reference genome. Nat Biotechnol. 2011; 29(7): 644–652. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGuindon S, Lethiec F, Duroux P, et al.: PHYML Online--a web server for fast maximum likelihood-based phylogenetic inference. Nucleic Acids Res. 2005; 33(Web Server issue): W557–559. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHolt C, Yandell M: Maker2: an annotation pipeline and genome-database management tool for second-generation genome projects. BMC Bioinformatics. 2011; 12: 491. PubMed Abstract | Publisher Full Text | Free Full Text\n\nIrshad M, Canut H, Borderies G, et al.: A new picture of cell wall protein dynamics in elongating cells of Arabidopsis thaliana: Confirmed actors and newcomers. BMC Plant Biol. 2008; 8: 94. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nJoshi N, Fass J: Sickle: A sliding-window, adaptive, quality-based trimming tool for fastq files. (version 1.33) [software]; 2011. Reference Source\n\nJukanti AK, Gaur PM, Gowda CL, et al.: Nutritional quality and health benefits of chickpea (Cicer arietinum L.): a review. Br J Nutr. 2012; 108(Suppl 1): S11–S26. PubMed Abstract | Publisher Full Text\n\nKajikawa M, Shoji T, Kato A, et al.: Vacuole-localized berberine bridge enzyme-like proteins are required for a late step of nicotine biosynthesis in tobacco. Plant Physiol. 2011; 155(4): 2010–2022. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKersey PJ, Allen JE, Armean I, et al.: Ensembl genomes 2016: more genomes, more complexity. Nucleic Acids Res. 2016; 44(D1): D574–D580. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKinoshita A, Nakamura Y, Sasaki E, et al.: Gain-of-function phenotypes of chemically synthetic CLAVATA3/ESR-related (CLE) peptides in Arabidopsis thaliana and Oryza sativa. Plant Cell Physiol. 2007; 48(12): 1821–1825. PubMed Abstract | Publisher Full Text\n\nKong W, Wei J, Abidi P, et al.: Berberine is a novel cholesterol-lowering drug working through a unique mechanism distinct from statins. Nat Med. 2004; 10(12): 1344–1351. PubMed Abstract | Publisher Full Text\n\nKutchan TM, Dittrich H: Characterization and mechanism of the berberine bridge enzyme, a covalently flavinylated oxidase of benzophenanthridine alkaloid biosynthesis in plants. J Biol Chem. 1995; 270(41): 24475–24481. PubMed Abstract | Publisher Full Text\n\nLarkin MA, Blackshields G, Brown NP, et al.: Clustal W and Clustal X version 2.0. Bioinformatics. 2007; 23(21): 2947–2948. PubMed Abstract | Publisher Full Text\n\nLaw M, Childs KL, Campbell MS, et al.: Automated update, revision, and quality control of the maize genome annotations using MAKER-P improves the B73 refgen_v3 gene models and identifies new genes. Plant Physiol. 2015; 167(1): 25–39. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nLee YS, Kim WS, Kim KH, et al.: Berberine, a natural plant product, activates AMP-activated protein kinase with beneficial metabolic effects in diabetic and insulin-resistant states. Diabetes. 2006; 55(8): 2256–2264. PubMed Abstract | Publisher Full Text\n\nLigat L, Lauber E, Albenne C, et al.: Analysis of the xylem sap proteome of Brassica oleracea reveals a high content in secreted proteins. Proteomics. 2011; 11(9): 1798–1813. PubMed Abstract | Publisher Full Text\n\nLohse M, Bolger MA, Nagel A, et al.: RobiNA: a user-friendly, integrated software solution for RNA-Seq-based transcriptomics. Nucleic Acids Res. 2012; 40(Web Server issue): W622–W627. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMartínez-García PJ, Crepeau MW, Puiu D, et al.: The walnut (Juglans regia) genome sequence reveals diversity in genes coding for the biosynthesis of non-structural polyphenols. Plant J. 2016; 87(5): 507–32. PubMed Abstract | Publisher Full Text\n\nMbandi SK, Hesse U, van Heusden P, et al.: Inferring bona fide transfrags in RNA-Seq derived-transcriptome assemblies of non-model organisms. BMC Bioinformatics. 2015; 16(1): 58. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNeale DB, Wegrzyn JL, Stevens KA, et al.: Decoding the massive genome of loblolly pine using haploid DNA and novel assembly strategies. Genome Biol. 2014; 15(3): R59. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNelson N, Yocum CF: Structure and function of photosystems I and II. Annu Rev Plant Biol. 2006; 57: 521–565. PubMed Abstract | Publisher Full Text\n\nOldham JT, Hincapie M, Rejtar T, et al.: Shotgun proteomic analysis of yeast-elicited California poppy (Eschscholzia californica) suspension cultures producing enhanced levels of benzophenanthridine alkaloids. J Proteome Res. 2010; 9(9): 4337–4345. 
PubMed Abstract | Publisher Full Text\n\nPagnussat GC, Yu HJ, Ngo QA, et al.: Genetic and molecular identification of genes required for female gametophyte development and function in Arabidopsis. Development. 2005; 132(3): 603–614. PubMed Abstract | Publisher Full Text\n\nRice P, Longden I, Bleasby A: EMBOSS: the European Molecular Biology Open Software Suite. Trends Genet. 2000; 16(6): 276–277. PubMed Abstract | Publisher Full Text\n\nRobert X, Gouet P: Deciphering key features in protein structures with the new endscript server. Nucleic Acids Res. 2014; 42(Web Server issue): W320–W324. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSanchak: sanchak/YeATSAM 1 [Data set]. Zenodo. 2016. Data Source\n\nSchulz MH, Zerbino DR, Vingron M, et al.: Oases: robust de novo RNA-seq assembly across the dynamic range of expression levels. Bioinformatics. 2012; 28(8): 1086–1092. PubMed Abstract | Publisher Full Text | Free Full Text\n\nShin LJ, Lo JC, Yeh KC: Copper chaperone antioxidant protein1 is essential for copper homeostasis. Plant Physiol. 2012; 159(3): 1099–1110. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSimpson JT, Wong K, Jackman SD, et al.: Abyss: a parallel assembler for short read sequence data. Genome Res. 2009; 19(6): 1117–1123. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSouvorov A, Kapustin Y, Kiryutin B, et al.: Gnomon-NCBI eukaryotic gene prediction tool. NCBI. 2010; 1–24. Reference Source\n\nSprunck S, Rademacher S, Vogler F, et al.: Egg cell-secreted ec1 triggers sperm cell activation during double fertilization. Science. 2012; 338(6110): 1093–1097. PubMed Abstract | Publisher Full Text\n\nTrapnell C, Pachter L, Salzberg SL: TopHat: discovering splice junctions with RNA-seq. Bioinformatics. 2009; 25(9): 1105–1111. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTrapnell C, Roberts A, Goff L, et al.: Differential gene and transcript expression analysis of RNA-seq experiments with TopHat and cufflinks. 
Nat Protoc. 2012; 7(3): 562–578. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWang L, Feng Z, Wang X, et al.: DEGseq: an R package for identifying differentially expressed genes from RNA-seq data. Bioinformatics. 2010; 26(1): 136–138. PubMed Abstract | Publisher Full Text\n\nWang Z, Gerstein M, Snyder M: RNA-seq: a revolutionary tool for transcriptomics. Nat Rev Genet. 2009; 10(1): 57–63. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWilkins TA, Smart LB: Isolation of RNA from plant tissue. Ed. Paul A. Kriedg, A laboratory guide to RNA: Isolation, Analysis, and Synthesis. 1996; 21–42. Reference Source\n\nWinkler A, Hartner F, Kutchan TM, et al.: Biochemical evidence that berberine bridge enzyme belongs to a novel family of flavoproteins containing a bi-covalently attached fad cofactor. J Biol Chem. 2006; 281(30): 21276–21285. PubMed Abstract | Publisher Full Text\n\nWinkler A, Lyskowski A, Riedl S, et al.: A concerted mechanism for berberine bridge enzyme. Nat Chem Biol. 2008; 4(12): 739–741. PubMed Abstract | Publisher Full Text\n\nYandell M, Ence D: A beginner’s guide to eukaryotic genome annotation. Nat Rev Genet. 2012; 13(5): 329–342. PubMed Abstract | Publisher Full Text\n\nYu J, Xue JH, Zhou SL: New universal matK primers for DNA barcoding angiosperms. J Syst Evol. 2011; 49(3): 176–181. Publisher Full Text\n\nZerbino DR, Birney E: Velvet: algorithms for de novo short read assembly using de Bruijn graphs. Genome Res. 2008; 18(5): 821–829. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "17772",
"date": "30 Nov 2016",
"name": "Carmen Cuadrado",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nIn this paper the authors investigate a new annotation method in the YeATS suite (YeATS Annotation Module - YeATSAM), which was used to annotate the newly-sequenced walnut genome using a simple workstation. In YeATSAM the assembled transcriptome is split into multiple ORFs. They show that YeATSAM failed to annotate ~700 genes identified by MAKER-P, while identifying ~4000 genes missed by MAKER-P. While most of these genes have repetitive stretches, both methods missed important genes identified by the other. Since many of the additional genes identified by MAKER-P have no known transcripts, the authors suggest that these were identified using ab initio methods. In the absence of such an ab initio module in YeATSAM, they propose a combined method using both MAKER-P and YeATSAM to annotate the WGS. This work is very interesting because the results probe the adequacy of this new annotation method. In general, the presentation is clear and the conclusions are consistent with the results obtained. The figures and tables are also clear. Some comments are listed below:\n\nIn the abstract, please change the order in “Results and Conclusions” part, from lines 17 to 21. 
Consider mentioning first that “YeATSAM used a […] chickpea transcriptome assembled using Newbler v2.3” and then that “1000 genes were identified, which were not previously annotated by Gnomon annotation tool”.\n\nThe fourth and fifth paragraphs of the Introduction could be moved to the Discussion, leaving a few short sentences about this in the Introduction.\n\nIn the fifth line of the Methods section, correct “seqeunces”.\n\nPlease consider explaining the “future work” section further.",
"responses": []
},
{
"id": "17771",
"date": "06 Dec 2016",
"name": "Margaret E Staton",
"expertise": [],
"suggestion": "Not Approved",
"report": "Not Approved\n\nThis work focuses on a current major challenge in improving genome and transcriptome automated annotation. It also deals with difficulties derived from imperfect de novo assemblies, such as transcripts representing fused and split genes. The increasing affordability of generating sequencing data enhances the demand for more powerful annotation-predicting tools and pipelines, although exact annotation will still rely on wet-lab experimentation. This paper compares the YeATSAM tool to previously annotated genomes, in which existing de novo assemblies are used as generated and analyzed with BLAST, InterProScan or similar tools for homology-based annotation. Even though the paper indicates novelty of the method, there are critical points that need modification.\n\nThe method reported here - YeATSAM - is not clearly different from the work already reported in a previous paper1. The method reported here (identify the 3 longest ORFs, then BLAST against known proteins, then merge or split if needed) looks identical to that previously published in F1000Research1 - for instance, Figure 1 in the previously published paper is an almost identical replica of Figure 1 in this paper. The current work does appropriately cite this previous paper. However, if there is a novel algorithm to describe here, it needs to be clearly delineated from this previous work. Otherwise, it should just be cited.\nThis previous publication also compares the annotation of the walnut genome by YeATS and Maker-P. 
The previous paper and this paper both profile walnut transcripts where ORFs were merged and transcripts that match multiple proteins; this paper does use different transcripts to demonstrate the methodology and results. To emphasize the novelty of the present paper, the authors should clarify exactly what this paper offers in addition to the previous paper. In this regard, the paper does go a bit further than the previous one by detailing genes that were unannotated by MAKER-P but found via this method; those genes were not reported previously. If the algorithm has not changed from the previous work, a new focus for this paper is needed, possibly reporting these novel genes such as the BBEs.\nThe addition of the chickpea genome annotation is barely described - a single short results paragraph. The author also has an existing F1000Research article describing the use of YeATS on chickpea transcripts, describing the detection of missed genes, and describing multiple ORFs mapping to different proteins and fragmented ORFs of the same protein2. How does this report differ from that one? That one is not cited in this report.\nData reproducibility and accessibility - the new annotations are not made available for either walnut or chickpea (unless they are the same as the ones provided already in Chakraborty et al. 20151). It would be very difficult to replicate this experiment. No parameters or commands are provided to determine how PHYML, ClustalW or the ENDscript server were utilized. I confirmed that YeATSAM.zip (listed as YeATSAM.tgz in the manuscript) with a README is available for download and that the links to data are functional. I was unable to install YeATSAM; the installation and usage instructions are very vague.\n\nSpecifics:\nThe joined results of MAKER-P and YeATSAM look promising for improving genome annotations. However, a figure or table describing the total number of genes predicted by each software package and the overlap would be very helpful to visualize the results. 
The report frequently uses words like “several” or “many” and places “~” in front of numbers. Numbers should be reported exactly where they are important to the method and results. Examples:\n“A BLAST database of protein peptides (plantpep.fasta: 1M seqeunces) using ~30 organisms (list.plants)” - also, list.plants does not link to anything.\n\n“About 3% of transcripts have ORFs that map to different proteins”\n\n“MAKER-P and YeATSAM each failed to annotate several proteins identified by the other (MAKER-P=~4000; YeATSAM=700)”\n\n“Among the ~700 genes not detected by YeATSAM, there are ~500 genes with no matches in the complete ‘nr’ database. Of these, ~300 have no transcripts (SetA), while the remaining ~200 have matches among the transcripts (SetB).”\n\nBased on the content of the manuscript, the introduction focuses adequately on the problem of annotating newly assembled genomes and transcriptomes. However, a deeper introduction to the software tools utilized would help the reader understand why they were chosen and how they work. In relation to the results discussed in the introduction, the relevance of some of the selected genes is not clear. Specifically, the relevance of the three “critical” proteins not detected by YeATSAM, which are not transcribed and are thus considered pseudogenes, is confusing.\nIn relation to the generation of de novo assemblies, the authors should provide detail on how the assemblies were combined, considering that the simple addition of libraries would lead to high redundancy. For the walnut genome, were the MAKER-P and YeATSAM packages using the same set of RNA-Seq reads? This would be an important point to emphasize - a true comparison of the two methods would preferably use the same starting point. The original walnut paper reports using 19 libraries (Martinez-Garcia et al. 
2016); this paper reports 20 libraries.\n\nIn the results and discussion section, the manipulation of ORFs is an interesting concept, although the difference from the methodology described in Chakraborty et al. 20151 is not clear. The use of the term ORF is confusing here since it appears that the merged sequences are the encoded peptides, while ORFs are nucleotide sequences. Moreover, it seems likely that ORFs from the same gene might match different proteins because they are being compared to 30 different organisms. The ORFs could match the orthologs of the gene in question from different organisms (i.e. they have different matches to database entries, both orthologs, but they are legitimately from the same gene). In this case, merging is the best avenue, but the software would actually split the transcript apart. Was this seen in some transcripts? Also, when referring to significance with similar proteins, values should be provided.\nThe authors mention that many genes unannotated by MAKER-P have repetitive stretches. What types of repetitive stretches? No methodology is given for this analysis; this needs to be described and explained. In regard to the sentence “Although most of these unannotated proteins have repetitive sequences (transposable elements)” - does that mean the unannotated proteins originate from within transposable elements, or that transposable elements have inserted into the gene itself?\n\nThe authors do not address the overall differences in proteins detected by each annotation program - is there a pattern that may explain these? Pseudogenes are mentioned twice, but this idea is not fully explained. 
While 20 tissues will capture many genes, this sampling is probably not exhaustive – is there any additional evidence, such as premature stop codons or frameshift mutations, that these “genes” are actually pseudogenes?\nMinor:\nThis sentence needs improved clarity: “The BLAST results of list.transcriptome.clean.ORFS: 320K on ‘plantpep.fasta’ was processed using a cutoff: bitscore=60, Evalue~=1E-10\"\nHeading “Transcripts with multiple ORFs” - the section above also deals with transcripts with multiple ORFs. This heading could be clarified.\nSome revision of the writing would improve readability. Abbreviations should be specified at first use in the manuscript and always in figures. Also, numbers and units should be separated by a space. In relation to the language, the authors are advised to review the use of scientific English, as well as verb tense consistency.\n\nIn Table 1, the main line indicates proteins not annotated by either program while the last line indicates genes predicted by MAKER-P. These two sentences in the same caption lead to confusion. In addition, the sizing and description of other figures might be improved.",
"responses": []
},
{
"id": "18792",
"date": "28 Dec 2016",
"name": "Nic Herndon",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe article presents an annotation method, YeATSAM, that leverages the information contained in RNA-Seq derived transcriptomes. The method was compared with two other annotation methods using two organisms: MAKER-P (a RNA evidence based and ab initio hybrid method) with walnut, and NCBI Gnomon (a homology based and ab initio hybrid method) with chickpea. Although YeATSAM and MAKER-P identified same genes, there were also genes that were identified by only one of them (about 4,000 by YeATSAM and about 700 by MAKER-P), as well as genes that both methods failed to identify. Similarly, YeATSAM identified about 1,000 genes that Gnomon failed to identify.\nThe article is well written, the analysis is technically sound, the tables and figures present the results well, and the conclusions are supported by the data. Nonetheless, I would suggest the following changes:\nAddress discrepancies in the numbers reported, e.g., 20 tissues (in abstract, introduction, results, and future work) v. 19 tissues (in methods: 15 samples + four additional samples); 700 (in results) v. ~700 (in future work, and coclusions); ~1,500 chickpea proteins (in results) v. ~1,000 (in conclusions). 
Instead of approximate values, report actual values.\n\nAs the tool is designed to be used with other organisms (besides walnut and chickpea), make the method and workflow (Figure 1) independent of any organism (e.g., the input to YeATSAM is the genome sequence rather than the walnut genome).\n\nFigure 3 (b) can be resized without losing its readability.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2689
|
https://f1000research.com/articles/4-1070/v1
|
14 Oct 15
|
{
"type": "Research Article",
"title": "RNA-Seq workflow: gene-level exploratory analysis and differential expression",
"authors": [
"Michael I. Love",
"Simon Anders",
"Vladislav Kim",
"Wolfgang Huber",
"Simon Anders",
"Vladislav Kim",
"Wolfgang Huber"
],
"abstract": "Here we walk through an end-to-end gene-level RNA-Seq differential expression workflow using Bioconductor packages. We will start from the FASTQ files, show how these were aligned to the reference genome, and prepare a count matrix which tallies the number of RNA-seq reads/fragments within each gene for each sample. We will perform exploratory data analysis (EDA) for quality assessment and to explore the relationship between samples, perform differential gene expression analysis, and visually explore the results.",
"keywords": [
"RNA-seq",
"differential expression",
"gene expression",
"Bioconductor",
"statistical analysis",
"high-throughput sequencing",
"visualization",
"genomics"
],
"content": "Introduction\n\nBioconductor has many packages which support analysis of high-throughput sequence data, including RNA sequencing (RNA-seq). The packages which we will use in this workflow include core packages maintained by the Bioconductor core team for importing and processing raw sequencing data and loading gene annotations. We will also use contributed packages for statistical analysis and visualization of sequencing data. Through scheduled releases every 6 months, the Bioconductor project ensures that all the packages within a release will work together in harmony (hence the “conductor” metaphor). The packages used in this workflow are loaded with the library function and can be installed by following the Bioconductor package installation instructions.\n\nIf you have questions about this workflow or any Bioconductor software, please post these to the Bioconductor support site. If the questions concern a specific package, you can tag the post with the name of the package, or for general questions about the workflow, tag the post with rnaseqgene. Note the posting guide for crafting an optimal question for the support site.\n\nThe data used in this workflow is stored in the airway package that summarizes an RNA-seq experiment wherein airway smooth muscle cells were treated with dexamethasone, a synthetic glucocorticoid steroid with anti-inflammatory effects1. Glucocorticoids are used, for example, by people with asthma to reduce inflammation of the airways. In the experiment, four primary human airway smooth muscle cell lines were treated with 1 micromolar dexamethasone for 18 hours. For each of the four cell lines, we have a treated and an untreated sample. 
For more description of the experiment see the PubMed entry 24926665 and for raw data see the GEO entry GSE52778.\n\n\nPreparing count matrices\n\nAs input, the count-based statistical methods, such as DESeq22, edgeR3, limma with the voom method4, DSS5, EBSeq6 and BaySeq7, expect input data as obtained, e.g., from RNA-seq or another high-throughput sequencing experiment, in the form of a matrix of integer values. The value in the i-th row and the j-th column of the matrix tells how many reads (or fragments, for paired-end RNA-seq) have been unambiguously assigned to gene i in sample j. Analogously, for other types of assays, the rows of the matrix might correspond e.g., to binding regions (with ChIP-Seq), species of bacteria (with metagenomic datasets), or peptide sequences (with quantitative mass spectrometry).\n\nThe values in the matrix must be raw counts of sequencing reads/fragments. This is important for DESeq2’s statistical model to hold, as only the raw counts allow assessing the measurement precision correctly. It is important to never provide counts that were pre-normalized for sequencing depth/library size, as the statistical model is most powerful when applied to raw counts, and is designed to account for library size differences internally.\n\nThe computational analysis of an RNA-seq experiment begins earlier: we first obtain a set of FASTQ files that contain the nucleotide sequence of each read and a quality score at each position. These reads must first be aligned to a reference genome or transcriptome. It is important to know if the sequencing experiment was single-end or paired-end, as the alignment software will require the user to specify both FASTQ files for a paired-end experiment. The output of this alignment step is commonly stored in a file format called SAM/BAM.\n\nA number of software programs exist to align reads to a reference genome, and the development is too rapid for this document to provide an up-to-date list. 
We recommend consulting benchmarking papers that discuss the advantages and disadvantages of each software, which include accuracy, sensitivity in aligning reads over splice junctions, speed, memory footprint, usability, and many other features.\n\nThe reads for this experiment were aligned to the Ensembl release 758 human reference genome using the STAR read aligner9. In this example, we have a file in the current directory called files with each line containing an identifier for each experiment, and we have all the FASTQ files in a subdirectory fastq. If you have downloaded the FASTQ files from the Sequence Read Archive, the identifiers would be SRA run IDs, e.g. SRR1039520. You should have two files for a paired-end experiment for each ID, fastq/SRR1039520_1.fastq and fastq/SRR1039520_2.fastq, which give the first and second read for the paired-end fragments. If you have performed a single-end experiment, you would only have one file per ID. We have also created a subdirectory, aligned, where STAR will output its alignment files.\n\n\n\nSAMtools10 was used to generate BAM files. The -@ flag can be used to allocate additional threads.\n\n\n\nThe BAM files for a number of sequencing runs can then be used to generate count matrices, as described in the following section.\n\nBesides the count matrix that we will use later, the airway package also contains eight files with a small subset of the total number of reads in the experiment. The reads were selected which aligned to a small region of chromosome 1. We chose a subset of reads because the full alignment files are large (a few gigabytes each), and because it takes between 10–30 minutes to count the fragments for each sample. We will use these files to demonstrate how a count matrix can be constructed from BAM files. 
Afterwards, we will load the full count matrix corresponding to all samples and all data, which is already provided in the same package, and will continue the analysis with that full matrix.\n\nWe load the data package with the example data:\n\n\n\nThe R function system.file can be used to find out where on your computer the files from a package have been installed. Here we ask for the full path to the extdata directory, where R packages store external data, that is part of the airway package.\n\n\n\nIn this directory, we find the eight BAM files (and some other files):\n\n\n\n\n\nTypically, we have a table with detailed information for each of our samples that links samples to the associated FASTQ and BAM files. For your own project, you might create such a comma-separated value (CSV) file using a text editor or spreadsheet software such as Excel.\n\nWe load such a CSV file with read.csv:\n\n\n\n\n\nNote: here and elsewhere in the workflow, the parentheses () around the entire code of the last line above are an R trick to print the output of the function in addition to saving it to sampleTable. This is equivalent to assigning and then showing the object in two steps:\n\n\n\nOnce the reads have been aligned, there are a number of tools that can be used to count the number of reads/fragments that can be uniquely assigned to genomic features for each sample. These often take as input SAM/BAM alignment files and a file specifying the genomic features, e.g. a GFF3 or GTF file specifying the gene models.\n\nThe following tools can be used to generate count matrices: summarizeOverlaps11, featureCounts12, or htseq-count13 (Table 1).\n\nWe now proceed using the summarizeOverlaps method of counting. 
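Concretely, the sample table and BAM file setup described in this section might look like the following sketch (file names follow the airway package's example data; adjust the paths for your own project):

```r
## a sketch of the setup steps, using the airway package's example files
library("Rsamtools")

# per-sample metadata table, one row per sequencing run
csvfile <- system.file("extdata", "sample_table.csv", package = "airway")
(sampleTable <- read.csv(csvfile, row.names = 1))

# full paths to the BAM files, named after the Run identifiers
filenames <- file.path(system.file("extdata", package = "airway"),
                       paste0(sampleTable$Run, "_subset.bam"))
file.exists(filenames)

# declare these as BAM files, reading 2 million records at a time
bamfiles <- BamFileList(filenames, yieldSize = 2000000)
```

The yieldSize argument caps how many records are held in memory at once during counting.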
Using the Run column in the sample table, we construct the full paths to the files we want to perform the counting operation on:\n\n\n\n\n\nWe indicate in Bioconductor that these files are BAM files using the BamFileList function from the Rsamtools package that provides an R interface to BAM files. Here we also specify details about how the BAM files should be treated, e.g., only process 2 million reads at a time. See ?BamFileList for more information.\n\n\n\nNote: make sure that the chromosome names of the genomic features in the annotation you use are consistent with the chromosome names of the reference used for read alignment. Otherwise, the scripts might fail to count any reads to features due to the mismatching names. For example, a common mistake is when the alignment files contain chromosome names in the style of 1 and the gene annotation in the style of chr1, or the other way around. See the seqlevelsStyle function in the GenomeInfoDb package for solutions. We can check the chromosome names (here called “seqnames”) in the alignment files like so:\n\n\n\n\n\nNext, we need to read in the gene model that will be used for counting reads/fragments. We will read the gene model from an Ensembl GTF file8, using makeTxDbFromGFF from the GenomicFeatures package. GTF files can be downloaded from Ensembl’s FTP site or other gene model repositories. A TxDb object is a database that can be used to generate a variety of range-based objects, such as exons, transcripts, and genes. We want to make a list of exons grouped by gene for counting reads/fragments.\n\nThere are other options for constructing a TxDb. For the known genes track from the UCSC Genome Browser14, one can use the pre-built Transcript DataBase: TxDb.Hsapiens.UCSC.hg19.knownGene. If the annotation file is accessible from AnnotationHub (as is the case for the Ensembl genes), a pre-scanned GTF file can be imported using makeTxDbFromGRanges. 
Finally, the makeTxDbFromBiomart function can be used to automatically pull a gene model from Biomart using biomaRt15.\n\nHere we will demonstrate loading from a GTF file:\n\n\n\nWe indicate that none of our sequences (chromosomes) are circular using a 0-length character vector.\n\n\n\n\n\nThe following line produces a GRangesList of all the exons grouped by gene11. Each element of the list is a GRanges object of the exons for a gene.\n\n\n\n\n\nAfter these preparations, the actual counting is easy. The function summarizeOverlaps from the GenomicAlignments package will do this. This produces a SummarizedExperiment object that contains a variety of information about the experiment, and will be described in more detail below.\n\nNote: If it is desired to perform counting using multiple cores, one can use the register and MulticoreParam or SnowParam functions from the BiocParallel package before the counting call below. Expect that the summarizeOverlaps call will take at least 30 minutes per file for a human RNA-seq file with 30 million aligned reads. By sending the files to separate cores, one can speed up the entire counting process.\n\n\n\nHere we specify to use one core, not multiple cores. We could have also skipped this line and the counting step would run in serial.\n\n\n\nThe following call creates the SummarizedExperiment object with counts:\n\n\n\nWe specify a number of arguments besides the features and the reads. The mode argument describes what kind of read overlaps will be counted. These modes are shown in Figure 1 of the Counting reads with summarizeOverlaps vignette for the GenomicAlignments package. Note that fragments will be counted only once to each gene, even if they overlap multiple exons of a gene which may themselves be overlapping. 
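Pulling this section together, the gene model construction and the counting call might be sketched as follows (a sketch, assuming the bamfiles object from the earlier setup step; argument names as in the GenomicFeatures and GenomicAlignments documentation):

```r
## a sketch of building the gene model and counting fragments per gene
library("GenomicFeatures")
library("GenomicAlignments")
library("BiocParallel")

# transcript database from the Ensembl GTF (subset shipped with airway);
# an empty character vector indicates that no chromosome is circular
gtffile <- system.file("extdata", "Homo_sapiens.GRCh37.75_subset.gtf",
                       package = "airway")
txdb <- makeTxDbFromGFF(gtffile, format = "gtf", circ_seqs = character())

# GRangesList of exons grouped by gene, the features for counting
ebg <- exonsBy(txdb, by = "gene")

register(SerialParam())  # one core; use MulticoreParam(n) to parallelize

# count fragments (read pairs) overlapping each gene
se <- summarizeOverlaps(features = ebg, reads = bamfiles,
                        mode = "Union",
                        singleEnd = FALSE,
                        ignore.strand = TRUE,
                        fragments = TRUE)
```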
Setting singleEnd to FALSE indicates that the experiment produced paired-end reads, and we want to count a pair of reads (a fragment) only once toward the count for a gene.\n\nFigure 1 caption: The assay (pink block) contains the matrix of counts, the rowRanges (blue block) contains information about the genomic ranges and the colData (green block) contains information about the samples. The highlighted line in each block represents the first row (note that the first row of colData lines up with the first column of the assay).\n\nIn order to produce correct counts, it is important to know if the RNA-seq experiment was strand-specific or not. This experiment was not strand-specific so we set ignore.strand to TRUE. The fragments argument can be used when singleEnd=FALSE to specify if unpaired reads should be counted (yes if fragments=TRUE).\n\nThe SummarizedExperiment container is diagrammed in Figure 1 and discussed in the latest Bioconductor paper16. In our case we have created a single matrix named “counts” that contains the fragment counts for each gene and sample, which is stored in assay. It is also possible to store multiple matrices, accessed with assays. The rowRanges for our object is the GRangesList we used for counting (one GRanges of exons for each row of the count matrix). The component parts of the SummarizedExperiment are accessed with an R function of the same name: assay (or assays), rowRanges and colData.\n\nThis example code above actually only counted a small subset of fragments from the original experiment. 
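The accessor functions just named can be used to inspect the object and to attach the sample table; a sketch, assuming se and sampleTable from the preceding steps:

```r
## a sketch of inspecting the SummarizedExperiment and attaching metadata
library("SummarizedExperiment")

dim(se)             # genes x samples
head(assay(se), 3)  # the fragment count matrix
rowRanges(se)       # one GRanges of exons per gene
colData(se)         # per-sample metadata (empty so far)

# attach the sample table as column metadata, in matching order
colData(se) <- DataFrame(sampleTable)
```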
Nevertheless, we can still investigate the resulting SummarizedExperiment by looking at the counts in the assay slot, the phenotypic data about the samples in colData slot (in this case an empty DataFrame), and the data about the genes in the rowRanges slot.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nThe rowRanges, when printed, only shows the first GRanges, and tells us there are 19 more elements:\n\n\n\n\n\nThe rowRanges also contains metadata about the construction of the gene model in the metadata slot. Here we use a helpful R function, str, to display the metadata compactly:\n\n\n\n\n\nThe colData:\n\n\n\n\n\nThe colData slot, so far empty, should contain all the metadata. Because we used a column of sampleTable to produce the bamfiles vector, we know the columns of se are in the same order as the rows of sampleTable. We can assign the sampleTable as the colData of the summarized experiment, by converting it into a DataFrame and using the assignment function:\n\n\n\n\n\nAt this point, we have counted the fragments which overlap the genes in the gene model we specified. This is a branching point where we could use a variety of Bioconductor packages for exploration and differential expression of the count data, including edgeR3, limma with the voom method4, DSS5, EBSeq6 and BaySeq7. We will continue, using DESeq22. The SummarizedExperiment object is all we need to start our analysis. In the following section we will show how to use it to create the data object used by DESeq2.\n\n\nThe DESeqDataSet, sample information, and the design formula\n\nBioconductor software packages often define and use a custom class for storing data that makes sure that all the needed data slots are consistently provided and fulfill the requirements. In addition, Bioconductor has general data classes (such as the SummarizedExperiment) that can be used to move data between packages. 
Additionally, the core Bioconductor classes provide useful functionality: for example, subsetting or reordering the rows or columns of a SummarizedExperiment automatically subsets or reorders the associated rowRanges and colData, which can help to prevent accidental sample swaps that would otherwise lead to spurious results. With SummarizedExperiment this is all taken care of behind the scenes.\n\nIn DESeq2, the custom class is called DESeqDataSet. It is built on top of the SummarizedExperiment class, and it is easy to convert SummarizedExperiment objects into DESeqDataSet objects, which we show below. One of the two main differences is that the assay slot is instead accessed using the counts accessor function, and the DESeqDataSet class enforces that the values in this matrix are non-negative integers.\n\nA second difference is that the DESeqDataSet has an associated design formula. The experimental design is specified at the beginning of the analysis, as it will inform many of the DESeq2 functions how to treat the samples in the analysis (one exception is the size factor estimation, i.e., the adjustment for differing library sizes, which does not depend on the design formula). The design formula tells which columns in the sample information table (colData) specify the experimental design and how these factors should be used in the analysis.\n\nThe simplest design formula for differential expression would be ~ condition, where condition is a column in colData(dds) that specifies which of two (or more) groups the samples belong to. For the airway experiment, we will specify ~ cell + dex meaning that we want to test for the effect of dexamethasone (dex) controlling for the effect of different cell line (cell). We can see each of the columns just using the $ directly on the SummarizedExperiment or DESeqDataSet:\n\n\n\n\n\n\n\n\n\nNote: it is preferred in R that the first level of a factor be the reference level (e.g. 
control, or untreated samples), so we can relevel the dex factor like so:\n\n\n\n\n\nFor running DESeq2 models, you can use R’s formula notation to express any fixed-effects experimental design. Note that DESeq2 uses the same formula notation as, for instance, the lm function of base R. If the research aim is to determine for which genes the effect of treatment is different across groups, then interaction terms can be included and tested using a design such as ~ group + treatment + group:treatment. See the manual page for ?results for more examples. We will show how to use an interaction term to test for condition-specific changes over time in a time course example below.\n\nIn the following sections, we will demonstrate the construction of the DESeqDataSet from two starting points:\n\nfrom a SummarizedExperiment object\n\nfrom a count matrix and a sample information table\n\nFor a full example of using the HTSeq Python package for read counting, please see the pasilla vignette. For an example of generating the DESeqDataSet from files produced by htseq-count, please see the DESeq2 vignette.\n\n\nStarting from SummarizedExperiment\n\nWe now use R’s data command to load a prepared SummarizedExperiment that was generated from the publicly available sequencing data files associated with the Himes et al.1 paper, described above. The steps we used to produce this object were equivalent to those you worked through in the previous sections, except that we used all the reads and all the genes. 
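These steps might be sketched as follows (loading the prepared object, setting the reference level, checking per-sample totals, and constructing the DESeqDataSet with the ~ cell + dex design specified earlier):

```r
## a sketch of loading the full airway object and building the DESeqDataSet
library("airway")
library("DESeq2")

data("airway")
se <- airway

# make "untrt" the reference level of the treatment factor
se$dex <- relevel(se$dex, "untrt")

# millions of fragments uniquely aligned to genes, per sample
round(colSums(assay(se)) / 1e6, 1)

# test for the dex effect while controlling for cell line
dds <- DESeqDataSet(se, design = ~ cell + dex)
```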
For more details on the exact steps used to create this object, type vignette(\"airway\") into your R session.\n\n\n\nAgain, we want to specify that untrt is the reference level for the dex variable:\n\n\n\n\n\nWe can quickly check the millions of fragments that uniquely aligned to the genes (the second argument of round tells how many decimal points to keep).\n\n\n\n\n\nSupposing we have constructed a SummarizedExperiment using one of the methods described in the previous section, we now need to make sure that the object contains all the necessary information about the samples, i.e., a table with metadata on the count matrix’s columns stored in the colData slot:\n\n\n\n\n\nHere we see that this object already contains an informative colData slot – because we have already prepared it for you, as described in the airway vignette. However, when you work with your own data, you will have to add the pertinent sample/phenotypic information for the experiment at this stage. We highly recommend keeping this information in a comma-separated value (CSV) or tab-separated value (TSV) file, which can be exported from an Excel spreadsheet, and then assign this to the colData slot, making sure that the rows correspond to the columns of the SummarizedExperiment. We made sure of this correspondence earlier by specifying the BAM files using a column of the sample table.\n\nOnce we have our fully annotated SummarizedExperiment object, we can construct a DESeqDataSet object from it that will then form the starting point of the analysis. We add an appropriate design for the analysis:\n\n\n\n\n\nIf we only wanted to perform transformations and exploratory data analysis (as explained later in this workflow) we could use a ~ 1 for the design, but we would need to remember to substitute a real design, e.g. 
~ condition, before we run DESeq for differential testing or else we would only be testing the intercept.\n\nIn this section, we will show how to build a DESeqDataSet supposing we only have a count matrix and a table of sample information.\n\nNote: if you have prepared a SummarizedExperiment you should skip this section. While the previous section would be used to construct a DESeqDataSet from a SummarizedExperiment, here we first extract the individual objects (count matrix and sample info) from the SummarizedExperiment in order to build it back up into a new object – only for demonstration purposes. In practice, the count matrix would either be read in from a file or perhaps generated by an R function like featureCounts from the Rsubread package12.\n\nThe information in a SummarizedExperiment object can be accessed with accessor functions. For example, to see the actual data, i.e., here, the fragment counts, we use the assay function. (The head function restricts the output to the first few lines.)\n\n\n\n\n\nIn this count matrix, each row represents an Ensembl gene, each column a sequenced RNA library, and the values give the raw numbers of fragments that were uniquely assigned to the respective gene in each library. We also have information on each of the samples (the columns of the count matrix). 
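The demonstration described here - extracting the pieces and rebuilding the object - might look like this sketch (object names countdata and coldata as used in the text; se is the full airway object from before):

```r
## a sketch of rebuilding a DESeqDataSet from its component pieces
library("DESeq2")

countdata <- assay(se)    # the fragment count matrix
coldata <- colData(se)    # the sample information table
head(countdata, 3)

ddsMat <- DESeqDataSetFromMatrix(countData = countdata,
                                 colData = coldata,
                                 design = ~ cell + dex)
```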
If you’ve counted reads with some other software, it is very important to check that the columns of the count matrix correspond to the rows of the sample information table.\n\n\n\nWe now have all the ingredients to prepare our data object in a form that is suitable for analysis, namely:\n\ncountdata: a table with the fragment counts\n\ncoldata: a table with information about the samples\n\nTo now construct the DESeqDataSet object from the matrix of counts and the sample information table, we use:\n\n\n\n\n\nWe will continue with the object generated from the SummarizedExperiment section.\n\n\nExploratory analysis and visualization\n\nThere are two separate paths in this workflow; the one we will see first involves transformations of the counts in order to visually explore sample relationships. In the second part, we will go back to the original raw counts for statistical testing. This is critical because the statistical testing methods rely on original count data (not scaled or transformed) for calculating the precision of measurements.\n\nOur count matrix with our DESeqDataSet contains many rows with only zeros, and additionally many rows with only a few fragments total. In order to reduce the size of the object, and to increase the speed of our functions, we can remove the rows that have no or nearly no information about the amount of gene expression. Here we remove rows of the DESeqDataSet that have no counts, or only a single count across all samples:\n\n\n\n\n\n\n\n\n\nMany common statistical methods for exploratory analysis of multidimensional data, for example clustering and principal components analysis (PCA), work best for data that generally has the same range of variance at different ranges of the mean values. When the expected amount of variance is approximately the same across different mean values, the data is said to be homoskedastic. For RNA-seq raw counts, however, the variance grows with the mean. 
For example, if one performs PCA directly on a matrix of size-factor-normalized read counts, the result typically depends only on the few most strongly expressed genes because they show the largest absolute differences between samples. A simple and often used strategy to avoid this is to take the logarithm of the normalized count values plus a small pseudocount; however, now the genes with the very lowest counts will tend to dominate the results because, due to the strong Poisson noise inherent to small count values, and the fact that the logarithm amplifies differences for the smallest values, these low count genes will show the strongest relative differences between samples.\n\nAs a solution, DESeq2 offers transformations for count data that stabilize the variance across the mean. One such transformation is the regularized-logarithm transformation or rlog2. For genes with high counts, the rlog transformation will give a similar result to the ordinary log2 transformation of normalized counts. For genes with lower counts, however, the values are shrunken towards the genes’ averages across all samples. Using an empirical Bayesian prior on inter-sample differences in the form of a ridge penalty, the rlog-transformed data then becomes approximately homoskedastic, and can be used directly for computing distances between samples and making PCA plots. Another transformation, the variance stabilizing transformation17, is discussed alongside the rlog in the DESeq2 vignette.\n\nNote: the rlog transformation is provided for applications other than differential testing. 
For differential testing we recommend the DESeq function applied to raw counts, as described later in this workflow, which also takes into account the dependence of the variance of counts on the mean value during the dispersion estimation step.\n\nThe function rlog returns a SummarizedExperiment object that contains the rlog-transformed values in its assay slot.\n\n\n\n\n\nWe specify blind=FALSE, which means that differences between cell lines and treatment should not add to the variance-mean profile of the experiment. However, the experimental design is not used directly in the transformation, only in estimating the global amount of variability in the counts. For a fully unsupervised transformation, one can set blind=TRUE (which is the default).\n\nNote: for large datasets (hundreds of samples), the variance stabilizing transformation will be faster to compute.\n\nTo show the effect of the transformation, in Figure 2 we plot the first sample against the second, first simply using the log2 function (after adding 1, to avoid taking the log of zero), and then using the rlog-transformed values. For the log2 approach, we need to first estimate size factors to account for sequencing depth, and then specify normalized=TRUE. Sequencing depth correction is done automatically for the rlog method (and for varianceStabilizingTransformation).\n\nShown are scatterplots using the log2 transform of normalized counts (left side) and using the rlog (right side).\n\n\n\nWe can see how genes with low counts (bottom left-hand corner) seem to be excessively variable on the ordinary logarithmic scale, while the rlog transform compresses differences for the low count genes for which the data provide little information about differential expression.\n\nA useful first step in an RNA-seq analysis is often to assess overall similarity between samples: Which samples are similar to each other, which are different? 
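The transformation and the comparison plot just described can be sketched as follows (assuming `dds` is the DESeqDataSet built earlier):

```r
# rlog-transform the counts; blind = FALSE uses the design only to
# estimate the global amount of variability, not to remove it
rld <- rlog(dds, blind = FALSE)
head(assay(rld), 3)

# Compare log2(normalized counts + 1) with the rlog values
# for the first two samples, side by side
dds <- estimateSizeFactors(dds)
par(mfrow = c(1, 2))
plot(log2(counts(dds, normalized = TRUE)[, 1:2] + 1),
     pch = 16, cex = 0.3)
plot(assay(rld)[, 1:2],
     pch = 16, cex = 0.3)
```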
Does this fit to the expectation from the experiment’s design?\n\nWe use the R function dist to calculate the Euclidean distance between samples. To ensure we have a roughly equal contribution from all genes, we use it on the rlog-transformed data. We need to transpose the matrix of values using t, because the dist function expects the different samples to be rows of its argument, and different dimensions (here, genes) to be columns.\n\n\n\n\n\nWe visualize the distances in a heatmap in Figure 3, using the function pheatmap from the pheatmap package.\n\n\n\nIn order to plot the sample distance matrix with the rows/columns arranged by the distances in our distance matrix, we manually provide sampleDists to the clustering_distance argument of the pheatmap function. Otherwise the pheatmap function would assume that the matrix contains the data values themselves, and would calculate distances between the rows/columns of the distance matrix, which is not desired. We also manually specify a blue color palette using the colorRampPalette function from the RColorBrewer package.\n\n\n\nNote that we have changed the row names of the distance matrix to contain treatment type and patient number instead of sample ID, so that we have all this information in view when looking at the heatmap.\n\nAnother option for calculating sample distances is to use the Poisson Distance18, implemented in the PoiClaClu package. This measure of dissimilarity between counts also takes the inherent variance structure of counts into consideration when calculating the distances between samples. The PoissonDistance function takes the original count matrix (not normalized) with samples as rows instead of columns, so we need to transpose the counts in dds.\n\n\n\nWe plot the Poisson Distance heatmap in Figure 4.\n\n\n\nAnother way to visualize sample-to-sample distances is a principal components analysis (PCA). 
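The distance calculations and heatmaps described above can be sketched as (assuming `rld` holds the rlog-transformed data and `dds` the raw counts):

```r
library("pheatmap")
library("RColorBrewer")

# Euclidean distances between samples on the rlog scale;
# t() because dist() expects samples as rows
sampleDists <- dist(t(assay(rld)))

# Heatmap of the distance matrix, clustered by the distances themselves
sampleDistMatrix <- as.matrix(sampleDists)
rownames(sampleDistMatrix) <- paste(rld$dex, rld$cell, sep = " - ")
colnames(sampleDistMatrix) <- NULL
colors <- colorRampPalette(rev(brewer.pal(9, "Blues")))(255)
pheatmap(sampleDistMatrix,
         clustering_distance_rows = sampleDists,
         clustering_distance_cols = sampleDists,
         col = colors)

# Poisson Distance works on the raw counts, samples as rows
library("PoiClaClu")
poisd <- PoissonDistance(t(counts(dds)))
```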
In this ordination method, the data points (here, the samples) are projected onto the 2D plane such that they spread out in the two directions that explain most of the differences (Figure 5). The x-axis is the direction that separates the data points the most. The values of the samples in this direction are written PC1. The y-axis is a direction (it must be orthogonal to the first direction) that separates the data the second most. The values of the samples in this direction are written PC2. The percent of the total variance that is contained in the direction is printed in the axis label. Note that these percentages do not add to 100%, because there are more dimensions that contain the remaining variance (although each of these remaining dimensions will explain less than the two that we see).\n\nEach unique combination of treatment and cell line is given its own color.\n\n\n\nHere, we have used the function plotPCA that comes with DESeq2. The two terms specified by intgroup are the interesting groups for labeling the samples; they tell the function to use them to choose colors. We can also build the PCA plot from scratch using the ggplot2 package19. This is done by asking the plotPCA function to return the data used for plotting rather than building the plot. See the ggplot2 documentation for more details on using ggplot.\n\n\n\n\n\n\n\nWe can then use this data to build up a second plot in Figure 6, specifying that the color of the points should reflect dexamethasone treatment and the shape should reflect the cell line.\n\nHere we specify cell line (plotting symbol) and dexamethasone treatment (color).\n\n\n\n\n\nFrom the PCA plot, we see that the differences between cells (the different plotting shapes) are considerable, though not stronger than the differences due to treatment with dexamethasone (red vs blue color). 
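The PCA plots discussed here can be produced along these lines (again assuming `rld` from the rlog step):

```r
# Built-in PCA plot, colored by combinations of treatment and cell line
plotPCA(rld, intgroup = c("dex", "cell"))

# Rebuild the same plot with ggplot2: ask plotPCA for the plotting data,
# then map color to treatment and shape to cell line
pcaData <- plotPCA(rld, intgroup = c("dex", "cell"), returnData = TRUE)
percentVar <- round(100 * attr(pcaData, "percentVar"))

library("ggplot2")
ggplot(pcaData, aes(PC1, PC2, color = dex, shape = cell)) +
  geom_point(size = 3) +
  xlab(paste0("PC1: ", percentVar[1], "% variance")) +
  ylab(paste0("PC2: ", percentVar[2], "% variance"))
```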
This shows why it will be important to account for this in differential testing by using a paired design (“paired”, because each dex treated sample is paired with one untreated sample from the same cell line). We are already set up for this design by assigning the formula ~ cell + dex earlier.\n\nAnother plot, very similar to the PCA plot, can be made using the multidimensional scaling (MDS) function in base R. This is useful when we don’t have a matrix of data, but only a matrix of distances. Here we compute the MDS for the distances calculated from the rlog transformed counts and plot these (Figure 7):\n\nShown are the plots based on the rlog-transformed values (left) and the Poisson Distance (right).\n\n\n\nCreating the same plot for the PoissonDistance (also Figure 7):\n\n\n\n\nDifferential expression analysis\n\nAs we have already specified an experimental design when we created the DESeqDataSet, we can run the differential expression pipeline on the raw counts with a single call to the function DESeq:\n\n\n\nThis function will print out a message for the various steps it performs. These are described in more detail in the manual page for DESeq, which can be accessed by typing ?DESeq. Briefly these are: the estimation of size factors (controlling for differences in the sequencing depth of the samples), the estimation of dispersion values for each gene, and fitting a generalized linear model.\n\nA DESeqDataSet is returned that contains all the fitted parameters within it, and the following section describes how to extract out results tables of interest from this object.\n\nCalling results without any arguments will extract the estimated log2 fold changes and p values for the last variable in the design formula. If there are more than 2 levels for this variable, results will extract the results table for a comparison of the last level over the first level. 
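In code, the pipeline call and the default results extraction described above are simply:

```r
# Run the full DESeq2 pipeline: size factors, dispersion estimates,
# and GLM fitting, all in one call
dds <- DESeq(dds)

# Default results table: log2 fold changes and p values for the last
# design variable (dex), comparing its last level against its first
res <- results(dds)
res
```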
This comparison is printed at the top of the output: dex trt vs untrt.\n\n\n\n\n\nAs res is a DataFrame object, it carries metadata with information on the meaning of the columns:\n\n\n\n\n\nThe first column, baseMean, is just the average of the normalized count values, divided by the size factors, taken over all samples in the DESeqDataSet. The remaining four columns refer to a specific contrast, namely the comparison of the trt level over the untrt level for the factor variable dex. We will find out below how to obtain other contrasts.\n\nThe column log2FoldChange is the effect size estimate. It tells us how much the gene’s expression seems to have changed due to treatment with dexamethasone in comparison to untreated samples. This value is reported on a logarithmic scale to base 2: for example, a log2 fold change of 1.5 means that the gene’s expression is increased by a multiplicative factor of 2^1.5 ≈ 2.82.\n\nOf course, this estimate has an uncertainty associated with it, which is available in the column lfcSE, the standard error estimate for the log2 fold change estimate. We can also express the uncertainty of a particular effect size estimate as the result of a statistical test. The purpose of a test for differential expression is to test whether the data provides sufficient evidence to conclude that this value is really different from zero. DESeq2 performs for each gene a hypothesis test to see whether evidence is sufficient to decide against the null hypothesis that there is zero effect of the treatment on the gene and that the observed difference between treatment and control was merely caused by experimental variability (i.e., the type of variability that you can expect between different samples in the same treatment group). As usual in statistics, the result of this test is reported as a p value, and it is found in the column pvalue. 
Remember that a p value indicates the probability that a fold change as strong as the observed one, or even stronger, would be seen under the situation described by the null hypothesis.\n\nWe can also summarize the results with the following line of code, which reports some additional information that will be covered in later sections.\n\n\n\n\n\nNote that there are many genes with differential expression due to dexamethasone treatment at the FDR level of 10%. This makes sense, as the smooth muscle cells of the airway are known to react to glucocorticoid steroids. However, there are two ways to be more strict about which set of genes are considered significant:\n\nlower the false discovery rate threshold (the threshold on padj in the results table)\n\nraise the log2 fold change threshold from 0 using the lfcThreshold argument of results\n\nIf we lower the false discovery rate threshold, we should also pass this value to results(), so that the function will use an alternative threshold for the optimal independent filtering step:\n\n\n\n\n\nIf we want to raise the log2 fold change threshold, so that we test for genes that show more substantial changes due to treatment, we simply supply a value on the log2 scale. For example, by specifying lfcThreshold=1, we test for genes that show significant effects of treatment on gene counts more than doubling or less than halving, because 2^1 = 2.\n\n\n\n\n\nSometimes a subset of the p values in res will be NA (“not available”). This is DESeq’s way of reporting that all counts for this gene were zero, and hence no test was applied. In addition, p values can be assigned NA if the gene was excluded from analysis because it contained an extreme count outlier. 
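The summary call and the two stricter filtering options described above might be invoked like this (assuming `dds` and `res` come from the DESeq and results calls described earlier):

```r
summary(res)

# Stricter FDR threshold: pass alpha so the independent filtering
# step is optimized for this cutoff
res.05 <- results(dds, alpha = 0.05)
table(res.05$padj < 0.05)

# Test against a log2 fold change threshold of 1
# (more than doubling or less than halving)
resLFC1 <- results(dds, lfcThreshold = 1)
table(resLFC1$padj < 0.1)
```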
For more information, see the outlier detection section of the DESeq2 vignette.\n\nIf you use the results from an R analysis package in published research, you can find the proper citation for the software by typing citation(\"pkgName\"), where you would substitute the name of the package for pkgName. Citing methods papers helps to support and reward the individuals who put time into open source software for genomic data analysis.\n\nIn general, the results for a comparison of any two levels of a variable can be extracted using the contrast argument to results. The user should specify three values: the name of the variable, the name of the level for the numerator, and the name of the level for the denominator. Here we extract results for the log2 of the fold change of one cell line over another:\n\n\n\n\n\nIf results for an interaction term are desired, the name argument of results should be used. Please see the help for the results function for more details.\n\nIn high-throughput biology, we are careful not to use the p values directly as evidence against the null, but to correct for multiple testing. What would happen if we were to simply threshold the p values at a low value, say 0.05? There are 5722 genes with a p value below 0.05 among the 29391 genes for which the test succeeded in reporting a p value:\n\n\n\n\n\n\n\n\n\nNow, assume for a moment that the null hypothesis is true for all genes, i.e., no gene is affected by the treatment with dexamethasone. Then, by the definition of the p value, we expect up to 5% of the genes to have a p value below 0.05. This amounts to 1470 genes. 
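A contrast between two levels, and the p value counts quoted above, can be computed as follows (a sketch; the two cell line level names are assumed from the airway dataset used in this workflow):

```r
# log2 fold change of one cell line over another
results(dds, contrast = c("cell", "N061011", "N61311"))

# How many genes have p < 0.05, and for how many genes
# did the test report a p value at all?
sum(res$pvalue < 0.05, na.rm = TRUE)
sum(!is.na(res$pvalue))
```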
If we just considered the list of genes with a p value below 0.05 as differentially expressed, this list should therefore be expected to contain up to 1470/5722 = 26% false positives.\n\nDESeq2 uses the Benjamini-Hochberg (BH) adjustment20 as implemented in the base R p.adjust function; in brief, this method calculates for each gene an adjusted p value that answers the following question: if one called significant all genes with an adjusted p value less than or equal to this gene’s adjusted p value threshold, what would be the fraction of false positives (the false discovery rate, FDR) among them, in the sense of the calculation outlined above? These values, called the BH-adjusted p values, are given in the column padj of the res object.\n\nThe FDR is a useful statistic for many high-throughput experiments, as we are often interested in reporting or focusing on a set of interesting genes, and we would like to put an upper bound on the percent of false positives in this set.\n\nHence, if we consider a fraction of 10% false positives acceptable, we can consider all genes with an adjusted p value below 10% = 0.1 as significant. How many such genes are there?\n\n\n\n\n\nWe subset the results table to these genes and then sort it by the log2 fold change estimate to get the significant genes with the strongest down-regulation:\n\n\n\n\n\n... and with the strongest up-regulation:\n\n\n\n\n\n\nPlotting results\n\nA quick way to visualize the counts for a particular gene is to use the plotCounts function that takes as arguments the DESeqDataSet, a gene name, and the group over which to plot the counts (Figure 8).\n\n\n\nWe can also make custom plots using the ggplot function from the ggplot2 package (Figure 9).\n\nThe plots are customized using ggplot2 options for jitter (left), dots (middle), or with lines connecting cell line (right). 
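The subsetting, sorting, and count plots described above can be sketched as (assuming `dds` and `res` from the pipeline described earlier):

```r
# Number of genes significant at an FDR of 10%
sum(res$padj < 0.1, na.rm = TRUE)

# Significant genes with the strongest down- and up-regulation
resSig <- subset(res, padj < 0.1)
head(resSig[order(resSig$log2FoldChange), ])
head(resSig[order(resSig$log2FoldChange, decreasing = TRUE), ])

# Counts for the gene with the smallest adjusted p value
topGene <- rownames(res)[which.min(res$padj)]
plotCounts(dds, gene = topGene, intgroup = c("dex"))

# A customized version with ggplot2, grouping points by cell line
library("ggplot2")
geneCounts <- plotCounts(dds, gene = topGene,
                         intgroup = c("dex", "cell"), returnData = TRUE)
ggplot(geneCounts, aes(x = dex, y = count, color = cell, group = cell)) +
  scale_y_log10() + geom_point(size = 3) + geom_line()
```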
Note that the DESeq2 test that was used takes into account the cell line effect, so the rightmost figure more closely depicts the difference being tested.\n\n\n\n\n\n\n\nAn MA-plot21 provides a useful overview for an experiment with a two-group comparison (Figure 10).\n\nThe log2 fold change for a particular comparison is plotted on the y-axis and the average of the counts normalized by size factor is shown on the x-axis (“M” for minus, because a log ratio is equal to log minus log, and “A” for average). Each gene is represented with a dot. Genes with an adjusted p value below a threshold (here 0.1, the default) are shown in red.\n\n\n\nThe DESeq2 package uses statistical techniques to moderate log2 fold changes from genes with very low counts and highly variable counts, as can be seen by the narrowing of the vertical spread of points on the left side of the MA-plot. For a detailed explanation of the rationale of moderated fold changes, please see the DESeq2 paper2. This plot demonstrates that only genes with a large average normalized count contain sufficient information to yield a significant call.\n\nWe can also make an MA-plot for the results table in which we raised the log2 fold change threshold (Figure 11). We can label individual points on the MA-plot as well. Here we use the with R function to plot a circle and text for a selected row of the results object. Within the with function, only the baseMean and log2FoldChange values for the selected rows of res are used.\n\nThe red points indicate genes for which the log2 fold change was significantly higher than 1 or less than -1 (treatment resulting in more than doubling or less than halving of the normalized counts) with adjusted p value less than 0.1. The point circled in blue indicates the gene with the lowest adjusted p value.\n\n\n\nAnother useful diagnostic plot is the histogram of the p values (Figure 12). 
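The MA-plot, the labeling of an individual gene, and the p value histogram can be drawn along these lines (assuming `res` holds the results table):

```r
# MA-plot: log2 fold change vs mean of normalized counts
plotMA(res, ylim = c(-5, 5))

# Circle and label the gene with the lowest adjusted p value
topGene <- rownames(res)[which.min(res$padj)]
with(res[topGene, ], {
  points(baseMean, log2FoldChange, col = "dodgerblue", cex = 2, lwd = 2)
  text(baseMean, log2FoldChange, topGene, pos = 2, col = "dodgerblue")
})

# Histogram of p values, excluding genes with very small counts
hist(res$pvalue[res$baseMean > 1], breaks = 0:20/20,
     col = "grey50", border = "white")
```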
This plot is best formed by excluding genes with very small counts, which otherwise generate spikes in the histogram.\n\n\n\nIn the sample distance heatmap made previously, the dendrogram at the side shows us a hierarchical clustering of the samples. Such a clustering can also be performed for the genes. Since the clustering is only relevant for genes that actually carry a signal, one usually would only cluster a subset of the most highly variable genes. Here, for demonstration, let us select the 20 genes with the highest variance across samples. We will work with the rlog transformed counts:\n\n\n\nThe heatmap becomes more interesting if we do not look at absolute expression strength but rather at the amount by which each gene deviates in a specific sample from the gene’s average across all samples. Hence, we center each gene’s values across samples, and plot a heatmap (Figure 13). We provide a data.frame that instructs the pheatmap function how to label the columns.\n\nTreatment status and cell line information are shown with colored bars at the top of the heatmap. Note that a set of genes at the top of the heatmap are separating the N061011 cell line from the others. In the center of the heatmap, we see a set of genes for which the dexamethasone treated samples have higher gene expression.\n\n\n\nThe MA plot highlights an important property of RNA-seq data. For weakly expressed genes, we have no chance of seeing differential expression, because the low read counts suffer from such high Poisson noise that any biological effect is drowned in the uncertainties from the sampling at a low rate. We can also show this by examining the ratio of small p values (say, less than 0.05) for genes binned by mean normalized count. 
We will use the results table subjected to the log2 fold change threshold to show what this looks like in a case when there are few tests with small p value.\n\nIn the following code chunk, we create bins using the quantile function, bin the genes by base mean using cut, rename the levels of the bins using the middle point, calculate the ratio of p values less than 0.05 for each bin, and finally plot these ratios (Figure 14).\n\nHere the p values are for a test of log2 fold change greater than 1 or less than -1. This plot demonstrates that genes with low mean count are underpowered, and best excluded before multiple test correction.\n\n\n\nAt first sight, there may seem to be little benefit in filtering out these genes. After all, the test found them to be non-significant anyway. However, these genes have an influence on the multiple testing adjustment, whose performance improves if such genes are removed. By removing the low count genes from the input to the FDR procedure, we can find more genes to be significant among those that we keep, and so improve the power of our test. This approach is known as independent filtering.\n\nThe DESeq2 software automatically performs independent filtering that maximizes the number of genes with adjusted p value less than a critical value (by default, alpha is set to 0.1). This automatic independent filtering is performed by, and can be controlled by, the results function.\n\nThe term independent highlights an important caveat. Such filtering is permissible only if the statistic that we filter on (here the mean of normalized counts across all samples) is independent of the actual test statistic (the p value) under the null hypothesis. Otherwise, the filtering would invalidate the test and consequently the assumptions of the BH procedure. 
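One way to compute the binned ratios just described (a sketch; here `res` stands for whichever results table is being examined):

```r
# Bin genes by base mean using the quantiles of the non-zero values
qs <- c(0, quantile(res$baseMean[res$baseMean > 0], 0:6/6))
bins <- cut(res$baseMean, qs)

# Rename the levels of the bins using the middle point of each bin
levels(bins) <- paste0("~", round(signif((qs[-1] + qs[-length(qs)]) / 2, 2)))

# Ratio of p values below 0.05 within each bin
fractionSig <- tapply(res$pvalue, bins,
                      function(p) mean(p < 0.05, na.rm = TRUE))

# Plot the ratios against the bin midpoints
barplot(fractionSig, xlab = "mean normalized count",
        ylab = "ratio of small p values")
```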
The independent filtering software used inside DESeq2 comes from the genefilter package, that contains a reference to a paper describing the statistical foundation for independent filtering22.\n\n\nAnnotating and exporting results\n\nOur result table so far only contains information about Ensembl gene IDs, but alternative gene names may be more informative for collaborators. Bioconductor’s annotation packages help with mapping various ID schemes to each other. We load the AnnotationDbi package and the annotation package org.Hs.eg.db:\n\n\n\nThis is the organism annotation package (“org”) for Homo sapiens (“Hs”), organized as an AnnotationDbi database package (“db”), using Entrez Gene IDs (“eg”) as primary key. To get a list of all available key types, use:\n\n\n\n\n\nWe can use the mapIds function to add individual columns to our results table. We provide the row names of our results table as a key, and specify that keytype=ENSEMBL. The column argument tells the mapIds function which information we want, and the multiVals argument tells the function what to do if there are multiple possible values for a single input value. Here we ask to just give us back the first one that occurs in the database. To add the gene symbol and Entrez ID, we call mapIds twice.\n\n\n\nNow the results have the desired external gene IDs:\n\n\n\n\n\nYou can easily save the results table in a CSV file, that you can then share or load with a spreadsheet program such as Excel. The call to as.data.frame is necessary to convert the DataFrame object (IRanges package) to a data.frame object that can be processed by write.csv. Here, we take just the top 100 genes for demonstration.\n\n\n\nAnother more sophisticated package for exporting results from various Bioconductor analysis packages is the ReportingTools package23. 
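The annotation and export steps described above can be sketched as (assuming `res` has Ensembl gene IDs as row names):

```r
library("AnnotationDbi")
library("org.Hs.eg.db")

# List the available key types
columns(org.Hs.eg.db)

# Add gene symbol and Entrez ID, keyed on the Ensembl IDs;
# multiVals = "first" returns the first match when several exist
res$symbol <- mapIds(org.Hs.eg.db, keys = row.names(res),
                     column = "SYMBOL", keytype = "ENSEMBL",
                     multiVals = "first")
res$entrez <- mapIds(org.Hs.eg.db, keys = row.names(res),
                     column = "ENTREZID", keytype = "ENSEMBL",
                     multiVals = "first")

# Export the top 100 genes by adjusted p value to a CSV file
resOrdered <- res[order(res$padj), ]
resOrderedDF <- as.data.frame(resOrdered)[1:100, ]
write.csv(resOrderedDF, file = "results.csv")
```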
ReportingTools will automatically generate dynamic HTML documents, including links to external databases using gene identifiers and boxplots summarizing the normalized counts across groups. See the ReportingTools vignettes for full details. The simplest version of creating a dynamic ReportingTools report is performed with the following code:\n\n\n\nIf we have used the summarizeOverlaps function to count the reads, then our DESeqDataSet object is built on top of ready-to-use Bioconductor objects specifying the genomic ranges of the genes. We can therefore easily plot our differential expression results in genomic space. While the results function by default returns a DataFrame, using the format argument, we can ask for GRanges or GRangesList output.\n\n\n\n\n\nWe need to add the symbol again for labeling the genes on the plot:\n\n\n\nWe will use the Gviz package for plotting the GRanges and associated metadata: the log fold changes due to dexamethasone treatment.\n\n\n\nThe following code chunk specifies a window of 1 million base pairs upstream and downstream from the gene with the smallest p value. We create a subset of our full results, for genes within the window. We add the gene symbol as a name, if the symbol exists and is not duplicated in our subset.\n\n\n\nWe create a vector specifying if the genes in this subset had a low false discovery rate.\n\n\n\nWe can then plot the results using Gviz functions (Figure 15). We create an axis track specifying our location in the genome, a track that will show the genes and their names, colored by significance, and a data track that will draw vertical bars showing the moderated log fold change produced by DESeq2, which we know are only large when the effect is well supported by the information in the counts.\n\nGenes highlighted in pink have adjusted p value less than 0.1.\n\n\n\nSuppose we did not know that there were different cell lines involved in the experiment, only that there was treatment with dexamethasone. 
The cell line effect on the counts then would represent some hidden and unwanted variation that might be affecting many or all of the genes in the dataset. We can use statistical methods designed for RNA-seq from the sva package24 to detect such groupings of the samples, and then we can add these to the DESeqDataSet design, in order to account for them. The SVA package uses the term surrogate variables for the estimated variables that we want to account for in our analysis. Another package for detecting hidden batches is the RUVSeq package25, with the acronym “Remove Unwanted Variation”.\n\n\n\nBelow we obtain a matrix of normalized counts for which the average count across samples is larger than 1. As we described above, we are trying to recover any hidden batch effects, supposing that we do not know the cell line information. So we use a full model matrix with the dex variable, and a reduced, or null, model matrix with only an intercept term. Finally we specify that we want to estimate 2 surrogate variables. For more information read the manual page for the svaseq function by typing ?svaseq.\n\n\n\n\n\n\n\n\n\nBecause we actually do know the cell lines, we can see how well the SVA method did at recovering these variables (Figure 16).\n\nHere, we know the hidden source of variation (cell line), and therefore can see how the SVA procedure is able to identify sources of variation which are correlated with cell line.\n\n\n\nFinally, in order to use SVA to remove any effect on the counts from our surrogate variables, we simply add these two surrogate variables as columns to the DESeqDataSet and then add them to the design:\n\n\n\nWe could then produce results controlling for surrogate variables by running DESeq with the new design:\n\n\n\n\nTime course experiments\n\nDESeq2 can be used to analyze time course experiments, for example to find those genes that react in a condition-specific manner over time, compared to a set of baseline samples. 
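The surrogate variable estimation and the updated design described above might look like this sketch:

```r
library("sva")

# Normalized counts for reasonably expressed genes
dat <- counts(dds, normalized = TRUE)
dat <- dat[rowMeans(dat) > 1, ]

# Full model with the known condition; null model with intercept only
mod  <- model.matrix(~ dex, colData(dds))
mod0 <- model.matrix(~ 1, colData(dds))

# Estimate two surrogate variables
svseq <- svaseq(dat, mod, mod0, n.sv = 2)

# Add them as columns and include them in the design
ddssva <- dds
ddssva$SV1 <- svseq$sv[, 1]
ddssva$SV2 <- svseq$sv[, 2]
design(ddssva) <- ~ SV1 + SV2 + dex
# ddssva <- DESeq(ddssva) would then re-run the pipeline with this design
```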
Here we demonstrate a basic time course analysis with the fission data package, that contains gene counts for an RNA-seq time course of fission yeast26. The yeast were exposed to oxidative stress, and half of the samples contain a deletion of the gene atf21. We use a design formula that models the strain difference at time 0, the difference over time, and any strain-specific differences over time (the interaction term strain:minute).\n\n\n\nThe following chunk of code performs a likelihood ratio test, where we remove the strain-specific differences over time. Genes with small p values from this test are those which at one or more time points after time 0 showed a strain-specific effect. Note therefore that this will not give small p values to genes that moved up or down over time in the same way in both strains.\n\n\n\n\n\nThis is just one of the tests that can be applied to time series data. Another option would be to model the counts as a smooth function of time, and to include an interaction term of the condition with the smooth function. It is possible to build such a model using spline basis functions within R.\n\nWe can plot the counts for the groups over time using ggplot2, for the gene with the smallest adjusted p value, testing for condition-dependent time profile and accounting for differences at time 0 (Figure 17). Keep in mind that the interaction terms are the difference between the two groups at a given time after accounting for the difference at time 0.\n\n\n\nWald tests for the log2 fold changes at individual time points can be investigated using the test argument to results:\n\n\n\n\n\n\n\n\n\nWe can furthermore cluster significant genes by their profiles. 
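The time-course setup and likelihood ratio test described above can be sketched as:

```r
library("fission")
data("fission")

# Model the strain difference, the time effect, and strain-specific
# differences over time (the interaction term strain:minute)
ddsTC <- DESeqDataSet(fission, ~ strain + minute + strain:minute)

# Likelihood ratio test: the reduced model drops the interaction, so
# small p values flag genes with a strain-specific effect over time
ddsTC <- DESeq(ddsTC, test = "LRT", reduced = ~ strain + minute)
resTC <- results(ddsTC)
head(resTC[order(resTC$padj), ], 4)
```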
We extract a matrix of the shrunken log2 fold changes using the coef function:\n\n\n\n\n\nWe can now plot the log2 fold changes in a heatmap (Figure 18).\n\nThe bottom set of genes show strong induction of expression for the baseline samples in minutes 15–60 (red boxes in the bottom left corner), but then have slight differences for the mutant strain (shown in the boxes in the bottom right corner).\n\n\n\n\n\n\nSession information\n\nAs the last part of this document, we call the function sessionInfo, which reports the version numbers of R and all the packages used in this session. It is good practice to always keep such a record of this as it will help to track down what has happened in case an R script ceases to work or gives different results because the functions have been changed in a newer version of one of your packages. By including it at the bottom of a script, your reports will become more reproducible.\n\nThe session information should also always be included in any emails to the Bioconductor support site along with all code used in the analysis.\n\n\n\n",
"appendix": "Author contributions\n\n\n\nMIL, SA and WH wrote the workflow. VK assisted in reproducible delivery of the workflow materials.\n\n\nCompeting interests\n\n\n\nThe authors declare that they have no competing interests.\n\n\nGrant information\n\nMIL is supported by NIH grant 5T32CA009337-35. WH and SA acknowledge funding from the European Union’s 7th Framework Programme (Health) via Project Radiant.\n\nI confirm that the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nThe authors thank all users of DESeq and DESeq2 who provided valuable feedback.\n\n\nReferences\n\nHimes BE, Jiang X, Wagner P, et al.: RNA-Seq transcriptome profiling identifies CRISPLD2 as a glucocorticoid responsive gene that modulates cytokine function in airway smooth muscle cells. PLoS One. 2014; 9(6): e99625. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLove MI, Huber W, Anders S: Moderated estimation of fold change and dispersion for RNA-seq data with DESeq2. Genome Biol. 2014; 15(12): 550. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRobinson MD, McCarthy DJ, Smyth GK: edgeR: a Bioconductor package for differential expression analysis of digital gene expression data. Bioinformatics. 2010; 26(1): 139–140. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLaw CW, Chen Y, Shi W, et al.: voom: Precision weights unlock linear model analysis tools for RNA-seq read counts. Genome Biol. 2014; 15(2): R29. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWu H, Wang C, Wu Z: A new shrinkage estimator for dispersion improves differential expression detection in RNA-seq data. Biostatistics. 2013; 14(2): 232–243. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLeng N, Dawson JA, Thomson JA, et al.: EBSeq: an empirical Bayes hierarchical model for inference in RNA-seq experiments. Bioinformatics. 2013; 29(8): 1035–1043. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nHardcastle TJ, Kelly KA: baySeq: empirical Bayesian methods for identifying differential expression in sequence count data. BMC Bioinformatics. 2010; 11(1): 422. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFlicek P, Amode MR, Barrell D, et al.: Ensembl 2014. Nucleic Acids Res. 2014; 42(Database issue): D749–D755. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDobin A, Davis CA, Schlesinger F, et al.: STAR: ultrafast universal RNA-seq aligner. Bioinformatics. 2013; 29(1): 15–21. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLi H, Handsaker B, Wysoker A, et al.: The Sequence Alignment/Map format and SAMtools. Bioinformatics. 2009; 25(16): 2078–2079. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLawrence M, Huber W, Pagès H, et al.: Software for computing and annotating genomic ranges. PLoS Comput Biol. 2013; 9(8): e1003118. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLiao Y, Smyth GK, Shi W: featureCounts: an efficient general purpose program for assigning sequence reads to genomic features. Bioinformatics. 2014; 30(7): 923–930. PubMed Abstract | Publisher Full Text\n\nAnders S, Pyl PT, Huber W: HTSeq--a Python framework to work with high-throughput sequencing data. Bioinformatics. 2015; 31(2): 166–169. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKent WJ, Sugnet CW, Furey TS, et al.: The human genome browser at UCSC. Genome Res. 2002; 12(6): 996–1006. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDurinck S, Spellman PT, Birney E, et al.: Mapping identifiers for the integration of genomic datasets with the R/Bioconductor package biomaRt. Nat Protoc. 2009; 4(8): 1184–1191. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHuber W, Carey VJ, Gentleman R, et al.: Orchestrating high-throughput genomic analysis with Bioconductor. Nat Methods. 2015; 12(2): 115–121. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nAnders S, Huber W: Differential expression analysis for sequence count data. Genome Biol. 2010; 11(10): R106. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWitten DM: Classification and clustering of sequencing data using a Poisson model. Ann Appl Stat. 2011; 5(4): 2493–2518. Publisher Full Text\n\nWickham H: ggplot2. Springer, New York, NY, 2009. Publisher Full Text\n\nBenjamini Y, Hochberg Y: Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing. J Roy Stat Soc B Met. 1995; 57(1): 289–300. Reference Source\n\nDudoit S, Yang YH, Callow MJ, et al.: Statistical methods for identifying differentially expressed genes in replicated cDNA microarray experiments. In Statistica Sinica. 2002; 12: 111–139. Reference Source\n\nBourgon R, Gentleman R, Huber W: Independent filtering increases detection power for high-throughput experiments. Proc Natl Acad Sci U S A. 2010; 107(21): 9546–9551. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHuntley MA, Larson JL, Chaivorapol C, et al.: ReportingTools: an automated result processing and presentation toolkit for high-throughput genomic analyses. Bioinformatics. 2013; 29(24): 3220–3221. PubMed Abstract | Publisher Full Text\n\nLeek JT: svaseq: removing batch effects and other unwanted noise from sequencing data. Nucleic Acids Res. 2014; 42(21): e161. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRisso D, Ngai J, Speed TP, et al.: Normalization of RNA-seq data using factor analysis of control genes or samples. Nat Biotechnol. 2014; 32(9): 896–902. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLeong HS, Dawson K, Wirth C, et al.: A global non-coding RNA system modulates fission yeast protein levels in response to stress. Nat Commun. 2014; 5: 3947. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "10805",
"date": "27 Oct 2015",
"name": "Davide Risso",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nLove and colleagues describe a typical RNA-seq gene-level differential expression workflow, using their popular DESeq2 Bioconductor package, as well as other core and contributed Bioconductor packages. The paper is a valuable resource for researchers that are new to RNA-seq differential expression (DE) statistical analysis and/or want to learn how to carry out such analysis within R/Bioconductor.The advantage of the presented workflow over ad-hoc pipelines is that (almost) the whole analysis (from aligned reads to DE results) is carried out within the open source Bioconductor project, facilitating transparency and reproducibility. I was pleased to be able to completely reproduce all the results and figures of the paper on my machine (although I was able to download the .Rmd file only from the Bioconductor and not from the F1000Research article page -- see next paragraph).I have only a few points that the authors should address.Most of the critical statistical steps of the DE analysis are \"hidden\" inside a single call to the \"DESeq\" wrapper function. I can see why the authors created and make use of this function in the workflow, as a typical practitioner will not need to worry about the internal computations. However, the authors should consider adding a section illustrating the steps carried out by the DESeq function (normalization, dispersion estimation, model fitting), as this will have a great educational value. 
Right below Figure 6 (mid page 18 in the pdf) the authors state that the model should account for \"differences between cells [...] by using a paired design. [...] We are already set up for this design by assigning the formula ~ cell + dex earlier.\" The term \"paired design\" usually refers to a design where the same units are tested before and after a treatment, and the effect of the treatment is usually tested by modeling the differences between the measurements before and after. The specified formula describes a factorial design, where the effects of treatment and cell line are included as main effects in the model. Please make sure that the right terminology is used. I found the section on independent filtering a bit confusing. Reading the fourth paragraph of the section (first paragraph of page 28 in the pdf), I understand that DESeq2 will decide how many genes to filter out by maximizing the number of genes with a low adjusted p-value. However, the next paragraph states that this type of filtering is permissible only if independent of the test statistics. How can the filtering be independent if the threshold is chosen by maximizing the number of significant genes? It would be a nice addition to the manuscript to have a concluding paragraph describing how to download and reproduce the workflow. Minor issues: The first time the authors mention Figure 1 they are referring to Figure 1 of a different paper, hence this should not link to the authors' Figure 1. Fourth paragraph of \"The DESeqDataSet, sample information, and the design formula\": the authors write colData(dds) although dds is not yet defined. The third paragraph of page 12 (of the pdf) is a bit confusing. Perhaps it could be rephrased in terms of unsupervised problems. It's not clear why somebody with a supervised problem would want to omit the design at the EDA stage. Figure 4: the authors say that one should use unnormalized counts to compute the Poisson distances. 
Wouldn't this affect the heatmap if there are large sequencing depth differences between the samples? Or is the function internally adjusting for sequencing depth? The first paragraph of the \"PCA plot\" section should be rewritten more clearly. The \"reduced\" argument of the DESeq function, used in the \"Time course experiments\" section, deserves a brief explanation for those readers who are not familiar with likelihood ratio tests.",
"responses": [
{
"c_id": "2289",
"date": "17 Nov 2016",
"name": "Michael Love",
"role": "Author Response",
"response": "We thank the reviewers for the effort they spent on our manuscript and for their insightful comments, which have guided us in the production of the revised, second version of the manuscript. We apologize for the delay in incorporating changes. 1) We thank both reviewers for this advice and have added more explanations below the call to the DESeq() function in Section Running the differential expression pipeline. 2) We have replaced the term \"paired design\" with \"a design that incorporates the differences between cells\". 3) This question touches upon the same issues as that by Dr. Wu. Here we refer to our combined reply to both questions above. 4) We followed this suggestion and added such an explanation to the Introduction. Minor: 1) Thank you for pointing this out. We have removed this link. 2) We have removed the \"(dds)\" argument. 3) We have removed this confusing text, and use the regular design from the beginning. 4) We have added a parenthesized comment \"(as it normalizes counts internally)\" to clarify this. 5) We have rewritten the paragraph. 6) We have added an explanation."
}
]
},
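In their response the authors agreed to spell out the steps hidden inside the DESeq() call (normalization, dispersion estimation, model fitting). As a rough illustration of just the first of those steps, the median-of-ratios size-factor estimation introduced by Anders & Huber (2010) can be sketched in plain Python. This is a didactic sketch, not DESeq2's implementation; the nested-list input layout is an assumption of the example.

```python
import math

def size_factors(counts):
    """Median-of-ratios size factors (a sketch of DESeq's normalization step).

    counts: list of per-gene count lists (rows = genes, columns = samples).
    Genes with a zero count in any sample are skipped, as in the original
    method, since their geometric mean is undefined on the log scale.
    """
    n_samples = len(counts[0])
    # Log geometric mean of each gene across samples: the "pseudo-reference".
    log_geo_means = []
    for row in counts:
        if all(c > 0 for c in row):
            log_geo_means.append(sum(math.log(c) for c in row) / n_samples)
        else:
            log_geo_means.append(None)
    factors = []
    for j in range(n_samples):
        # Log-ratio of each usable gene in sample j to the pseudo-reference.
        ratios = sorted(
            math.log(counts[i][j]) - log_geo_means[i]
            for i in range(len(counts)) if log_geo_means[i] is not None
        )
        m = len(ratios)
        median = ratios[m // 2] if m % 2 else (ratios[m // 2 - 1] + ratios[m // 2]) / 2
        # The median ratio is robust to the minority of truly DE genes.
        factors.append(math.exp(median))
    return factors
```

With size factors in hand, normalized counts are simply counts[i][j] / factors[j]; in the actual workflow DESeq2 performs this step (and the dispersion and GLM-fitting steps) internally.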
{
"id": "10806",
"date": "01 Dec 2015",
"name": "Zhijin Wu",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nI find the workflow a very useful document, especially in teaching someone with limited R/Bioconductor experience, starting from the count table. In addition to DE analysis, the workflow included exploratory analysis and some diagnostics -- again, very handy as teaching material.A few comments:I like the example using a subset of reads in \"locating alignment files\" so alignment can be done instantly in a toy example. It would be more apparent, that in addition to having very small BAM files, if it is explicitly explained that a small GTF file is created for this example. The subsection before, on \"Aligning reads to a reference genome\", is harder to reproduce. Unlike the examples below, there is no toy fastq file here. It would be nice to either have some toy examples here as well, or to provide the links to some actual fastq files. The link to the GEO entry is provided, which does not directly point to fastq files. To make the workflow even easier to follow and reproduce, maybe a list of required packages can be provided at the very beginning. I agree with Dr. Risso that at least the main steps in the DE analysis should be mentioned, so that the user understands the major components that affect the analysis. 
The main workflow cannot include too many examples, but possible alternatives in normalization (for example, different choices in adjusting for sequencing depth, and one may consider adjusting for more than just depth) and in dispersion estimation can simply be mentioned here so that users can check these out by themselves. In the airway example, there is one important detail that is worth having a checkpoint. Here, \"Because we used a column of sampleTable to produce the bamfiles vector, we know the columns of se are in the same order as the rows of sampleTable.\" The authors emphasized this later with the recommendation of keeping the column (sample) information in a separate file, and \"making sure that the rows correspond to the columns of the SummarizedExperiment\". I would recommend a checkpoint here (for example, by comparing rownames(colData(se)) and rownames(sampleTable)), so that the users will always confirm the correct order. If this is messed up, all following analysis is a waste. Regarding DE and multiple testing: It is worth mentioning that the \"p-value\" returned is a nominal p-value. The Wald test p-values are not necessarily valid p-values in all experiments. Though \"by the definition of the p value, we expect up to 5% of the genes to have a p value below 0.05\", this only applies to valid p-values.\n\nAnd if FDR is computed by adjusting nominal p-values, the nominal FDR may not be the actual FDR either. In diagnostics, Fig 14 is a useful tool. We find that the joint distribution of the p-values and baseMean provides similar information -- there is no power for genes with very low baseMean. However, one would not want to guide filtering after the analysis has already been done. Though the authors explicitly wrote that the filtering should be independent, placing the example here may leave a reader with the impression that filtering can be done post hoc. 
The fact that genes with low counts have low power can be established without analyzing the actual data, and we recommend making the filtering decision truly independently, for example, using simulation in the design stage1. Minor issues: I wonder what the \"avgLength\" is in the sampleTable. SRR1039513 has avgLength 87 but if I read in the sam file I see that every read has length 63 just like the others. The format of the R code is not consistent, such that some of these can be copied correctly, and some not (mostly involving quotation marks). For example in the second lines of R code, dir <- system.file(\"extdata\", package=“airway\", mustWork=TRUE), the quotation around \"airway\".",
"responses": [
{
"c_id": "2288",
"date": "17 Nov 2016",
"name": "Michael Love",
"role": "Author Response",
"response": "We thank the reviewers for the effort they spent on our manuscript and for their insightful comments, which have guided us in the production of the revised, second version of the manuscript. We apologize for the delay in incorporating changes. 1) We added an explanation to Section Locating alignment files: \"We also include a small subset of a GTF (gene transfer format) annotation file that has information about the genes contained in this small region. We will discuss this file in more depth below.\" 2) In the introduction, we added a reference to the Session information section. 3) We thank both reviewers for this advice and have added more explanations below the call to the DESeq() function in Section Running the differential expression pipeline. 4) We followed this advice and added a checkpoint to the workflow. 5) We appreciate the difference between the formal definition of a p-value as a statistic whose null distribution fulfills the stated requirement, and the nominal p-value returned by a test procedure. We have added a sentence touching on this topic, as well as links to a number of third-party benchmarking efforts which have determined that DESeq2's p-values are well calibrated for typical RNA-seq experiments. 6) We thank both reviewers, Dr. Wu and Dr. Risso, for raising this question. Below our reply to both of them. The Independent Filtering procedure can in fact be done post hoc as long as the data-dependent covariate used for filtering is independent of the test statistic (the p value) under the null hypothesis. The independence does not require the a filter cutoff be chosen beforehand or chosen on a separate dataset. A proof of the principle of Independent Filtering is presented in the reference we cite in the article, Bourgon 2010. 
DESeq2’s implementation of Independent Filtering also contains a procedure to ensure that it does not overfit noise in the p value distribution, by using smoothing to reduce the functional complexity of cutoff point optimization. Additionally, the current release of DESeq2 allows for a generalization of Independent Filtering, which is called Independent Hypothesis Weighting (Ignatiadis 2016). We have included a description and link to this newer, and potentially more powerful approach to multiple test correction in the revised version. As with Independent Filtering, hypothesis weighting with weights derived from a data-dependent covariate requires that this covariate is independent of the test statistic (the p value) under the null hypothesis. Otherwise, the weighting would affect the null distribution of the test statistic and most likely invalidate it. Here, our covariate is the baseMean, and DESeq2 fulfills the requirement that its p values (under the null) are independent of the baseMean. Minor: 1) These metadata were read in directly from the experimental metadata provided by the SRA. The collation of the metadata is described in the vignette for the airway Bioconductor data package. The values in this column are not used for the analysis, and here we simply report them \"as is\" from SRA. 2) This seems to be a limitation of the system that the journal is currently using to render the HTML or PDF on their website. We recommend that, for the purpose of running code in R, readers use the code in the Rmarkdown source file that is provided on the Bioconductor site: http://www.bioconductor.org/help/workflows/rnaseqGene"
}
]
}
] | 1
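The exchange above about nominal p-values, FDR and independent filtering all revolves around the Benjamini-Hochberg adjustment that DESeq2 applies after (optionally) filtering out low-baseMean genes. A generic sketch of that adjustment in Python (a textbook implementation, not DESeq2's code; as Dr. Wu notes, it controls the FDR only when the input p-values are valid):

```python
def bh_adjust(pvalues):
    """Benjamini-Hochberg adjusted p-values.

    For the k-th smallest p-value p_(k), the adjusted value is
    min over ranks j >= k of (m * p_(j) / j), capped at 1, computed
    here as a cumulative minimum walking from the largest rank down.
    """
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted = [0.0] * m
    running_min = 1.0
    # Walk from the largest p-value to the smallest, taking cumulative minima.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, pvalues[i] * m / rank)
        adjusted[i] = running_min
    return adjusted
```

Independent filtering, as described in the authors' reply, simply restricts which p-values enter this adjustment: dropping low-baseMean genes shrinks m and so lowers the adjusted values of the remaining genes, which is legitimate precisely because baseMean is independent of the p-value under the null.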
|
https://f1000research.com/articles/4-1070
|
https://f1000research.com/articles/5-2510/v1
|
12 Oct 16
|
{
"type": "Review",
"title": "Making sense of the cause of Crohn’s – a new look at an old disease",
"authors": [
"Anthony W. Segal"
],
"abstract": "The cause of Crohn’s disease (CD) has posed a conundrum for at least a century. A large body of work coupled with recent technological advances in genome research have at last started to provide some of the answers. Initially this review seeks to explain and to differentiate between bowel inflammation in the primary immunodeficiencies that generally lead to very early onset diffuse bowel inflammation in humans and in animal models, and the real syndrome of CD. In the latter, a trigger, almost certainly enteric infection by one of a multitude of organisms, allows the faeces access to the tissues, at which stage the response of individuals predisposed to CD is abnormal. Direct investigation of patients’ inflammatory response together with genome-wide association studies (GWAS) and DNA sequencing indicate that in CD the failure of acute inflammation and the clearance of bacteria from the tissues, and from within cells, is defective. The retained faecal products result in the characteristic chronic granulomatous inflammation and adaptive immune response. In this review I will examine the contemporary evidence that has led to this understanding, and look for explanations for the recent dramatic increase in the incidence of this disease.",
"keywords": [
"Crohn’s",
"Inflammatory Bowel Disease",
"GWAS",
"Immunology",
"Infection",
"Bacteria",
"Gastroenteritis",
"Gene"
],
"content": "General introduction\n\nThe enigma that is the cause of Crohn’s disease (CD) has puzzled clinicians and scientists from time immemorial. It is generally accepted that CD results from an aberrant immune response to commensal microflora in genetically susceptible individuals1, however, the nature of the immune defects, the responsible microflora and the genetic susceptibility remain incompletely defined and actively debated. With advances in genomic technologies our understanding of this puzzling condition is evolving, and answers forthcoming. The purpose of this article is to undertake a holistic review of the aetiopathogenesis of CD in which historical concepts are integrated with recent discoveries.\n\nThe distal ileum and colon contain >1011 bacteria per gram of faecal material2, which pose an immediate threat to life if they penetrate into the underlying tissues. The bowel microflora are isolated by a thin film of mucus and a single layer of columnar epithelial cells with a surface area of approximately 32m2,3. The requirement for the absorption of fluids and nutrients by the bowel mucosa means that the bowel lining cannot simply be a tough impermeable barrier, and as a consequence provision must be made to defend the vulnerable mucosal epithelial cell layer against its contents. Mucus secreted by goblet cells forms a continuous, weak, viscoelastic gel, lining, 5–500 μm thick4. In addition to acting as a physical barrier and lubricant, the mucus is the site of action of a variety of antimicrobial mechanisms including secretory IgA, antimicrobial enzymes and peptides5 and H2O2 generated by the DUOX electron transport chain6. Despite these barriers, the separation of the tissues from the gut microbiome is not absolute, and even in health the mucosa is constantly penetrated by relatively small numbers of enteric organisms and soluble microbial products that gain access into the tissues7–10. 
Scattered amongst the epithelial cells overlying lymph follicles are Microfold (M) cells11,12, a unique intestinal epithelial cell (IEC) subset that is highly specialized for the phagocytosis and transcytosis of gut lumen macromolecules, particulate antigens and pathogenic or commensal microorganisms, which they transfer across the epithelium to mucosal macrophages and dendritic cells. This slow, constant transit is important for the development, priming and maintenance of a potent immune system in the submucosa13–15. The protective role of the bowel immune system must be combined with tolerance to ingested antigens and commensal organisms to maintain homeostasis in a healthy bowel.\n\nThe bowel is the interface between a dense population of microbes and the immune system. Although an in-depth review of the immune system in the bowel is well beyond the scope of this review, it is important to briefly cover this subject because defects in innate immunity are central to the development of CD, whereas aberrant adaptive immunity causes bowel inflammation of a very different type, and underlies a range of largely inaccurate animal models of CD.\n\nAdaptive immunity. Most of what is known of classical adaptive immunology relates to the immune system of the bowel, but there are in addition some specialised features unique to the intestinal mucosa16–18.\n\nThe mucosae and exocrine glands harbour the largest activated B-cell system of the body, amounting to some 80–90% of all immunoglobulin (Ig)-producing cells in humans19. The major product of these lymphocytes is polymeric (p)IgA (mainly dimers) with associated J chain. 
Both pIgA and pentameric IgM contain a binding site for the polymeric Ig receptor (pIgR), or secretory component (SC), which is a requirement for their active external transport through secretory epithelia into the overlying mucus19.\n\nM cells and intestinal dendritic cells that phagocytose bacteria interact with B and T cells in the Peyer’s patches, inducing B cells to produce IgA directed against intestinal bacteria20. IgA+ B cells home to the intestinal lamina propria and secrete IgA that is transcytosed across the epithelium and deposited on the apical surface. The transcytosed IgAs bind to luminal bacteria, preventing microbial translocation across the epithelial barrier21,22.\n\nAfter the initiation of the immune response by antigen processing and presentation to B and T cells in Peyer's patches, primed lymphocytes leave the mucosa via the thoracic duct. Finally, they migrate back to the mucosa where they exert effector functions.\n\nThere has been considerable recent interest in IL-23 and IL-17 in relation to the aetiology of CD. IL-23 is secreted by macrophages and dendritic cells and transforms naïve T cells into Th (T-helper) 17 cells which then produce IL-17, IL-21 and IL-2223. IL-17 induces numerous cell types including T-cells, mast cells, macrophages, neutrophils, keratinocytes, and natural killer cells to produce a raft of pro-inflammatory mediators including IL-1β, IL-6, IL-8, IL-11, Gro-α, G-CSF, GM-CSF, IL-4, IL-5, IL-13, IgE, and eotaxin24. An important outcome of this cytokine cascade appears to be the recruitment of neutrophils to inflammatory sites25. Despite the apparent importance of IL-17 for intestinal barrier function26 and for diverse pro-inflammatory activities, there must be considerable redundancy in the pro-inflammatory repertoire as defects in the IL-17 pathway are associated with a very narrow predisposition to disease in the form of mucocutaneous candidiasis27. 
None of the hundreds of patients with this condition had CD28.\n\nIL-17 has been considered to be detrimental in CD as a consequence of its apparent pro-inflammatory actions. It is therefore of interest that a trial of the treatment of CD with monoclonal antibodies against IL-17 had to be stopped because of the deterioration of the patients’ condition29.\n\nAdaptive immunity in the bowel protects against commensal organisms, or those previously encountered in infections that were successfully overcome. This is accomplished by the production of a barrier of secreted IgA that permeates the lining mucus layer and by the production of specific IgG and IgM that opsonise penetrating organisms for phagocytosis. Immunity to pathogenic bacteria like Salmonella, Shigella, Vibrio cholerae and Escherichia coli is generally not very potent or long-lasting, which, together with the propensity of bacteria to mutate, makes vaccines relatively ineffective30,31. A degree of immunity does develop as a result of repeated reinfection in endemic areas but, because this is not permanent, it is gradually lost after emigration to cleaner environments, which might be an important factor in relation to the subsequent triggering of CD by infection in individuals moving from regions of low to high prevalence of this condition.\n\nInnate immunity. The submucosa of the bowel is particularly vulnerable to microbial invasion if the mucosal barrier is breached, as large numbers of organisms can achieve rapid access and the conditions are conducive to microbial proliferation. There is inadequate time for adaptive immunity to take effect and reliance must be placed on the innate system to contain and eliminate potentially harmful stimuli. At its heart this means the rapid and florid release of pro-inflammatory cytokines from lamina propria macrophages32, recruited from blood monocytes33, mast cells34,35, eosinophils and innate lymphoid cells36–38 when activated by bowel contents. 
Paneth cells are specialised intraepithelial secretory cells that reside in small clusters at the base of the crypts of Lieberkühn in the small intestine. Large secretory granules in these cells contain a wide variety of proteins, the most abundant of which are antimicrobials such as the alpha defensins that are discharged into the crypt lumen. These effector molecules also diffuse from the crypt and disseminate into the mucous layer that overlies the mucosal epithelium, where they contribute to the mucosal antimicrobial barrier39.\n\nPro-inflammatory cytokines induce changes in the microvasculature40,41 leading to the extravasation of plasma proteins and to the recruitment of neutrophils42. A critical concentration of neutrophils is required to eliminate invading bacteria43, and immediately after bacterial penetration of the mucosa there is direct competition between bacterial replication and neutrophil recruitment and bacterial phagocytosis and killing. In the absence of specific antibodies, uptake of the foreign material is enhanced by non-specific opsonins like pentraxins, collectins and complement44. The neutrophils then undergo apoptosis or necrosis and the purulent collection is most probably discharged into the bowel lumen, with the residual debris being phagocytosed and cleared by macrophages45.\n\nBowel homeostasis requires an intact mucosal barrier, itself requiring the integrated function of many different cell types and molecules, and the largest collection of immunological cells in the body to present an integrated defence against the intestinal microbiome. 
It is therefore not surprising that defects in genes coding for proteins required for the integrity of this barrier, or for normal immune surveillance, manifest as mucosal inflammation.\n\nAs might be expected, these conditions present very early in life, and because they affect the mucosa as a whole, they result in a diffuse, non-specific inflammation, predominantly in the large bowel where concentrations of bacteria are highest. Uhlig et al. (Figure 1)46 found that about 5% of their cases of IBD had infantile or very early onset disease. This will represent a much higher proportion of cases than that occurring in the general population, because most cases of IBD occur in adults and are handled in non-specialist facilities, whereas rare inherited diseases gravitate to specialist centres like those of Uhlig and his co-authors. The monogenic lesions identified provide important identifiers of the molecules required for bowel integrity and adaptive immunity.\n\nMultiple genetic defects are summarized in the group of atypical Severe Combined Immunodeficiency (SCID), Hoyeraal–Hreidarsson syndrome, Chronic Granulomatous Disease (CGD), and Hermansky–Pudlak syndrome. By comparison, an unselected IBD population is presented (Oxford IBD cohort study; paediatric and adult referral-based IBD cohort, n = 1605 patients comprising CD, Ulcerative Colitis (UC), and IBD unclassified [IBDU]). Symbols represent individual patients. Bars represent the age range of case series if individual data were not available. Reproduced from 46 with permission from the publisher.\n\nPenetration of luminal contents into underlying tissues occurs in stage 1, which may be facilitated by environmental factors such as infection, or inherent defects in the mucosal barrier. In healthy individuals, resident macrophages secrete pro-inflammatory cytokines in response to this material, resulting in neutrophil accumulation, clearance of the material, and thereby resolution. 
In CD patients, defective secretion of pro-inflammatory cytokines by macrophages results in impaired neutrophil influx and clearance of foreign material (stage 2). Subsequently, chronic inflammatory responses (stage 3) will be triggered, giving rise to the characteristic features of the CD lesion. From 86 Figure 1 (reproduced with permission from the publisher).\n\nReproduced from 93 (with permission).\n\nMutations in the barrier function genes such as COL7A1, FERMT1, TTC7A and ADAM17 generally result in infantile bowel dysfunction and inflammation.\n\nThe severe immunodeficiency syndromes such as atypical Severe Combined Immunodeficiency (SCID) and Immunodysregulation Polyendocrinopathy Enteropathy X-linked Syndrome (IPEX) also generally have a very early onset and also do not have intestinal changes characteristic of either CD or UC. In contradistinction, the neutrophil defects, exemplified by Chronic Granulomatous Disease (CGD), and including Hermansky-Pudlak, congenital neutropenia and leukocyte adhesion deficiency all have a CD phenotype clinically, endoscopically and histopathologically, with a lot of perianal disease and granulomata evident on biopsy47. The neutrophil defects generally present later than the abnormalities of mucosal barrier function, or the severe immunological diseases.\n\nMutations in the genes coding for IL-10 and IL-10 receptor both present very early in life, as seen in mucosal abnormalities and immunodeficiencies, and exhibit the bowel phenotype characteristic of defective neutrophil function. These observations would appear to be at variance with the prevailing view that IL-10 down-regulates macrophage function, and that the bowel inflammation in its absence is a manifestation of uncontrolled macrophage activation48. IL-10 does appear to be required for normal intestinal development. 
Multisystem abnormalities were observed in the original description of the IL-10 knock-out mouse49 in which there was a general enterocolitis with greatest abnormalities in the duodenum and jejunum, not locations associated with a high burden of commensal bacteria. Under specific-pathogen free (SPF) conditions the bowel lesions persisted, but were limited to the proximal colon. In addition, the mice exhibited a severe growth defect, and were severely anaemic with a paucity of erythroid precursors in the bone marrow that was filled with myeloid precursors. The anaemia was unrelated to the extent of bowel involvement. These phenotypic features indicate that IL-10 is required for the normal growth and development of the bowel mucosa and haemopoietic tissue, in addition to its suppressant effect on macrophages50.\n\nThe observed phenotype of patients with IL-10 and IL-10 receptor deficiency is in keeping with that of defective neutrophil function48,51. Almost all the patients showed evidence of bacterial infections in the form of folliculitis, and ear and respiratory tract infections. Most revealing was the almost universal occurrence of perianal disease with abscesses, fissures and fistulae that are highly characteristic of the neutrophil deficiency diseases like CGD52,53, Hermansky-Pudlak54 and glycogen storage disease 1b55. If the IL-10 deficient phenotype results from the impairment of normal cellular development and an immunodeficiency rather than an excessive, unregulated, macrophage response as proposed, then it might be expected that treatment with immunosuppressive therapy would be ineffective, which was indeed found to be the case51.\n\nIt is also noteworthy that bowel inflammation is not a feature of the classical autoinflammatory diseases in which deregulated macrophage activation is prominent. 
These diseases include cryopyrin-associated periodic syndrome (CAPS)56, in which activating mutations in the NLRP3 gene result in increased secretion of IL-1β and other pro-inflammatory cytokines, and the haemophagocytic syndrome,57 in which the uncontrolled activation of antigen-presenting cells (macrophages and histiocytes) and T cells produces an exaggerated inflammatory response and cytokine storm.\n\nMouse models of IBD have been extensively reviewed in the literature58–60. These models are very important because they are depended upon by clinicians and scientists trying to understand the causes of these diseases, and by the pharmaceutical industry attempting to produce drugs with which to treat them.\n\nInduced inflammation. In general, impairment of the innate immune system as in mouse models of CGD or Wiskott-Aldrich does not result in the spontaneous development of bowel inflammation, although these mice do exhibit an exaggerated response to insult61.\n\nPredisposition to bowel inflammation may be exposed by reducing intestinal barrier function, thereby allowing access of the contained microbiome to the underlying tissues. Barrier function can be compromised through the genetic manipulation of proteins required for the production of mucus or the maintenance of epithelial integrity62, or by the use of chemicals or infectious agents.\n\nChemical agents employed for this purpose fall into three main groups. The first comprises agents that produce direct damage to the mucosa, such as dextran sodium sulphate (DSS63), acetic acid and carrageenan64. The second comprises haptens (e.g. 2,4-dinitrochlorobenzene (DNCB65) or dinitrobenzene sulphonic acid (DNBS)) that induce an immune response. Finally, infection with bacteria such as Salmonella, E. coli or Citrobacter, or parasites59, may be utilised.\n\nGenetic models. The advent of gene targeting technology has provided immunologists with powerful tools with which to explore the immune system. 
In the course of investigating its diverse components, hundreds of different genes have been knocked out, some of which resulted in the spontaneous development of bowel inflammation. Because of this, these mice have been proposed as models of IBD.

Prominent examples of such mice include the IL-266, T cell receptor (TCR)α/β67, and IL-1049 knockout models. With the exception of IL-10-deficient mice, which possess some features of human CD, the majority of these models have diffuse colonic inflammation. A strain of mouse (TnfΔARE mice) was developed in which elements of the tumour necrosis factor (TNF) gene that are required to restrict the overproduction of this cytokine have been removed. In their absence the mice exhibit sustained overproduction of TNF, which results in a diffuse arthritis and terminal ileal and caecal inflammation68. Deficiency of IL-17 or IL-22 exacerbates induced colitis69.

Spontaneous models. The C3H/HeJBir model of colitis was discovered by chance when mice in breeding colonies developed loose bowel actions70. The pathology is characterised by spontaneous and chronic focal inflammation localised to the right colon and caecal region, although not involving the small intestine. The colitis occurs in young mice and tends to resolve with age, without recurrence. The genetic mechanisms underlying these abnormalities remain to be identified.

SAMP1/Yit mice were developed from senescence-accelerated mice71. They spontaneously develop ileitis and gastritis, even under germ-free conditions. The underlying aetiology is unknown, but there is some evidence that the primary defect lies in the epithelial cell barrier and that B cells appear to play a role in the pathogenesis of inflammation at both sites72.

Adoptive transfer.
One of the most commonly cited models for the study of the role of T lymphocytes in bowel inflammation in mice (as a proposed model of CD) is the adoptive transfer model, in which T cells are transfused into SCID mice73, resulting in bowel inflammation. By observing the effects of varying the populations of cells infused, conclusions have been drawn as to the regulatory interactions of the various cell populations. It is important to understand that these host SCID mice have hardly any B or T lymphocytes and, as their name suggests, are hypogammaglobulinaemic and severely immunocompromised. In the CD45RBhigh transfer model, first described by Morrissey74, a subset of lymph node CD4+ T cells expressing high levels of the marker CD45RB (CD45RBhi) was injected into SCID mice. The mice developed a wasting disease accompanied by massive hyperplasia of the intestinal mucosa with a dense infiltration of lymphocytes, thought to be due to “an augmented, unregulated reaction towards higher levels of luminal-derived bacteria or bacterial products”. These changes were not seen when the animals were infused with unfractionated CD4+ or CD45RBlo cells, indicating that the extreme reaction to bacterial products75 by the CD45RBhi cells could be controlled by the CD45RBlo cells. Soon after, similar experiments were conducted by Powrie and colleagues76,77, who observed the same pathological changes in the bowel, which they equated to those found in “inflammatory bowel disease” in humans. They showed that these changes could be prevented by antibodies to interferon-gamma and by recombinant IL-1077, and identified the cells in the CD45RBlo population responsible for controlling CD45RBhi-induced inflammation as the population of suppressor T cells called T-reg cells78.

These observations and their extrapolation to human IBD sparked a large body of work into the role of regulatory T cells in the pathogenesis of IBD.
Over the past decade, multiple groups have failed to find abnormalities in these cells in the intestines or blood of patients with IBD79, which is not altogether surprising given the extreme artificiality of the animal model from which their presumed role in human disease was derived.

These mouse inflammation models are undoubtedly of great value in dissecting out immunological mechanisms and attributing roles to specific cellular populations and their associated cytokines. However, equating genetic mutations leading to bowel inflammation in mice with causal mechanisms of diseases in humans can have serious consequences, as it may misdirect clinicians and scientists as to the underlying pathophysiology, and mislead pharmaceutical companies as to the relevant biological pathways against which to attempt to develop drugs. On the other hand, mouse models can be of great value when the problem is turned the other way around and they, and other animals like zebrafish, are used to validate the causality of molecular lesions found in association with disease in humans, for example those involving IL-1060 and ADAM1780.


Classification of IBD

“Medicine is learned by the bedside and not in the classroom. Let not your conceptions of disease come from words heard in the lecture room or read from the book. See, and then reason and compare and control. But see first.”

Sir William Osler

The term IBD has been applied to both CD and UC because both largely affect the colon and terminal ileum. However, although there may be overlap at the interface of these two conditions, their classical manifestations are quite different81,82.
Both are syndromes, rather than specific diseases, in which common clinical pictures, produced by similar pathophysiological mechanisms, are united by a common set of diagnostic criteria.

CD83 usually involves the terminal ileum, and the caecum and colon to a variable extent. The lesions are patchy, known as “skip lesions”, and are associated with strictures and with fistulae between the bowel and other loops of bowel, the skin, and pelvic organs like the bladder and vagina. Outside the bowel, at the sites of transmural inflammation, the mesenteric adipocytes hypertrophy, covering the exterior of the bowel with a layer of protective fat, a process known as fat wrapping. Anal disease affects about 40% of these patients84, exemplified by abscesses, fistulation and skin tags. The inflammation is described as transmural, extending deep into the wall of the bowel, and contains diagnostic granulomata, collections of macrophages that represent a characteristic tissue response to retained foreign material. “The basic etiological factor in the case of all granulomas is probably the presence of a nidus of insoluble material which, if small enough is ingested by phagocytic cells, or, if too large, remains extracellular”85. The central macrophages in these granulomata are surrounded by lymphocytes.

UC is very different in that it starts at the rectum and extends proximally, although occasionally, when it involves the whole large bowel, there can also be involvement of the terminal ileum, a condition known as “backwash ileitis”.
The inflammation in UC is superficial, being limited to the lamina propria, and the histological hallmarks are crypt abscesses and depletion of the goblet cells that normally contain mucus.

Although CD is a syndrome, the diagnostic features of its classical form are quite precise, and very different from the rare cases of very-early-onset IBD and from the vast majority of genetically abnormal mice, both of which are characterised by bowel inflammation rather than by the clinical criteria used to diagnose CD or UC.


The three phases of Crohn’s disease

A unifying model of CD pathogenesis has been proposed in which this condition develops in three temporally distinct phases86:

• The trigger - gastrointestinal infection;

• A defective response to the consequences of this infection;

• A subsequent prolonged chronic inflammatory adaptive immune response.

Epidemiology. There is strong evidence for the role of an infectious environmental factor in the pathogenesis of CD. This is most obviously seen when populations or families emigrate from one country to another. A high proportion of family members have been documented as developing the disease after moving from Morocco to Belgium87, from Albania to Greece88 and from India to Canada89. After being imported into the household, enteric infections can spread to family members90. At a population level, an increased incidence of CD has been described in recent immigrants from Ethiopia to Israel91, and from Eastern Europe and Iraq to Sweden92.

The epidemiology of CD has been the subject of a large body of work and multiple reviews. Most pertinent to this paper are issues concerning environmental influences, several of which are clearly associated with CD, as outlined below.

Temporal trends. There has been a steep rise in the incidence of CD over the last few decades in economically advanced countries across Europe, North America and Australasia93–95.
This is not purely an effect of increasing economic affluence, because the incidence of CD is much lower in other economically advanced countries such as Japan and South Korea, although it is now also rising in these countries96,97.

CD is generally more common in urban females of higher socioeconomic status95, with a female to male ratio of about 1.5–2:1.

The difference in prevalence of CD by country could be partly explained by genetic factors; however, evidence from migration studies emphasises the importance of the environment. A limited number of studies investigating the incidence of CD among recent immigrants have been undertaken. The most informative of these assessed the risk of IBD in first- and second-generation immigrants to Sweden from many different countries92. They found that the overall risk of CD was lower in many groups of first-generation immigrants than in the native-born Swedish reference group, but that in most groups of second-generation immigrants these decreased risks disappeared, and in some cases even exceeded those in the native Swedish population. First-generation Middle Eastern immigrants to Australia developed CD at a much later age (∼57 years) than the second generation, who developed it at about 28 years of age, roughly the standard age in Western society98. CD is very rare in Ethiopia but emerged in Ethiopian Jews migrating to Israel after a median lag of about 12 years after arrival91. CD is also more common in Bangladeshi immigrants to England99. Combined, these studies imply that immigrants from underdeveloped countries initially have a resistance to CD that wanes over the subsequent decade or so.

Infection. Infection has long been considered to cause CD. Attempts were made to transmit a CD agent from gut or lymph node tissue of patients to wild-type or immunodeficient mice100.
More granulomata were found in the mice receiving CD tissue, but that could have been because the inflamed tissue contained enteric organisms or inflammatory cytokines. In the first description, by Dalziel in 1913, of what was later to be called Crohn’s disease, the similarity between “chronic interstitial enteritis” and Johne's disease in cattle101, which is caused by infection with Mycobacterium avium paratuberculosis, was commented upon102. Evidence that this agent was also responsible for human CD has been extensively sought103 but has not been forthcoming104,105.

Several prospective studies have followed the course of patients after infections with enteric organisms, and all have found an increased incidence of IBD as compared with uninfected control subjects106–110. In one of these109 the risk was similar whether or not an infecting agent was identified, suggesting that it was the damage to the bowel, rather than a specific infection, that was important.

Enteric infections are most commonly caused by viruses, particularly norovirus111, and by Campylobacter, Salmonellae, Shigella, Entamoeba histolytica, cytomegalovirus and Yersinia112,113. Particular attention has been paid to an adherent-invasive subgroup of E. coli that has been linked to the development of CD114–116. The natural lesions produced by these organisms might provide some insight into those most likely to trigger CD. Norovirus mainly affects the proximal small intestine111,117, and amoebic and Salmonella infections generally produce a diffuse colitis, whereas the other infections result in lesions located in the terminal ileum and colon, with a patchy distribution similar to that of the lesions of CD118–125.

Whereas the incidence of most bacterial gastrointestinal infections is steady or falling, that caused by the commonest bacterial pathogen, Campylobacter, is increasing in North America, Europe, Scandinavia, Australia, New Zealand and Japan126 (Figure 4).
This could be because this organism is a common contaminant of poultry, the consumption of which is increasing in these countries; however, broiler flocks are heavily contaminated with both Campylobacter and Salmonellae127, and the incidence of infection by the latter is steady or falling (Figure 4).

EPI-NEWS 12, 2013 from Statens Serum Institut, Denmark (reproduced with permission).

“Relapses” in cases of IBD have been reported to be associated with infections with various organisms, including Clostridium difficile, Shigella, Salmonella, Campylobacter, E. coli and Listeria128. These subsequent infections, caused by different organisms, might be inducing the development of a novel set of Crohn’s lesions in a predisposed bowel, rather than a recrudescence of the original disease. Susceptibility to such infections could be increased by the immunosuppressive treatments commonly used in this condition, including corticosteroids and cytotoxic and biological agents.

Most gastrointestinal infections do not generally produce homogeneous mucosal damage but lead to focal areas of ulceration124,125, often in the ileocaecal region of the bowel. Because infection with invasive gastrointestinal pathogens is a stochastic process129, the age at which it occurs is highly variable, as is the outcome after the infection, which will depend upon the severity of the infection, the extent of ulceration, the quantity of bowel contents gaining access to the tissues and the effectiveness of the innate immune response.

The microbiome, prebiotics, probiotics and faecal transplants. In the search for possible causal infectious agents, stool samples from CD patients have been extensively cultured and examined without a positive result (see, for example,130).
This is not entirely surprising, because the average time from the onset of symptoms to diagnosis of CD is over six months131, by which time an infectious organism will have been eliminated if it was a triggering agent rather than the cause of a chronic infection. With the advent of next-generation 16S rRNA gene sequencing, the phylogeny and taxonomy of samples from complex microbiomes can be determined without the need for the organisms to be viable or culturable. Dysbiosis of the faecal microbiome is well recognised in CD132,133, with a decrease in the abundance and diversity of the Firmicutes phylum, an increased abundance of Proteobacteria, and alterations in the fungal composition134. Differences have also been found between the microbiotas of CD patients with ileal and with colonic disease135. This could reflect an epiphenomenon secondary to the disease process. Major alterations in the microbiota are induced by diarrhoea136, enteral nutrition137, antibiotics138, which most of these patients receive139, and iron therapy140, which is often prescribed because these patients are generally anaemic. In general, gut and mouth microbiomes display universal dynamics, unlike the microbial communities associated with certain skin sites, which are probably shaped by differences in host environment141.

Because CD predominantly occurs in those regions of the bowel with a high bacterial count, and given the differences in the microbiotas in CD described above, attempts have been made to alter the intestinal microbiota in the treatment of this condition. Prebiotics are typically non-digestible, fibre-rich materials that stimulate the growth or activity of advantageous bacteria colonizing the large bowel, whereas probiotics are live microorganisms that are administered directly by mouth. Neither prebiotics nor probiotics have been shown to be beneficial in CD142–144.
An alternative means of directly altering the intestinal microbiota is faecal microbiota transplantation, the transfer of faeces from a healthy donor to restore the intestinal microbiota of a diseased individual. Whilst this is a logical treatment for Clostridium difficile infection, which generally develops in a colon depleted of its natural microbiome by antibiotics, it has not been found to be effective in the treatment of CD145,146.

The Hygiene Hypothesis. The considerable increase in the incidence of CD in developed countries in recent decades147 has been attributed to immunological changes resulting from alterations in the environment, as outlined in the Hygiene Hypothesis148. This hypothesis149 was first described by Strachan in 1989, who stated that “over the past century declining family size, improvements in household amenities, and higher standards of personal cleanliness have reduced the opportunity for cross infection in young families. This may have resulted in more widespread clinical expression of atopic disease”150. Subsequently, modern living conditions have been held responsible for the increasing incidence of a variety of so-called “auto-immune” diseases, including CD, which have been attributed to exposure to a reduced load of microbes of decreased diversity. Certainly, CD is less common in rural societies, where there is exposure to animals, pets and soil, bedroom sharing is more common, and there is less access to hot water for ablutions98.

According to this theory, standards of hygiene are lower in lower socioeconomic societies, leading to a greater abundance and variety of gastrointestinal pathogens. This would lead to a high incidence of gastrointestinal infections in infancy and childhood, resulting in death151 or immunity152. CD is very uncommon in underdeveloped societies in Asia153, South America154, China155 and sub-Saharan Africa156, and the increase in its incidence is closely associated with improvements in income and living standards.
Enteric infections are endemic in these developing societies, in which diarrhoea is a major cause of death in children less than 5 years of age157–159. The populations of underprivileged societies also host a large burden of gastrointestinal helminths160, and the low incidence of CD recorded in developing countries has been attributed to the high rates of gastrointestinal infection with these organisms161. Helminthic infection was, however, found not to be protective against CD in Denmark162, and although the outcomes of several trials of iatrogenic infection with helminths as therapy for CD are awaited, current evidence does not suggest that they will be efficacious163.

Immunity to enteric organisms is transient164–166 and may be strain-specific167, and would be boosted by frequent reinfection in less advanced countries. This herd immunity would be lost over time after immigration to socially advanced, cleaner societies, which would accord with the later age of onset of CD in first-generation immigrants. One could envisage a situation in which the populations of more socially advanced countries are living under increasingly hygienic conditions and are exposed to a less diverse repertoire of the enteric microorganisms capable of producing gastrointestinal infection. With less frequent gastrointestinal infection, the bowel is uninflamed, with fewer primed macrophages, mast cells and dendritic cells in the lamina propria, and adaptive immunity is more restricted, and relatively feeble, through the lack of repeated boosts by infection, making the bowel vulnerable to attack by a novel or virulent organism.

If we postulate that the trigger for CD is enteric infection, how can the fact that the incidence of food-borne gastroenteritis is fairly steady in most developed countries168 be reconciled with the rapidly increasing incidence of CD?
Due to greater regulation and control of food production and distribution, the incidence of foodborne outbreaks of disease has remained steady or has declined169–171.

It is important to consider the age distribution at which patients present with Crohn’s disease. The incidence rises to a peak at between 20 and 30 years of age, after which it demonstrates a steady decline, a pattern that is remarkably consistent, and very different from that of UC, across the geographical spectrum97,172–174.

The peak incidence, generally at a later age than puberty, coincides with a stage in life accompanied by major lifestyle changes. These include the movement of individuals out of the family home, in which the ambient microbiome is likely to be relatively stable, into environments in which the risks of exposure to infection are much greater. The two main ways in which young adults are exposed to infectious enteric organisms are through the ingestion of contaminated food or fluids, or by person-to-person contact, the risk of both being increased by travel to places where exposure to novel organisms is more likely.

Although enteric infections are generally considered to be foodborne, only about one half are in fact transmitted in this way176, most of the rest being transferred by person-to-person contact. Sexual transmission is worthy of consideration as a means of transmission of faecal organisms between individuals because, as might be expected, the peak age for the acquisition of sexually transmitted diseases is very similar to that of CD (Figure 5).

(A) The age distributions of Crohn's disease and ulcerative colitis in several European countries (reproduced from172 with permission from the publisher). (B) Global age-sex distribution of new HIV infections175 (reproduced with permission).

Redrawn from the three National Surveys of Sexual Attitudes and Lifestyles189.
Each line connects values for the same birth cohort at different ages.

Epidemiological studies from developed countries have reported an increasing prevalence of invasive infections by Entamoeba histolytica177, Shigella178, Cryptosporidia179 and Campylobacter180 among men who have sex with men (MSM), which is not surprising given the increased risk of exposure to coliform organisms by oral, anal and oro-anal sexual practices181. The ingestion of as few as 10 virulent Shigella organisms can cause full-blown dysentery182. It is easier to establish the causality of infectious outbreaks in these groups of individuals than in the general population, because they fall into more readily identifiable groupings, which facilitates epidemiological study. It would be important to establish the incidence of CD in MSM, but diagnosis in these individuals is complicated by the relatively small proportion attending gastroenterology facilities and by the presence of confounding factors such as “gay bowel”183 and sexually transmitted diseases like lymphogranuloma venereum184 that can masquerade as CD.

Given that oral and oro-anal sexual practices have been demonstrated to be responsible for the transmission of enteric infections in MSM, they must also pose a risk in other populations185,186. Although rectal bacterial flora are present on the perineum of both sexes187 and in the vagina188, it is unlikely that an increase in gastrointestinal infection would result from vaginal intercourse alone. Only about 5% of the sexually active individuals in countries like Britain189 and the United States190 are not heterosexual. In the heterosexual community, anal sex is practised by 30–40% of the population in England (Figure 6) and North America, and fellatio and cunnilingus are almost universal189,191. In England, participation in anal sex has almost doubled over the last three decades, an increase similar to that in the incidence of CD.
In terms of absolute numbers, approximately seven times more women than homosexual men engage in unprotected receptive anal intercourse192. In addition, the ratio of homosexual to bisexual men is about 3:1, and the latter can act as “bridgers”, transmitting infections from men who have sex with men into the heterosexual community193.

Those countries with a high standard of living and a low rate of CD, like Japan96, Taiwan, China194, Korea97, Saudi Arabia195 and Malaysia81, appear to have low rates of heterosexual anal sex (http://www.data360.org/pdf/20070416064139.Global Sex Survey.pdf)196, and in these countries the sex ratio of the disease, which is commoner in females than in males in Western countries95, is reversed, implying that men are particularly vulnerable to infection in these places. This does not appear to be due to a reporting bias, because the sex ratio of UC in these countries matches that of Europe and North America.

Monogamous heterosexual couples develop complementary microbiomes197,198, which would suggest that the highest risk of infection of either partner through sexual contact would be in the early stages of a relationship, and that the risk to an individual would be related to the number of sexual partners, some of whom might be asymptomatic carriers of pathogenic organisms199–201.

Clearly the above arguments are conjectural and will require validation through sound sociological, epidemiological and microbiological investigations.

Four other factors, namely smoking, antibiotics, appendectomy and invasive pneumococcal disease, have positive correlations with the incidence of CD.

Smoking of tobacco is the strongest environmental influence on CD, roughly doubling the incidence202 and relapse rate203. Smoking and nicotine impair intestinal204,205 and gastric206 mucosal blood flow. Adequate blood flow is central to the development of an effective acute inflammatory response.
Smoking also reduces levels of acute inflammatory cytokines in the bowel wall207 and lumen208 in patients with CD.

There is an increased frequency of antibiotic use in CD prior to diagnosis95,209, which may be explained by an increased number of childhood bacterial infections. Similarly, an increased frequency of tonsillectomy209 has been reported in CD, and this may be an indication of recurrent pharyngitis. A further indication of a predisposition to infection in CD comes from the demonstration that these patients are more susceptible to invasive pneumococcal infection210.

A history of a greater frequency of appendectomy in CD is also in keeping with an increased susceptibility to childhood infection in this condition95.

“Any infectious agent associated with Crohn’s disease is likely to be a widely distributed organism to which some people react abnormally - that is, the disease is unlikely to show the characteristic features of an infectious disease211.”

As described above, there is very good reason to believe that the initiating lesion in CD is infection by one of a number of enteric pathogens. The key to comprehending how the pathological lesions of the disease then develop lies in understanding the response to that initial infection. Infection by the organisms described above is very unlikely to persist, or the causal connection would have been clearly established some time ago.
This is also the reason that antibiotics are of only limited efficacy in the treatment of CD212.

These patients are unlikely to be unduly susceptible to infection by these organisms, or else the onset would occur earlier and systemic disease would be expected, as occurs with Salmonellae when the interferon-gamma/IL-12 axis is disrupted213.


Immunoparesis of the acute inflammatory response is the underlying Crohn’s phenotype

The underlying pathology in Crohn’s disease is the ineffective manner in which the faecal material entering the tissues through the damaged mucosa is dealt with. Infective damage to the mucosa, followed by the entry into the tissues of faecal material with a count of greater than 10¹¹ bacteria per ml, poses an existential threat that must be dealt with vigorously. This is accomplished by the acute inflammatory response, a non-specific local reaction to tissue damage that recruits the innate immune system. It includes the secretion of inflammatory mediators from mast cells and macrophages, complement activation, markedly increased blood flow, capillary dilatation and permeability, the deposition of a fibrin network and, most importantly in the context of CD, a massive influx of neutrophil leukocytes, highly motile phagocytes that ingest and kill invading bacteria and fungi and digest foreign organic material.

The underlying, and unifying, predisposition to the development of CD is a systemic incompetence of this acute inflammatory response.
I will deal with the evidence supporting this immunoparesis in some detail, because these experiments were performed on CD patients, healthy control subjects and, in some cases, patients with UC, and represent a unique set of data that have not been repeated, possibly because of the invasive and uncomfortable investigations required to obtain them.

The delay in the recruitment of neutrophils by the innate immune response to sites of trauma has been demonstrated in patients with CD in several different but complementary ways. In 1976 I demonstrated that the accumulation of neutrophils in superficial abrasions on the arm, called “skin windows”, was grossly deficient when compared with healthy subjects or patients with another chronic inflammatory condition, rheumatoid arthritis214. It was observed that “This abnormality of neutrophil function in Crohn's disease appears to be secondary to a defective acute inflammatory response as the neutrophils themselves were found to behave normally on in-vitro testing. A weak acute inflammatory response to particulate or antigenic material in the bowel wall could result in the chronic inflammation observed in this condition.”

The next in this series of experiments was conducted on the ileal and rectal mucosa, and again on the skin215. A small mucosal biopsy was taken from the ileum or rectum, followed 6 hours later by a further biopsy of the previous biopsy site, to determine the extent of the inflammatory response induced by the initial biopsy trauma. Once again there was a major delay in the recruitment of neutrophils in CD, and this was observed in both regions of the bowel. In addition to healthy subjects, control individuals with UC were studied, and their neutrophil recruitment was normal. Trauma to the skin reproduced the impaired neutrophil recruitment into skin windows, as well as reduced secretion of IL-8 and IL-1β from them.

The direct injection of heat-killed E.
coli into the subcutaneous tissues of the forearm of normal subjects was followed by a profound rise in local blood flow. This was considerably impaired in CD, but not in UC. Blood flow is important in recruiting innate immune cells to sites of inflammation, and this already paltry vascular response in CD would be further compromised by smoking tobacco216.

The third of these experiments directly measured the accumulation of neutrophils at the site at which E. coli had been injected subcutaneously, and the rate of clearance of these organisms. In this study, peripheral blood neutrophils were purified from the individual under investigation, labelled with the gamma-ray-emitting radioisotope indium-111217, and reinjected intravenously at the same time that unlabelled E. coli were injected subcutaneously into the forearms. The rate of accumulation of the radioactive neutrophils over the site of the injected bacteria was then determined218. A much smaller proportion of neutrophils was recruited to the injected bacteria in the CD subjects than in the HC or UC individuals (Figure 7).

Reproduced from218 with permission from the publisher. ¹¹¹Indium-labeled autologous neutrophils were injected intravenously at the same time as killed E. coli were injected subcutaneously into each forearm. (a) Radioactivity measured over the injection sites showed a much smaller proportion of labelled cells accumulating in CD subjects. (b) γ-Camera image of a CD patient at 24 h after injection, demonstrating focal accumulations of radioactivity at bacterial injection sites (arrows) and confirming lack of bowel inflammation. (c) ³²P-labeled killed E. coli were injected into the subcutaneous tissues of the forearm and radioactivity was measured at the skin surface. Clearance of radioactivity was much slower in CD than in HC or UC. Extrapolating these curves indicated that almost complete removal (99%) would take 10.2 and 7.1 d in HC and UC subjects, respectively, compared with 44.3 d in CD.
(d and e) Effect of increasing bacterial dose from 10⁵ to 10⁸ on blood flow (d) and bacterial clearance (e). The numbers of subjects studied in the dose response experiment are depicted in e. All results are expressed as mean ± SEM (**, P < 0.01; ***, P < 0.001).\n\nThe next step was to radiolabel the E. coli with Phosphorus-32 and to then determine the rate of clearance of the bacteria from the tissues. This was a two-phase process in HC and UC subjects with a very rapid initial clearance lasting about 4 hours followed by a slower phase, with total clearance being achieved by 7 to 10 days. In the CD subjects, initial clearance was much less efficient and total clearance was markedly delayed and was predicted to last from several weeks to infinity. This study showed unequivocally that coliform bacteria are cleared less efficiently from the tissues than normal in CD. It might be considered that this delayed recruitment of neutrophils to bacteria in the tissues should predispose these individuals to an increased incidence of clinically evident infections, which is not an obvious manifestation of CD. The reason for this apparent discrepancy is that the numbers of bacteria injected into the tissues were required to reach a certain critical load before the clearance defect was unmasked (Figure 7e). In this study 10⁶ organisms were cleared normally whereas 10⁷ were not, indicating that a significant bacterial load must enter the tissues before the clearance systems are overwhelmed. The bowel is the only location in the body where such a burden of microbes is readily available to enter the tissues.\n\nIn the absence of an adequate acute inflammatory response and the complete clearance of the inciting agent by neutrophils, the retained foreign material produces a granulomatous inflammation219–221. E.
coli, Streptococci and Listeria have been demonstrated immunochemically in macrophages, giant cells and lymph nodes of CD patients222, and E.coli DNA has been identified in Crohn’s granulomata isolated by laser capture microdissection223. The retention of this faecal material within the bowel leads to an intense adaptive immune response and the tissues become infiltrated with large numbers of T-cells. It is not therefore surprising that when actively inflamed CD tissues are biopsied, any number and variety of adaptive immune cells can be identified and immune mechanisms evoked in the pathogenesis of the condition. The macrophages and adaptive immune cells, reacting to the foreign antigenic material, will produce cytokines such as IL-1β and TNFα224,225 that lead to local inflammation and systemic symptoms226.\n\nThe clinical picture of an inflamed bowel containing large numbers of macrophages and T-cells227,228 has led to the erroneous belief that Crohn’s was an autoimmune disease79. It is however clear that the cytokines produced by these inflammatory foci in their response to foreign faecal material contribute to the local and systemic inflammation, and failure of mucosal healing, as evidenced by the often dramatic responses to anti-TNF drugs. However, only about half the patients respond to this treatment, and in those that do the response is often partial and temporary229.\n\nChronic inflammation leads to fibrosis, or scarring230, which in a hollow muscular organ causes narrowing, or stricture formation. Under some circumstances the material in the bowel wall undergoes liquefaction, as may occur with tuberculosis231. This material then tracks to adjoining organs, possibly driven by the osmotic pressure produced by the breakdown of the organic material within the abscess, and discharges into them. This can then produce fistulae232 between these organs, for example between bowel and bowel, bowel and skin, bladder or vagina.
The perianal fistulae between the rectum and perineum are characteristic of CD and of immunodeficiencies of the innate immune system, particularly those of neutrophil function53.\n\nThis failure to clear organic material from the tissues offers an explanation for the false positive Kveim tests observed in CD233. The Kveim test234 was designed to diagnose sarcoidosis, another chronic granulomatous disease. The intradermal injection of a crude homogenate of an extract of sarcoid tissue, usually from lymph node, produced epithelioid cell granulomas in subjects with sarcoidosis, reproducing those diagnostic of this disorder. Initially it was thought that the injected material contained some sarcoid specific factor, such as an infectious agent or antigen235 but it has been recognised more recently that it relates to an abnormal host response:\n\n“The \"immune paradox\" (delayed type hypersensitivity anergy in a setting of exuberant systemic granulomatous response) resists explanation. Its relationship to the Kveim test is poorly understood. Immunological investigations generated the thesis that the characterizing systemic granuloma arise as a fall-back reaction to inefficient cellular immune processing, due most often to impaired myeloid dendritic cell function of unknown cause”236.\n\nThis is precisely the nature of the pathogenic mechanism in CD and it is therefore not surprising that positive tests are found in both conditions237 and that both diseases occasionally coexist in the same individual238.\n\n\nOn the location of the CD lesions\n\nSymptomatic lesions are largely confined to the terminal ileum, caecum and colon, probably due to the combination of mucosal damage by enteric infection coupled with the ready presence of massive numbers of bacteria to penetrate into the wall of the bowel when this happens. 
However, it is becoming apparent that the gastrointestinal tract is generally diffusely, sub-clinically, abnormal.\n\nOral manifestations of CD, particularly aphthous ulcers, are estimated to occur in 20–50% of patients239. A prospective endoscopic study identified upper gastrointestinal (GI) manifestations of CD in 55% of 108 untreated, newly diagnosed adult patients with CD, irrespective of symptoms. All selected were free of H. pylori, infection with which, if anything, appears to protect against CD240. About a quarter of the patients had lesions in both the stomach and duodenum and in about 20% they were in one or other of these organs. In roughly 2% of patients the gastric outlet is obstructed by a granulomatous inflammation requiring surgical intervention241. Aphthous ulcers in the oesophagus were present in 7% of these subjects. Most of these lesions exhibited a granulomatous inflammation on histology.\n\nIn view of the systemic nature of the impairment of the innate immune system in CD, it is of great interest, although not altogether surprising, that patients with CGD53 exhibit very similar upper GI pathology. Aphthous ulceration and other oral lesions are common. Oesophageal, gastric and duodenal inflammation were detected in 21%, 74% and 37% of 78 patients52. Large bowel lesions were present in the majority52 and are indistinguishable from those of CD47,242. Between 4%243 and 15%244 of these patients also develop gastric outflow obstruction.\n\nCGD is a condition in which there is a failure of microbial killing and digestion by neutrophils as a result of an absence of the respiratory burst produced by a NADPH oxidase, NOX2. Consequently, the pH of the phagocytic vacuole is too low for the efficient activity of the neutral protease digestive enzymes released into the vacuole from the cytoplasmic granules, and they fail to kill and digest the microbes245,246.
The undigested material retained within the tissue is taken up by macrophages, producing the granulomata that give this condition its name.\n\nNeutrophils play an important role in the debridement of wounds247,248, an essential prelude to healing. It is possible that the upper GI inflammation that occurs in CD and CGD results from an impaired repair response to trauma and peptic digestion rather than infection in these locations.\n\n\nIdentifying the molecular cause/s of the CD phenotype\n\nDefective secretion of pro-inflammatory cytokines in CD may be the explanation for the observed impairment in neutrophil recruitment218,249–252. In CD, the neutrophils themselves are normal253 and exhibit normal migration in vitro214,254,255 and will migrate out of skin windows if chemoattractant substances are placed over them215. In the absence of a primary abnormality of neutrophil function, CD macrophages showed defective secretion of pro-inflammatory cytokines, but normal release of chemokines, in response to stimulation with E. coli218 (Figure 8). The genes for these pro-inflammatory cytokines were transcribed and translated, but the proteins were misdirected to lysosomal degradation rather than secretion, suggestive of disordered vesicle trafficking.\n\nCytokine and chemokine release expressed as a percentage of that secreted by HC cells (blue bar) from ileal and colonic CD patients (reproduced from 218 with permission from the publisher).\n\nThe question then arises: how can anti-TNF drugs be effective against a condition in which the secretion of TNF and other pro-inflammatory cytokines is impaired? The answer is in the timing of the different components of the immune system. The call to arms of the innate immune system is a very early and explosive secretion of pro-inflammatory mediators, including TNF.
If the clearance of faecal material from the tissues is incomplete, it becomes walled off by macrophages, endotoxin diffuses into the circulation, and cells of the adaptive immune system are recruited256. They secrete a wide array of mediators over the next weeks including TNF which acts as an amplifier of the response257. It is of interest that in a recent study of high-resolution gene expression profiling using RNA sequencing of inflamed biopsies from patients with CD, UC and controls, levels of pro-inflammatory cytokines like TNF, IL-1β, IL-6 and IL-23 were all elevated to a lesser extent in CD than in UC258. The very early secretion of TNF and other mediators is required to prevent the development of the Crohn’s lesions whereas at a later stage it is the TNF and associated mediators that produce the symptoms, which in some cases respond to anti-TNF treatment. This explains why anti-TNF therapeutics can both cause259 and alleviate symptoms of the disease260.\n\nThere is a strong genetic component to the aetiology of CD. The sibling recurrence risk (risk of developing the disease in the context of an affected sibling) is approximately 13–36261 and approximately 12%262 of CD patients have at least one affected first degree relative. Furthermore, the study of over 300 twin pairs has demonstrated a higher concordance of disease phenotype in monozygotic (30%) compared with dizygotic twins (4%)263. While the twin studies support the role of genetic susceptibility, they also indicate the requirement for additional environmental or other factors for the development of overt disease. By far the most likely such factor is an enteric infection of sufficient severity to overwhelm the ability of the innate immune system to adequately clear the faecal debris from the bowel wall. However, this risk may be further modulated by additional environmental factors, such as smoking. 
This phenomenon, whereby a genetic predisposition to disease manifests in the presence of environmental precipitants is exemplified by alpha-1-antitrypsin deficiency, in which the predisposition to emphysema is exposed by smoking264.\n\nTechnological advances have provided the means of interrogating the genetic basis of CD.\n\nLinkage. Linkage analysis (positional cloning) is a family based technique for identifying the possible location within the genome of causal mutations underlying genetic diseases265. This is done by utilising markers of known location across the genome, such as microsatellites or single nucleotide polymorphisms (SNPs). The transmission of the markers through a family (or collection of families) is examined seeking those whose segregation closely follows the inheritance of the disease, thereby focussing attention on a small region (locus) in which the causal mutation might be found265. Linkage analysis of affected sibling pairs with CD permitted the identification of a susceptibility locus on chromosome 16 (termed IBD1) in which mutations in the gene NOD2 were subsequently identified266,267. NOD2 mutations remain the most strongly associated common genetic variants associated with CD. Linkage is only a powerful tool when almost all cases of the disease in the families under study are caused by mutations in the same gene (i.e. there is limited genetic heterogeneity) that are not seen in unaffected family members (i.e. it is of high penetrance). 
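The linkage evidence such studies report is conventionally summarised as a LOD (logarithm of the odds) score. The following sketch, for the simplest phase-known case, is a textbook illustration rather than code taken from the studies cited above:

```python
import math

def lod_score(recombinants, total_meioses, theta):
    """LOD score for phase-known meioses: log10 of the likelihood of
    observing `recombinants` out of `total_meioses` at recombination
    fraction `theta`, versus free recombination (theta = 0.5)."""
    r, n = recombinants, total_meioses
    log_l_theta = r * math.log10(theta) + (n - r) * math.log10(1 - theta)
    log_l_null = n * math.log10(0.5)
    return log_l_theta - log_l_null

# Hypothetical example: 2 recombinants in 20 informative meioses,
# evaluated at the maximum-likelihood estimate theta = 2/20 = 0.1.
lod = lod_score(2, 20, 0.1)
print(f"LOD = {lod:.2f}")  # a LOD of 3 or more is the conventional evidence threshold
```

On these hypothetical numbers the LOD comes out at about 3.2, i.e. the observed marker–disease co-segregation is roughly 1,500 times more likely under linkage at θ = 0.1 than under free recombination.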
Numerous factors can limit the effectiveness of linkage analysis such as: the presence of unaffected individuals that harbour the mutation (incomplete penetrance); individuals who develop the disease as a result of mutations in another gene or due to environmental factors (phenocopies); the requirement for the combined effect of two or more mutations (epistasis); or the requirement of the involvement of some environmental factor such as an infectious trigger (which will effectively reduce penetrance by not facilitating the manifestation of the underlying genetic predisposition in unexposed individuals). All of these factors are likely to have contributed to the limited success of linkage analysis in CD.\n\nGWAS. Genes reside on chromosomes which undergo recombination at meiosis. Population level haplotypes arise due to the non-random positioning of crossing-over events. Haplotypes are characterised by a particular set of SNP genotypes. Depending on the ancestral origin and frequency with which a mutation has arisen in the population, it may occur on a particular haplotype and thus the SNP genotypes defining that haplotype will be enriched in patients harbouring the disease-causing mutation. Therefore, when comparing a large population of diseased individuals with healthy controls, SNPs tagging the underlying mutation should be enriched in the affected compared with unaffected individuals. In GWAS, a set of SNPs are genotyped in an attempt to cover the whole genome and the above comparison made268. One of the major problems with analysing many hundreds of thousands (or millions) of markers across the genome is that the large number of comparisons undertaken risks producing false positives. This necessitates the utilisation of a stringent p-value threshold for significance of p<5×10⁻⁸269.
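The origin of the 5×10⁻⁸ figure is easy to make explicit: it is, in essence, a Bonferroni correction of a 0.05 family-wise error rate across roughly one million independent common-variant tests (a standard rationale for the threshold, not a figure taken from the studies cited here):

```python
alpha = 0.05                # target family-wise (study-wide) error rate
independent_tests = 10**6   # approximate number of independent common-variant tests
threshold = alpha / independent_tests
print(f"genome-wide significance threshold: p < {threshold:.0e}")

# Without such a correction, a million pure-noise tests evaluated at
# p < 0.05 would be expected to throw up ~50,000 spurious "hits".
expected_chance_hits = alpha * independent_tests
print(f"expected chance hits at p < 0.05: ~{expected_chance_hits:.0f}")
```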
As a result, very large sample sizes are required270,271.\n\nOne of the technical strengths of GWAS as an investigative approach is that DNA is easily obtained and once purified it is stable, enabling it to be conveniently stored and transported. Technological advancements have permitted high throughput SNP genotyping of large numbers of samples on an industrial scale. Furthermore, the GWAS approach has the advantage of being comprehensive (compared with candidate gene studies) and objective (at least up until the stage of data interpretation). There are, however, a number of limitations, for example incomplete genomic coverage. In addition, a major drawback is that the SNPs employed as markers must be relatively common in the general population, in order to give the study adequate statistical power, so this approach is typically unable to identify low frequency mutations, however penetrant or important.\n\nWhen a SNP is found to be statistically significantly associated with a disease by GWAS, it can be because the polymorphism is itself pathogenic or, more commonly, that it is tagging a closely located genetic variant whose genotype correlates with that of the tagging polymorphism (the two variants are in linkage disequilibrium). The precise location of the causal variant(s) underlying the association signal within the identified locus may be interrogated further by fine mapping (in which higher resolution association studies are conducted) or by resequencing the locus looking for plausible pathogenic variants such as coding variants or those that affect gene-expression (eQTLs).\n\nIncreasingly large GWAS have been performed on CD and the results meta-analysed272,273. No single, or small number, of penetrant mutations have been found that independently cause the disease. The latest study of over 20,500 CD cases and 41,600 controls of European ancestry identified 145 loci associated with CD at p<5×10⁻⁸.
The mean OR of the top SNPs representing these 145 loci was 1.16 and the mean control allele frequency was 0.48. Four SNPs had an OR exceeding 1.5 of which three were within NOD2 and the fourth was in IL23R. The mean difference in allele frequency between cases and controls was only 0.02274.\n\nThe very significant p-values obtained for the associated loci led to the general perception that the molecular causes of CD have been identified. Individually the CD GWAS loci have very modest effect sizes (i.e. a small difference in frequency in the control and CD populations), consistent with a polygenic model in which it is thought that it is the combination of these minor influences that is causally important. However, in the latest meta-analysis, all 170 significantly associated loci combined account for only 10.9% of the disease “heritability”272.\n\nAn important consideration is that about half of the healthy population also carry these variants, although (by definition) each at a slightly lower frequency than in the CD patients (Figure 9). The healthy controls carrying these variants greatly outnumber the patients with CD in the population. With a prevalence of CD of about three patients in 1000276, and taking the NOD2 frameshift mutation as an example because it has the greatest effect size at 3.32, for every 100,000 individuals in the population there will be 99,680 unaffected individuals of whom ∼2390 will carry this mutation. In this population there will be ∼320 CD cases of which ∼48 will have the mutation273. This means that the penetrance of this mutation, with by far the greatest association with the disease, is only 2%. These effect sizes pale into insignificance when compared with the effect size of HLA-B27 in ankylosing spondylitis of approximately 94277,278, and HLA in type 1 diabetes and coeliac disease with effect sizes of approximately 25 and 50, respectively279.\n\nThe data were taken from the European cohort in 275.
Loci harbouring genes of interest have been indicated.\n\nGWAS conducted on Europeans and East Asian populations have yielded noticeably different findings (Figure 10)275. In East Asian populations, the loci with largest effect sizes were those harbouring the genes TNFSF15/TNFSF8 (genes encoding cytokines that belong to the tumour necrosis factor (TNF) ligand family280) and the major histocompatibility complex. Variants in NOD2 and ATG16L1 demonstrated no association with CD in these populations, and the effect size of the IL23R locus was minimal. The significant heterogeneity in common variant CD genetic architecture between different populations provides a further indication that the genes identified by GWAS are unlikely to play a primary causal role in the development of the disease, the manifestations of which are similar in patients regardless of their ethnicity.\n\nEach box represents an independently associated locus and the size of each box is proportional to the amount of variance in disease liability accounted for by that locus in the respective population. From 275 (reproduced with permission from the publisher).\n\nGWAS have been performed for many different diseases and IBD associated loci have been shown to be shared with several other immunologically mediated diseases including Rheumatoid arthritis, Systemic lupus erythematosus (SLE), Ankylosing spondylitis, Coeliac disease and Sarcoidosis281. These associations are not surprising as comorbidities of some of these conditions are well recognised in the context of IBD282–284 and because almost all are associated with an increased incidence of similar pathologies such as arthritis285, uveitis286,287 and bowel inflammation288–291.\n\nMany of the associated genes common to these conditions have been implicated in pathways leading to activation or regulation of the immune response281. 
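The penetrance arithmetic given above for the NOD2 frameshift mutation can be reproduced directly from the quoted figures:

```python
# Figures as quoted above: per 100,000 people, ~320 CD cases of whom
# ~48 carry the NOD2 frameshift, and 99,680 unaffected individuals of
# whom ~2,390 carry it.
carriers_with_cd = 48
carriers_without_cd = 2390

total_carriers = carriers_with_cd + carriers_without_cd
penetrance = carriers_with_cd / total_carriers
print(f"penetrance of the NOD2 frameshift: {penetrance:.1%}")  # prints 2.0%
```

In other words, even for the variant with by far the largest effect size, about 98% of carriers never develop CD, which underlines the argument that these variants modify susceptibility rather than cause the disease.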
It is possible that the genes common to these chronic inflammatory conditions were highlighted because they lead to more florid manifestations, causing signs and symptoms of the disease in those individuals with an underlying predisposition, thereby bringing them to the attention of the medical profession.\n\nDespite their unimpressive effect sizes, the main GWAS CD associated molecules have generated considerable attention and are hence worthy of a brief summary.\n\nNOD2. NOD (Nucleotide-binding Oligomerisation Domain) 2 is a member of an extended family of inflammatory and immune proteins in plants (the resistance (R) genes292), Drosophila (Toll-like receptors293) and animals (NOD families). These proteins combine a central nucleotide-binding domain (NOD) with a C-terminal leucine-rich repeat (LRR) motif and an N-terminal caspase recruitment domain (CARD) or equivalent.\n\nIn general these proteins recognise a signal from an invading organism in their leucine-rich repeat (LRR) domain that induces a polymerisation that triggers a signalling cascade which terminates in the production and release of pro-inflammatory molecules. NOD2 is activated by muramyl dipeptide (MDP) a component of the cell wall of both Gram negative and Gram positive bacteria. It seems to be taken into the cells within endocytic vacuoles; presumably the organisms are then digested within this compartment and the solubilised MDP is moved into the cytoplasm by peptide transporters like SLC15A3294. Very recently it has been demonstrated that NOD1 and NOD2, do not only respond to bacterial stimuli, but are also important mediators of ER-stress-induced inflammation (described in more detail below)295.\n\nIn the resting state NOD2 is doubled back on itself in an auto-inhibited conformation in the cytoplasm until activated by the attachment of MDP, upon which it opens, allowing self-oligomerisation and the binding of ATP296,297.
A series of phosphorylation steps then ends in the translocation of NF-κB to the nucleus and the production of pro-inflammatory cytokines and antimicrobial peptides296. This theoretical model of NOD2 function as a pattern recognition receptor capable of inducing pro-inflammatory cytokine secretion has been validated by in vivo studies in humans in which the application of MDP to skin windows induced the production and release of pro-inflammatory cytokines in healthy subjects and in CD patients without NOD2 mutations, but not in those carrying the CD-associated mutations215. The impaired secretion of these inflammatory mediators into skin windows in the absence of MDP, in CD patients without mutations in NOD2 in this study, demonstrates that other pro-inflammatory signals and pathways must be abnormal in these subjects, indicating that there are at least two parallel routes initiating the inflammatory response.\n\nExpression of NOD2 is largely restricted to peripheral blood monocytes298 and to Paneth cells at the base of intestinal crypts299. Monocytes constitute approximately 5% of the circulating leukocytes and are generally regarded as functioning predominantly as circulating precursors of macrophages, without much in the way of a distinct set of functions of their own. It seems intuitively unlikely that a highly specialised cell with an active NADPH oxidase and granules containing myeloperoxidase, all of which are lost with the transformation to macrophages, would be produced to act predominantly as a stem cell.\n\nIt is generally assumed that pro-inflammatory cytokines are secreted by tissue macrophages, but these are quite widely dispersed amongst tissues and could not accumulate at inflammatory sites as rapidly as monocytes, which are carried there in capillaries perfusing the region300,301.
Monocytes are rapidly recruited to sites of acute inflammation where they extravasate into the tissues302 and make a large contribution to the production of pro-inflammatory cytokines303–305 before being transformed into inflammatory macrophages.\n\nAutophagy and ATG16L1, CALCOCO2/NDP52, LRRK2 and Optineurin. First described in the new-born mouse kidney by Clark in 1957306, and reviewed by De Duve and Wattiaux307, autophagy was initially characterised as a process in the cytoplasm of cells directed to the remodelling of tissues and the removal of damaged or effete organelles and proteins, and to partial self-digestion under starvation conditions.\n\nTo undertake this process, cells must first identify the region of cytoplasm, effete organelles, or invading microbes, as objects for engulfment. This is achieved by labelling the surface of the target with chains of a small protein, ubiquitin308,309. The ubiquitinated material is then encircled by a double membranous structure, produced from elongated vesicles at the Golgi apparatus or endoplasmic reticulum (ER)310, the ends of which then fuse to form the characteristic vacuole with a double membrane. This autophagocytic vacuole then fuses with lysosomes containing enzymes that digest the inner membrane and its contents311.\n\nMembrane vesicle extrusion, ubiquitination and fusion of granules with vesicles are general biological processes that are involved in many diverse cellular functions in addition to autophagy. Most of the cellular machinery required for these processes in autophagy was identified in mutant Saccharomyces cerevisiae312 and homologues were then found in Drosophila and man. The molecular basis of human autophagy has largely been investigated in promyelocytic HL60 and the Human Embryonic Kidney (HEK293) cell lines313,314. Thus the autophagy molecules have been identified in assays that measure autophagy in primitive cells.
This does not mean that these proteins are necessarily exerting their effects exclusively through autophagy in more mature cells and tissues.\n\nOf particular relevance to CD is probably the specialised type of autophagy known as xenophagy, which deals with bacteria that escape into the host cytosol (such as Shigella or Listeria) or reside in a modified intracellular vacuole (such as Salmonella and Mycobacteria), and which is important for the entrapment and lysosome-mediated degradation of bacterial pathogens315–317. Abnormal xenophagy has been implicated in CD in the context of Niemann-Pick disease type C1, XIAP deficiency and NOD2 variants318. Other proteins implicated in xenophagy are the autophagy receptor CALCOCO2/NDP52319, Leucine-rich repeat kinase 2 (LRRK2)320 and Optineurin321,322.\n\nDespite the current interest in autophagy and in exploring the removal of intracytoplasmic organisms from intestinal cells and macrophages by xenophagy, it must not be forgotten that the vast majority of bacteria entering the body are phagocytosed, killed and digested by neutrophils, and that xenophagy only has to deal with a tiny minority escaping this process.\n\nATG (autophagy-related) 16L1. Attention was focussed upon ATG16L1 when an association was demonstrated with CD in a GWAS323. The SNP was a common, non-synonymous variant resulting in an amino acid change from threonine to alanine at position 300 of the full-length protein (T300A) with an odds ratio of only 1.26273. The ATG16L1 protein was shown to be mainly expressed in the thymus, prostate, liver, kidney and colon with little in the small bowel, and equal levels were found in the bowel from healthy and CD subjects with and without the polymorphism.\n\nATG16L1 is recruited to the ER at the initiation of the formation of the autophagosome at this site314.
It is involved in binding to ubiquitinated cytosolic Salmonella whilst these are enveloped in the phagophore to form the autophagosome324 and could be important for the clearance of bacteria from within cells314,325–327. This protein could be related to CD through the impaired clearance of intracellular organisms. It has been reported that the cellular architecture of Paneth cells is grossly distorted in knock-out mice severely depleted in this protein328 and in CD patients homozygous for the T300A polymorphism. In the mice, there were many fewer granules in these cells, with their normal contents lying predominantly in the cytoplasm, together with multiple large vesicular structures that might represent the membranes that would normally surround granules. The rest of the bowel looked normal328. It was also reported that CD patients with the T300A polymorphism demonstrated similar morphological changes, with disorganized or diminished granules, or diffuse cytoplasmic lysozyme staining. This was claimed to be the first indication that Atg16L1 has a specific role in humans and mice in regulating the specialized properties of Paneth cells, and provides a novel and relevant mouse model that emulates one of the many diverse pathological hallmarks of human CD. Unfortunately, the morphology of Paneth cells was not examined in healthy humans with the same polymorphism, which would not have been difficult given that 27% of this population is homozygous for the “risk” allele and 50% heterozygous, and which makes a pathological role for this SNP highly improbable273.\n\nIRGM. Immunity related p47 guanosine triphosphatases (GTPases) or IRGs are a group of 47–48 kDa proteins that are implicated in the eradication of organisms like mycobacteria that are taken into cells within phagocytic vacuoles.
IRGM has been characterised as an autophagy-related molecule329 on the basis that it was associated with vacuoles thought to contain mitochondria, a characteristic in common with autophagocytic vacuoles. The evidence for the mitochondrial contents was the accumulation in the vacuoles of Mitotracker Red, a dye that partitions in mitochondria by virtue of the charge across their membranes. Mitochondria are marked for degradation by ubiquitination when they have lost their membrane potential330 and this situation would not be improved by degradation within an autophagosome, so it is unlikely that such structures would attract Mitotracker Red. On the other hand, potentials are developed across the membrane of the phagocytic vacuole by the NADPH oxidase331 and by electrogenic proton pumping of the vacuolar V-ATPases332. The classification of IRGM as primarily involved in autophagocytic processes, on the evidence provided for mitophagy, should be reconsidered.\n\nThe IRGs were discovered in mice as interferon-gamma inducible elements that were important for resistance to microbes engulfed into phagocytic vacuoles. There are 23 of these molecules in mice but only one or very few in humans333. In human cells IRGM is largely expressed in immune cells whereas in mice it is almost exclusively in macrophages in which its expression is massively increased by LPS334 (GeneAtlas MOE430 gcrma, Probeset: 1418825_at). Phagosomal maturation335,336 occurs through sequential fusion of the phagosome with early endosomes and lysosomes, resulting in acidification of the phagosome, release of hydrolytic proteases and death and digestion of the phagosome-bound pathogen.
IRGs are located in the ER or Golgi compartments and, after the host cell is infected, they are transported to pathogen-containing phagosomes/vacuoles, where they modulate the formation or processing of the phagosome, undermining pathogen survival either directly or by facilitating the action of intracellular effector molecules.\n\nXBP1 (X-box binding protein 1) and ER stress, or unfolded protein response (UPR). The ER is the principal organelle involved in the synthesis, maturation, and post- or co-translational modification of secreted and membrane proteins, as well as in various metabolic processes including dynamic ion storage and biogenesis of membrane structures339. Properly folded proteins are then directed to the Golgi apparatus, to other intracellular organelles, and to the extracellular surface by the secretory pathway340. The ER stress or unfolded protein response develops when the protein synthetic process becomes disordered, leading to the accumulation of dysfunctional unfolded protein as a result of genetic or adverse environmental factors. To deal with this, the accumulated unfolded protein triggers the unfolded protein response (UPR)337,338,341, which includes reducing protein synthesis by downregulating mRNA translation, synthesising more molecular chaperones to assist protein folding, and degrading misfolded proteins which are ubiquitinated and directed to the proteasomes. XBP1 is a transcription factor that plays a central role in activating these ER stress responses342, and unexpectedly, NOD1 and NOD2 have been shown to be important mediators of this process295.\n\nMajor demands are made on the protein synthetic machinery in rapidly turning over cells like the intestinal mucosa, and this is particularly true of cells like Paneth and goblet cells that secrete proteins in addition to attending to their own homeostatic requirements.
When additional requirements for protein synthesis are called upon, induced, for example, by infection or chemical toxins, defects in the ER stress response could weaken the mucosa, leading to ulceration343.\n\nLRRK2. A considerable amount of work has been done on this protein because mutations in LRRK2 cause familial and sporadic Parkinson’s disease (PD). It is a large, 280 kDa, protein with GTPase and kinase domains, the latter being constitutively active in PD. It is found in immune cells, in lamina propria macrophages, B-lymphocytes and dendritic cells, and levels are markedly increased in the bowel in CD320 and in microglia in the nervous system344. It interacts with the small GTPases Rab32 and Rab38, with which it co-locates to transport vesicles and recycling endosomes345, and it is important for the elimination of intracellular Salmonellae346 and Legionella347. Rab32 and Rab38 play an important role in the biogenesis and traffic of melanosomes and lysosomes, and this system is disordered in Hermansky-Pudlak syndrome348, accounting for the characteristic partial albinism. If LRRK2 and its associated proteins are important for immunological resistance to the development of CD, then it might be expected that CD would be more common in conditions in which the LRRK2 system is disordered, which is in fact the case. A clear association exists between CD and PD349,350 and between CD and Hermansky-Pudlak syndrome351. At least two recent studies have discovered a link between mutations in LRRK2 and CD in Ashkenazi Jews (submitted for publication).\n\n\nSummary of outcome of GWAS studies in CD\n\nThe GWAS studies have provided a series of clear answers. No single gene, or a small number of genes, has been identified that is causal for CD. More than 170 GWAS hits combined contribute to about 10% of the “heritability” of CD. 
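As back-of-envelope arithmetic on these figures (a sketch of the stated totals, not a re-analysis of the GWAS data):

```python
# ~170 risk loci jointly explaining ~10% of heritability implies an
# average per-locus contribution of the order of 0.1% or less.
total_heritability_explained = 0.10   # ~10% of CD "heritability"
n_loci = 170                          # >170 GWAS hits
per_locus_pct = total_heritability_explained / n_loci * 100
print(f"average contribution per locus: ~{per_locus_pct:.2f}%")
```

This gives roughly 0.06% per locus, i.e. of the order of the ~0.1% average quoted in the text.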
With an average individual contribution of only 0.1%, it is unlikely that these variants will individually have major effects on cellular function, either in the CD patients or in experimental systems.\n\nCollectively, genes associated with CD by GWAS are enriched for Gene Ontology (GO) annotations (statements describing the functions of specific genes) related to host-microbe interactions, the regulation of cytokine production, lymphocyte activation and the response to molecules of bacterial origin272. This is in keeping with the large body of data that is accumulating as to the role of the innate immune system and its interaction with intestinal microbes in the causal mechanisms in CD.\n\n\nOther investigations to identify causal molecules\n\nIn the knowledge that the release of pro-inflammatory cytokines by macrophages from CD subjects is depressed as a result of impaired vesicle trafficking218, an attempt was made to identify genes contributing to this deficiency by looking for outlier levels of gene expression in these cells. The most commonly under-expressed gene identified was Optineurin, low expression levels of which were found in 10% of the CD patients studied352. ADAMDEC1 was under-expressed in about 7% of patients.\n\nOptineurin (Optn)321,322 is a 67 kDa protein, ubiquitously expressed; its expression can be induced by TNFα and interferons, probably as a result of NFκB activation, and it is localised in the cytosol and Golgi apparatus. In essence it is a linker, or adaptor, molecule and has several binding partners including Rab8; Huntingtin, the gene for which is mutated in Huntington disease; and Myosin VI, a multifunctional motor protein. Rab8 is a small GTPase involved in vesicular trafficking between the trans-Golgi network (TGN) and the plasma membrane. The function of Huntingtin itself is unknown but it is associated with several factors involved in vesicle trafficking. 
Myosin VI is attached by OPTN to the Golgi apparatus, where it participates in the transport of vesicles and their protein cargos from the trans-Golgi network to be released at the cell surface. OPTN also contains a ubiquitin-binding domain with the ability to bind polyubiquitinated cargoes and transport them to autophagosomes via its microtubule-associated protein 1 light chain 3-interacting domain353.\n\nMacrophages from patients with low expression of OPTN secreted abnormally low levels of pro-inflammatory cytokines, as do macrophages from OPTN knock-out mice354. mRNA expression levels of these cytokines were normal, consistent with deranged secretion rather than synthesis. These mice were more susceptible to infection with Citrobacter, E. coli and Salmonella355, and showed reduced levels of TNFα in their serum, diminished neutrophil recruitment to sites of acute inflammation and greater mortality than wild-type mice. OPTN-knockdown zebrafish infected with Salmonella also had a higher mortality354.\n\nADAMDEC1 (ADAM-like Decysin-1) is a member of the ADAM (A Disintegrin And Metalloproteinase) family, the expression of which is restricted to the macrophage/dendritic cell populations of the gastrointestinal tract. Its biological function is unknown but it has been hypothesised to play a role in immunity. Reduced ADAMDEC1 expression in macrophages from a subgroup of CD patients has provided evidence of a potential role in bowel inflammation352. Adamdec1-/- mice were more susceptible to bacterially and chemically induced colitis and they cleared Citrobacter rodentium less efficiently than wild-type mice after infection356.\n\nThe development, availability and falling cost of high-throughput DNA sequencing have provided the means of directly identifying causal mutations in human disease357,358. Several studies employing such technology have been undertaken in CD359,360 and many more are likely to appear over the ensuing years. 
High-throughput DNA sequencing has also been absolutely crucial for the diagnosis of the rare primary immunodeficiencies46 that produce bowel inflammation, rather than CD, as described earlier.\n\nInitially the DNA of GWAS-associated loci was sequenced, and a small number of rare variants were identified, most notably in the genes CARD9 and IL23R361,362.\n\nThe major problem in identifying causal genes by sequencing is the considerable individual variation in DNA sequence. Asymptomatic individuals carry, on average, approximately 100 genuine loss-of-function variants, with ∼20 genes completely inactivated363. This makes it very difficult to identify the disease-causing mutation(s) in any one individual.\n\nOf note, an ongoing study in which whole-genome sequencing has been undertaken in 2,697 CD cases and 3,652 healthy controls failed to identify a single variant at genome-wide significance that had not already been identified by GWAS364.\n\nAlternative approaches have been taken to overcome the difficulty of interpreting individual variation. Several studies have focussed on the analysis of Ashkenazi Jews (AJ) because they have a roughly fourfold increased incidence of CD and demonstrate genetic homogeneity, having arisen from approximately 350 individuals about 30 generations ago365.\n\nChuang et al. sequenced the exomes of 50 AJ CD patients and prioritised low-frequency coding variants, which were then genotyped in approximately 3,000 AJ CD cases and 3,000 controls. They identified a frameshift mutation in CSF2RB as a strong causal candidate which was associated with CD at p<3.5×10-6 and an OR of 1.5366. This variant is rare in the non-AJ population.\n\nLevine et al. utilised an alternative family-based approach367. They characterised two very large AJ families with >800 and >200 members and 54 and 26 affected cases respectively, sequenced the exomes of all cases and imputed the genotypes of the unaffected family members. 
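Case-control association statistics of the kind quoted above for CSF2RB (OR 1.5 at p<3.5×10-6) come from comparing allele counts between cases and controls. A minimal sketch using the normal approximation to the log odds ratio; the counts below are invented for illustration and are not the published data:

```python
import math

def allelic_association(case_alt, case_ref, ctrl_alt, ctrl_ref):
    """Odds ratio and two-sided p-value for a 2x2 table of allele counts,
    using the normal approximation to the log odds ratio (Woolf method)."""
    odds_ratio = (case_alt * ctrl_ref) / (case_ref * ctrl_alt)
    # Standard error of log(OR): sqrt of the sum of reciprocal cell counts.
    se = math.sqrt(1 / case_alt + 1 / case_ref + 1 / ctrl_alt + 1 / ctrl_ref)
    z = abs(math.log(odds_ratio)) / se
    p = math.erfc(z / math.sqrt(2))  # two-sided tail of the standard normal
    return odds_ratio, p

# Invented counts: a low-frequency allele modestly enriched in cases.
odds_ratio, p = allelic_association(case_alt=180, case_ref=5820,
                                    ctrl_alt=120, ctrl_ref=5880)
print(f"OR = {odds_ratio:.2f}, p = {p:.1e}")
```

With these invented counts the odds ratio is ~1.5 at a p-value of the order of 10-4; reaching p<3.5×10-6, as reported, requires larger samples or stronger enrichment, which is why the prioritised variants were genotyped in thousands of additional cases and controls.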
Low-frequency coding variants that were predicted to be damaging and were enriched in affected compared with unaffected individuals were prioritised. In the large family they independently identified the identical frameshift mutation in CSF2RB, concurrently reported by Chuang et al., as a likely causal variant. Other strong candidate genes included NLRP2, a NOD-like receptor and a component of the inflammasome; ZC3H18, which is involved in IKK and NFκB activation368, a pathway of established importance in CD; and MEGF10, a phagocytic receptor involved in apoptosis369.\n\nHaving been identified independently by two groups, CSF2RB must be considered a causal gene for CD in AJs370. CSF2RB is the common or shared β subunit of the receptors for granulocyte-macrophage colony-stimulating factor (GM-CSF), interleukin (IL)-3, and IL-5371. The distinct α chains of these receptors provide cytokine specificity whilst the β chain is responsible for high-affinity binding and is the major downstream signalling component of the receptor complexes.\n\nGM-CSF is produced by myeloid cells, dendritic cells (DCs), T cells, B cells, and several non-immunological cells including epithelial cells372 following exposure to inflammatory stimuli. It promotes the production and function of myeloid haemopoietic cells, including haemopoietic progenitor cells and differentiated cells such as basophils, neutrophils, eosinophils, macrophages and certain dendritic cells373, to deal with the cause of the inflammation.\n\nIL-3 is predominantly produced by activated T cells, natural killer (NK) cells and mast cells. It acts on the early stages of haematopoiesis in synergy with other cytokines to induce progenitors of various lineages, but it is also a very important stimulus for the generation of mast cells and the regulation of mast cell function, as well as for basophil production and activation.\n\nIL-5 stimulates mainly the production and function of eosinophils. 
The major source of IL-5 is T cells, with relatively lower amounts produced by mast cells and eosinophils374.\n\nLevine et al. also identified a damaging missense mutation in DUOX2 that impaired the function of the protein and showed a possible epistatic interaction with NOD2367. DUOX2 is a member of the large NADPH oxidase (NOX) family of enzymes375. Its expression in the bowel epithelium is induced by the microbiota376. It generates H2O2 at the mucosal surface and this acts as substrate for the lactoperoxidase-catalysed oxidation of thiocyanate to microbicidal hypothiocyanite6. It might also attract neutrophils to inflammatory sites377. Knockdown of the DUOX2 homologue in invertebrates and mice resulted in an impaired tolerance to enteric bacteria378. Of relevance, given the possible NOD2 epistatic interaction observed, a physical and functional interaction between these proteins has been demonstrated in epithelial and HEK293 cells379.\n\n\nFuture treatment options\n\nTreatment of CD poses a conundrum. The logical approach to correcting the underlying problem would be to develop means of enhancing innate immunity, although no such range of drugs is currently available. It would be dangerous to attempt this in the presence of ongoing bowel inflammation, but it could be useful for maintaining patients in remission after they had been cleared of disease by surgical resection, or through the use of non-immunosuppressant therapies such as elemental diets380. This approach was attempted with levamisole381 as the immunostimulant, with varying results382,383. One problem with this form of treatment is that it is important to ensure that the patients are in remission before it is commenced; otherwise it is likely to exacerbate the inflammation. 
In two studies levamisole induced a severe reversible polyarthropathy384,385, indicating that the drug was in fact altering the immunological/inflammatory axis, and providing clues as to IBD-associated arthritis, and to the immunopathology of the idiopathic arthritides in general.\n\nCurrent drug and biological treatments are largely immunosuppressant. These, to varying degrees of efficiency, dampen down the secondary inflammation induced by the retained foreign material within the tissues. Anti-TNF treatments can be very helpful but do not provide a comprehensive answer. Only one third of patients will be in remission after one year on these treatments386. Immunosuppressant treatment further compromises the underlying innate immune deficit to mucosal damage, thereby increasing the likelihood of further infection and the influx of bowel contents into the tissues, possibly converting CD from a sporadic to a chronic condition.\n\nThe primary pathology in most cases of CD appears to affect macrophages recruited from the blood as monocytes303. Advances in gene editing with CRISPR-Cas387,388 technology make the corrective treatment of CD a real possibility in the relatively near future. Once a primary causal mutation has been identified, and validated in animal models, bone marrow could be extracted, edited and reinfused into a conditioned patient in much the same way as is being applied to gene therapy for primary immunodeficiencies389.\n\n\nConclusion\n\nAfter almost a century of concerted effort in clinical investigations, combined with recent technological developments in genomic medicine, a consensus view is developing as to the causes of CD.\n\nA genetic predisposition to the condition can exist, as evidenced by twin and family studies. 
Studies in patients have clearly demonstrated that there is a defect in the acute inflammatory response, resulting in impaired recruitment of neutrophils to inflammatory sites and a consequent delay in the clearance of bacteria from them. This results from an initial blunted response by monocytes and macrophages, leading to deficient secretion of pro-inflammatory cytokines.\n\nCD is a syndrome and the molecular basis of this deficiency will vary considerably. It might reflect the multifactorial extreme end of a normal distribution, or the critical loss of one or more important molecules. Molecules contributing to the former will be very difficult to identify, whereas some of those playing a more singular role have been highlighted by linkage, GWAS and DNA sequencing studies. These extend across the spectrum of the interface between microbes and cells: from the derangement of cellular homeostasis, probably in intestinal and immune cells, through impairment of the ER stress response (XBP1 malfunction); defective signalling (aberrant NOD2); depressed killing and digestion of organisms within phagocytic vacuoles (IRGM) or xenophagic recovery of those that escape into the cytoplasm (ATG16L1); failure of vesicle trafficking resulting in disorganised lysosomal biology or reduced cytokine secretion (LRRK2, Optineurin); to ineffective signalling of cytokines on target cells because of aberrant receptors (CSF2RB). In addition, primary abnormalities of neutrophils, the final effector cells, constitute the ultimate predisposition.\n\nHowever severe the covert underlying predisposition, it requires a triggering factor to be expressed, and all the evidence points to this being an enteric infection. The identity of the organism is of secondary importance to the fact that the mucosa is breached, allowing ingress of the faecal contents into the bowel wall. 
Increased frequency or potency of these infections, as a result of factors implicated in the “Hygiene Hypothesis” or through increased spread by travel or changes in sexual practices, could account for the increasing incidence of CD in the Western world.\n\nIt is the abnormal response to the penetrating faeces that is the common denominator of the CD syndrome, and it is the failure to eliminate the foreign material in the tissues that leads to the classical chronic granulomatous inflammation, and subsequently to an adaptive immune response. The counterintuitive, but logical, conclusion is that a disease characterised by grossly exuberant inflammation can result from an initial failure of innate inflammation. A similar mechanism might be responsible for other chronic inflammatory conditions of unknown aetiology, such as sarcoidosis, ankylosing spondylitis, psoriasis, rheumatoid arthritis and SLE.
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nI received financial support from the Medical Research Council, Wellcome Trust, Charles Wolfson Charitable Trust and the Irwin Joffe Memorial Trust.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nI thank Holm Ulhig and Amnon Sonnenberg for permission to publish figures from their papers. I am indebted to Adam Levine for assistance with the preparation of this manuscript.\n\n\nReferences\n\nMalik TA: Inflammatory Bowel Disease: Historical Perspective, Epidemiology, and Risk Factors. Surg Clin North Am. 2015; 95(6): 1105–22. PubMed Abstract | Publisher Full Text\n\nFranks AH, Harmsen HJ, Raangs GC, et al.: Variations of bacterial populations in human feces measured by fluorescent in situ hybridization with group-specific 16S rRNA-targeted oligonucleotide probes. Appl Environ Microbiol. 1998; 64(9): 3336–45. PubMed Abstract | Free Full Text\n\nHelander HF, Fändriks L: Surface area of the digestive tract - revisited. Scand J Gastroenterol. 2014; 49(6): 681–9. PubMed Abstract | Publisher Full Text\n\nAllen A, Hutton DA, Pearson JP, et al.: Mucus glycoprotein structure, gel formation and gastrointestinal mucus function. Ciba Found Symp. 1984; 109: 137–56. PubMed Abstract\n\nGill N, Wlodarska M, Finlay BB: Roadblocks in the gut: barriers to enteric infection. Cell Microbiol. 2011; 13(5): 660–9. PubMed Abstract | Publisher Full Text\n\nRada B, Leto TL: Oxidative innate immune defenses by Nox/Duox family NADPH oxidases. Contrib Microbiol. 2008; 15: 164–87. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBerg RD: Bacterial translocation from the gastrointestinal tract. Trends Microbiol. 1995; 3(4): 149–54. PubMed Abstract | Publisher Full Text\n\nSmith SD, Cardona MA, Wishnev SA, et al.: Unique characteristics of the neonatal intestinal mucosal barrier. 
J Pediatr Surg. 1992; 27(3): 333–6; discussion 336-8. PubMed Abstract | Publisher Full Text\n\nDuerkop BA, Vaishnava S, Hooper LV: Immune responses to the microbiota at the intestinal mucosal surface. Immunity. 2009; 31(3): 368–76. PubMed Abstract | Publisher Full Text\n\nKyd JM, Cripps AW: Functional differences between M cells and enterocytes in sampling luminal antigens. Vaccine. 2008; 26(49): 6221–4. PubMed Abstract | Publisher Full Text\n\nMabbott NA, Donaldson DS, Ohno H, et al.: Microfold (M) cells: important immunosurveillance posts in the intestinal epithelium. Mucosal Immunol. 2013; 6(4): 666–77. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOhno H: Intestinal M cells. J Biochem. 2016; 159(2): 151–60. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKamada N, Seo SU, Chen GY, et al.: Role of the gut microbiota in immunity and inflammatory disease. Nat Rev Immunol. 2013; 13(5): 321–35. PubMed Abstract | Publisher Full Text\n\nRhee KJ, Sethupathi P, Driks A, et al.: Role of commensal bacteria in development of gut-associated lymphoid tissues and preimmune antibody repertoire. J Immunol. 2004; 172(2): 1118–24. PubMed Abstract | Publisher Full Text\n\nOhnmacht C, Park JH, Cording S, et al.: MUCOSAL IMMUNOLOGY. The microbiota regulates type 2 immunity through RORγt+ T cells. Science. 2015; 349(6251): 989–93. PubMed Abstract | Publisher Full Text\n\nMaynard CL, Elson CO, Hatton RD, et al.: Reciprocal interactions of the intestinal microbiota and immune system. Nature. 2012; 489(7415): 231–41. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWijburg OL, Strugnell RA: Mucosal Immune Responses to Escherichia coli and Salmonella Infections. EcoSal Plus. 2006; 2(1). PubMed Abstract | Publisher Full Text\n\nPerez-Lopez A, Behnsen J, Nuccio SP, et al.: Mucosal immunity to pathogenic intestinal bacteria. Nat Rev Immunol. 2016; 16(3): 135–48. 
PubMed Abstract | Publisher Full Text\n\nBrandtzaeg P, Farstad IN, Johansen FE, et al.: The B-cell system of human mucosae and exocrine glands. Immunol Rev. 1999; 171: 45–87. PubMed Abstract | Publisher Full Text\n\nMacpherson AJ, Uhr T: Induction of protective IgA by intestinal dendritic cells carrying commensal bacteria. Science. 2004; 303(5664): 1662–5. PubMed Abstract | Publisher Full Text\n\nMacPherson AJ, Gatto D, Sainsbury E, et al.: A primitive T cell-independent mechanism of intestinal mucosal IgA responses to commensal bacteria. Science. 2000; 288(5474): 2222–6. PubMed Abstract | Publisher Full Text\n\nDoe WF: The intestinal immune system. Gut. 1989; 30(12): 1679–85. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBurkett PR, Meyer zu Horste G, Kuchroo VK: Pouring fuel on the fire: Th17 cells, the environment, and autoimmunity. J Clin Invest. 2015; 125(6): 2211–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nIsailovic N, Daigo K, Mantovani A, et al.: Interleukin-17 and innate immunity in infections and chronic inflammation. J Autoimmun. 2015; 60: 1–11. PubMed Abstract | Publisher Full Text\n\nSong X, He X, Li X, et al.: The roles and functional mechanisms of interleukin-17 family cytokines in mucosal immunity. Cell Mol Immunol. 2016; 13(4): 418–31. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLee JS, Tato CM, Joyce-Shaikh B, et al.: Interleukin-23-Independent IL-17 Production Regulates Intestinal Epithelial Permeability. Immunity. 2015; 43(4): 727–38. PubMed Abstract | Publisher Full Text\n\nCypowyj S, Picard C, Maródi L, et al.: Immunity to infection in IL-17-deficient mice and humans. Eur J Immunol. 2012; 42(9): 2246–54. PubMed Abstract | Publisher Full Text | Free Full Text\n\nToubiana J, Okada S, Hiller J, et al.: Heterozygous STAT1 gain-of-function mutations underlie an unexpectedly broad clinical phenotype. Blood. 2016; 127(25): 3154–64. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nKaser A: Not all monoclonals are created equal - lessons from failed drug trials in Crohn’s disease. Best Pract Res Clin Gastroenterol. 2014; 28(3): 437–49. PubMed Abstract | Publisher Full Text\n\nHolmgren J: Mucosal immunity and vaccination. FEMS Microbiol Immunol. 1991; 4(1): 1–9. PubMed Abstract | Publisher Full Text\n\nHolmgren J, Svennerholm AM: Bacterial enteric infections and vaccine development. Gastroenterol Clin North Am. 1992; 21(2): 283–302. PubMed Abstract\n\nFarache J, Zigmond E, Shakhar G, et al.: Contributions of dendritic cells and macrophages to intestinal homeostasis and immune defense. Immunol Cell Biol. 2013; 91(3): 232–9. PubMed Abstract | Publisher Full Text\n\nBain CC, Mowat AM: Macrophages in intestinal homeostasis and inflammation. Immunol Rev. 2014; 260(1): 102–17. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBischoff SC: Role of mast cells in allergic and non-allergic immune responses: comparison of human and murine data. Nat Rev Immunol. 2007; 7(2): 93–104. PubMed Abstract | Publisher Full Text\n\nMekori YA, Metcalfe DD: Mast cells in innate immunity. Immunol Rev. 2000; 173(1): 131–40. PubMed Abstract | Publisher Full Text\n\nSpits H, Cupedo T: Innate lymphoid cells: emerging insights in development, lineage relationships, and function. Annu Rev Immunol. 2012; 30: 647–75. PubMed Abstract | Publisher Full Text\n\nPhilip NH, Artis D: New friendships and old feuds: relationships between innate lymphoid cells and microbial communities. Immunol Cell Biol. 2013; 91(3): 225–31. PubMed Abstract | Publisher Full Text\n\nEberl G, Di Santo JP, Vivier E: The brave new world of innate lymphoid cells. Nat Immunol. 2015; 16(1): 1–5. PubMed Abstract | Publisher Full Text\n\nBevins CL, Salzman NH: Paneth cells, antimicrobial peptides and maintenance of intestinal homeostasis. Nat Rev Microbiol. 2011; 9(5): 356–68. 
PubMed Abstract | Publisher Full Text\n\nPober JS, Sessa WC: Inflammation and the blood microvascular system. Cold Spring Harb Perspect Biol. 2014; 7(1): a016345. PubMed Abstract | Publisher Full Text\n\nGoddard LM, Iruela-Arispe ML: Cellular and molecular regulation of vascular permeability. Thromb Haemost. 2013; 109(3): 407–15. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKumar V, Sharma A: Neutrophils: Cinderella of innate immune system. Int Immunopharmacol. 2010; 10(11): 1325–34. PubMed Abstract | Publisher Full Text\n\nLi Y, Karlin A, Loike JD, et al.: Determination of the critical concentration of neutrophils required to block bacterial growth in tissues. J Exp Med. 2004; 200(5): 613–22. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFoo SS, Reading PC, Jaillon S, et al.: Pentraxins and Collectins: Friend or Foe during Pathogen Invasion? Trends Microbiol. 2015; 23(12): 799–811. PubMed Abstract | Publisher Full Text\n\nKantari C, Pederzoli-Ribeil M, Witko-Sarsat V: The role of neutrophils and monocytes in innate immunity. Contrib Microbiol. 2008; 15: 118–46. PubMed Abstract | Publisher Full Text\n\nUhlig HH, Schwerd T, Koletzko S, et al.: The diagnostic approach to monogenic very early onset inflammatory bowel disease. Gastroenterology. 2014; 147(5): 990–1007.e3. PubMed Abstract | Publisher Full Text\n\nMarks DJ, Miyagi K, Rahman FZ, et al.: Pathological features of inflammatory bowel disease in CGD are indistinguishable from those observed in Crohn’s disease. Gastroenterology. 2007; 132(Suppl): 156–7.\n\nGlocker EO, Kotlarz D, Boztug K, et al.: Inflammatory bowel disease and mutations affecting the interleukin-10 receptor. N Engl J Med. 2009; 361(21): 2033–45. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKühn R, Löhler J, Rennick D, et al.: Interleukin-10-deficient mice develop chronic enterocolitis. Cell. 1993; 75(2): 263–74. 
PubMed Abstract | Publisher Full Text\n\nBogdan C, Vodovotz Y, Nathan C: Macrophage deactivation by interleukin 10. J Exp Med. 1991; 174(6): 1549–55. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEngelhardt KR, Shah N, Faizura-Yeop I, et al.: Clinical outcome in IL-10- and IL-10 receptor-deficient patients with or without hematopoietic stem cell transplantation. J Allergy Clin Immunol. 2013; 131(3): 825–30. PubMed Abstract | Publisher Full Text\n\nKhangura SK, Kamal N, Ho N, et al.: Gastrointestinal Features of Chronic Granulomatous Disease Found During Endoscopy. Clin Gastroenterol Hepatol. 2016; 14(3): 395–402.e5. PubMed Abstract | Publisher Full Text\n\nMarks DJ, Miyagi K, Rahman FZ, et al.: Inflammatory bowel disease in CGD reproduces the clinicopathological features of Crohn’s disease. Am J Gastroenterol. 2009; 104(1): 117–24. PubMed Abstract | Publisher Full Text\n\nHazzan D, Seward S, Stock H, et al.: Crohn’s-like colitis, enterocolitis and perianal disease in Hermansky-Pudlak syndrome. Colorectal Dis. 2006; 8(7): 539–43. PubMed Abstract | Publisher Full Text\n\nDieckgraefe BK, Korzenik JR, Husain A, et al.: Association of glycogen storage disease 1b and Crohn disease: results of a North American survey. Eur J Pediatr. 2002; 161(Suppl 1): S88–92. PubMed Abstract | Publisher Full Text\n\nKuemmerle-Deschner JB: CAPS--pathogenesis, presentation and treatment of an autoinflammatory disease. Semin Immunopathol. 2015; 37(4): 377–85. PubMed Abstract | Publisher Full Text\n\nRamos-Casals M, Brito-Zerón P, López-Guillermo A, et al.: Adult haemophagocytic syndrome. Lancet. 2014; 383(9927): 1503–16. PubMed Abstract | Publisher Full Text\n\nGoyal N, Rana A, Ahlawat A, et al.: Animal models of inflammatory bowel disease: a review. Inflammopharmacology. 2014; 22(4): 219–33. PubMed Abstract | Publisher Full Text\n\nNell S, Suerbaum S, Josenhans C: The impact of the microbiota on the pathogenesis of IBD: lessons from mouse infection models. Nat Rev Microbiol. 
2010; 8(8): 564–77. PubMed Abstract | Publisher Full Text\n\nRay A, Dittel BN: Interrelatedness between dysbiosis in the gut microbiota due to immunodeficiency and disease penetrance of colitis. Immunology. 2015; 146(3): 359–68. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRodrigues-Sousa T, Ladeirinha AF, Santiago AR, et al.: Deficient production of reactive oxygen species leads to severe chronic DSS-induced colitis in Ncf1/p47phox-mutant mice. PLoS One. 2014; 9(5): e97532. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPastorelli L, De Salvo C, Mercado JR, et al.: Central role of the gut epithelial barrier in the pathogenesis of chronic intestinal inflammation: lessons learned from animal models and human genetics. Front Immunol. 2013; 4: 280. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNi J, Chen SF, Hollander D: Effects of dextran sulphate sodium on intestinal epithelial cells and intestinal lymphocytes. Gut. 1996; 39(2): 234–41. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKim HS, Berstad A: Experimental colitis in animal models. Scand J Gastroenterol. 1992; 27(7): 529–37. PubMed Abstract | Publisher Full Text\n\nZheng L, Gao ZQ, Wang SX: A chronic ulcerative colitis model in rats. World J Gastroenterol. 2000; 6(1): 150–2. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWaidmann M, Bechtold O, Frick JS, et al.: Bacteroides vulgatus protects against Escherichia coli-induced colitis in gnotobiotic interleukin-2-deficient mice. Gastroenterology. 2003; 125(1): 162–77. PubMed Abstract | Publisher Full Text\n\nMizoguchi A, Mizoguchi E, Chiba C, et al.: Cytokine imbalance and autoantibody production in T cell receptor-alpha mutant mice with inflammatory bowel disease. J Exp Med. 1996; 183(3): 847–56. 
PubMed Abstract | Publisher Full Text\n\nMoller FT, Andersen V, Wohlfahrt J, et al.: Familial risk of inflammatory bowel disease: a population-based cohort study 1977-2011. Am J Gastroenterol. 2015; 110(4): 564–71. PubMed Abstract | Publisher Full Text\n\nBrant SR: Update on the heritability of inflammatory bowel disease: the importance of twin studies. Inflamm Bowel Dis. 2011; 17(1): 1–5. PubMed Abstract | Publisher Full Text\n\nPiitulainen E, Eriksson S: Decline in FEV1 related to smoking status in individuals with severe alpha1-antitrypsin deficiency (PiZZ). Eur Respir J. 1999; 13(2): 247–51. PubMed Abstract\n\nDawn Teare M, Barrett JH: Genetic linkage studies. Lancet. 2005; 366(9490): 1036–44. PubMed Abstract | Publisher Full Text\n\nHugot JP, Chamaillard M, Zouali H, et al.: Association of NOD2 leucine-rich repeat variants with susceptibility to Crohn’s disease. Nature. 2001; 411(6837): 599–603. PubMed Abstract | Publisher Full Text\n\nOgura Y, Bonen DK, Inohara N, et al.: A frameshift mutation in NOD2 associated with susceptibility to Crohn’s disease. Nature. 2001; 411(6837): 603–6. PubMed Abstract | Publisher Full Text\n\nRisch N, Merikangas K: The future of genetic studies of complex human diseases. Science. 1996; 273(5281): 1516–7. PubMed Abstract | Publisher Full Text\n\nPe’er I, Yelensky R, Altshuler D, et al.: Estimation of the multiple testing burden for genomewide association studies of nearly all common variants. Genet Epidemiol. 2008; 32(4): 381–5. PubMed Abstract | Publisher Full Text\n\nSpencer CC, Su Z, Donnelly P, et al.: Designing genome-wide association studies: sample size, power, imputation, and the choice of genotyping chip. PLoS Genet. 2009; 5(5):e1000477. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHirschhorn JN, Daly MJ: Genome-wide association studies for common diseases and complex traits. Nat Rev Genet. 2005; 6(2): 95–108. 
PubMed Abstract | Publisher Full Text\n\nJostins L, Ripke S, Weersma RK, et al.: Host-microbe interactions have shaped the genetic architecture of inflammatory bowel disease. Nature. 2012; 491(7422): 119–24. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEllinghaus D, Jostins L, Spain SL, et al.: Analysis of five chronic inflammatory diseases identifies 27 new associations and highlights disease-specific patterns at shared loci. Nat Genet. 2016; 48(5): 510–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nde Lange KM, Moutsianas L, Lee JC, et al.: Genome-wide association study implicates immune activation of multiple integrin genes in inflammatory bowel disease. BioRxiv. Publisher Full Text\n\nLiu JZ, van Sommeren S, Huang H, et al.: Association analyses identify 38 susceptibility loci for inflammatory bowel disease and highlight shared genetic risk across populations. Nat Genet. 2015; 47(9): 979–86. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMolodecky NA, Soon IS, Rabi DM, et al.: Increasing incidence and prevalence of the inflammatory bowel diseases with time, based on systematic review. Gastroenterology. 2012; 142(1): 46–54.e42; quiz e30. PubMed Abstract | Publisher Full Text\n\nBrown MA, Pile KD, Kennedy LG, et al.: HLA class I associations of ankylosing spondylitis in the white population in the United Kingdom. Ann Rheum Dis. 1996; 55(4): 268–70. PubMed Abstract | Free Full Text\n\nChinn S: A simple method for converting an odds ratio to effect size for use in meta-analysis. Stat Med. 2000; 19(22): 3127–31. PubMed Abstract | Publisher Full Text\n\nSmigoc Schweiger D, Mendez A, Kunilo Jamnik S, et al.: High-risk genotypes HLA-DR3-DQ2/DR3-DQ2 and DR3-DQ2/DR4-DQ8 in co-occurrence of type 1 diabetes and celiac disease. Autoimmunity. 2016; 49(4): 240–7. PubMed Abstract | Publisher Full Text\n\nSchreiber TH, Podack ER: Immunobiology of TNFSF15 and TNFRSF25. Immunol Res. 2013; 57(1–3): 3–11. 
PubMed Abstract | Publisher Full Text\n\nZhernakova A, van Diemen CC, Wijmenga C: Detecting shared pathogenesis from the shared genetics of immune-related diseases. Nat Rev Genet. 2009; 10(1): 43–55. PubMed Abstract | Publisher Full Text\n\nChristophers E: Comorbidities in psoriasis. Clin Dermatol. 2007; 25(6): 529–34. PubMed Abstract | Publisher Full Text\n\nNajarian DJ, Gottlieb AB: Connections between psoriasis and Crohn’s disease. J Am Acad Dermatol. 2003; 48(6): 805–21; quiz 822-4. PubMed Abstract | Publisher Full Text\n\nHsu LN, Armstrong AW: Psoriasis and autoimmune disorders: a review of the literature. J Am Acad Dermatol. 2012; 67(5): 1076–9. PubMed Abstract | Publisher Full Text\n\nLubrano E, Ciacci C, Ames PR, et al.: The arthritis of coeliac disease: prevalence and pattern in 200 adult patients. Br J Rheumatol. 1996; 35(12): 1314–8. PubMed Abstract | Publisher Full Text\n\nSelmi C: Diagnosis and classification of autoimmune uveitis. Autoimmun Rev. 2014; 13(4–5): 591–4. PubMed Abstract | Publisher Full Text\n\nRosenbaum JT: Uveitis in spondyloarthritis including psoriatic arthritis, ankylosing spondylitis, and inflammatory bowel disease. Clin Rheumatol. 2015; 34(6): 999–1002. PubMed Abstract | Publisher Full Text\n\nMielants H, Veys EM, Cuvelier C, et al.: Ileocolonoscopic findings in seronegative spondylarthropathies. Br J Rheumatol. 1988; 27(Suppl 2): 95–105. PubMed Abstract | Publisher Full Text\n\nSegal AW, Isenberg DA, Hajirousou V, et al.: Preliminary evidence for gut involvement in the pathogenesis of rheumatoid arthritis? Br J Rheumatol. 1986; 25(2): 162–6. PubMed Abstract | Publisher Full Text\n\nNadorra RL, Nakazato Y, Landing BH: Pathologic features of gastrointestinal tract lesions in childhood-onset systemic lupus erythematosus: study of 26 patients, with review of the literature. Pediatr Pathol. 1987; 7(3): 245–59. 
PubMed Abstract | Publisher Full Text\n\nGenerali E, Ceribelli A, Massarotti M, et al.: Seronegative reactive spondyloarthritis and the skin. Clin Dermatol. 2015; 33(5): 531–7. PubMed Abstract | Publisher Full Text\n\nSekhwal MK, Li P, Lam I, et al.: Disease Resistance Gene Analogs (RGAs) in Plants. Int J Mol Sci. 2015; 16(8): 19248–90. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGay NJ, Symmons MF, Gangloff M, et al.: Assembly and localization of Toll-like receptor signalling complexes. Nat Rev Immunol. 2014; 14(8): 546–58. PubMed Abstract | Publisher Full Text\n\nNakamura N, Lill JR, Phung Q, et al.: Endosomes are specialized platforms for bacterial sensing and NOD2 signalling. Nature. 2014; 509(7499): 240–4. PubMed Abstract | Publisher Full Text\n\nKeestra-Gounder AM, Byndloss MX, Seyffert N, et al.: NOD1 and NOD2 signalling links ER stress with inflammation. Nature. 2016; 532(7599): 394–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCaruso R, Warner N, Inohara N, et al.: NOD1 and NOD2: signaling, host defense, and inflammatory disease. Immunity. 2014; 41(6): 898–908. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMaekawa S, Ohto U, Shibata T, et al.: Crystal structure of NOD2 and its implications in human disease. Nat Commun. 2016; 7: 11813. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOgura Y, Inohara N, Benito A, et al.: Nod2, a Nod1/Apaf-1 family member that is restricted to monocytes and activates NF-kappaB. J Biol Chem. 2001; 276(7): 4812–8. PubMed Abstract | Publisher Full Text\n\nOgura Y, Lala S, Xin W, et al.: Expression of NOD2 in Paneth cells: a possible link to Crohn’s ileitis. Gut. 2003; 52(11): 1591–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nShi C, Pamer EG: Monocyte recruitment during infection and inflammation. Nat Rev Immunol. 2011; 11(11): 762–74. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nSerbina NV, Jia T, Hohl TM, et al.: Monocyte-mediated defense against microbial pathogens. Annu Rev Immunol. 2008; 26: 421–52. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGerhardt T, Ley K: Monocyte trafficking across the vessel wall. Cardiovasc Res. 2015; 107(3): 321–30. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYona S, Kim KW, Wolf Y, et al.: Fate mapping reveals origins and dynamics of monocytes and tissue macrophages under homeostasis. Immunity. 2013; 38(1): 79–91. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVarol C, Yona S, Jung S: Origins and tissue-context-dependent fates of blood monocytes. Immunol Cell Biol. 2009; 87(1): 30–8. PubMed Abstract | Publisher Full Text\n\nAuffray C, Fogg D, Garfa M, et al.: Monitoring of blood vessels and tissues by a population of monocytes with patrolling behavior. Science. 2007; 317(5838): 666–70. PubMed Abstract | Publisher Full Text\n\nClark SL Jr: Cellular differentiation in the kidneys of newborn mice studies with the electron microscope. J Biophys Biochem Cytol. 1957; 3(3): 349–62. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDe Duve C, Wattiaux R: Functions of lysosomes. Annu Rev Physiol. 1966; 28: 435–92. PubMed Abstract | Publisher Full Text\n\nMizushima N, Noda T, Yoshimori T, et al.: A protein conjugation system essential for autophagy. Nature. 1998; 395(6700): 395–8. PubMed Abstract | Publisher Full Text\n\nNarayanan LA, Edelmann MJ: Ubiquitination as an efficient molecular strategy employed in salmonella infection. Front Immunol. 2014; 5: 558. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTooze SA, Yoshimori T: The origin of the autophagosomal membrane. Nat Cell Biol. 2010; 12(9): 831–5. PubMed Abstract | Publisher Full Text\n\nKawabata T, Yoshimori T: Beyond starvation: An update on the autophagic machinery and its functions. J Mol Cell Cardiol. 2016; 95: 2–10. 
PubMed Abstract | Publisher Full Text\n\nOhsumi Y: Historical landmarks of autophagy research. Cell Res. 2014; 24(1): 9–23. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWilson MI, Dooley HC, Tooze SA: WIPI2b and Atg16L1: setting the stage for autophagosome formation. Biochem Soc Trans. 2014; 42(5): 1327–34. PubMed Abstract | Publisher Full Text\n\nDooley HC, Razi M, Polson HE, et al.: WIPI2 links LC3 conjugation with PI3P, autophagosome formation, and pathogen clearance by recruiting Atg12-5-16L1. Mol Cell. 2014; 55(2): 238–52. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSorbara MT, Girardin SE: Emerging themes in bacterial autophagy. Curr Opin Microbiol. 2015; 23: 163–70. PubMed Abstract | Publisher Full Text\n\nKimmey JM, Huynh JP, Weiss LA, et al.: Unique role for ATG5 in neutrophil-mediated immunopathology during M. tuberculosis infection. Nature. 2015; 528(7583): 565–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHuang J, Brumell JH: Bacteria-autophagy interplay: a battle for survival. Nat Rev Microbiol. 2014; 12(2): 101–14. PubMed Abstract | Publisher Full Text\n\nSchwerd T, Pandey S, Yang HT, et al.: Impaired antibacterial autophagy links granulomatous intestinal inflammation in Niemann-Pick disease type C1 and XIAP deficiency with NOD2 variants in Crohn’s disease. Gut. 2016; pii: gutjnl-2015-310382. PubMed Abstract | Publisher Full Text\n\nVerlhac P, Viret C, Faure M: Dual function of CALCOCO2/NDP52 during xenophagy. Autophagy. 2015; 11(6): 965–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGardet A, Benita Y, Li C, et al.: LRRK2 is involved in the IFN-gamma response and host response to pathogens. J Immunol. 2010; 185(9): 5577–85. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYing H, Yue BY: Cellular and molecular biology of optineurin. Int Rev Cell Mol Biol. 2012; 294: 223–58. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nTumbarello DA, Kendrick-Jones J, Buss F: Myosin VI and its cargo adaptors - linking endocytosis and autophagy. J Cell Sci. 2013; 126(Pt 12): 2561–70. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHampe J, Franke A, Rosenstiel P, et al.: A genome-wide association scan of nonsynonymous SNPs identifies a susceptibility variant for Crohn disease in ATG16L1. Nat Genet. 2007; 39(2): 207–11. PubMed Abstract | Publisher Full Text\n\nFujita N, Morita E, Itoh T, et al.: Recruitment of the autophagic machinery to endosomes during infection is mediated by ubiquitin. J Cell Biol. 2013; 203(1): 115–28. PubMed Abstract | Publisher Full Text | Free Full Text\n\nConway KL, Kuballa P, Song JH, et al.: Atg16l1 is required for autophagy in intestinal epithelial cells and protection of mice from Salmonella infection. Gastroenterology. 2013; 145(6): 1347–57. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSadaghian Sadabad M, Regeling A, de Goffau MC, et al.: The ATG16L1-T300A allele impairs clearance of pathosymbionts in the inflamed ileal mucosa of Crohn’s disease patients. Gut. 2015; 64(10): 1546–52. PubMed Abstract | Publisher Full Text\n\nKuballa P, Huett A, Rioux JD, et al.: Impaired autophagy of an intracellular pathogen induced by a Crohn’s disease associated ATG16L1 variant. PLoS One. 2008; 3(10): e3391. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCadwell K, Liu JY, Brown SL, et al.: A key role for autophagy and the autophagy gene Atg16l1 in mouse and human intestinal Paneth cells. Nature. 2008; 456(7219): 259–63. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSingh SB, Davis AS, Taylor GA, et al.: Human IRGM induces autophagy to eliminate intracellular mycobacteria. Science. 2006; 313(5792): 1438–41. PubMed Abstract | Publisher Full Text\n\nMatsuda N: Phospho-ubiquitin: upending the PINK-Parkin-ubiquitin cascade. J Biochem. 2016; 159(4): 379–85. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nEl Chemaly A, Demaurex N: Do Hv1 proton channels regulate the ionic and redox homeostasis of phagosomes? MolCell Endocrinol. 2012; 353(1–2): 82–7. PubMed Abstract | Publisher Full Text\n\nMarshansky V, Futai M: The V-type H+-ATPase in vesicular trafficking: targeting, regulation and function. Curr Opin Cell Biol. 2008; 20(4): 415–26. PubMed Abstract | Publisher Full Text\n\nBekpen C, Hunn JP, Rohde C, et al.: The interferon-inducible p47 (IRG) GTPases in vertebrates: loss of the cell autonomous resistance mechanism in the human lineage. Genome Biol. 2005; 6(11): R92. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWu C, Jin X, Tsueng G, et al.: BioGPS: building your own mash-up of gene annotations and expression profiles. Nucleic Acids Res. 2016; 44(D1): D313–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLi Q, Singh CR, Ma S, et al.: Label-free proteomics and systems biology analysis of mycobacterial phagosomes in dendritic cells and macrophages. J Proteome Res. 2011; 10(5): 2425–39. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSoldati T, Neyrolles O: Mycobacteria and the Intraphagosomal Environment: Take it with a pinch of salt(s)! Traffic. 2012; 13(8): 1042–52. PubMed Abstract | Publisher Full Text\n\nKratochvílová K, Moráň L, Paďourová S, et al.: The role of the endoplasmic reticulum stress in stemness, pluripotency and development. Eur J Cell Biol. 2016; 95(3–5): 115–23. PubMed Abstract | Publisher Full Text\n\nShaffer AL, Wright G, Yang L, et al.: A library of gene expression signatures to illuminate normal and pathological lymphoid biology. Immunol Rev. 2006; 210(1): 67–85. PubMed Abstract | Publisher Full Text\n\nKleizen B, Braakman I: Protein folding and quality control in the endoplasmic reticulum. Curr Opin Cell Biol. 2004; 16(4): 343–9. 
PubMed Abstract | Publisher Full Text\n\nOgata M, Hino S, Saito A, et al.: Autophagy is activated for cell survival after endoplasmic reticulum stress. Mol Cell Biol. 2006; 26(24): 9220–31. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOzcan L, Tabas I: Role of endoplasmic reticulum stress in metabolic disease and other disorders. Annu Rev Med. 2012; 63: 317–28. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDíaz-Villanueva JF, Díaz-Molina R, García-González V: Protein Folding and Mechanisms of Proteostasis. Int J Mol Sci. 2015; 16(8): 17193–230. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHosomi S, Kaser A, Blumberg RS: Role of endoplasmic reticulum stress and autophagy as interlinking pathways in the pathogenesis of inflammatory bowel disease. Curr Opin Gastroenterol. 2015; 31(1): 81–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchapansky J, Nardozzi JD, LaVoie MJ: The complex relationships between microglia, alpha-synuclein, and LRRK2 in Parkinson’s disease. Neuroscience. 2015; 302: 74–88. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWaschbüsch D, Michels H, Strassheim S, et al.: LRRK2 transport is regulated by its novel interacting partner Rab32. PLoS One. 2014; 9(10): e111632. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSpanò S: Host restriction in Salmonella: insights from Rab GTPases. Cell Microbiol. 2014; 16(9): 1321–8. PubMed Abstract | Publisher Full Text\n\nHoffmann C, Finsel I, Otto A, et al.: Functional analysis of novel Rab GTPases identified in the proteome of purified Legionella-containing vacuoles from macrophages. Cell Microbiol. 2014; 16(7): 1034–52. PubMed Abstract | Publisher Full Text\n\nGerondopoulos A, Langemeyer L, Liang JR, et al.: BLOC-3 mutated in Hermansky-Pudlak syndrome is a Rab32/38 guanine nucleotide exchange factor. Curr Biol. 2012; 22(22): 2135–9. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nLin JC, Lin CS, Hsu CW, et al.: Association Between Parkinson’s Disease and Inflammatory Bowel Disease: a Nationwide Taiwanese Retrospective Cohort Study. Inflamm Bowel Dis. 2016; 22(5): 1049–55. PubMed Abstract | Publisher Full Text\n\nNalls MA, Saad M, Noyce AJ, et al.: Genetic comorbidities in Parkinson’s disease. Hum Mol Genet. 2014; 23(3): 831–41. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKouklakis G, Efremidou EI, Papageorgiou MS, et al.: Complicated Crohn’s-like colitis, associated with Hermansky-Pudlak syndrome, treated with Infliximab: a case report and brief review of the literature. J Med Case Rep. 2007; 1: 176. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSmith AM, Sewell GW, Levine AP, et al.: Disruption of macrophage pro-inflammatory cytokine release in Crohn’s disease is associated with reduced optineurin expression in a subset of patients. Immunology. 2015; 144(1): 45–55. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYing H, Yue BY: Optineurin: The autophagy connection. Exp Eye Res. 2016; 144: 73–80. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChew TS, O’Shea NR, Sewell GW, et al.: Optineurin deficiency in mice contributes to impaired cytokine secretion and neutrophil recruitment in bacteria-driven colitis. Dis Model Mech. 2015; 8(8): 817–29. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSlowicka K, Vereecke L, Mc Guire C, et al.: Optineurin deficiency in mice is associated with increased sensitivity to Salmonella but does not affect proinflammatory NF-κB signaling. Eur J Immunol. 2016; 46(4): 971–80. PubMed Abstract | Publisher Full Text\n\nO’Shea NR, Chew TS, Dunne J, et al.: Critical Role of the Disintegrin Metalloprotease ADAM-like Decysin-1 [ADAMDEC1] for Intestinal Immunity and Inflammation. J Crohns Colitis. 2016; pii: jjw111. 
PubMed Abstract | Publisher Full Text\n\nSawyer SL, Hartley T, Dyment DA, et al.: Utility of whole-exome sequencing for those near the end of the diagnostic odyssey: time to address gaps in care. Clin Genet. 2016; 89(3): 275–84. PubMed Abstract | Publisher Full Text\n\nStranneheim H, Wedell A: Exome and genome sequencing: a revolution for the discovery and diagnosis of monogenic disorders. J Intern Med. 2016; 279(1): 3–15. PubMed Abstract | Publisher Full Text\n\nPrescott NJ, Lehne B, Stone K, et al.: Pooled sequencing of 531 genes in inflammatory bowel disease identifies an associated rare variant in BTNL2 and implicates other immune related genes. PLoS Genet. 2015; 11(2): e1004955. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCardinale CJ, Kelsen JR, Baldassano RN, et al.: Impact of exome sequencing in inflammatory bowel disease. World J Gastroenterol. 2013; 19(40): 6721–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRivas MA, Beaudoin M, Gardet A, et al.: Deep resequencing of GWAS loci identifies independent rare variants associated with inflammatory bowel disease. Nat Genet. 2011; 43(11): 1066–73. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMomozawa Y, Mni M, Nakamura K, et al.: Resequencing of positional candidates identifies low frequency IL23R coding variants protecting against inflammatory bowel disease. Nat Genet. 2011; 43(1): 43–7. PubMed Abstract | Publisher Full Text\n\nMacArthur DG, Balasubramanian S, Frankish A, et al.: A systematic survey of loss-of-function variants in human protein-coding genes. Science. 2012; 335(6070): 823–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLuo Y, de Lange KM, Jostins L, et al.: Exploring the genetic architecture of inflammatory bowel disease by whole genome sequencing identifies association at ADCY7. Cold Spring Harbor Labs Journals. 2016. 
Publisher Full Text\n\nCarmi S, Hui KY, Kochav E, et al.: Sequencing an Ashkenazi reference panel supports population-targeted personal genomics and illuminates Jewish and European origins. Nat Commun. 2014; 5: 4835. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChuang LS, Villaverde N, Hui KY, et al.: A Frameshift in CSF2RB Predominant Among Ashkenazi Jews Increases Risk for Crohn’s Disease and Reduces Monocyte Signaling via GM-CSF. Gastroenterology. 2016; 151(4): 710–723.e2. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLevine AP, Pontikos N, Schiff ER, et al.: Genetic Complexity of Crohn’s Disease in Two Large Ashkenazi Jewish Families. Gastroenterology. 2016; 151(4): 698–709. PubMed Abstract | Publisher Full Text\n\nGewurz BE, Towfic F, Mar JC, et al.: Genome-wide siRNA screen for mediators of NF-κB activation. Proc Natl Acad Sci U S A. 2012; 109(7): 2467–72. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChakraborty S, Lambie EJ, Bindu S, et al.: Engulfment pathways promote programmed cell death by enhancing the unequal segregation of apoptotic potential. Nat Commun. 2015; 6: 10126. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDenson LA, Klein C: Granulocyte-Macrophage Colony Stimulating Factor Bioactivity and Mucosal Homeostasis in Crohn’s Disease: A Role for Genetic Variation. Gastroenterology. 2016; 151(4): 593–6. PubMed Abstract | Publisher Full Text\n\nBroughton SE, Dhagat U, Hercus TR, et al.: The GM-CSF/IL-3/IL-5 cytokine receptor family: from ligand recognition to initiation of signaling. Immunol Rev. 2012; 250(1): 277–302. PubMed Abstract | Publisher Full Text\n\nWicks IP, Roberts AW: Targeting GM-CSF in inflammatory diseases. Nat Rev Rheumatol. 2016; 12(1): 37–48. PubMed Abstract | Publisher Full Text\n\nMetcalf D: Hematopoietic cytokines. Blood. 2008; 111(2): 485–91. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSanderson CJ: Eosinophil differentiation factor (interleukin-5). Immunol Ser. 
1990; 49: 231–56. PubMed Abstract\n\nBedard K, Krause KH: The NOX family of ROS-generating NADPH oxidases: physiology and pathophysiology. Physiol Rev. 2007; 87(1): 245–313. PubMed Abstract | Publisher Full Text\n\nSommer F, Bäckhed F: The gut microbiota engages different signaling pathways to induce Duox2 expression in the ileum and colon epithelium. Mucosal Immunol. 2015; 8(2): 372–9. PubMed Abstract | Publisher Full Text\n\nChang S, Linderholm A, Franzi L, et al.: Dual oxidase regulates neutrophil recruitment in allergic airways. Free Radic Biol Med. 2013; 65: 38–46. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGrasberger H, El-Zaatari M, Dang DT, et al.: Dual oxidases control release of hydrogen peroxide by the gastric epithelium to prevent Helicobacter felis infection and inflammation in mice. Gastroenterology. 2013; 145(5): 1045–54. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLipinski S, Till A, Sina C, et al.: DUOX2-derived reactive oxygen species are effectors of NOD2-mediated antibacterial responses. J Cell Sci. 2009; 122(Pt 19): 3522–30. PubMed Abstract | Publisher Full Text\n\nO’Sullivan M, O’Morain C: Liquid diets for Crohn’s disease. Gut. 2001; 48(6): 757. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSajid MS, Iqbal Z, Muhammad G, et al.: Immunomodulatory effect of various anti-parasitics: a review. Parasitology. 2006; 132(Pt 3): 301–13. PubMed Abstract | Publisher Full Text\n\nSegal AW, Levi AJ, Loewi G: Levamisole in the treatment of Crohn’s disease. Lancet. 1977; 2(8034): 382–5. PubMed Abstract | Publisher Full Text\n\nSachar DB, Rubin KP, Gumaste V: Levamisole in Crohn’s disease: a randomized, double-blind, placebo-controlled clinical trial. Am J Gastroenterol. 1987; 82(6): 536–9. PubMed Abstract\n\nSegal AW, Pugh SF, Levi AJ, et al.: Levamisole-induced arthritis in Crohn’s disease. Br Med J. 1977; 2(6086): 555. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nBenfield GF, Felix-Davies DD, Thompson RA, et al.: Severe acute polyarthropathy associated with levamisole therapy in a patient with Crohn’s disease. Eur J Rheumatol Inflamm. 1984; 7(2): 63–5. PubMed Abstract\n\nDing NS, Hart A, De Cruz P: Systematic review: predicting and optimising response to anti-TNF therapy in Crohn’s disease - algorithm for practical management. Aliment Pharmacol Ther. 2016; 43(1): 30–51. PubMed Abstract | Publisher Full Text\n\nWright AV, Nuñez JK, Doudna JA: Biology and Applications of CRISPR Systems: Harnessing Nature’s Toolbox for Genome Engineering. Cell. 2016; 164(1–2): 29–44. PubMed Abstract | Publisher Full Text\n\nShui B, Hernandez Matias L, Guo Y, et al.: The Rise of CRISPR/Cas for Genome Editing in Stem Cells. Stem Cells Int. 2016; 2016: 8140168. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGhosh S, Thrasher AJ, Gaspar HB: Gene therapy for monogenic disorders of the bone marrow. Br J Haematol. 2015; 171(2): 155–170. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "16956",
"date": "17 Oct 2016",
"name": "Jean-Laurent Casanova",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a very comprehensive review of the pathogenesis of Crohn's disease (CD). The paper thoroughly covers all relevant aspects, including epidemiology, microbiology, immunology, and genetics. In that regard, it is both holistic and reductionist -- a very rare combination nowadays. It is beautifully written and presents a fair account of the current state of knowledge. Importantly, it is not biased: it does not resemble any of the too many reviews on CD that provide more beliefs and opinions than facts. This is in my view the very best review ever written on CD. This is clearly a must-read for anyone interested in this condition.",
"responses": []
},
{
"id": "17018",
"date": "27 Oct 2016",
"name": "James C. Lee",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis review presents a thorough overview of the pathogenesis of Crohn's disease, drawing on a broad range of evidence, from epidemiology to human immunology to genetics. It is impressive to have collated such diverse topics into a single review, and therefore understandable that some areas are discussed in more detail than others. Overall this is a valuable contribution to the literature.\nOf note, some parts of this review read more like a perspective piece and draw conclusions that I suspect are not widely held, although they are supported by the evidence presented. There is also a considerable focus on the author's own work in several sections, although this is understandable and it is valuable to have these experiments discussed in the context of the broader literature regarding Crohn's disease pathogenesis.\n\nThe author is fairly dismissive of the role of genetics, and more specifically of the significance of disease-associated genetic variants, based on their small odds ratios and frequency within the healthy population. While I understand this opinion, I think it is important to clarify a couple of these points for a general audience. First, it is important to note that one of the founding premises of GWAS was that it was designed to identify genes that are involved in pathways relevant to disease pathogenesis. It was not expected - based on the common disease-common variant hypothesis - that any of the SNPs or genes would by themselves be causal. 
To take the example of impaired autophagy - something that has been commonly reported in Crohn's patients - there are several ways in which this might occur. For instance some patients may carry NOD2 or IRGM or ATG16L1 mutations, while in others it may be due to environmental factors such as smoking - see Monick et al. (2010). Based on the multi-hit model the author proposes, the key point is that \"hits\" are simultaneously required in several pathways in conjunction with an environmental trigger in order for a disease to occur. Because the exact nature of each hit is likely to vary between patients, it is not surprising that univariate genetic analysis identifies small odds ratios at individual SNPs (many of which may have similar effects on a given pathway). Moreover, the size of these effects may be different when gene-environment and epistatic interactions are taken into account. The value of this data, therefore, is that it points towards the specific pathways that are likely to be involved in disease pathogenesis, rather than identifying causal genes.\nThis is perhaps best illustrated in cardiovascular medicine, where genetic variants in the gene that encodes HMG-CoA reductase have been significantly associated with LDL cholesterol levels, but the odds ratios for these associations are very small (presumably because there are many other causes of elevated cholesterol, Kathiresan et al. (2008)). Despite this, pharmacological targeting of this enzyme with statins represents the most effective treatment for hypercholesterolaemia. As such, a small odds ratio at an individual SNP should not be confused with the implicated gene or pathway not being biologically or therapeutically important.\n\nOther points:\nDiscussion of the ATG16L1 variant would benefit from including the recent evidence linking the variant to susceptibility to caspase-mediated degradation (Murthy et al. 
(2014)).\n\nIt would be worth highlighting the cross-talk between ER stress and autophagy, particularly in the context of Crohn's (Adolph et al.(2013)).\n\np.2: IL-23 is thought to be more involved in the expansion and survival of Th17 than in transforming naive T cells into Th17 cells\n\np.7 typo: Danziel should be Dalzeil",
"responses": [
{
"c_id": "2259",
"date": "31 Oct 2016",
"name": "Tony Segal",
"role": "Author Response",
"response": "I am most grateful to Dr Lee for taking the time and trouble to thoroughly review my manuscript and for his comments. I would like to address the issue of the relationship between GWAS and the identification of causal genetic variants. It is simple to establish causality for dominant, highly penetrant genes like polycystic renal disease, polyposis coli and Huntington’s disease. However it is more difficult when penetrance is incomplete, when one is reliant upon the odds ratio of the frequency of the variants in affected as compared with unaffected individuals. In general very high odds ratios would strongly suggest causality. Genes with very high odds rations have been identified by GWAS. For example: Birdshot chorioretinopathy https://www.ncbi.nlm.nih.gov/pubmed/24957906 However, the most important consideration is not whether or not the design of GWAS was to identify causal genes, but what the common perception is, and in many cases it is the perception that GWAS do in fact generally identify causal genes: The genetic background of Inflammatory Bowel Disease: From correlation to causality. Uniken Venema WT et al. https://www.ncbi.nlm.nih.gov/pubmed/?term=27785786 Insights into kidney diseases from genome-wide association studies. Wuttke M and Köttgen A https://www.ncbi.nlm.nih.gov/pubmed/?term=27477491 Genetics of complex traits: prediction of phenotype, identification of causal polymorphisms and genetic architecture. Goddard ME et al. https://www.ncbi.nlm.nih.gov/pubmed/?term=27440663 A large genome-wide association study of age-related macular degeneration highlights contributions of rare and common variants. Fritsche LG et al. https://www.ncbi.nlm.nih.gov/pubmed/?term=26691988 Combining GWAS and RNA-Seq Approaches for Detection of the Causal Mutation for Hereditary Junctional Epidermolysis Bullosa in Sheep. Suárez-Vega A et al. 
https://www.ncbi.nlm.nih.gov/pubmed/?term=25955497 It was therefore very important to state in the review that GWAS have not identified genes that cause Crohn's disease. Under the Summary of outcome of GWAS studies in CD I have now stated: The GWAS studies have provided a series of clear answers. No single gene, or a small number of genes, has been identified that is causal for CD. More than 170 GWAS hits combined contribute to about 10% of the “heritability” of CD. With the average individual contribution of only 0.1% it is unlikely that these variants will individually have major effects on cellular function, either in the CD patients or in experimental systems. One of the founding premises of GWAS was that it was designed to identify genes that are involved in pathways relevant to disease pathogenesis. The value of this data, therefore, is that it points towards the specific pathways that are likely to be involved in disease pathogenesis, rather than identifying causal genes."
}
]
}
] | 1
|
https://f1000research.com/articles/5-2510
|
https://f1000research.com/articles/5-2681/v1
|
16 Nov 16
|
{
"type": "Research Article",
"title": "Cost-effectiveness of once-daily versus twice-daily regimens in the treatment of HIV infection in sub-Saharan Africa: a probabilistic decision model",
"authors": [
"Muinat B. Sanni-Oba",
"Olalekan A. Uthman",
"Jean B. Nachega",
"Olalekan A. Uthman",
"Jean B. Nachega"
],
"abstract": "BACKGROUND: Regimen simplification of ART, by administering them less frequently, has been suggested as a practical approach to improve adherence. The aim of this study was to assess the cost-utility of once-daily (QD) versus twice-daily (BID) antiretroviral (ART) regimens in the treatment of HIV. METHODS: A Model-based Markov modelling of cost-effectiveness using secondary data sources was developed to determine the incremental cost per quality-adjusted life year (QALY) gained of QD versus BID ART regimen for a hypothetical cohort treatment-naïve adults with HIV, from the Sub-Saharan African healthcare payer’s perspective. RESULTS: At base-case values for all parameters, the total number of QALY gained by QD regimen was 0.27 and the incremental cost difference of $2147.04. The incremental cost-effectiveness ratio (ICER) of QD versus BID regimen was $8087/QALY gained. The ICER was most sensitive to the variations in the total medical cost of state A (asymptomatic, non-AIDS, CD4> 350 cells/μL), total medical Cost State D (symptomatic AIDS or severe symptoms), and utility of State A. In our bootstrap analysis, 60% of bootstrap replicates for the ICER shows that QD is more costly and more effective than BID regimen, while the remaining 40% replicates shows that QD is less costly and less effective than BID. If decision-makers were willing to pay $1000 per QALY gained, the probability of QD being cost-effective was 44%. The probability of QD regimen being cost-effective was 48% when the willing to pay was $5000. CONCLUSIONS: From a sub-Saharan Africa societal perspective QD regimen cannot be regarded as cost-effective, although there is substantial decision uncertainty. Findings from the economic evaluation are important for low- and middle-income countries (LMIC) to consider as they decide whether to adopt the new branded single tablet regimen. Generic-based ART could yield substantial budgetary saving to HIV programmes in LMIC.",
"keywords": [
"once-daily regimen",
"twice-daily regimen",
"cost-effectiveness",
"HIV"
],
"content": "Background\n\nSub-Saharan Africa (SSA) is the region most heavily affected by human immunodeficiency virus (HIV)1. It is estimated that in 2012, as much as 68% of all people infected with HIV were living in SSA, and about 20% of all deaths and disability adjusted life years (DALYs) lost in Africa are due to HIV or acquired immunodeficiency syndrome (AIDS)1. The overarching aim of the antiretroviral therapy is to achieve optimal suppression of viral load, preserve immune functions and ultimately improve quality of life and reduce overall mortality2. The use of ART among people living with HIV has led to significant reduction in morbidity and mortality associated with HIV by slowing down the disease progression3. However, it is important to note that for the ART to effective, its clinical success depends on optimal adherence to the regimens4. It has been documented that optimal adherence to ART is associated with good viral suppression, slowing of disease progress and reduced all-cause mortality in people living with HIV5,6. Regimen simplification of ART, by administering them less frequently, has been suggested as a practical approach to improve adherence and patient convenience7. Recently, major advances have been made towards simplifying ART regimens. One of the most important advances is decreasing the dosing frequency and pill burden from more than 10 tablets to a one table once a day (QD) fixed dose combination4.\n\nWhile the literature has focused on the effectiveness of QD versus twice a day (BID) regimens7–20, little interest has been paid to the economic evaluations21–25. Economic evaluation provides a useful framework to assist policy makers in allocating resources across competing needs. To the best of our knowledge, there have been no recent attempts to assess the likely cost-effectiveness of QD versus BID regimen from sub-Saharan’s perspective. 
Therefore, the objective of this study was to determine the cost-effectiveness of QD versus BID antiretroviral regimens in HIV treatment.\n\n\nMethodology\n\nWe developed a computer-based mathematical model of HIV infection to simulate the effect of QD versus BID regimens (Figure 1). The model is a traditional Markov stage-transition model26, which was used to extrapolate the costs and health outcomes over the lifetime of patients. The analysis was performed from a societal perspective, where all direct and indirect costs were considered. Health outcomes and costs accrued beyond 1 year were discounted at 3.5%, to adjust future costs and health benefits and express them in terms of their present values27. Based on recent clinical guidelines for the use of ART in HIV-infected individuals, the Markov model has five health states to represent the progression through HIV disease states to death28,29:\n\n1) State A: HIV positive, asymptomatic, non-AIDS, CD4 >350 cells/μL;\n\n2) State B: HIV positive, asymptomatic, non-AIDS, CD4 >200 cells/μL, but ≤350 cells/μL;\n\n3) State C: HIV positive, asymptomatic, AIDS, CD4 <200 cells/μL;\n\n4) State D: HIV positive, symptomatic AIDS or severe symptoms; and\n\n5) State E: Death (age- and disease-related). People living with HIV may either die from HIV-related causes or from any other causes.\n\nState A: HIV positive, asymptomatic, non-AIDS, CD4 > 350 cells/μL; State B: HIV positive, asymptomatic, non-AIDS, CD4 >200 cells/μL, but ≤350 cells/μL; State C: HIV positive, asymptomatic, AIDS, CD4 <200 cells/μL; State D: HIV positive, symptomatic AIDS or severe symptoms.\n\nPatients can remain in the same state, progress or retreat from an AIDS state to a non-AIDS state. The final state is E, death. 
The Markov model was based on a cohort of 1,000 hypothetical individuals; a cycle length of one year was applied, and the model was simulated over 20 years.\n\nParameter estimates were extracted from published data30–34(http://www.msfaccess.org/content/untangling-web-antiretroviral-price-reductions-17th-edition-%E2%80%93-july-2014). We conducted a series of focused literature searches in PubMed and Google Scholar to identify appropriate model input parameters to populate the model. The search terms included the following: “once-daily”, “fixed-dose combination”, “twice-daily”, “adherence”, “transition probabilities”, “HIV treatment costs”. Table 1 presents the model input parameters and their sources. Costs of treatment were expressed in US dollars and adjusted for inflation to 2015 prices using a price inflation index (http://www.bls.gov/data/inflation_calculator.htm).\n\n*both direct and indirect cost; **per patient-year of treatment; ***annual transitional probabilities for BID regimen; ****relative risk of QD versus BID for virologic suppression\n\n† State A: HIV positive, asymptomatic, non-AIDS, CD4 > 350 cells/μL; State B: HIV positive, asymptomatic, non-AIDS, CD4 >200 cells/μL, but ≤350 cells/μL; State C: HIV positive, asymptomatic, AIDS, CD4 < 200 cells/μL; State D: HIV positive, symptomatic AIDS or severe symptoms.\n\nCHAI - http://hdl.handle.net/1902.1/18843\n\nMSF - http://www.msfaccess.org/content/untangling-web-antiretroviral-price-reductions-17th-edition-%E2%80%93-july-2014\n\nIn the base-case scenario, all model parameters assumed the best values from the published literature. In the best and worst case scenarios, the parameters were set to values more favourable and less favourable to the QD regimen, respectively.\n\nThe values of HIV-related utility scores and quality-adjusted life years (QALYs) stratified by CD4 are also shown in Table 1. 
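The cohort simulation described above can be sketched in a few lines of Python. This is a minimal illustration only: the transition probabilities, per-state costs and utility weights below are invented placeholders, not the study's Table 1 inputs, and the structure (five states, 1,000 patients, 20 one-year cycles, 3.5% discounting) follows the text.

```python
# States: A (CD4>350), B (200<CD4<=350), C (AIDS, CD4<200), D (symptomatic), E (death).
# All numeric values below are invented placeholders, NOT the study's Table 1 inputs.
P = [  # annual transition probabilities; each row sums to 1, state E is absorbing
    [0.80, 0.12, 0.05, 0.02, 0.01],  # from A
    [0.10, 0.70, 0.12, 0.05, 0.03],  # from B
    [0.02, 0.08, 0.70, 0.14, 0.06],  # from C
    [0.00, 0.02, 0.08, 0.75, 0.15],  # from D
    [0.00, 0.00, 0.00, 0.00, 1.00],  # from E (death)
]
utility = [0.94, 0.90, 0.82, 0.70, 0.00]    # illustrative QALY weights per state
cost = [500.0, 700.0, 1200.0, 2500.0, 0.0]  # illustrative annual per-state costs ($)

cohort = [1000.0, 0.0, 0.0, 0.0, 0.0]  # 1,000 hypothetical patients start in state A
discount = 0.035                        # 3.5% annual discount rate
total_qalys = total_cost = 0.0
for year in range(20):                  # 20 one-year Markov cycles
    df = 1.0 / (1.0 + discount) ** year
    total_qalys += df * sum(n * u for n, u in zip(cohort, utility)) / 1000.0
    total_cost += df * sum(n * c for n, c in zip(cohort, cost)) / 1000.0
    # advance the whole cohort one cycle: new[j] = sum_i cohort[i] * P[i][j]
    cohort = [sum(cohort[i] * P[i][j] for i in range(5)) for j in range(5)]

print(f"Per-patient discounted QALYs: {total_qalys:.2f}, cost: ${total_cost:.2f}")
```

Running the same loop once for the BID parameters and once for the QD-adjusted parameters yields the incremental costs and QALYs from which an ICER is computed.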
The antiretroviral-naïve HIV patient is assumed to have a better initial response to medication therapy than individuals who have received previous antiretroviral treatment. Transition probabilities of naïve HIV patients between the five states for the twice-daily regimen were extracted from the literature. The transition probabilities for the QD regimen were based on an adjustment to the baseline values, according to the treatment effect of the BID regimen relative to the QD regimen. This treatment effect took the form of a relative risk, which was derived from a meta-analysis of treatment-naïve patients34.\n\nIn order to examine uncertainty in the input parameters, sensitivity analyses were performed. One-way deterministic sensitivity analysis was performed by varying each input parameter to values 25% lower and higher than its base-case value. We also performed a probabilistic sensitivity analysis to assess parameter uncertainty in the model using the Monte Carlo technique35, where model parameters were varied according to their intrinsic distributions. A beta distribution was used for all probabilities. All costs were assumed to follow a normal distribution. A uniform distribution was used for utilities, discount rate, and time horizon. Results were based on 10,000 Monte Carlo simulations35.\n\nResults were presented as mean incremental costs and effects, incremental cost-effectiveness ratio (ICER), cost-effectiveness planes (CE-plane) and cost-effectiveness acceptability curves (CEACs). CEACs provide a measure of the likelihood that a decision to apply a given intervention is correct across a range of ‘willingness-to-pay’ thresholds36. ‘Willingness-to-pay’ in this context represents the maximum amount a decision maker is prepared to pay for a gain of one QALY. 
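The probabilistic sensitivity analysis and CEAC construction described above can be illustrated with a short Monte Carlo sketch. The distributions and their parameters below are invented placeholders standing in for the study's Table 1 inputs (the study drew beta distributions for probabilities, normal for costs and uniform for utilities); reading the CEAC through the net monetary benefit, NMB = WTP × ΔQALY − ΔCost, is a standard equivalent formulation rather than necessarily the authors' exact computation.

```python
import random

random.seed(42)
n_sims = 10_000  # the study ran 10,000 Monte Carlo simulations

# Illustrative draws of incremental outcomes per simulation (placeholders only):
# normal for incremental cost, uniform for incremental QALYs.
draws = []
for _ in range(n_sims):
    d_cost = random.gauss(2147.0, 600.0)   # incremental cost, QD vs BID ($)
    d_qaly = random.uniform(-0.10, 0.60)   # incremental QALYs, QD vs BID
    draws.append((d_cost, d_qaly))

# CEAC: probability that QD is cost-effective at each willingness-to-pay (WTP)
# threshold, i.e. the fraction of draws with positive net monetary benefit.
for wtp in (1_000, 5_000, 10_000):
    p_ce = sum(1 for d_cost, d_qaly in draws if wtp * d_qaly - d_cost > 0) / n_sims
    print(f"WTP ${wtp}/QALY: P(QD cost-effective) = {p_ce:.2f}")
```

Plotting the (ΔQALY, ΔCost) pairs gives the CE-plane scatter, and plotting P(cost-effective) against WTP gives the acceptability curve.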
The WHO-CHOosing Interventions that are Cost Effective (CHOICE) Working Group threshold for the Africa region was adopted37,38. An intervention was defined as follows: very cost-effective, ICER < GDP per capita ($1,695); cost-effective, ICER = 1–3 × GDP per capita ($1,695 to $5,086); and not cost-effective, ICER > 3 × GDP per capita ($5,086)37,38.\n\n\nResults\n\nThe expected costs and QALYs gained generated from the model are shown in Table 2. At base-case values, when all parameters assumed the best values from the published literature, the total number of QALYs gained by regimen simplification was 0.27. The base case was associated with an incremental cost of $2,147. The incremental cost-effectiveness ratio of the QD versus BID regimen was $8,102/QALY gained. Figure 2 shows the result of the one-way sensitivity analysis, in which one parameter value was varied at a time while holding other parameters at their base-case values. Incremental cost was most sensitive to variations in the total medical cost of state A, total medical cost of state D, utility of state A and total medical cost of state C. The incremental cost ranged from $2,352 to $13,822 when the total medical cost of state A varied from $13,736 to $22,893, and the ICER could increase to as much as $38,314/QALY gained.\n\nBID – twice-daily regimen, QD – once-daily regimen, QALY – Quality Adjusted Life Years, ICER – Incremental Cost-Effectiveness Ratio, ΔC – incremental costs; ΔQ – incremental QALYs\n\nState A: HIV positive, asymptomatic, non-AIDS, CD4 > 350 cells/μL; State B: HIV positive, asymptomatic, non-AIDS, CD4 >200 cells/μL, but ≤350 cells/μL; State C: HIV positive, asymptomatic, AIDS, CD4 <200 cells/μL; State D: HIV positive, symptomatic AIDS or severe symptoms. The y-axis shows the model parameter that was varied. The bars indicate the change in the incremental cost caused by changes in the value of the indicated variable, holding all other parameters constant. 
All costs are in 2015 US dollars.\n\nIncremental cost and QALYs are plotted on a scatter plot, as shown in the CE plane in Figure 3. About 60% of incremental cost-effect pairs fall in the northeast quadrant, indicating that the QD regimen is more costly and more effective than the BID regimen. The remaining 40% of the points lie in the southwest quadrant, indicating that the QD regimen saves money, although it is still less effective compared to the BID regimen. Figure 4 presents the cost-effectiveness acceptability curves (CEACs) for the incremental cost per QALY gained. As shown in Figure 4, if decision-makers were willing to pay $1,000 per QALY gained, the probability of QD being cost-effective was 44%. The probability of the QD regimen being cost-effective was 48% when the willingness to pay was $5,000.\n\nQALY – Quality Adjusted Life Years.\n\nQALY – Quality Adjusted Life Years.\n\n\nDiscussion\n\nPoor adherence to ART can lead to virological failure and poor clinical outcomes, and can diminish future treatment options5,6,39. Ensuring adherence to prescribed ART continues to be a major public health concern. To the best of our knowledge, this is the first economic evaluation of the cost-effectiveness of a QD HAART regimen versus a BID regimen from a sub-Saharan societal perspective. Compared with the BID regimen, the incremental cost-effectiveness ratio of the QD regimen ($8,102/QALY gained) exceeds the WHO-CHOICE willingness-to-pay threshold (three times the country’s per capita GDP: $5,086)37,38. The incremental cost-effectiveness ratio was most sensitive to variations in the total medical cost of state A, total medical cost of state D, utility of state A and total medical cost of state C.\n\nThe results in the cost-effectiveness literature on QD versus BID have been mixed: some studies demonstrated regimen simplification to be cost-effective21–23, while others found it not to be cost-effective24,25. 
Foglia and colleagues estimated the lifetime cost-utility of QD regimens versus BID regimens in Italian human immunodeficiency virus (HIV)-infected patients naïve to treatment using a Markov microsimulation model24. Foglia showed a cost-utility advantage for the twice-daily over the QD regimen. Walensky and co-researchers conducted an economic evaluation of a three-pill generic antiretroviral therapy and demonstrated that such a regimen would be cost-saving in the USA25. Brogan and colleagues found that the QD regimen was more effective and cost-saving compared with the BID regimen in treatment-naïve people living with HIV21.\n\nOur Markov model incorporated a probabilistic sensitivity analysis to give a comprehensive estimate of uncertainty associated with model parameters. Compared with a cost-effectiveness study conducted alongside a trial, this model-based approach has several advantages; we combined evidence from several sources and also conducted different sensitivity analyses40. However, our analysis also has some limitations. There were a few parameters for which data from low- and middle-income countries (LMIC) were not available, and we had to rely on data from high-income countries or make simplifying assumptions. Another limitation is uncertainty in parameter values and the demonstrated sensitivity of the results to changes in some parameter values. All model input parameters used in the model were extracted from the published literature, and although there are intrinsic uncertainties associated with these parameters, they were, however, modelled appropriately. We conducted a probabilistic sensitivity analysis to concurrently assess the impact of these model input parameters41. 
Our model was also limited by the assumptions about the mechanism of HIV disease progression.\n\n\nConclusion\n\nFrom a sub-Saharan African societal perspective, the QD HAART regimen cannot be regarded as cost-effective. However, there is considerable decision uncertainty, driven particularly by the variations in the total medical cost of state A (asymptomatic, non-AIDS, CD4 >350 cells/μL), total medical cost of state D (symptomatic AIDS or severe symptoms), and utility of state A; future research should focus on reducing uncertainty in these parameters. Findings from the economic evaluation are important for LMIC as they consider whether to adopt the new branded single-tablet regimen. Generic-based ART could yield substantial budgetary savings to HIV programmes in LMIC.\n\n\nData availability\n\nDataset 1: Raw data for Table 1, Model parameters., 10.5256/f1000research.9954.d14242342\n\nDataset 2: Raw data for Figure 2, Tornado plot for incremental plot., 10.5256/f1000research.9954.d14242443\n\nDataset 3: Raw data for Figure 3, Incremental cost-effectiveness plane for once daily (QD) versus twice-daily regimen (BID)., 10.5256/f1000research.9954.d14242544\n\nDataset 4: Raw data for Figure 4, Cost-effectiveness acceptability curve for once daily (QD) versus twice-daily regimen (BID)., 10.5256/f1000research.9954.d14242645",
"appendix": "Author contributions\n\n\n\nMBS and OAU were responsible for conception and design of the research. Acquisition of data was carried out MBS and OAU. Economic modelling and statistical analysis were carried out by MBS and OAU. MBS, OAU and JBN were responsible for review, analysis and interpretation of the outcomes. MBS, OAU and JBN were responsible for development of the manuscript. MBS, OAU and JBN were responsible for critical revision of the manuscript for important intellectual content. All authors read and approved the final manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interest were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting work.\n\n\nReferences\n\nUNAIDS: Global Report: UNAIDS report on the global AIDS epidemic 2013. Geneva, Switzerland: Joint United Nations Programme on HIV/AIDS (UNAIDS); 2013. Reference Source\n\nKress KD: HIV update: emerging clinical evidence and a review of recommendations for the use of highly active antiretroviral therapy. Am J Health Syst Pharm. 2004; 61(Suppl 3): S3–14; quiz S15-16. PubMed Abstract\n\nPalella FJ Jr, Delaney KM, Moorman AC, et al.: Declining morbidity and mortality among patients with advanced human immunodeficiency virus infection. HIV Outpatient Study Investigators. N Engl J Med. 1998; 338(13): 853–860. PubMed Abstract | Publisher Full Text\n\nNachega JB, Marconi VC, van Zyl GU, et al.: HIV treatment adherence, drug resistance, virologic failure: evolving concepts. Infect Disord Drug Targets. 2011; 11(2): 167–174. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBangsberg DR, Acosta EP, Gupta R, et al.: Adherence-resistance relationships for protease and non-nucleoside reverse transcriptase inhibitors explained by virological fitness. AIDS. 2006; 20(2): 223–231. 
PubMed Abstract | Publisher Full Text\n\nNachega JB, Hislop M, Dowdy DW, et al.: Adherence to nonnucleoside reverse transcriptase inhibitor-based HIV therapy and virologic outcomes. Ann Intern Med. 2007; 146(8): 564–573. PubMed Abstract | Publisher Full Text\n\nParienti JJ, Bangsberg DR, Verdon R, et al.: Better adherence with once-daily antiretroviral regimens: a meta-analysis. Clin Infect Dis. 2009; 48(4): 484–488. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCampo RE, Cohen C, Grimm K, et al.: Switch from protease inhibitor- to efavirenz-based antiretroviral therapy improves quality of life, treatment satisfaction and adherence with low rates of virological failure in virologically suppressed patients. Int J STD AIDS. 2010; 21(3): 166–171. PubMed Abstract | Publisher Full Text\n\nCooper DA, Heera J, Goodrich J, et al.: Maraviroc versus efavirenz, both in combination with zidovudine-lamivudine, for the treatment of antiretroviral-naive subjects with CCR5-tropic HIV-1 infection. J Infect Dis. 2010; 201(6): 803–813. PubMed Abstract | Publisher Full Text\n\nFlexner C, Tierney C, Gross R, et al.: Comparison of once-daily versus twice-daily combination antiretroviral therapy in treatment-naive patients: results of AIDS clinical trials group (ACTG) A5073, a 48-week randomized controlled trial. Clin Infect Dis. 2010; 50(7): 1041–1052. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGonzález-Gárcia J, Cohen D, Johnson M, et al.: Short communication: Comparable safety and efficacy with once-daily versus twice-daily dosing of lopinavir/ritonavir tablets with emtricitabine + tenofovir DF in antiretroviral-naïve, HIV type 1-infected subjects: 96 week final results of the randomized trial M05-730. AIDS Res Hum Retroviruses. 2010; 26(8): 841–845. 
PubMed Abstract | Publisher Full Text\n\nZajdenverg R, Podsadecki TJ, Badal-Faesen S, et al.: Similar safety and efficacy of once- and twice-daily lopinavir/ritonavir tablets in treatment-experienced HIV-1-infected subjects at 48 weeks. J Acquir Immune Defic Syndr. 2010; 54(2): 143–151. PubMed Abstract\n\nCahn P, Fourie J, Grinsztejn B, et al.: Week 48 analysis of once-daily vs. twice-daily darunavir/ritonavir in treatment-experienced HIV-1-infected patients. AIDS. 2011; 25(7): 929–939. PubMed Abstract | Publisher Full Text\n\nEron JJ Jr, Rockstroh JK, Reynes J, et al.: Raltegravir once daily or twice daily in previously untreated patients with HIV-1: a randomised, active-controlled, phase 3 non-inferiority trial. Lancet Infect Dis. 2011; 11(12): 907–915. PubMed Abstract | Publisher Full Text\n\nArasteh K, Ward D, Plettenberg A, et al.: Twenty-four-week efficacy and safety of switching virologically suppressed HIV-1-infected patients from nevirapine immediate release 200 mg twice daily to nevirapine extended release 400 mg once daily (TRANxITION). HIV Med. 2012; 13(4): 236–244. PubMed Abstract | Publisher Full Text\n\nMolina JM, Lamarca A, Andrade-Villanueva J, et al.: Efficacy and safety of once daily elvitegravir versus twice daily raltegravir in treatment-experienced patients with HIV-1 receiving a ritonavir-boosted protease inhibitor: randomised, double-blind, phase 3, non-inferiority study. Lancet Infect Dis. 2012; 12(1): 27–35. PubMed Abstract | Publisher Full Text\n\nCahn P, Pozniak AL, Mingrone H, et al.: Dolutegravir versus raltegravir in antiretroviral-experienced, integrase-inhibitor-naive adults with HIV: week 48 results from the randomised, double-blind, non-inferiority SAILING study. Lancet. 2013; 382(9893): 700–708. 
PubMed Abstract | Publisher Full Text\n\nElion R, Molina JM, Ramón Arribas López J, et al.: A randomized phase 3 study comparing once-daily elvitegravir with twice-daily raltegravir in treatment-experienced subjects with HIV-1 infection: 96-week results. J Acquir Immune Defic Syndr. 2013; 63(4): 494–497. PubMed Abstract | Publisher Full Text\n\nLathouwers E, De La Rosa G, Van de Casteele T, et al.: Virological analysis of once-daily and twice-daily darunavir/ritonavir in the ODIN trial of treatment-experienced patients. Antivir Ther. 2013; 18(3): 289–300. PubMed Abstract | Publisher Full Text\n\nRaffi F, Rachlis A, Stellbrink HJ, et al.: Once-daily dolutegravir versus raltegravir in antiretroviral-naive adults with HIV-1 infection: 48 week results from the randomised, double-blind, non-inferiority SPRING-2 study. Lancet. 2013; 381(9868): 735–743. PubMed Abstract | Publisher Full Text\n\nBrogan A, Mauskopf J, Talbird SE, et al.: US cost effectiveness of darunavir/ritonavir 600/100 mg bid in treatment-experienced, HIV-infected adults with evidence of protease inhibitor resistance included in the TITAN Trial. Pharmacoeconomics. 2010; 28(Suppl 1): 129–146. PubMed Abstract | Publisher Full Text\n\nColombo GL, Di Matteo S, Antinori A, et al.: Economic evaluation of initial antiretroviral therapy for HIV-infected patients: an update of Italian guidelines. Clinicoecon Outcomes Res. 2013; 5: 489–496. PubMed Abstract | Publisher Full Text | Free Full Text\n\nColombo GL, Di Matteo S, Maggiolo F: Antiretroviral therapy in HIV-infected patients: a proposal to assess the economic value of the single-tablet regimen. Clinicoecon Outcomes Res. 2013; 5: 59–68. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFoglia E, Bonfanti P, Rizzardini G, et al.: Cost-utility analysis of lopinavir/ritonavir versus atazanavir + ritonavir administered as first-line therapy for the treatment of HIV infection in Italy: from randomised trial to real world. PLoS One. 2013; 8(2): e57777. 
\n\nWalensky RP, Sax PE, Nakamura YM, et al.: Economic savings versus health losses: the cost-effectiveness of generic antiretroviral therapy in the United States. Ann Intern Med. 2013; 158(2): 84–92.\n\nSonnenberg FA, Beck JR: Markov models in medical decision making: a practical guide. Med Decis Making. 1993; 13(4): 322–338.\n\nGravelle H, Smith D: Discounting for health effects in cost-benefit and cost-effectiveness analysis. Health Econ. 2001; 10(7): 587–599.\n\nOpravil M, Sasadeusz J, Cooper DA, et al.: Effect of baseline CD4 cell count on the efficacy and safety of peginterferon Alfa-2a (40KD) plus ribavirin in patients with HIV/hepatitis C virus coinfection. J Acquir Immune Defic Syndr. 2008; 47(1): 36–49.\n\nUS Public Health Service: Guidelines for the use of antiretroviral agents in HIV-1-infected adults and adolescents. Washington, DC: US Public Health Service; 2003.\n\nTengs TO, Lin TH: A meta-analysis of utility estimates for HIV/AIDS. Med Decis Making. 2002; 22(6): 475–481.\n\nAthan E, O'Brien DP, Legood R: Cost-effectiveness of routine and low-cost CD4 T-cell count compared with WHO clinical staging of HIV to guide initiation of antiretroviral therapy in resource-limited settings. AIDS. 2010; 24(12): 1887–1895.\n\nGoshu AT, Dessie ZG: Modelling Progression of HIV/AIDS Disease Stages Using Semi-Markov Processes. Data Sci. 2013; 11: 269–280.\n\nAlistar SS, Grant PM, Bendavid E: Comparative effectiveness and cost-effectiveness of antiretroviral therapy and pre-exposure prophylaxis for HIV prevention in South Africa. BMC Med. 2014; 12: 46.\n\nNachega JB, Parienti JJ, Uthman OA, et al.: Lower pill burden and once-daily antiretroviral treatment regimens for HIV infection: A meta-analysis of randomized controlled trials. Clin Infect Dis. 2014; 58(9): 1297–1307.\n\nHastings WK: Monte Carlo sampling methods using Markov chains and their applications. Biometrika. 1970; 57(1): 97–109.\n\nFenwick E, Claxton K, Sculpher M: Representing uncertainty: the role of cost-effectiveness acceptability curves. Health Econ. 2001; 10(8): 779–787.\n\nMurray CJ, Evans DB, Acharya A, et al.: Development of WHO guidelines on generalized cost-effectiveness analysis. Health Econ. 2000; 9(3): 235–251.\n\nEvans DB, Edejer TT, Adam T, et al.: Methods to assess the costs and health effects of interventions for improving health in developing countries. BMJ. 2005; 331(7525): 1137–1140.\n\nSethi AK, Celentano DD, Gange SJ, et al.: Association between adherence to antiretroviral therapy and human immunodeficiency virus drug resistance. Clin Infect Dis. 2003; 37(8): 1112–1118.\n\nEhlers L, Overvad K, Sørensen J, et al.: Analysis of cost effectiveness of screening Danish men aged 65 for abdominal aortic aneurysm. BMJ. 2009; 338: b2243.\n\nClaxton K, Sculpher M, McCabe C, et al.: Probabilistic sensitivity analysis for NICE technology assessment: not an optional extra. Health Econ. 2005; 14(4): 339–347.\n\nSanni-Oba M, Uthman O, Nachega JB: Dataset 1 in: Cost-effectiveness of once-daily versus twice-daily regimens in the treatment of HIV infection in sub-Saharan Africa: a probabilistic decision model. F1000Research. 2016.\n\nSanni-Oba M, Uthman O, Nachega JB: Dataset 2 in: Cost-effectiveness of once-daily versus twice-daily regimens in the treatment of HIV infection in sub-Saharan Africa: a probabilistic decision model. F1000Research. 2016.\n\nSanni-Oba M, Uthman O, Nachega JB: Dataset 3 in: Cost-effectiveness of once-daily versus twice-daily regimens in the treatment of HIV infection in sub-Saharan Africa: a probabilistic decision model. F1000Research. 2016.\n\nSanni-Oba M, Uthman O, Nachega JB: Dataset 4 in: Cost-effectiveness of once-daily versus twice-daily regimens in the treatment of HIV infection in sub-Saharan Africa: a probabilistic decision model. F1000Research. 2016."
}
|
[
{
"id": "19352",
"date": "07 Mar 2017",
"name": "Diego F Cuadros",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "This manuscript discusses the cost-effectiveness of QD versus BID in resource-limited settings such as sub-Saharan Africa (SSA). Using computer simulation, authors found that QD could not be considered cost-effective in SSA.\nMajor comments:\nBackground: Literature cited in this manuscript is quite old, and in general most of the literature cited in this study was published before 2013. This is a critical issue since HIV treatment is a very dynamic field that is constantly changing. In fact, in 2013 the World Health Organization published the guidelines for ART that were updated in 2016, in which they recommended ART coverage for every HIV-infected individual, but prioritizing individuals with a CD4 count < 500. Also, a Once-daily regimen comprising a non-thymidine NRTI backbone (TDF + FTC or TDF + 3TC) and one NNRTI (EFV) was found to be cost-effective and therefore recommended to be used as a first-line ART in HIV naïve patients. Based on the framework proposed by these guidelines, how relevant is this study and what would be the potential impact and application of the study results?\n\nModel: It is very hard to understand the model with the description that is currently being provided. If adherence is an important difference between QD and BID, is this factor included in the simulation? What about potential adverse events or treatment failure? Also, why is > 350 CD4 cell count used for early state (state A)? 
The latest ART guidelines advise priority for treatment initiation at < 500 CD4 cell count, so why did the authors not consider using this value for this state? If this is a computer simulation, why did the authors decide to include such a small number of individuals (only 1,000 simulated agents)? In most published studies, these simulations usually include about 1,000,000 individuals. Did the simulation consist of 1,000 individuals for each ART regimen, or 500 in one and 500 in the other one? How many individuals survived? How many died in each ART regimen? Also, why did they decide to use a 1-year time step?\n\nModel parameters in Table 1 indicate that individuals can move to states beyond the next state, and return. For example, an individual in state A could move to state D, and could also return from state D to state A. However, the diagram in Figure 1 indicates that individuals move only to the immediately adjacent state; for example, individuals in state A can only progress to state B, and return from B to A. Please clarify.\n\nModel parameters: The main flaw of this manuscript is the data used for model parameterization. The authors claimed to have conducted an extensive review of the literature available on this topic. However, no more than three sources were used for data parameterization. Furthermore, after I searched through the original sources, I was not able to find the values that the authors used in this model. For example, in Goshu & Dessie (2013), they used a model in which state A was > 500 CD4 cell count and not > 350 CD4 cell count, as used in this study. Also, the probabilities of the transition states reported in Table 1 in Goshu & Dessie do not correspond with the probabilities of the transition states that the authors used in their model. Moreover, the model in Goshu & Dessie included a monthly time step, whereas the model developed in this manuscript uses yearly time steps. 
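The conversion at issue here is standard: assuming a constant monthly risk acting independently in each month of a yearly Markov cycle, a monthly transition probability p converts to a yearly one as 1 - (1 - p)^12. A minimal sketch, with illustrative values only (not taken from either paper):

```python
def monthly_to_annual(p_monthly: float) -> float:
    """Convert a monthly transition probability to an annual one,
    assuming a constant monthly risk applied independently in each of
    the 12 months of a yearly Markov cycle (competing transitions are
    ignored in this simplification)."""
    return 1.0 - (1.0 - p_monthly) ** 12

# Illustrative value only, not a parameter from Goshu & Dessie:
print(monthly_to_annual(0.02))  # roughly 0.215
```

Note that simply multiplying a monthly probability by 12 overstates the annual risk, which is one reason the time-step question matters.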
Taking these changes into consideration, the authors should address how this could affect the parameter values used from Goshu & Dessie. Did the authors transform these monthly probabilities to annual probabilities? Similarly, I was not able to find the values that the authors used in this study for the baseline population from each state in Goshu & Dessie. Please clarify.\n\nThe authors mentioned that there were a few parameters for which data from low- and middle-income countries were not available; what were those parameters?\n\nAlso, why is a beta distribution used as the probability distribution of state probabilities for the uncertainty analyses?\n\nWhat exactly is included in total medical cost?\n\nLimitations of the study: In the last paragraph of the discussion, the authors mentioned that the model was limited by the assumptions about the mechanism of HIV progression. What are these assumptions?\nMinor comments:\nIn general, there are several grammar mistakes and typos that need to be addressed.\n\nThe ART acronym is used for the first time in paragraph 1 of the background but is not defined. Also, the authors use ART in some parts of the manuscript and HAART in others. Please be consistent.\n\nIf the authors examined the uncertainty around the robustness of imputed parameters, why did they call it a sensitivity analysis and not an uncertainty analysis?",
"responses": []
},
{
"id": "20457",
"date": "16 Mar 2017",
"name": "Tamlyn A. Rautenberg",
"expertise": [],
"suggestion": "Not Approved",
"report": "This paper shows promise; however there are some critical aspects which mean that it cannot be approved. (The following review is based on the manuscript alone and not a review of the related literature or secondary sources.)\n\nMajor observations\n\nEfficacy data\nThere seem to be two key differences between QD and BID regimens: direct cost and relative risk (in the model this single efficacy measure seems to incorporate adherence and viral suppression). For direct cost, Table 1 shows that the mean drug cost for the once-daily regimen (610) is lower than for the twice-daily regimen (638). For efficacy, based on the meta-analysis, there appears to be a benefit to using once-daily regimens (QD). Based on that, it is unclear whether the cost-utility analysis is warranted (i.e. the intervention (QD) appears to be less costly and to have higher efficacy than BID).\n\nCountry Focus\nThe authors state that the analysis is done “from the Sub-Saharan African healthcare payer’s perspective.” SSA is a large geographical region which includes different countries with diverse health care systems. Consider whether the work will have more impact if one country is chosen as a focus. The authors propose that “Findings from the economic evaluation are important for low- and middle-income countries (LMIC) to consider as they decide whether to adopt the new branded single tablet regimen.” However SSA does not only include LMIC. 
I do not find anything in the paper which makes it particularly relevant to SSA as opposed to other similar regions which include LMIC.\n\nModel structure and assumptions\nPlease explain the rationale behind the choice of model structure. The model structure schematic is inaccurate: Table 1 shows the transition probabilities from state A to B, A to C, A to D and A to E, but these are not all included in the schematic. The model seems to assume that in a patient’s lifetime they will never fail first-line therapy and/or switch to second-line therapy. What is the rationale for this assumption in the model? What is the rationale and justification for using a yearly cycle for the Markov states? Adherence is not an explicit parameter in the model (although it appears to be factored into the relative risk from the meta-analysis). Please explain the rationale and justification for this.\n\nPerspective and related cost data\nIt is unclear what perspective has actually been used for the analysis. Although the authors state that they have performed the analysis “from the Sub-Saharan African healthcare payer’s perspective”, this appears to be contradicted by the statement “both all direct and indirect cost was considered”. Furthermore, the authors conclude the study “From a sub-Saharan Africa societal perspective“. [A clear description of the cost data would clear up this point; however, the “annual total medical cost” is aggregated, and without going into the secondary data sources it is not possible to see exactly what has been included.] It is unclear which specific drugs have been used for the QD and BID regimens, therefore it is difficult to evaluate the calculation of the mean drug cost in Table 1.\n\nTime horizon\nIt is unclear what time horizon has been used for the analysis. The authors state that the analysis was performed “over the lifetime of patients”, and later they state that the cohort was “simulated over 20 years”. 
It is not clear whether the time horizon was 20 years and (possibly) some patients did not reach State E, or the simulation was run for a lifetime until all patients reached State E.\n\nEvidence\nWhich search terms were used for which parameters? If the target countries are in SSA, then what was the rationale for using the US Public Health Service Guidelines?\n\nLimitations\nMore discussion is needed around the limitations; it is necessary to explain how they influence the results and how (if at all) they could be addressed.\n\nUncertainty\nAn explanation is required regarding the large uncertainty around the results. It is unclear whether the relative risk has been varied in the one-way sensitivity analysis. If not, please explain why.\n\nMinor observations\nWhat software was used for the model?\n\nReconsider the title: probabilistic “decision” model should be probabilistic Markov model (as opposed to a decision tree model).\n\n“The antiretroviral naïve HIV patient is assumed to have a better initial response to medication therapy than individuals who have received previous antiretroviral treatment” - please substantiate.\n\n“An intervention was defined as follows: very cost-effective, ICER < GDP per capita ($1,695); cost-effective, ICER = 1–3 × GDP per capita ($1,695 to $5,086); and not cost-effective, ICER is > 3 × GDP per capita ($5,086)”. Which GDP was used, from which SSA country?\n\nThe small cohort is noted.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2681
|
https://f1000research.com/articles/5-2680/v1
|
16 Nov 16
|
{
"type": "Method Article",
"title": "Survival prognosis and variable selection: A case study for metastatic castrate resistant prostate cancer patients",
"authors": [
"Søren Wengel Mogensen",
"Anne H. Petersen",
"Ann-Sophie Buchardt",
"Niels Richard Hansen",
"Søren Wengel Mogensen",
"Anne H. Petersen",
"Ann-Sophie Buchardt"
],
"abstract": "Survival prognosis is challenging, and accurate prediction of individual survival times is often very difficult. Better statistical methodology and more data can help improve the prognostic models, but it is important that methods and data usages are evaluated properly. The Prostate Cancer DREAM Challenge offered a framework for training and blinded validation of prognostic models using a large and rich dataset on patients diagnosed with metastatic castrate resistant prostate cancer. Using the Prostate Cancer DREAM Challenge data we investigated and compared an array of methods combining imputation techniques of missing values for prognostic variables with tree-based and lasso-based variable selection and model fitting methods. The benchmark metric used was integrated AUC (iAUC), and all methods were benchmarked using cross-validation on the training data as well as via the blinded validation. We found that survival forests without prior variable selection achieved the best overall performance (cv-iAUC = 0.70, validation-iAUC = 0.78), while a generalized additive model was best among those methods that used explicit prior variable selection (cv-iAUC = 0.69, validation-iAUC = 0.76). Our findings largely concurred with previous results in terms of the choice of important prognostic variables, though we did not find the level of prostate specific antigen to have prognostic value given the other variables included in the data.",
"keywords": [
"imputation",
"lasso",
"generalized additive models",
"stability selection",
"survival forests",
"survival prognostic models"
],
"content": "Introduction\n\nThe Prostate Cancer DREAM Challenge1,2, launched March 16, 2015, was a prediction competition, which challenged the participating teams to develop better prognostic models for survival of patients with metastatic castrate resistant prostate cancer. The full competition was divided into two subchallenges, with subchallenge 1a and 1b on prediction of survival and subchallenge 2 on prediction of discontinuation of treatment. Three of the authors of this paper participated in the challenge (as part of team KUStat) and made a final submission for subchallenge 1. We report here our findings and methodology developed for subchallenge 1a as well as in subsequent work carried out after the final submission.\n\nThe Prostate Cancer DREAM Challenge offered a large and complex dataset from four clinical trials containing data for around 2000 patients and with more than 100 potential predictor variables. The participating teams were free to develop any model, but predictions – in terms of risk scores – were assessed and compared in a fixed framework via submissions of predictions through a web interface. For the assessment data used, the survival status was held back from the participants, but the assessment system was fully disclosed, and we as participants could mimic the assessment procedure on the released data to optimize predictive performance.\n\nA reference prognostic model existed3 when the challenge was launched, and it was a requirement that the top-performing team could demonstrate an improvement over this reference model. The main scoring metric for assessing the prognostic models was time integrated AUC (iAUC). 
Halabi and coauthors3 reported iAUC estimates of 0.73 and 0.76 for the reference model on a test and validation dataset, respectively.\n\nThe Prostate Cancer DREAM Challenge included three rounds of submissions to a leaderboard prior to the final submission, and we submitted predictions for the second round of the leaderboard, which achieved an iAUC of 0.8062. This appeared to be a clear improvement over the reference model, which achieved an iAUC of 0.7782 in the same leaderboard round. Our final submission achieved an iAUC of 0.7732, which placed our team roughly in the middle of a large group of 15 teams that achieved an iAUC between 0.77 and 0.78, all well above the reference model, which achieved an iAUC of 0.7429 in the final scoring. However, the winning team managed to distinguish itself from the rest with an iAUC of 0.7915.\n\nOur submission was based on a variable selection method called stability selection and a subsequent fit of a generalized additive model. Some ad hoc modifications were made, but it was unclear if they had any positive effect on the predictive strength of the model. We also experimented with different techniques for imputation as there are a large number of missing values in the dataset for some variables. The effect of the imputation technique was, however, not fully understood, though we suspected that more sophisticated imputation techniques had a negative effect on predictive performance.\n\nIn this paper, we report a systematic evaluation of a total of 24 combinations of methods for model fitting, variable selection and imputation. These include the methods we used for our participation in the Prostate Cancer DREAM Challenge, some methods that we tried but found inferior, and some additional methods that we afterwards found could potentially improve on the generalized additive model. 
The paper is organized as follows: first we present some descriptive and exploratory aspects of the dataset, and we describe how the dataset was prepared for the model building and evaluation process; then we briefly describe all the different methods we considered, the R functions and packages that implement the methods used and the analysis pipeline; finally, we present our results and conclusions.\n\n\nData\n\nThe Prostate Cancer DREAM Challenge dataset comprises patient baseline data as well as extensive longitudinal data tables from the comparator arm of four clinical trials: ASCENT-24, MAINSAIL5, VENICE6, and ENTHUSE-337. We will in this paper only consider the use of baseline variables for survival prognosis. Data from three of the four trials was released as training data for the Prostate Cancer DREAM Challenge, see Table 1, which includes followup survival and treatment discontinuation information. Data from the fourth trial (ENTHUSE-33, 470 patients) was released for leaderboard (157 patients) and final scoring (313 patients), and did not include followup survival information. The latter dataset comprising the 313 patients from the ENTHUSE-33 trial will be referred to as the validation data. Though we have not had access to survival times for the validation data, predictions for the validation data could be assessed via the Prostate Cancer DREAM Challenge web interface.\n\nWe note that the survival distributions for the three trials in the training data are comparable, see Figure 1 in 2 (the p-value for the log-rank test of equal survival functions is 0.63), but we also note that the followup time for the VENICE trial was considerably longer than for the other two trials.\n\nSee Supplementary Figure 1 for the correlation plot with labels. 
Correlations (right, below the diagonal) and pairwise associations as given by loess scatter plot smoothers (right, above the diagonal) for the numerical predictors.\n\nTo assess prognostic models, it is important to understand the target population. The patients included in the four trials were not necessarily recruited from the same population, and Table 2, Table 3, and Table 4 present breakdowns of the patients in the four trials according to age group, geography, and race, respectively. We note that the age distribution for the ASCENT-2 trial differs from the other three trials with a larger proportion of patients over 75 years old and a smaller proportion in the group 18–64 years. The age distributions for the other three trials are similar. We also note that the majority of patients are white and from Europe or North America. For the ASCENT-2 trial there is no geographic data, but it is known that these patients were recruited from North America and Europe4. The ASCENT-2 trial is, furthermore, the only trial for which the ethnicity group \"Hispanic\" is registered as race. A notable difference between the trials is that the validation data from the ENTHUSE-33 trial contains a larger proportion of Asians, and there are apparently no patients from the Asian/Pacific region in the training data. A further breakdown of the geographic group \"Other\" shows that the majority of these patients are white, but 30 patients from the VENICE trial are Asians; therefore they could be from the Asian/Pacific region. Nevertheless, there is little variability in the data in terms of race and geographic region.\n\nAll the baseline values for the predictor variables were made available for the Prostate Cancer DREAM Challenge participants in a data table referred to as CoreTable. The variables in CoreTable were filtered and cleaned prior to the model building. 
The filtering consisted of excluding variables that were mostly or entirely missing in either the training or the validation data, or had no variation. Variables related to treatment and discontinuation were excluded as well. The cleaning consisted of consistent registration of missing values, correct registration of variable type (categorical or numeric), and some recoding. The filtered and cleaned data contains 93 predictor variables in addition to the followup survival time, the death indicator, and the patient ID, see Table S1. Of the 93 predictors 72 are binary predictors, 4 are categorical predictors with three or more categories, and 20 are numerical predictors, which, except for BMI, are laboratory measurements. Table S1 shows that there are a considerable number of missing values in the training data for some of the numerical variables, while the validation data set is more complete. In fact, 37 of the 63 missing values in the validation data are related to only two patients, for whom most laboratory measurements are missing.\n\nFigure 1 shows pairwise correlations between binary predictors and correlations and associations between the numerical predictors. The variables were ordered by hierarchical clustering based on the correlations. We note that there is some correlation among the predictors. Among the numerical predictors, the variables CA, ALB, HB and NA. are positively correlated and also correlated with total protein TPRO. The variables NEU (neutrophils, a white blood cell type) and WBC (white blood cells) are unsurprisingly strongly positively correlated, and they are both positively correlated with PLT (platelets). Finally, the group PSA, ALP, AST, ALT and LDH also shows positive correlations with AST (aspartate aminotransferase) and ALT (alanine transaminase) being strongly correlated. 
For the binary variables, it is worth noting that the correlation pattern is rather weak and does not cluster in any clear pattern, though there is a certain weak overall positive correlation pattern. We see this pattern most clearly for the variables related to medical history (see Supplementary Figure 1 at the end of the paper for labels with medical history variables having prefix MH). Some of the strongest clustered correlations are unsurprising, such as the positive correlation among MI, MHCARD and BETA_BLOCKING related to cardiac disorders, the positive correlation between the metabolism and diabetes variables MHMETAB and DIAB, and the negative correlation between GONADOTROPIN (fertility medication) and ORCHIDECTOMY (testicle(s) removed).\n\n\nMethods\n\nAs mentioned above, the training data contains a large number of missing values. To deal with the missing values we implemented three imputation schemes: imputation under the missing completely at random assumption (MCAR), imputation under the missing at random assumption using only other predictors (MAR), and imputation under the missing at random assumption using other predictors as well as the survival response (MARwR). The variable to be imputed is denoted VI in the following.\n\nThe MCAR assumption means, as the name suggests, that the mechanism resulting in missing values is completely random and independent of both observed and unobserved variables. The corresponding imputation scheme is implemented by drawing observations randomly from the empirical marginal distribution of VI.\n\nThe two other imputation schemes involve fitting regression models with VI as the response variable, and their implementations share a number of components. Both schemes can use up to five other variables from the dataset to fit a linear regression model of VI. The five predictors are selected as the variables having the strongest marginal association with VI measured in terms of p-values. 
Only variables with p ≤ 0.05 and no missing values are considered. Missing values of VI are then imputed from the fitted regression model. The MAR scheme uses only predictor variables whereas the MARwR uses the censored survival times as well. Our MARwR scheme follows the suggestions by White and Royston8 to use the Nelson-Aalen estimate of the cumulative hazard function together with the indicator variable for censoring in the imputation model of VI.\n\nWe did not implement a specific MAR or MARwR scheme for categorical variables, and the missing values of RACE_C and REGION_C were therefore imputed using the MCAR scheme.\n\nAll methods considered except random survival forests are based on the proportional hazards model with the hazard function for the ith patient being\n\nλi(t)=ef(xi)λ0(t).\n\nHere λ0 is a baseline hazard function and f is a function of the vector of predictor variables, xi = (xij)j , for the ith patient. We refer to f (xi) as the risk score for the ith patient. For the purpose of risk prediction in the context of the Prostate Cancer DREAM Challenge, any monotonically increasing transformation of f (xi) – e.g. exp( f (xi)) – is an equivalent risk score.\n\nThe additive, linear model is given by\n\nf(xi)=∑j = 1pβjxij.\n\nThe coefficients βj can be estimated by maximizing Cox’s partial likelihood using the function coxph from the survival R package9,10. However, for a large number of predictors there will usually be a favorable bias-variance tradeoff by using shrinkage and/or variable selection. Moreover, the additive, linear model may not be adequate, since it does not capture nonlinear or interaction effects on the log-hazard scale.\n\nThe generalized additive model is given by\n\nf(xi)=∑j = 1pfj(xij),\n\nfor functions fj of the univariate predictors. For numerical predictors the functions fj are generally assumed to be smooth. 
The model can be fitted to data by minimizing the negative log-partial-likelihood with a quadratic penalty that penalizes roughness of the fj -functions. This can be achieved by the function gam with family = cox.ph() from the mgcv R package11. The function gam automatically chooses the trade-off between likelihood and penalty (and hence the smoothness) via built-in optimization of an unbiased risk estimate.\n\nLasso is a shrinkage and selection estimator that fits a proportional hazards model by minimizing the negative log-partial-likelihood with an ℓ1-penalty. The lasso estimator can be computed using the function glmnet with family = \"cox\" from the glmnet R package12,13. It fits models for a sequence of penalty parameters (the lasso path), and it supports selection of the penalty parameter via built-in cross-validation. Any choice of the penalty parameter will generally lead to some coefficients shrunk to 0, which can be interpreted as a variable selection procedure. For all the results presented in this paper, the penalty parameter for lasso was chosen by minimizing the cross-validated partial likelihood loss.\n\nLasso, with the penalty chosen as described above, yields an additive, linear model and gives resulting estimates of the risk score. Some coefficients are shrunk to 0, hence lasso does implicit variable selection, but the coefficients for the selected variables are, in addition, shrunk toward 0. The debiased lasso re-estimates the coefficients for the lasso selected variables without shrinkage, and can be computed by coxph based on the variables selected by lasso.\n\nStability selection14 is a variable selection method that works by choosing variables that are stably selected on subsampled data by e.g. lasso. 
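In outline, such a subsample-and-count procedure can be sketched as below. This is an illustrative Python stand-in: scikit-learn's Gaussian `LassoCV` replaces the Cox lasso from `glmnet` used in the text, and the function name and defaults are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LassoCV

def stability_selection(X, y, n_reps=100, threshold=0.5, seed=0):
    """Illustrative stability-selection loop: on each replication, draw
    a half-size subsample, fit a cross-validated lasso, and record which
    coefficients survive shrinkage to zero.  A plain Gaussian lasso
    stands in here for the Cox lasso described in the text."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_reps):
        idx = rng.choice(n, size=n // 2, replace=False)
        fit = LassoCV(cv=5).fit(X[idx], y[idx])
        counts += fit.coef_ != 0  # nonzero coefficient = selected
    freq = counts / n_reps
    return freq, np.flatnonzero(freq >= threshold)
```

With settings like those in the text (100 replications, 50% cutoff), variables with a strong effect are selected in essentially every subsample, while weakly or spuriously associated variables hover around the threshold.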
The method implemented is a slight adaptation of the method proposed by Meinshausen and Bühlmann in 14, which works as follows: The lasso path is computed for a subsample of the training data, cross-validation is used on the subsample to select the optimal penalty, and the coefficients not shrunk to 0 are selected for the subsample. To obtain the results reported in this paper we used the procedure with 100 replications and with each subsample being half the size of the full training data. The selection frequency was computed for all variables, and a cutoff for stably selected variables was chosen to be 50%.\n\nAny method for fitting a survival regression model can be combined with stability selection by fitting the model using only the stably selected variables.\n\nA gradient boosting machine fits base learners sequentially to so-called pseudo-residuals. A base learner is a simple model of f, e.g. one small regression tree, and the ensemble estimate of f consists of an aggregate of all the base learners. Regularization by shrinkage may be applied for each base learner. A stochastic gradient boosting machine samples (without replacement) for each iteration a subset of observations uniformly from the training data and uses only this subset for fitting a particular base learner15.\n\nWe used an implementation of a stochastic gradient boosting machine with trees as base learners that directly optimizes a smoothed version of the concordance index (C-index) as described in 16. The implementation is available on GitHub17, which is a fork of an earlier version of the gbm R package18. This implementation implicitly applies shrinkage when fitting an individual tree, as an optimal solution is not guaranteed16. Pilot experiments indicated that additional explicit shrinkage did not improve the predictions, and therefore our implementation does not use explicit shrinkage. The subsampling fraction (bag fraction) controls the number of observations used for each tree fit. 
Our implementation sets the subsampling fraction to 0.5, allows for interactions of up to three variables, and uses a minimum node size of 10. The number of trees is chosen by built-in cross-validation with a maximum of 1000.\n\nWe fitted gradient boosting machines using all 93 predictor variables in the dataset as well as using only the stably selected variables.\n\nA random survival forest is an ensemble method similar to a boosting machine that uses trees as base learners19. For each iteration of the algorithm, a dataset of the same size as the original is sampled with replacement. A tree is then grown using this data set. For each node of the tree, a subset of variables is sampled and considered for splitting. The splitting is done according to one of the variables in order to maximize survival difference as measured by the log-rank test statistic. In each terminal node, a Nelson-Aalen estimate is calculated and the estimates are then aggregated into an ensemble fit of the cumulative hazard function. To obtain a single predicted outcome for each subject, we used ensemble mortality as defined in 19.\n\nRandom survival forests can be fitted using the randomForestSRC R package19. Our implementation uses 1000 trees with a minimum node size of 6 (number of events in terminal nodes). For each split the procedure considers 20 candidate variables, and for each of those variables a maximum of 10 potential splitting points are randomly chosen. Setting a maximum of potential splitting points has two purposes. First, it speeds up computations. 
Second, it counters the fact that the algorithm is biased towards splitting on continuous variables as opposed to variables with only a few levels20.\n\nAs for stochastic gradient boosting, survival forests were fitted using all predictor variables as well as only the stably selected variables.\n\nSurvival prognosis can be viewed as a prediction of a binary variable (is the patient dead) at each future time point, in which case the prognosis by the risk score can be evaluated using the conventional AUC score at any given time point. The time integrated AUC (iAUC) constitutes a single summary score, and it was the main score metric for the DREAM subchallenge 1a. The score can be estimated using the timeROC R package21.\n\nThe methods described above for imputation, for fitting a survival model, and for variable selection can be combined in a number of ways. We implemented all meaningful combinations resulting in a total of 24 prognostic models, see Table 5.\n\nAll eight combinations were used in combination with all the three imputation methods: MCAR, MAR and MARwR.\n\nThe final submission for the Prostate Cancer DREAM challenge from team KUStat was based on a generalized additive model using stability selected variables and the MCAR imputation scheme. Some hand tuning of the final submission was made, see our write-up22 for details. The hand tuning was not implemented for this paper.\n\nThe implementation consists of a collection of supporting R functions and a main training and prediction function that fits all the 24 models on a training data set and returns the risk scores for a test data set. 
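The iAUC metric described above can be approximated with a rank statistic integrated over a time grid. The sketch below is illustrative only: it omits the inverse-probability-of-censoring weighting that timeROC applies, and the function names are hypothetical.

```python
import numpy as np

def auc_at(t, time, event, score):
    """Time-dependent AUC at horizon t: cases are subjects with an
    observed event by time t, controls are subjects still at risk at t.
    The censoring weights used by timeROC are omitted in this sketch."""
    cases = (time <= t) & (event == 1)
    controls = time > t
    if not cases.any() or not controls.any():
        return float("nan")
    diff = score[cases][:, None] - score[controls][None, :]
    # fraction of (case, control) pairs ranked correctly; ties count 1/2
    return float((diff > 0).mean() + 0.5 * (diff == 0).mean())

def iauc(grid, time, event, score):
    """Trapezoidal average of AUC(t) over a time grid: a simplified
    stand-in for the challenge's integrated AUC."""
    aucs = np.array([auc_at(t, time, event, score) for t in grid])
    widths = np.diff(grid)
    return float((widths * (aucs[1:] + aucs[:-1]) / 2).sum()
                 / (grid[-1] - grid[0]))
```

On this simplified definition, a risk score that perfectly orders the event times gives iAUC = 1, and reversing the score gives iAUC = 0.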
The assessment pipeline consists of a 5-fold cross-validation step to estimate iAUC using the training data only, and a refit step where the models are fitted to the full training dataset and risk scores are predicted for the validation dataset.\n\nFor the iAUC estimates reported in this paper, we replicated the 5-fold cross-validation 3 times and averaged the results to decrease the variation due to the random data splits. For the iAUC estimates based on the validation data we submitted predictions to the post-challenge leaderboard to assess the predictions on the 313 patients from the ENTHUSE-33 trial. We made two submissions for each model and averaged the results. Though we only found minor variations in the results for the two submissions, the double submission was done because several aspects of the model fitting rely on randomness. We wanted the results to be robust to this variation.\n\nFigure 2 shows the iAUC score as estimated by cross-validation and on the validation data for all 24 combinations of methods. First, we observe that, in general, the iAUC was lower when the response was used for imputation (MARwR). The two other imputation schemes gave comparable results, and the results reported below refer to MCAR as well as MAR imputation in combination. Survival forests were overall best with iAUC around 0.78 on the validation data and just below 0.70 in the cross-validation. Debiased lasso was worst with an iAUC around 0.73 on the validation data and 0.66 in the cross-validation. The differences are small, and we also note a large variation between folds in the cross-validation indicating heterogeneity in the training data.\n\nResults are shown for individual folds (light blue filled circles) and averaged over all folds (red filled circles). The figure also shows iAUC on the validation data (purple filled squares) and iAUC for the reference model on the validation data (purple dashed line). 
The four methods marked with a * used variables chosen via stability selection, whereas the other four methods relied on implicit variable selection.\n\nThe generalized additive model was the best among those that relied on stability selection with iAUC around 0.76 on the validation data and 0.69 in the cross-validation. The pure lasso prediction did surprisingly well on the validation data, compared to the cross-validation results and irrespective of the imputation method, and the computationally much more expensive boosting method was only just on par with lasso overall.\n\nThe results from stability selection are interesting in themselves. Figure 3 shows selection proportions for the 20 most often selected predictors for each of the imputation methods. These results are from one run of the algorithm with 100 subsamples. The variability due to the random subsampling was found to be small, though some variables would cross the (somewhat arbitrary) threshold of 50% in some runs and not in others. Figure 3 is from one of the two replications used for the validation.\n\nThe threshold of 50% (red line) was used for the final variable selection.\n\nThe eight variables AST, HB, ALP, ECOG_C, ADRENAL, LIVER, ANALGESICS and ALB were selected in a large proportion of the subsamples, irrespective of the imputation scheme. The variable RACE_C just reached the 50% threshold for all three imputation schemes, while BMI, ACE_INHIBITORS and ESTROGENS reached the 50% threshold for some, but not all, imputation schemes. Notably, PSA was not stably selected. It is also noteworthy that ALB (which has 493 missing values) increased its selection proportion considerably when imputed using the response.\n\n\nDiscussion\n\nIt is difficult to correctly evaluate how well a prognostic model will generalize. We believe that competitions like the Prostate Cancer DREAM Challenge have a positive effect on the development of systematic approaches to model evaluation. 
However, the big differences between the cross-validated estimates of iAUC and those obtained on the validation data demonstrate how difficult it is to generalize from one dataset to another. Such differences in predictive strength, as measured by iAUC, can be explained by differences in either the composition of the patients, or in how their survival is related to the predictors, but we cannot offer a more detailed explanation. In addition, we interpret the large variation between the cross-validation folds as evidence of a heterogeneous training dataset. This is not surprising, given that the training data is pooled from three different trials. Moreover, we noted that the majority of patients in the dataset are white and from Europe or North America. Thus it is also difficult to tell how well a prognostic model based on the Prostate Cancer DREAM Challenge data will generalize to other populations.\n\nOn the other hand, even if the absolute values of iAUC are incomparable, the rankings of the fitted models obtained by either cross-validation or validation were roughly the same. Thus we believe that our results shed light on which methods are most useful for developing prognostic survival models and for selecting variables of prognostic value.\n\nFor variable selection we believe that the poor performance of debiased lasso is a consequence of lasso generally selecting too many variables – see Chapter 8 in 23 for an extensive treatment of variable selection with lasso – and thus without shrinkage of the corresponding coefficients, debiased lasso will overfit. Stability selection is a more stringent selection criterion, which is less prone to select false positives, see Chapter 10 in 23 and 14. The variables selected by stability selection as having prognostic value also largely agree with those found in 3 and used in the reference model. 
One difference is that the DREAM dataset gives nuanced information on disease sites, and we found that liver and adrenal lesions, in particular, had prognostic value. The PSA variable was, on the other hand, not selected. It was a predictor in the reference model, but not a very strong one. Based on this study we therefore recommend stability selection as a reliable method for selecting prognostic variables.\n\nFor imputation of missing values the use of the response seemed to degrade predictive performance. This contradicts the recommendations in e.g. 8, which presents a simulation study showing that imputation based on inclusion of an estimate of the cumulative hazard function and the indicator of censoring is superior to a number of other imputation schemes. The framework of 8 is, however, focused on parameter estimation and hypothesis testing using multiple imputation, where the objectives differ from those of prognostic modeling. We believe that further investigations into the effect of imputation – in particular when using the response – are needed to fully understand benefits and pitfalls, but our recommendation based on this study is to avoid using the response for imputation when building prognostic models.\n\nFinally, the best performing model – the survival forest – is the only model considered that is not based directly on the proportional hazards assumption. Thus we may speculate that this assumption could be violated.\n\n\nConclusions\n\nSurvival forests without explicit variable selection gave the best performance overall in the cross-validation and on the validation data. When stability selection was used for explicit variable selection, the generalized additive model gave the best performance. 
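The stability selection procedure recommended above can be sketched generically. This is a minimal illustration of the Meinshausen and Bühlmann idea, not the paper's implementation: the `select` callback stands in for the lasso-based selector used in the paper, and the correlation-based toy selector below is purely illustrative.

```python
import numpy as np

def stability_selection(X, y, select, n_sub=100, frac=0.5, seed=0):
    """Selection proportions over random subsamples: run a base variable
    selector on many half-sized subsamples and keep the variables chosen
    in at least 50% of them (the threshold used in the paper)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_sub):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        counts[select(X[idx], y[idx])] += 1
    return counts / n_sub

def top2_by_correlation(Xs, ys):
    """Toy base selector: the two variables most correlated with y
    (the paper plugs a lasso fit in here instead)."""
    corr = np.abs(np.corrcoef(Xs.T, ys)[-1, :-1])
    return np.argsort(corr)[-2:]
```

Variables whose selection proportion exceeds the 50% threshold are kept; because each subsample sees only half the data, spuriously selected variables rarely survive across many subsamples.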
Imputation using the response appeared to have a negative effect on predictive performance.\n\nThe stably selected laboratory measurements AST, HB, ALP and ALB and the ECOG performance status were among the most important prognostic factors, together with liver and adrenal lesions and prior use of analgesics.\n\n\nData and software availability\n\nThe Challenge datasets can be accessed at: https://www.projectdatasphere.org/projectdatasphere/html/pcdc\n\nChallenge documentation, including the detailed description of the Challenge design, overall results, scoring scripts, and the clinical trials data dictionary can be found at: https://www.synapse.org/ProstateCancerChallenge\n\nThe code and documentation underlying team KUStat's challenge submission can be found at: http://dx.doi.org/10.7303/syn4260742 [22]\n\nThe R code and documentation underlying the methods presented in this paper can be found at: https://github.com/nielsrhansen/ProstateDream. An archived version at the time of publication is available at: http://dx.doi.org/10.5281/zenodo.50872 [24]",
"appendix": "Author contributions\n\n\n\nNRH initiated the participation in the Prostate Cancer DREAM Challenge as a component in the Master’s program in Statistics at University of Copenhagen for SWM, AHP and ASB. AHP contributed with methods for imputation, ASB contributed with exploratory analysis and stability selection using lasso, SWM contributed with survival forests and gradient boosting and NRH contributed with generalized additive models and stability selection. All authors contributed to the integration of the methods into a complete analysis pipeline. NRH wrote the first draft of the manuscript and all authors contributed to its finalization.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nThis publication is based on research using information obtained from www.projectdatasphere.org, which is maintained by Project Data Sphere, LLC. Neither Project Data Sphere, LLC nor the owner(s) of any information from the web site have contributed to, approved or are in any way responsible for the contents of this publication. The authors acknowledge Maria Bekker-Nielsen Dunbar and Lisbeth Tomaziu, who worked on subchallenge 2 as part of the KUStat team participating in the Prostate Cancer DREAM Challenge. The authors also acknowledge Jing Zhou, who worked with the KUStat team but made her own submission to the Prostate Cancer DREAM Challenge. Finally, the authors acknowledge Sage Bionetworks, the DREAM organization, and Project Data Sphere for developing and supplying data for the challenge.\n\n\nSupplementary material\n\nThe variables selected by stability selection by any of the imputation methods are marked with a *.\n\n\nReferences\n\nDREAM9.5 - Prostate Cancer DREAM Challenge. 2015. 
Guinney J, Wang T, Laajala TD, et al.: Prediction of overall survival for patients with metastatic castration-resistant prostate cancer: development of a prognostic model through a crowdsourced challenge with open clinical trial data. Lancet Oncol. 2016.\n\nHalabi S, Lin CY, Kelly WK, et al.: Updated prognostic model for predicting overall survival in first-line chemotherapy for patients with metastatic castration-resistant prostate cancer. J Clin Oncol. 2014; 32(7): 671–7.\n\nScher HI, Jia X, Chi K, et al.: Randomized, open-label phase III trial of docetaxel plus high-dose calcitriol versus docetaxel plus prednisone for patients with castration-resistant prostate cancer. J Clin Oncol. 2011; 29(16): 2191–2198.\n\nPetrylak DP, Vogelzang NJ, Budnik N, et al.: Docetaxel and prednisone with or without lenalidomide in chemotherapy-naive patients with metastatic castration-resistant prostate cancer (MAINSAIL): a randomised, double-blind, placebo-controlled phase 3 trial. Lancet Oncol. 2015; 16(4): 417–425.\n\nTannock IF, Fizazi K, Ivanov S, et al.: Aflibercept versus placebo in combination with docetaxel and prednisone for treatment of men with metastatic castration-resistant prostate cancer (VENICE): a phase 3, double-blind randomised trial. Lancet Oncol. 2013; 14(8): 760–768.\n\nFizazi K, Higano CS, Nelson JB, et al.: Phase III, randomized, placebo-controlled study of docetaxel in combination with zibotentan in patients with metastatic castration-resistant prostate cancer. J Clin Oncol. 2013; 31(14): 1740–7.\n\nWhite IR, Royston P: Imputing missing covariate values for the Cox model. Stat Med. 2009; 28(15): 1982–1998. 
Therneau TM, Grambsch PM: Modeling Survival Data: Extending the Cox Model. Springer, New York, 2000.\n\nTherneau TM: A Package for Survival Analysis in S. Version 2.38. 2015.\n\nWood SN: Generalized Additive Models: An Introduction with R. Chapman and Hall/CRC. 2006; 16.\n\nFriedman J, Hastie T, Tibshirani R: Regularization Paths for Generalized Linear Models via Coordinate Descent. J Stat Softw. 2010; 33(1): 1–22.\n\nSimon N, Friedman J, Hastie T, et al.: Regularization Paths for Cox’s Proportional Hazards Model via Coordinate Descent. J Stat Softw. 2011; 39(5): 1–13.\n\nMeinshausen N, Bühlmann P: Stability selection. J R Stat Soc Series B Stat Methodol. 2010; 72(4): 417–473.\n\nFriedman JH: Stochastic gradient boosting. Comput Stat Data Anal. 2002; 38(4): 367–378.\n\nChen Y, Jia Z, Mercola D, et al.: A gradient boosting algorithm for survival analysis via direct optimization of concordance index. Comput Math Methods Med. 2013; 2013: 873595.\n\nFork of the gbm R package.\n\nRidgeway G: gbm: Generalized Boosted Regression Models. R package version 1.6–3.2, 2012.\n\nIshwaran H, Kogalur UB, Blackstone EH, et al.: Random survival forests. Ann Appl Stat. 2008; 2(3): 841–860.\n\nIshwaran H, Kogalur UB, Gorodeski EZ, et al.: High-dimensional variable selection for survival data. J Am Stat Assoc. 2010; 105(489): 205–217.\n\nBlanche P, Dartigues JF, Jacqmin-Gadda H: Estimating and comparing time-dependent areas under receiver operating characteristic curves for censored event times with competing risks. Stat Med. 2013; 32(30): 5381–5397. 
Hansen NR, Buchardt AS, Dunbar MN, et al.: Prostate Cancer DREAM Challenge submission and write-up by team KUStat. 2015.\n\nBühlmann P, van de Geer S: Statistics for high-dimensional data. Springer Series in Statistics. Springer, Heidelberg. 2011.\n\nMogensen SW, Petersen AH, Buchardt AS, et al.: ProstateDream: Prostate Dream code. Zenodo. 2016."
}
|
[
{
"id": "17681",
"date": "30 Nov 2016",
"name": "Sarah Lemler",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe authors have participated in the Prostate Cancer DREAM challenge. They report in this paper the methodology developed to predict the survival of patients with prostate cancer. They detail different methods, combining imputation techniques to replace missing values in the dataset with selection and model fitting procedures to predict the survival. They compare these methods to each other using the integrated AUC and give their conclusion about the best method in terms of prediction.\n\nMy opinion on the paper is the following. First of all, the paper is very well written and pleasant to read. It is clear and understandable. The procedures and methodologies followed by the authors are well explained in a concise way. In addition, the different methods considered in the paper to deal with the Prostate Cancer data are interesting not only for the dataset of the challenge, but also for other types of data. The paper gives some methodology that can be applied to any other dataset to study the survival. Lastly, the comparison of those methods shows that the best procedure is not always the one that we thought, but depends on the objective. This paper shows for example that the random survival forest is better in prediction than the Lasso, which is good in selection. 
As a conclusion, I strongly approve the publication of this paper.\nSome questions and little comments to improve the form:\nPage 3, line 3: the verb \"found\" has to be removed.\n\nDoes the choice of the training, test and validation data influence the results? Would it be interesting to compare the results by changing these different samples?\n\nTable 2: maybe, it would be easier for the reader to detect the differences of the age distribution in the four trials with the proportion by age class instead of the number of patients.\n\nFigure 3 and paragraph 2: should we not remove the variables with lots of missing values when we do stability selection? How could you explain that the variable ALB with many missing values has been selected for example and does one really need to select this variable?",
"responses": []
},
{
"id": "17680",
"date": "22 Dec 2016",
"name": "Jacob von Bornemann Hjelmborg",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThis paper deals with the very hard and important problem of cancer survival prognosis with focus on evaluation of methods and data usages. Common prognosis models are assessed in a full-scale analysis, that is, in combination with imputation techniques and variable selection methods.\nThe data stems from the prostate cancer DREAM challenge and the stage is set for achieving best prognosis performance in terms of the integrated AUC in case of cross-validation using the training data or for the validation dataset. The authors provide a thorough description of predictors and their mutual relationship.\nCombined Methods: The missingness taxonomy, which sets the range for imputation, is chosen to be sufficiently exhaustive and is then combined with the modelling and variable selection approaches. The chosen stability selection is appropriate and interesting given the wide range of chosen models. Some of the models have implicit variable selection and comparisons are made.\nInterestingly, the debiased lasso is shown to perform rather poorly in the comparison scenario, as are the boosting methods that optimize a concordance measure but score low on the iAUCs, which is somewhat unexpected. Explaining why the methods differ in performance is not straightforward, but the general purpose of comparison is fully met and very interesting claims are provided that are useful for developing prognostic survival models. 
The stability selection procedure is recommended as a reliable method for selecting prognostic variables.\nThe authors point at ad-hoc choices and \"hand tuning\" during the process, but manage to convey and address very well the great many choices to be made for survival prognosis in action.\nThere is a comprehensive explanation of study design, chosen methods, analysis, and results. The paper is clearly written and may in combination with the supplied material serve as a very good tutorial on the topic - the case study presented seems to have very general scope for application. Papers of this type are welcomed in general.\n\nMisc: A typo of AUC in abstract. A specification of how mutual dependence of binary predictors is obtained. Not important: the estimation of the iAUC was not to be found in the supplied material - maybe a closer look is needed.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2680
|
https://f1000research.com/articles/5-2679/v1
|
16 Nov 16
|
{
"type": "Research Article",
"title": "Environmental volunteer well-being: Managers’ perception and actual well-being of volunteers",
"authors": [
"Gitte Kragh",
"Rick Stafford",
"Susanna Curtin",
"Anita Diaz",
"Rick Stafford",
"Susanna Curtin",
"Anita Diaz"
],
"abstract": "Background: Environmental volunteering can increase well-being, but environmental volunteer well-being has rarely been compared to participant well-being associated with other types of volunteering or nature-based activities. This paper aims to use a multidimensional approach to well-being to explore the immediately experienced and later remembered well-being of environmental volunteers and to compare this to the increased well-being of participants in other types of nature-based activities and volunteering. Furthermore, it aims to compare volunteer managers’ perceptions of their volunteers’ well-being with the self-reported well-being of the volunteers. Methods: Onsite surveys were conducted of practical conservation and biodiversity monitoring volunteers, as well as their control groups (walkers and fieldwork students, respectively), to measure general well-being before their nature-based activity and activity-related well-being immediately after their activity. Online surveys of current, former and potential volunteers and volunteer managers measured remembered volunteering-related well-being and managers’ perceptions of their volunteers’ well-being. Data were analysed based on Seligman’s multidimensional PERMA (‘positive emotion’, ‘engagement’, ‘positive relationship’, ‘meaning’, ‘achievement’) model of well-being. Factor analysis recovered three of the five PERMA elements, ‘engagement’, ‘relationship’ and ‘meaning’, as well as ‘negative emotion’ and ‘health’ as factors. Results: Environmental volunteering significantly improved positive elements and significantly decreased negative elements of participants’ immediate well-being, and it did so more than walking or student fieldwork. Even remembering their volunteering up to six months later, volunteers rated their volunteering-related well-being higher than volunteers rated their well-being generally in life. 
However, volunteering was not found to have an effect on overall mean well-being generally in life. Volunteer managers did not perceive the significant increase in well-being that volunteers reported. Conclusions: This study showed how environmental volunteering immediately improved participants’ well-being, even more than other nature-based activities. It highlights the benefit of regarding well-being as a multidimensional construct to more systematically understand, support and enhance volunteer well-being.",
"keywords": [
"citizen science",
"environmental volunteering",
"nature-based activities",
"PERMA",
"positive psychology",
"practical conservation",
"volunteering",
"well-being"
],
"content": "Introduction\n\nNatural environments have always been important for human well-being (Frumkin, 2001; Kellert & Wilson, 1993), and continue to be so as local environments become more urbanised (Kaplan, 1983). One way to harness the well-being benefits of natural environments is to participate in environmental volunteering, which can increase people’s connection to nature and their sense of well-being (Gooch, 2005; O’Brien et al., 2010; Pillemer et al., 2010). Most research on volunteer well-being has focused on comparisons between volunteers and non-volunteers, elucidating differences in specific elements of well-being, such as happiness, life satisfaction, depression and survival (Jenkinson et al., 2013; Konrath et al., 2012; Thoits & Hewitt, 2001). Very few studies have addressed the questions of how volunteering immediately affects participants’ well-being and how participants in different types of volunteering may gain benefits in different elements of well-being. In addition, no studies have examined how volunteer managers perceive the well-being of their volunteers and how this relates to actual volunteer well-being. This paper addresses these challenges by using a multidimensional well-being model to first explore the well-being of environmental volunteers and compare it to the well-being of participants in other similar types of nature-based activities and other types of volunteering. It then explores volunteer managers’ perception of the well-being of their volunteers, and finally it compares this perceived well-being to the volunteers’ self-reported well-being.\n\nMany studies have shown that volunteering is closely linked to increased well-being of volunteers (Binder & Freytag, 2013; Borgonovi, 2008; Greenfield & Marks, 2004; Jenkinson et al., 2013; Koss & Kingsley, 2010; O’Brien et al., 2010; Son & Wilson, 2012; Stukas et al., 2016; Thoits & Hewitt, 2001; Townsend, 2006; Van Willigen, 2000; Wheeler et al., 1998; Wilson, 2000). 
However, studies have used different definitions of well-being, and have therefore measured different constructs, which have often included only some aspects of well-being instead of taking a holistic approach. Two main approaches to conceptualising well-being prevail: hedonism and eudaimonia. Hedonism is the idea that maximisation of pleasure is the goal and the way to happiness for all humans, whereas eudaimonia proposes that striving to lead a meaningful life and achieve optimum functioning is the way to happiness (Aristotle, 2009; Diener, 2000; Ryan & Deci, 2001; Ryff, 1989). The two approaches have informed research into human well-being with different methods proposed for the study of well-being. Methods based on the study of ‘subjective well-being’ include measures of positive affect, negative affect and life satisfaction, a mixture of both hedonic and eudaimonic well-being (Bradburn, 1969; Diener, 1984; Diener, 1994; Diener et al., 1999). The study of ‘psychological well-being’, on the other hand, measures only eudaimonic elements of life, such as self-acceptance, positive relations with others, autonomy, environmental mastery, purpose in life and personal growth, leaving out the hedonic focus on pleasures (Ryff, 1989; Ryff, 1995; Ryff, 2014).\n\nThough some aspects of volunteer well-being have been studied in depth, no previous studies have investigated volunteer managers’ perceptions of the well-being of their volunteers. As volunteer managers are responsible for the well-being of their volunteers, and as improved volunteer well-being is often an important outcome for volunteers, organisations and society (O’Brien et al., 2011), it is vital that managers’ perceptions of the well-being of their volunteers correspond to actual volunteer well-being. 
The cumulative evidence from a broad range of studies (see meta-analyses and reviews in Jenkinson et al., 2013; Musick & Wilson, 2008; Wilson, 2000; Wheeler et al., 1998) is that volunteering has a positive relationship with a wide range of elements within the concept of well-being, though causation can be difficult to determine (Greenfield & Marks, 2004). Previous studies have investigated the effect of volunteering on subjective well-being (e.g. Binder & Freytag, 2013; Harlow & Cantor, 1996; Windsor et al., 2008) or psychological well-being (e.g. Ho, 2015), or a combination of one of these along with other elements of well-being, such as social well-being, trust, self-esteem, depression or physical health (e.g. Greenfield & Marks, 2004; Koss & Kingsley, 2010; O’Brien et al., 2010; Son & Wilson, 2012; Stukas et al., 2016; Thoits & Hewitt, 2001; Townsend, 2006). Some studies show that volunteering leads to increased well-being (Borgonovi, 2008; Piliavin, 2009; Piliavin & Siegl, 2007), while other studies show that people higher in well-being are also more likely to volunteer (Gimenez-Nadal & Molina, 2015; Greenfield & Marks, 2004) and to volunteer more hours (Son & Wilson, 2012; Thoits & Hewitt, 2001). Most likely the causality runs both ways between volunteering and well-being (Binder & Freytag, 2013; Gimenez-Nadal & Molina, 2015) in a ‘virtuous cycle’ where happy and healthy people volunteer more and volunteers are happier and healthier (Brooks, 2007). Environmental volunteering could further enhance this virtuous cycle, as spending time in nature has been linked to increased well-being (Frumkin, 2001).\n\nOnly a few studies have focused specifically on the relationship between environmental volunteers and their well-being (e.g. Koss & Kingsley, 2010; O’Brien et al., 2010; Townsend, 2006), as many studies have used cohort datasets where volunteering type was often heterogeneous or not described (Jenkinson et al., 2013). 
Volunteering in nature has been linked to well-being benefits for volunteers, including improved social networks (Bell et al., 2008; Gooch, 2005; Koss & Kingsley, 2010; Muirhead, 2011; O’Brien et al., 2010), increased personal satisfaction and feelings of enjoyment (Koss & Kingsley, 2010; Muirhead, 2011), and improved health and well-being (Koss & Kingsley, 2010; O’Brien et al., 2010; Pillemer et al., 2010). Environmental volunteering can have a positive effect, not only by increasing positive indices of well-being, but also by reducing negative indices such as reducing stress (Guiney & Oberhauser, 2009; O’Brien et al., 2010) and depression (Pillemer et al., 2010). Furthermore, environmental volunteering offers the added benefit of providing opportunities for volunteers to spend time in nature, which can lead to a better connection or re-connection with nature for the volunteers (Bell et al., 2008; Guiney & Oberhauser, 2009). It can also lead to volunteers gaining an increased understanding of the natural environment (Koss & Kingsley, 2010) and thereby also an enhanced sense of place (Evans et al., 2005; Gooch, 2005). A closer connection to nature has been shown to enhance people’s well-being (Bowler et al., 2010; Kellert & Wilson, 1993), and therefore it could be expected that environmental volunteers would benefit more from their volunteering than other types of volunteers. Practical conservation volunteering requires stamina and physical strength and it provides a way to exercise and gain improved fitness (Guiney & Oberhauser, 2009; O’Brien et al., 2010), which can also reinforce positive well-being (Pretty et al., 2005).\n\nTo better understand these relationships between volunteering and well-being, a more holistic and multidimensional approach to well-being, including both hedonic and eudaimonic elements, as well as social elements, would be well suited (Piliavin, 2009). 
Such a holistic approach to well-being is gaining acceptance (Forgeard et al., 2011; Keyes, 2002; Ryan & Deci, 2001), and one proposed multidimensional model of well-being is Seligman’s (2011) PERMA model. It is a construct with five contributing elements (PERMA): 1) ‘Positive emotion’, which encompasses present positive feelings, life satisfaction and positive emotions about the future; 2) ‘engagement’, which is employing one’s strengths to a task, becoming fully absorbed in the task and therefore completely losing track of time, also referred to as getting into ‘flow’ (Csikszentmihalyi, 1975; Csikszentmihalyi, 1991; Seligman, 2011); 3) ‘positive relationships’, which are fundamental to a good life according to Seligman (2011), and Baumeister & Leary (1995) have also defined it as a basic human need that is essential for well-being; 4) ‘meaning’, which includes feelings of doing something worthwhile and having a purpose and direction in life, something which is crucial to well-being as, according to Seligman (2011), most people have a need to belong to or serve something they believe is larger than themselves, e.g. their family, an organisation or a religious group; and 5) ‘achievement’, often pursued for its own sake by individuals setting their own personal goals or striving to achieve recognition in the wider world, e.g. winning an award or accumulating wealth. Seligman (2011) did not propose a measure for his PERMA model but Butler & Kern (2016) subsequently developed the PERMA-Profiler (PERMA-P), a scale based on the PERMA model, which also includes additional elements of well-being. 
The additional elements in the PERMA-P are 1) ‘negative emotion’, drawn from the concept of subjective well-being and acknowledging the importance of both positive and negative aspects of well-being; 2) ‘health’, which can be considered a core part of well-being; 3) ‘loneliness’, which is a strong predictor of many negative life outcomes; and 4) ‘overall happiness’, which allows an overall assessment after reflecting on specific elements of well-being (Butler & Kern, 2016).

This paper aims to use a multidimensional approach to well-being to explore the immediately experienced and later remembered well-being of environmental volunteers, as well as their general well-being, and to compare this to the well-being of participants in other types of nature-based activities and volunteering. It also aims to compare volunteer managers’ perception of their volunteers’ well-being with the self-reported well-being of the volunteers. These aims were addressed through the following research questions: 1) How does environmental volunteering immediately affect participants’ sense of well-being, and how does that compare to the immediate effect of other types of nature-based activities on participants’ sense of well-being? 2) How well do volunteers sustain the memory of this immediately experienced sense of well-being after they have gone home? 3) How do volunteer managers perceive the effect of volunteering on the well-being of their volunteers? 4) How does the volunteer managers’ perception of volunteer well-being compare to volunteers’ actual sense of volunteering-related well-being?


Methods

Well-being was investigated using a positive psychology approach based on the PERMA well-being theory proposed by Seligman (2011) and using the PERMA-Profiler (PERMA-P) developed by Butler & Kern (2016).
The PERMA-P consists of the original five well-being elements proposed by Seligman, ‘positive emotion’ (P), ‘engagement’ (E), ‘positive relationships’ (R), ‘meaning’ (M) and ‘achievement’ (A), as well as ‘negative emotion’ and ‘health’, measured with three items each, and ‘loneliness’ and ‘happiness’, measured with a single item each. The three-item elements can be regarded as individual factors, and the resulting PERMA-P seven-factor model of well-being can be tested through factor analysis, with the ‘overall happiness’ and ‘loneliness’ items providing additional information (Butler & Kern, 2016). All items were scored on an 11-point (0–10) Likert scale (Likert, 1932). Following a pilot study (unpublished report, GK, RS, SC and AD), the wording of two items on the questionnaire was changed. The two words, ‘loved’ and ‘angry’, were seen by volunteers as ‘quite American’ and ill-suited to a British volunteering context, and were therefore changed to ‘appreciated’ and ‘frustrated’, respectively. Data presented here are the complete subset of all items related to well-being in the questionnaires from a larger study, which also investigated volunteer motivation and activities (GK PhD research). Data were obtained from three sources: Study 1) an onsite survey of participants in nature-based activities (Dataset 1); Study 2) an online survey of former, current and potential volunteers (Dataset 2); and Study 3) an online survey of former and current volunteer managers (Dataset 3; Table 1).

Overview of the three studies in this research, including focus, respondents, subgroups and type of well-being measured.
BM, biodiversity monitoring volunteers; Stud, students conducting fieldwork as part of their university course; PC, practical conservation volunteers; Walk, walkers; BMPC, biodiversity monitoring volunteers also doing practical conservation.

The aim of Study 1 was to answer research question 1) How does environmental volunteering immediately affect participants’ sense of well-being, and how does that compare to the immediate effect of other types of nature-based activities on participants’ sense of well-being? Combining data from Study 1 and Study 2 aimed to answer research question 2) How well do volunteers sustain the memory of this immediately experienced sense of well-being after they have gone home? The aim of Study 3 was to answer research question 3) How do volunteer managers perceive the effect of volunteering on the well-being of their volunteers? Finally, combining data from all three studies aimed to answer research question 4) How does this volunteer manager perception of volunteer well-being compare to volunteers’ actual sense of volunteering-related well-being?

Ethics. This research project was approved through the ethics approval process at Bournemouth University (ref ID 2419). All participants provided written informed consent for participation.

Study 1. The onsite study was conducted between October 2014 and November 2015 and involved participants from 13 organisations in Southern England, divided into four types of activities: biodiversity monitoring, practical conservation volunteering, walking, and students conducting fieldwork as part of their university course (Table 2). Environmental organisations were invited to participate in the study on the basis that they ran volunteer activities in groups. Control groups were invited because their group activities took place in the same natural environments as the volunteer activities of the environmental organisations.
To determine whether environmental volunteering had a different effect on well-being compared to other, non-altruistic activities performed outdoors, students and walkers were surveyed in addition to environmental volunteers. Students were chosen as the control group for the biodiversity monitoring volunteers, as both groups were conducting ecological fieldwork in similar areas, but whereas volunteering is often seen as altruistic (Smith, 1981; Unger, 1991), students did the fieldwork because it was a requirement of their university courses. Walking groups were chosen as the control group for the practical conservation volunteers, as both activities were performed outdoors in similar areas and were somewhat physically demanding, but the purpose of the activities was again different, with volunteering being partly altruistic and walking benefitting only the walkers themselves. Walking is also the most popular activity in the natural environment in England (Natural England, 2015), and walking programmes are promoted as health interventions to decrease negative affect and mental illness and increase well-being in participants (Iwata et al., 2016; Marselle et al., 2014). The survey was designed as a paired before-activity and after-activity survey to measure general level of well-being and experienced level of well-being during an activity, respectively. To ensure independent samples, participants completed the questionnaires only once, even if they later took part in activities at which other participants were surveyed.

Studies 2 and 3. Both online surveys were open to anyone with the link between September and December 2015. Environmental organisations involved in Study 1, as well as other worldwide environmental organisations and volunteer centres in the UK, were contacted directly and asked to invite their volunteers and volunteer managers to participate, and the surveys were also sent out more widely through professional networks.
Study 2 investigated the general level of well-being of former and potential volunteers as well as the remembered level of well-being during volunteering of current volunteers. In Study 2, a total of 417 responses were received with completed questions about well-being. This sample comprised 53% females and 47% males. Age ranged from 18 to 94 years old (mean=54.86, SD=16.10). Most respondents had at least one university degree (65.23%) and many were retired (48.68%); some were in full-time (21.10%) or part-time (13.19%) employment, and a few were students (6.95%), not currently employed (5.28%) or homemakers (1.20%). Respondents were from 11 different countries, with the majority residing in the United Kingdom (88.49%). They named 118 different organisations they previously or currently volunteered for or would like to volunteer for in the future. Respondents fell into three groups: former volunteers (18%), current volunteers (70%) and potential future volunteers (12%). They were grouped into four types of volunteers: biodiversity monitoring volunteers (BM; 21%), practical conservation volunteers (PC; 34%), biodiversity monitoring volunteers also performing practical conservation work (BMPC; 25%), and all other types of volunteers (19%) (Table 3).

BMPC, biodiversity monitoring volunteers also performing practical conservation work (n=417).

Study 3 investigated the perceived level of well-being of volunteers as rated by former and current volunteer managers. A total of 96 responses were received with completed questions about well-being. This sample comprised 61% females and 39% males. Age ranged from 19 to 74 years old (mean=43.01, SD=13.03). Most respondents had at least one university degree (80%) and most were in full-time (69%) or part-time (13%) employment; a few were retired (10%), students (2%), not currently employed (1%) or homemakers (1%).
Respondents were from 10 different countries, with the majority residing in the United Kingdom (80%). Respondents fell into two groups, former volunteer managers (14%) and current volunteer managers (86%), and they identified 62 different organisations they previously or currently managed volunteers for. They were grouped into four types of volunteering, similar to the volunteers in Study 2: BM (20%), PC (26%), BMPC (35%) and all other types of volunteering (19%) (Table 4).

BMPC, volunteer managers in biodiversity monitoring also performing practical conservation work (n=96).

Deriving the well-being factors. The first step in exploring well-being was to test whether the structures of self-reported well-being and managers’ perception of volunteer well-being were consistent with the proposed seven-factor PERMA-Profiler (PERMA-P) model (Butler & Kern, 2016). This was done by performing exploratory factor analysis (EFA) on a subsample of the self-reported well-being data to generate a best fit model. The generated model and the original seven-factor PERMA-P model were subsequently tested for best fit through confirmatory factor analysis (CFA) using the other subsample of collected data from participants, and the total combined sample. EFA was also performed on the volunteer manager data sample to generate a best fit model, and confirmatory factor analysis was run on the generated model, the model generated from the self-reported subsample and the original seven-factor PERMA-P model to determine the best fit model.

Self-reported well-being: Only complete responses were used for factor analysis (n=1157) (Figure 1). The data were split into two subsamples to develop (n=645) and test (n=512) the factor model.
The development sample consisted of all onsite and online respondents to questionnaires measuring activity-related well-being, which included volunteers and control activity participants from Study 1 (‘after-activity survey’) and current volunteers from Study 2. The test sample consisted of all onsite and online respondents to questionnaires measuring general well-being, which included volunteers and control activity participants from Study 1 (‘before-activity survey’) and former and potential volunteers from Study 2. The larger subsample was used as the development sample for the EFA.

The first step in determining the best fitting model was to test the factorability of the items in the development subsample with the Kaiser-Meyer-Olkin measure of sampling adequacy, recommended to be >0.60, and with Bartlett’s test of sphericity, where significance indicates the data are suitable for factor analysis (Dziuban & Shirkey, 1974). The next step in EFA is to determine the number of factors to extract. There is no set formula for this; the number is chosen using a variety of methods and interpretation of the data (Matsunaga, 2010). Several methods were used to determine the number of factors to extract, including parallel analysis (Horn, 1965), the Kaiser-Guttman criterion (counting only Eigenvalues above one, Kaiser, 1960), Velicer’s minimum average partial (MAP) test (Velicer, 1976) and visual inspection of the scree plot (Cattell, 1966). EFA using ordinary least squares to find the minimum residual (minres) solution with oblique (promax) rotation, which allows factors to be correlated, was performed for relevant models. To determine the overall best fit model, results were evaluated using the root mean square error of approximation (RMSEA). RMSEA <0.05 indicates a good fit and values between 0.05 and 0.08 indicate a fair fit (MacCallum et al., 1996). Cronbach’s α (Cronbach, 1951) was calculated for each factor to test internal reliability of factors.
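As an illustration of this reliability criterion, Cronbach’s α for a k-item factor can be computed from the item variances and the variance of the summed scale. The sketch below uses Python with numpy and entirely hypothetical 0–10 Likert responses for a three-item factor; the original analyses were run in R (psych package), so this is a stand-in, not the authors’ code.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical three-item factor: responses share a latent level plus noise,
# clipped to the 0-10 Likert range used in the PERMA-P.
rng = np.random.default_rng(0)
latent = rng.normal(6.5, 2.0, size=200)
items = np.clip(latent[:, None] + rng.normal(0.0, 1.0, size=(200, 3)), 0, 10)
alpha = cronbach_alpha(items)
```

With items this strongly correlated, α comfortably exceeds the >0.70 acceptability threshold cited in the text; weakly related items would pull it down.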
Cronbach’s α values >0.70 are considered acceptable (Nunnally, 1978), though for scales with six or fewer items lower α values may be acceptable (Cortina, 1993). Items with factor loadings <0.40, or loading on two factors with the difference between primary and secondary loadings <0.30, were removed from the dataset before further analyses, a suggested way of dealing with inconclusive factor loadings (Matsunaga, 2010). The best factor model was determined by choosing the model with optimal model fit indices, high internal reliability of factors and best interpretability of the data. CFA is a method to test whether a certain predetermined model is a good fit for a data sample. CFA was performed for the best fit model developed from the EFA, the original seven-factor PERMA-P model and a generic one-dimensional control model, using the test sample and the combined development and test sample. Model fits were evaluated using RMSEA, the standardised root mean residual (SRMR), comparative fit index (CFI) and the Tucker-Lewis index (TLI), and models were compared for best fit using χ2 difference tests. SRMR below 0.08 is considered a good fit, and TLI and CFI values >0.90 are considered acceptable and close to or above 0.95 are considered good fits (Hu & Bentler, 1999).

Volunteer managers’ perception of volunteers’ well-being: Only complete responses from former and current volunteer managers were used for factor analysis (n=96) (Figure 2). Due to the limited sample size, it was not possible to split the data into a development and a test sample, as sample size should be at least 100–200 per subsample to perform the analysis (MacCallum et al., 1996). EFA was performed on the complete sample, following the method described above, including testing factorability of items, determining the number of factors to extract and using oblique (promax) rotation for the EFA.
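The factor-retention step used throughout can be illustrated with a minimal version of Horn’s parallel analysis: eigenvalues of the observed correlation matrix are retained only while they exceed the mean eigenvalues obtained from random data of the same shape. This Python/numpy sketch uses simulated data with two latent factors; the original analyses used R (nFactors/psych), so all names and data here are illustrative only.

```python
import numpy as np

def parallel_analysis(data: np.ndarray, n_sims: int = 100, seed: int = 1) -> int:
    """Number of factors whose observed correlation-matrix eigenvalues exceed
    the mean eigenvalues of equally sized random-normal data (Horn, 1965)."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand = np.zeros(p)
    for _ in range(n_sims):
        sim = rng.normal(size=(n, p))
        rand += np.sort(np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False)))[::-1]
    rand /= n_sims
    mask = obs > rand            # leading True entries = factors to retain
    return int(np.argmin(mask)) if not mask.all() else p

# Hypothetical data: 300 respondents, 6 items driven by 2 latent factors
rng = np.random.default_rng(2)
f = rng.normal(size=(300, 2))
loadings = np.array([[1, 0], [1, 0], [1, 0], [0, 1], [0, 1], [0, 1]], float)
data = f @ loadings.T + rng.normal(scale=0.6, size=(300, 6))
n_factors = parallel_analysis(data)
```

On this simulated two-factor structure the procedure retains two factors; on real questionnaire data it is, as the text notes, only one of several criteria to weigh alongside the scree plot, MAP test and interpretability.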
The best fit model was also determined following the method described above, by evaluating RMSEA, interpretability and Cronbach’s α. Items with inconclusive factor loadings were removed. CFA was then performed on the volunteer manager data sample using the best-fitting model from the EFA, the model developed from the self-reported well-being sample EFA described above, the original seven-factor PERMA-P model and a one-dimensional control model. Model fit for all models was evaluated using RMSEA, SRMR, CFI and TLI, and models were compared for best fit using χ2 difference tests.

Influence of volunteering type and other variables on well-being scores. As data were non-normally distributed, non-parametric tests were used in all analyses. As samples in the onsite survey (Study 1) had subject replication, Wilcoxon signed-rank tests were used to test for differences in the level of general well-being and level of activity-related well-being within the four groups of activity participants. For all other comparisons, without subject replication, Wilcoxon rank sum tests were used to test for differences in levels between general and activity-related well-being. Kruskal-Wallis tests with post hoc Dunn’s tests with Bonferroni correction were used to identify significantly different levels of actual and perceived well-being between the four different types of volunteers (Studies 2 and 3) and between managers in the four different types of volunteering (Study 3), respectively.

Stepwise multiple regression was performed to examine whether there were any effects of external variables on overall mean well-being, calculated as the mean of all well-being items (23 items) with negative items, i.e. negative emotions and loneliness, reverse scored. Variables included in Study 1 were volunteer frequency, tenure and hours per month volunteered, and specific variables on the day: weather, group size, hours volunteered, volunteer manager experience and type of volunteering.
In Study 2, variables included were volunteering type, as well as demographic variables (age, gender, education, country). Variables included in Study 3 were volunteering type, period and manager tenure, as well as demographic variables (age, gender, education, country).

Statistical analysis. All statistical analyses were completed using R v3.2.3 in RStudio (RStudio Team, 2015). The nFactors package v.2.3.3 (Raiche, 2010), psych package v.1.5.8 (Revelle, 2016) and the GPArotation package v.2014.11-1 (Bernaards & Jennrich, 2005) were used for exploratory factor analysis, the lavaan package v.0.5-20 for R was used for confirmatory factor analysis (Rosseel, 2012) and the ggplot2 package v.2.0.0 was used to create graphs (Wickham, 2009).


Results

Deriving the self-reported well-being factors. Factorability of the items in the development sample was supported by a Kaiser-Meyer-Olkin measure of 0.94 and a significant Bartlett’s test of sphericity (χ2(210)=8448.17; p<0.001), indicating the data were fit for factor analysis. The number of factors to extract was determined by evaluating several factor extraction results: parallel analysis suggested six factors, the Kaiser-Guttman criterion suggested four factors, Velicer’s minimum average partial test identified three factors and visual inspection of the scree plot suggested between two and five factors. Three-, four-, five- and six-factor models were evaluated through exploratory factor analysis, and Cronbach’s α values for the individual factors of each model were also evaluated. The five-factor model provided the clearest structure with a good fit (RMSEA = 0.056 [90% confidence interval = 0.048, 0.062]). All five factors could be interpreted as factors from the seven-factor PERMA-P (Table 5): ‘engagement’ (four items, α = 0.79), ‘relationships’ (three items, α = 0.77), ‘meaning’ (two items, α = 0.88), ‘negative emotions’ (three items, α = 0.64) and ‘health’ (three items, α = 0.92).
One ‘positive emotion’ item, ‘In general, how often do you feel joyful?’, loaded on the ‘engagement’ factor. One ‘achievement’ item, ‘How often do you achieve the important goals you have set for yourself?’, loaded on the ‘meaning’ factor but was dropped to substantially improve the internal reliability of the factor and overall model fit. Five items failed to load conclusively on any one factor and were left out of the subsequent confirmatory factor analysis.

The five well-being factors resulting from exploratory factor analysis of the development sample. Cronbach's α for each factor and items with factor loadings (only loadings <-0.30 or >0.30). Greyed out items were excluded from the final model due to inconclusive factor loadings, and were not included in the confirmatory factor analysis. One item was dropped to improve the internal reliability of a factor (n=645).

CFA was run on the test sample and on the combined development and test sample with the five-factor model developed from the EFA. Model fit was acceptable for the test sample based on all fit indices (RMSEA (0.076 [0.067; 0.085]), SRMR (0.066), CFI (0.938) and TLI (0.918)). Model fit was good for the combined development and test sample based on the SRMR (0.055), CFI (0.955) and TLI (0.940) indices and acceptable based on RMSEA (0.069 [0.064; 0.075]). The five-factor model from the EFA fitted the test sample significantly better than the original seven-factor PERMA-P model (Δχ2(88) = 530; p<0.001) or a generic one-factor model (Δχ2(109) = 1565; p<0.001). The five-factor model also fitted the combined development and test sample significantly better than the original seven-factor PERMA-P model (Δχ2(88) = 788; p<0.001) or a generic one-factor model (Δχ2(109) = 3717; p<0.001).
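The model comparisons above rest on χ2 difference tests, so the reported p-values can be checked directly against the χ2 survival function. A minimal verification in Python (scipy), plugging in the Δχ2 values reported for the combined sample; the original computation was done in R (lavaan), so this is only an arithmetic cross-check.

```python
from scipy.stats import chi2

# Reported chi-square differences for the combined development and test sample:
# five-factor EFA model vs. the original seven-factor PERMA-P model
p_vs_perma = chi2.sf(788, df=88)
# five-factor EFA model vs. a generic one-factor model
p_vs_one_factor = chi2.sf(3717, df=109)
# Both survival-function values fall far below the p<0.001 threshold reported.
```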
Factor correlations based on the combined test and development sample are summarised in Table 6 and show that all factors were significantly correlated.

Final well-being factors, descriptive statistics and correlations for the combined development and test participant sample, showing significant correlations between all factors (n=1157; **p<0.001).

External factors and volunteer well-being. Volunteers spending more hours volunteering per month, and for Study 2 also spending more time volunteering outdoors, reported higher levels of overall well-being. For volunteers in Study 1, this result came from stepwise multiple regression, which reduced the model for predicting the overall mean volunteering-related well-being score to include only the number of hours spent volunteering per month as a significant factor (F1,164 = 5.55; p<0.05; R2 = 0.03). For the current volunteers in Study 2, stepwise multiple regression reduced the model for predicting the overall mean volunteering-related well-being score to include the number of hours spent volunteering per month (p<0.001) and the amount of time spent outdoors while volunteering (p<0.001) as significant factors (F2,225 = 11.69; p<0.001; R2adj = 0.09). The number of hours spent volunteering per month (r=0.22; p<0.001) and the amount of time spent outdoors while volunteering (r=0.21; p<0.01) were both significantly positively correlated with the overall mean volunteering-related well-being score.

Study 1: How does environmental volunteering immediately affect well-being? Mean scores were calculated for each well-being element for both general well-being and activity-related well-being in the four participating groups: biodiversity monitoring volunteers, practical conservation volunteers, students and walkers (Table 7).
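The paired before/after design described in the Methods can be sketched as follows. Python's scipy stands in for the R tests used in the original analysis, and all scores are hypothetical 0–10 Likert values generated to mimic the reported pattern (activity-related well-being above general well-being).

```python
import numpy as np
from scipy.stats import wilcoxon, mannwhitneyu

rng = np.random.default_rng(3)
n = 40  # hypothetical group size

# Paired design (Study 1): general (before-activity) vs. activity-related
# (after-activity) well-being for the same participants, with 'after'
# shifted upwards to mimic the reported positive effect.
before = np.clip(rng.normal(6.0, 1.2, n), 0, 10)
after = np.clip(before + rng.normal(1.0, 0.8, n), 0, 10)
stat_paired, p_paired = wilcoxon(before, after)  # subject replication

# Comparisons between independent groups (e.g. volunteers vs. a control
# group) would instead use the rank sum test:
control = np.clip(rng.normal(6.0, 1.2, n), 0, 10)
stat_ind, p_ind = mannwhitneyu(after, control)
```

The choice between the two tests mirrors the Methods: signed-rank where the same participants are measured twice, rank sum where the samples are independent.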
All groups rated most aspects of their activity-related well-being significantly better than their general well-being, with the positive indices, ‘engagement’, ‘relationship’, ‘meaning’, ‘health’ and ‘happiness’, rated significantly higher and the negative indices, ‘negative emotions’ and ‘loneliness’, rated significantly lower for activity-related well-being than for general well-being (Wilcoxon signed-rank tests; p<0.05 for all; Figure 3). The only exceptions were students’ rating of ‘meaning’, which did not differ significantly between life in general and their fieldwork, and their rating of ‘engagement’, which was significantly lower for activity-related well-being than for life in general.

Differences between paired general well-being scores (light grey) and activity well-being scores (dark grey) for biodiversity monitoring volunteers, practical conservation volunteers, students and walkers (±SE bars). ‘Engagement’, ‘relationship’, ‘meaning’, ‘negative emotion’ and ‘health’ factor scores were means of factor item aggregates. ‘Loneliness’ and ‘happiness’ were single item measures (Wilcoxon signed-rank tests; *p<0.05, **p<0.01, ***p<0.001).

BM, biodiversity monitoring volunteers; PC, practical conservation volunteers; BMPC, biodiversity monitoring volunteers also doing practical conservation.

Comparing biodiversity monitoring volunteers to their student control group for general well-being, there was one significant difference: volunteers rated their ‘health’ significantly higher than students did (Wilcoxon rank sum test; p<0.05; Figure 4). Volunteers also rated their ‘negative emotions’ slightly lower than students did (Wilcoxon rank sum test; p<0.06).
When comparing their activity-related well-being, however, there were significant differences in all elements of well-being except ‘loneliness’, with volunteers consistently rating positive indices significantly higher and ‘negative emotions’ significantly lower than students did (Wilcoxon rank sum tests; p<0.01 for all).

Differences between biodiversity monitoring volunteers (BM) and students (S) in their level of general well-being (light grey) and activity well-being (dark grey) (±SE bars). ‘Engagement’, ‘relationship’, ‘meaning’, ‘negative emotion’ and ‘health’ factor scores were means of factor item aggregates. ‘Loneliness’ and ‘happiness’ were single item measures (Wilcoxon rank sum tests; ·p<0.06, *p<0.05, ***p<0.001).

Comparing practical conservation volunteers to their walker control group for their general level of well-being, there was one significant difference: volunteers rated ‘relationships’ significantly higher than walkers did (Wilcoxon rank sum test; p<0.01; Figure 5). This difference in ‘relationship’ ratings was even more pronounced when comparing their activity-related well-being (Wilcoxon rank sum test; p<0.001). Negative indices also showed differences in activity-related well-being, with volunteers rating their ‘negative emotions’ significantly lower than walkers (Wilcoxon rank sum test; p<0.05) and rating their ‘loneliness’ lower than walkers.

Differences between practical conservation volunteers (PC) and walkers (W) in their level of general well-being (light grey) and activity well-being (dark grey) (±SE bars). ‘Engagement’, ‘relationship’, ‘meaning’, ‘negative emotion’ and ‘health’ factor scores were means of factor item aggregates.
‘Loneliness’ and ‘happiness’ were single item measures (Wilcoxon rank sum tests; ·p<0.06, *p<0.05, **p<0.01, ***p<0.001).

Comparing the two different types of environmental volunteers, the biodiversity monitoring volunteers and the practical conservation volunteers, there were no significant differences in their levels of general (Wilcoxon rank sum tests; p>0.07 for all) or activity-related (Wilcoxon rank sum tests; p>0.30 for all) well-being, suggesting that irrespective of the type of environmental volunteering performed, the effect on well-being is equally positive.

Study 2: How well do volunteers sustain the memory of the immediately experienced sense of well-being after they have gone home? In the online survey, current volunteers were asked to remember the last time they volunteered and rate how they felt during that time. The ‘relationship’ (Kruskal-Wallis test; χ2(3) = 16.18; p<0.01), ‘meaning’ (Kruskal-Wallis test; χ2(3) = 11.69; p<0.01) and ‘negative emotion’ (Kruskal-Wallis test; χ2(3) = 9.43; p<0.05) elements showed significant differences between different types of volunteers (Table 7 and Figure 6). Biodiversity monitoring volunteers consistently rated positive indices lower than any other type of volunteer, significantly so for the ‘relationship’ element compared to biodiversity monitoring volunteers also doing practical conservation work (Dunn’s test; z = -3.44; p<0.01) and non-environmental volunteers (Dunn’s test; z = -3.46; p<0.01), and for the ‘meaning’ element compared to non-environmental volunteers (Dunn’s test; z = -3.12; p<0.01). Practical conservation volunteers also rated ‘meaning’ significantly lower than non-environmental volunteers (Dunn’s test; z = 2.67; p<0.05).
For ‘negative emotions’, however, both practical conservation volunteers (Dunn’s test; z = 2.95; p<0.01) and biodiversity monitoring volunteers also doing practical conservation (Dunn’s test; z = -2.48; p<0.05) rated them significantly lower than non-environmental volunteers.

Comparison of volunteers’ experienced well-being just after volunteering ended (Study 1), their remembered volunteering-related well-being up to 12 months after volunteering (Study 2) and their general level of well-being in life (paired data from Study 1) showed that biodiversity monitoring volunteers consistently rated experienced positive indices significantly higher than their well-being generally in life (Kruskal-Wallis with post-hoc Dunn’s tests; p<0.01 for all); remembered well-being was intermediate and significantly different from immediately experienced well-being for ‘engagement’, ‘relationship’ and ‘health’ (Kruskal-Wallis with post-hoc Dunn’s tests; p<0.01) and significantly different from well-being generally in life for ‘meaning’ and ‘happiness’ (Kruskal-Wallis with post-hoc Dunn’s tests; p<0.01; Table 7; Figure 7). Practical conservation volunteers showed the same trend and also rated their experienced ‘relationship’, ‘meaning’ and ‘happiness’ significantly higher just after volunteering and when later remembering it than generally in life (Kruskal-Wallis with post-hoc Dunn’s tests; p<0.001). Both types of volunteers rated ‘negative emotions’ significantly lower just after volunteering and when remembering it later than generally in life (Kruskal-Wallis with post-hoc Dunn’s tests; p<0.001 for all).

The remembered volunteering-related well-being of different types of current volunteers (±SE bars), with significant differences found for ‘relationship’, ‘meaning’ and ‘negative emotions’ (Kruskal-Wallis tests; *p<0.05, **p<0.01). ‘Engagement’, ‘relationship’, ‘meaning’, ‘negative emotion’ and ‘health’ factor scores were means of factor item aggregates.
‘Loneliness’ and ‘happiness’ were single item measures. BMPC, biodiversity monitoring volunteers also doing practical conservation work.

Experienced well-being just after volunteering ended and remembered volunteering-related well-being up to six months after volunteering, compared to volunteers' general level of well-being in life, for volunteers in biodiversity monitoring and practical conservation volunteering (±SE bars; Kruskal-Wallis tests; *p<0.05, **p<0.01, ***p<0.001).

Among current volunteers who had volunteered within the last six months, time since last volunteering had no effect on their well-being ratings (Study 2, n=277; Kruskal-Wallis; p>0.05 for all). Comparing the baseline general well-being of volunteers from Study 1 (n=191) with that of non-volunteers, defined as people not having volunteered for at least six months, from Study 2 (n=51), there were no significant differences in ratings for any well-being element (Wilcoxon rank sum tests; p>0.05 for all).

Deriving the perceived well-being factors. Exploratory factor analysis performed on the volunteer manager data identified a four-factor model; however, fit indices indicated only a marginal fit (RMSEA = 0.09 [90% CI = 0.053, 0.102], TLI = 0.91). CFA based on the four-factor model revealed a poor fit (RMSEA = 0.111 [90% CI = 0.089, 0.133], SRMR = 0.097, CFI = 0.880, TLI = 0.850). CFA based on the model from the self-reported well-being sample, but excluding the ‘health’ factor as there was only one item on health in the volunteer manager questionnaire, indicated acceptable fit based on CFI (0.929), TLI (0.902) and SRMR (0.066), though RMSEA (0.100 [90% CI = 0.069, 0.130]) was high.
The four-factor model from the self-reported well-being sample had a significantly better fit than the model developed from the volunteer manager EFA (Δχ2(36) = 90; p<0.001), the original PERMA-P model (without the ‘health’ factor) (Δχ2(72) = 223; p<0.001) or a generic one-factor model (Δχ2(6) = 146; p<0.001), and it was therefore used for exploring perceived well-being further. Factor correlations based on the volunteer manager sample are summarised in Table 8.

Final well-being factors (‘engagement’, ‘relationship’, ‘meaning’, ‘negative emotion’, 0–10 scale), descriptive statistics and correlations for the volunteer manager sample (n=94–96; *p<0.05, **p<0.001). MV Time, manager time spent with volunteers (1–6 scale, 6 being 100%); MPS, mean perceived well-being score from all items; Education, 1–6 scale, 6 being doctorate degree.

External factors and perceived well-being. Stepwise multiple regression reduced the model for predicting the overall mean perceived well-being score to include the significant variable of manager time spent with volunteers (measured on a 1–6 scale, 6 being 100%; p<0.05) and the marginally significant variable of managers’ level of education (measured on a 1–6 scale, 6 being a doctoral degree; p<0.07) (F2,91 = 4.93; R2adj = 0.08; p<0.01). Manager time spent with volunteers was significantly positively correlated with the overall mean perceived well-being score, as well as with the perceived ‘engagement’, ‘relationship’ and ‘meaning’ elements (Table 8).

How do volunteer managers perceive the effect of volunteering on the well-being of their volunteers?
Volunteer managers in different types of volunteering rated the well-being of their volunteers similarly, except for ‘health’ where managers in biodiversity monitoring also doing practical conservation rated their volunteers’ ‘health’ higher than managers in non-environmental volunteering (Dunn’s test; z = 2.69; p<0.05) (Figure 8).\n\nThe perceived well-being of volunteers by different types of volunteer managers (mean score ±SE bars). Significant difference found only for Health (Kruskal-Wallis test; χ2(3) = 7.63; *p=0.05). ‘Engagement’, ‘relationship’, ‘meaning’ and ‘negative emotion’ factor scores were means of factor item aggregates. ‘Health’, ‘loneliness’ and ‘happiness’ were single item measures. BMPC, biodiversity monitoring volunteers also doing practical conservation work.\n\nVolunteer managers’ perception of their volunteers’ well-being corresponded to how volunteers felt just after volunteering ended (‘experienced well-being’) for ‘engagement’ and ‘meaning’ elements of well-being but significantly differed for ‘health’, ‘negative emotions’ and ‘loneliness’ in both biodiversity monitoring and practical conservation volunteering (Figure 9). Volunteer managers perceived their volunteers as significantly less healthy (Wilcoxon rank sum tests; p<0.001) and as having more ‘negative emotions’ (Wilcoxon rank sum tests; p<0.001) and feeling more ‘lonely’ (Wilcoxon rank sum tests; p<0.01) than was the experience of the volunteers. Managers in biodiversity monitoring also perceived volunteers’ ‘relationship’ and ‘happiness’ elements significantly lower than volunteers reported they felt (Wilcoxon rank sum tests; p<0.05).\n\nVolunteer experienced well-being just after volunteering ended compared to volunteer managers’ perception of their volunteers’ well-being (±SE bars). ‘Engagement’, ‘relationship’, ‘meaning’, ‘negative emotion’ and ‘health’ factor scores were means of factor item aggregates. ‘Loneliness’ and ‘happiness’ were single item measures. 
‘Health’ was a mean of factor item aggregates for volunteers and a single item for managers (Wilcoxon rank sum tests; *p<0.05, **p<0.01, ***p<0.001).

When volunteer managers’ perception of the well-being of their volunteers was compared to how volunteers later rated their remembered volunteering-related well-being, there was still a significant difference in all types of volunteering, with managers rating their volunteers’ ‘health’ lower than the volunteers did (Wilcoxon rank sum tests; p<0.05; Figure 10). Managers rated volunteers’ perceived ‘negative emotions’ significantly higher than volunteers did in all types of volunteering (Wilcoxon rank sum tests; p<0.05), except biodiversity monitoring. Managers also rated volunteers’ perceived ‘loneliness’ significantly higher than volunteers did in both practical conservation and biodiversity monitoring also doing practical conservation volunteering (Wilcoxon rank sum tests; p<0.01). In non-environmental volunteering, managers rated volunteers’ perceived ‘happiness’ significantly lower than volunteers did (Wilcoxon rank sum test; p<0.05).

Volunteer remembered well-being compared to volunteer managers’ perception of their volunteers’ well-being (±SE bars). ‘Engagement’, ‘relationship’, ‘meaning’ and ‘negative emotion’ factor scores were means of factor item aggregates. ‘Loneliness’ and ‘happiness’ were single item measures. ‘Health’ was a mean of factor item aggregates for volunteers and a single item for managers (Wilcoxon rank sum tests; *p<0.05, **p<0.01, ***p<0.001). BMPC, biodiversity monitoring volunteers also doing practical conservation work.


Discussion

Overall, and supporting previous research, volunteering increased participants’ immediate sense of well-being, both by increasing positive elements and by decreasing negative emotions and loneliness, and it did so more than other types of nature-based activities.
Remembering the volunteer experience later on, volunteers retained the feeling of a meaningful event with low levels of negative emotions and loneliness, though other positive feelings of engagement or positive relationships were not retained. Contrary to previous research, this study found that volunteering did not increase volunteers’ general level of well-being when compared to non-volunteers’ general level of well-being. Volunteer managers did perceive the increase in the positive elements of their volunteers’ well-being during volunteering but did not perceive the significant decrease in negative emotions and loneliness that their volunteers reported. This section will further discuss these points.

All nature-based activities examined in this research had a significant positive effect on some or all elements of participants’ well-being, a result that agrees with previous studies (Iwata et al., 2016; Koss & Kingsley, 2010; O’Brien et al., 2010; Wyles et al., 2016). However, contrary to many published studies that found volunteers had higher levels of well-being generally in life than non-volunteers (e.g. Greenfield & Marks, 2004; Harlow & Cantor, 1996; Konrath et al., 2012), this study found no significant difference between volunteers and non-volunteers in their general level of well-being. For the online sample in Study 2, reasons for this could be the relatively small sample size for non-volunteers (n=51) and a potential selection bias (Ahern, 2005) in survey participation, as non-volunteers were not a random sample of people not volunteering, but rather people showing an interest in volunteering, either as former volunteers or potential future volunteers.
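The group comparisons in this study are rank-based: Kruskal-Wallis tests across the four activity types (followed by Dunn’s post hoc tests) and Wilcoxon rank sum tests for two-group contrasts such as volunteers versus non-volunteers. As a minimal sketch on invented ratings, assuming SciPy (the study itself was analysed in R), where the Wilcoxon rank sum test appears under its equivalent name, the Mann-Whitney U test:

```python
import numpy as np
from scipy.stats import kruskal, mannwhitneyu

rng = np.random.default_rng(0)

# Invented 0-10 well-being ratings for the four activity groups;
# the group means and sample sizes here are made up for illustration.
conservation = rng.normal(8.0, 1.0, 40)  # practical conservation
monitoring   = rng.normal(7.5, 1.0, 40)  # biodiversity monitoring
walkers      = rng.normal(6.5, 1.0, 40)
students     = rng.normal(6.0, 1.0, 40)

# Omnibus rank-based comparison across all four groups; in the study,
# significant results were followed by Dunn's pairwise post hoc tests.
H, p_kw = kruskal(conservation, monitoring, walkers, students)

# Two-group contrast (e.g. volunteers vs. a comparison group). The
# Wilcoxon rank sum test is the same test as the Mann-Whitney U test.
U, p_mw = mannwhitneyu(conservation, walkers, alternative="two-sided")

print(f"Kruskal-Wallis: H={H:.1f}, p={p_kw:.2g}")
print(f"Rank sum:       U={U:.0f}, p={p_mw:.2g}")
```

Rank-based tests are the natural choice here because the well-being items are ordinal 0–10 ratings, for which normality cannot be assumed.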
However, findings in Study 1 were similar to those in Study 2, even though the students and walkers in Study 1 did not participate in the survey out of an interest in volunteering, suggesting the result was not simply a case of selection bias or small sample size.

The finding in the current study that volunteers who spend more time volunteering report higher immediate and remembered well-being supports previous studies (Binder & Freytag, 2013; Thoits & Hewitt, 2001). One study has suggested that between 100 and 800 volunteer hours per year provided the highest rates of well-being (Windsor et al., 2008). However, other studies have found that volunteering beyond 100 hours per year either produced no further benefits (Morrow-Howell et al., 2003) or led to decreased benefits and satisfaction (Van Willigen, 2000).

The lowered levels of ‘negative emotions’ and ‘loneliness’ during all nature-based activities support previous research showing that volunteering and restorative experiences can decrease mental health issues such as depression (Korpela et al., 2016; Musick & Wilson, 2008; Pillemer et al., 2010; Townsend, 2006). It also supports the idea that volunteering reduces unhappiness (Binder & Freytag, 2013; Wilson, 2012) and has a positive effect on the positive elements of people’s well-being.

Volunteering and physical health. Volunteers reported an increase in their health immediately after volunteering, reflecting previous research into practical conservation volunteering in which volunteers, even though they reported being in pain after volunteering, gained a sense of achievement from the pain and perceived it as something positive (O’Brien et al., 2010). However, this positive effect did not last: when volunteers later remembered their health during volunteering, they rated it as similar to their general health, which in turn did not differ from the health of non-volunteers, suggesting there is no long-term positive effect of volunteering on perceived physical health.
This finding is in line with previous research (Borgonovi, 2008; Jenkinson et al., 2013; Piliavin & Siegl, 2007), though some studies have found a positive relationship between volunteering and physical health (Pillemer et al., 2010; Thoits & Hewitt, 2001; Van Willigen, 2000).

Biodiversity monitoring volunteers and students. The student group was the only participant group that did not consistently show improvements in all elements of well-being immediately after their activity. The unchanged sense of ‘meaning’ and lowered level of ‘engagement’ among students during their fieldwork could stem from them seeing the fieldwork as a mandatory activity that they did not freely choose, even if they did choose their university course. The feeling of personal control and choice of activity is important for an activity to be seen as a positive experience (Stukas et al., 1999). As volunteers had freely chosen to participate in their activity, this may be one reason for the differences in activity-related well-being between students and biodiversity monitoring volunteers, even though they were performing the same type of tasks.

Practical conservation volunteers and walkers. Walking has previously been shown to decrease participants’ mental ill-health and negative affect and to increase their sense of well-being (e.g. Iwata et al., 2016; Marselle et al., 2014), which was also found in this study. However, the current research also showed that even greater decreases in negative affect can be achieved through practical conservation volunteering than through walking, and that volunteering can have a positive effect on social relationships as well, an effect not consistently found for walking (Marselle et al., 2014).
The ‘positive relationship’ element included an item on support from others: “To what extent did you receive help and support from others when you needed it during your walk/volunteering today?” This item was rated particularly differently by volunteers and walkers, suggesting that volunteers felt well supported in their volunteering by volunteer managers and other volunteers, whereas walkers possibly either did not perceive a need to be supported or were not supported, and therefore rated the item lower than volunteers. For practical conservation volunteers, the coffee and lunch breaks provided additional opportunities for social interactions, which were important to the volunteers, as highlighted by a comment from a practical conservation volunteer on the ‘engagement’ item “To what extent did you lose track of time during volunteering today?”

“I never lose track of time, I always know what time it is: It is either before coffee, after coffee, before lunch or after lunch!”

(Male volunteer, Forestry Commission)

Volunteering has previously been found to benefit social well-being (Koss & Kingsley, 2010; O’Brien et al., 2010; Onyx & Warburton, 2003; Son & Wilson, 2012), which was also the case in this study, with practical conservation volunteers having significantly higher levels of ‘positive relationships’, not only during the volunteer activity but also generally in life, than walkers did. Volunteering provides a space where people are having fun with others, can engage in meaningful conversations and feel they are understood, all of which can increase the quality of social relationships (Reis et al., 2000).

When volunteers recalled their experience of volunteering later on and up to six months after volunteering, their ratings of their well-being during volunteering were less positive than immediately after volunteering.
This difference between experienced and remembered well-being during volunteering is likely partly due to recall bias (Baumeister et al., 2001; Stone et al., 1999), the imperfect recollection of past emotions or events by respondents. It has been shown that ‘bad is stronger than good’ (Baumeister et al., 2001), meaning that people remember and put more emphasis on negative events and emotions than on positive events and emotions. Volunteers in this research likewise retained the negative-valence changes, the lowered ‘negative emotions’ and ‘loneliness’, better than the increases in the positive well-being indices. The ‘meaning’ element retained its high rating over time, supporting previous research that also showed retention of meaning (Wyles et al., 2016), and suggesting it may be a more robust construct than the ‘engagement’ or ‘relationship’ factors, which did not retain their high ratings over time. ‘Meaning’ is part of eudaimonia and as such has been suggested to be longer-lasting than hedonic emotions, or moods, such as ‘positive emotions’ and partly the ‘engagement’ element (Piliavin, 2009).

Managers in environmental volunteering rated the ‘health’ element of their volunteers’ well-being higher than non-environmental volunteer managers did. This difference between environmental and non-environmental managers’ perception of their volunteers’ health possibly reflects the physical stamina and strength needed to perform environmental volunteering (O’Brien et al., 2010), whether the tasks are clearing invasive species or walking across uneven ground to record the species composition of an area. Volunteer managers who spent more time with their volunteers seemed to understand the well-being of their volunteers better, as they rated their volunteers’ well-being more similarly to the volunteers’ own ratings than managers who spent less time with their volunteers.
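Associations such as that between manager contact time and perceived well-being (Table 8) are naturally expressed as rank correlations, given the ordinal scales involved. A sketch on invented paired data, assuming SciPy; the variable names, effect size and sample values are made up for illustration and do not reproduce the study's results:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)

# Invented data: manager contact time (1-6 scale) and that manager's
# mean perceived well-being score for their volunteers (0-10 scale),
# constructed so that more contact time goes with higher scores.
time_with_volunteers = rng.integers(1, 7, size=96)
perceived_wellbeing = 5.0 + 0.4 * time_with_volunteers + rng.normal(0, 0.8, 96)

# Spearman's rho: a rank-based correlation suited to ordinal scales.
rho, p = spearmanr(time_with_volunteers, perceived_wellbeing)
print(f"rho = {rho:.2f}, p = {p:.3g}")
```

Because Spearman’s rho operates on ranks, it captures any monotonic association and is insensitive to the arbitrary spacing of Likert-style response options.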
However, managers still perceived volunteers as having more ‘negative emotions’, being ‘lonelier’ and being in worse ‘health’ than volunteers themselves reported. These worse ratings of negative indices are in line with previous research. A meta-analysis of self-reported and other-reported agreement in well-being ratings found an average correlation of 0.42 between average self-ratings and other-reported ratings for a combined score of life satisfaction, happiness, positive affect and negative affect (Schneider & Schimmack, 2009). Positive and negative affect measures had relatively low agreement, and negative affect (r=0.18) had less agreement than positive affect (r=0.24) (Schneider & Schimmack, 2009). Again, this finding could reflect that managers also put more emphasis on and remember negative emotions and events better than positive emotions and events (Baumeister et al., 2001).\n\nIt has been suggested that volunteering brings both hedonic and eudaimonic well-being benefits to volunteers (Piliavin, 2009), and such a multidimensional approach to well-being was supported by this research. It recovered five of the seven proposed factors from the PERMA-P (Butler & Kern, 2016), including the ‘engagement’, ‘relationship’, ‘meaning’, ‘health’ and ‘negative emotion’ factors, but excluding the ‘positive emotion’ and ‘achievement’ factors. ‘Achievement’ items instead related to both the ‘engagement’ and ‘meaning’ factors, suggesting volunteers may not have set goals for themselves within their volunteering role and therefore not been focused on the achievement of any specific goals. This scenario was also supported by comments from volunteers stating that they did not have specific goals for their volunteering. 
‘Positive emotion’ items instead related to the ‘engagement’ and ‘relationship’ factors, suggesting that volunteers did not pursue the positive emotions themselves, but rather that positive emotions arose due to positive relationships and task engagement during volunteering. Future research is needed to further tease apart these relationships in a volunteering context. The value of a multidimensional approach to well-being in the volunteering context is the information gained about how volunteering affects the various elements of well-being differently. In this sample of volunteers, the effects of volunteering were all positive; however, for the students, their engagement decreased during their fieldwork, highlighting an area that should be investigated further to find ways to turn this negative effect around.\n\nWalking has been advocated as a public health intervention (Iwata et al., 2016; Marselle et al., 2014), which the present findings support. However, they also suggest that environmental volunteering may provide increased benefits over and above the benefits of walking. For public health providers, this highlights environmental volunteering as a potential health intervention and a way to reintegrate people into society (O’Brien et al., 2011) by providing opportunities for positive relationships to develop. However, care must be taken to ensure that people actively choose the activity and do not feel forced to volunteer, as personal control and choice is important for a positive outcome (Stukas et al., 1999). For volunteer organisations, these positive results highlight that environmental volunteer projects provide benefits to the volunteers themselves and could be useful in motivating people to begin volunteering. 
In addition, such results provide an opportunity to showcase to funding bodies that environmental volunteer projects deliver positive outcomes for the people involved in the projects as well.

The use of multidimensional well-being measures can provide the information that volunteer organisations and managers need to support and enhance the well-being of their volunteers. By assessing the individual elements, areas for improvement can be specifically targeted. For example, if the ‘meaning’ element is rated low by volunteers, improved feedback could be provided to volunteers to enhance their understanding of their role and thereby the meaning they derive from their volunteering. If ‘relationships’ are rated low, focus should be put on providing adequate support to volunteers during volunteering, as well as ensuring volunteers feel appreciated. Even if volunteers find their roles meaningful and relationships good, their ‘engagement’ may be lacking if they are not given interesting tasks and opportunities to fully immerse themselves in their volunteer tasks.


Conclusion

This study has shown the benefits of regarding volunteer well-being as a multidimensional construct to better understand how volunteering affects the various elements of well-being. It has highlighted how environmental volunteering immediately improved the well-being of participants, even more than other nature-based activities did. Volunteering improved participants’ well-being especially by lowering negative emotions and loneliness, and this was remembered long after volunteering ended. Most volunteer managers, however, did not perceive this significant decrease in negative emotions and loneliness in their volunteers during volunteering, although they did perceive an increase in positive well-being elements. This focus on negative emotions and events is possibly due to the well-established theory that ‘bad is stronger than good’.
Volunteer organisations can use multidimensional assessment of volunteers’ well-being and managers’ perception of their volunteers’ well-being to identify and gain a deeper understanding of actual well-being, gaps in volunteer managers’ perceptions and potential areas for improvement.


Data availability

Dataset 1. Raw data from study 1, the onsite nature-based activity survey.

The raw data from onsite questionnaires of environmental volunteers and their control groups (walkers and students) supporting the findings described in the paper are provided. (DOI: 10.5256/f1000research.10016.d142072; Kragh et al., 2016a).

Dataset 2. Raw data from study 2, the online volunteer survey.

The raw data from online questionnaires of current, former and potential volunteers supporting the findings described in the paper are provided. (DOI: 10.5256/f1000research.10016.d142073; Kragh et al., 2016b).

Dataset 3. Raw data from study 3, the online volunteer manager survey.

The raw data from online questionnaires of current and former volunteer managers supporting the findings described in the paper are provided. (DOI: 10.5256/f1000research.10016.d142074; Kragh et al., 2016c).
"appendix": "Author contributions\n\n\n\nAll authors participated in study development and design. GK carried out the research, performed data analysis and prepared the first draft of the manuscript. All authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgments\n\nWe would like to thank all participating volunteers, walkers, students and volunteer managers from the hundreds of participating organisations, including, but not limited to, the National Trust, Forestry Commission, Dorset Wildlife Trust, Hampshire and Isle of Wight Wildlife Trust, Cornwall Wildlife Trust, Riverfly Partnership, Woodland Trust, Lake District National Park, New Forest Walkers, Verwood Ramblers, Bournemouth University, Bournemouth and Poole College and Kingston Maurward College (see Supplementary Table 1).\n\n\nSupplementary material\n\nSupplementary Table 1. Acknowledgements go to all the volunteers, volunteer managers, students and walkers from the organisations detailed in the table.\n\nClick here to access the data.\n\nSupplementary File 1. The volunteer questionnaires used in Study 1.\n\nProvided here are the before-volunteering and after-volunteering questionnaires used for environmental volunteers. Similar surveys, but with appropriately adapted wording, were used for walkers and students.\n\nClick here to access the data.\n\nSupplementary File 2. The volunteer questionnaire used in Study 2.\n\nProvided here are the survey items in the online survey pertaining to the volunteer-related well-being of current environmental volunteers. Similar surveys, but with appropriately adapted wording, were used for non-environmental volunteers, as well as former and potential volunteers.\n\nClick here to access the data.\n\nSupplementary File 3. 
The volunteer manager questionnaire used in Study 3.\n\nProvided here are the survey items in the online survey pertaining to current environmental volunteer managers’ perception of the well-being of their volunteers. Similar surveys, but with appropriately adapted wording, were used for non-environmental volunteer managers, as well as former volunteer managers.\n\nClick here to access the data.\n\n\nReferences\n\nAhern NR: Using the internet to conduct research. Nurse Res. 2005; 13(2): 55–70. PubMed Abstract | Publisher Full Text\n\nAristotle: The Nicomachean Ethics. Oxford University Press, Translated by David Ross; revised with an introduction and notes by Lesley Brown, New York, 2009. Reference Source\n\nBaumeister R, Bratslavsky E, Finkenauer C: Bad is stronger than good. Rev Gen Psychol. 2001; 5(4): 323–370. Publisher Full Text\n\nBaumeister RF, Leary MR: The need to belong: Desire for interpersonal attachments as a fundamental human motivation. Psychol Bull. 1995; 117(3): 497–529. PubMed Abstract | Publisher Full Text\n\nBell S, Marzano M, Reinert H, et al.: What counts? Volunteers and their organisations in the recording and monitoring of biodiversity. Biodivers Res Conserv. 2008; 17(14): 3443–3454. Publisher Full Text\n\nBernaards CA, Jennrich RI: Gradient Projection Algorithms and Software for Arbitrary Rotation Criteria in Factor Analysis. Educ Psychol Meas. 2005; 65(5): 676–696. Publisher Full Text\n\nBinder M, Freytag A: Volunteering, subjective well-being and public policy. J Neurosci Psychol Econ. 2013; 34: 97–119. Publisher Full Text\n\nBorgonovi F: Doing well by doing good. The relationship between formal volunteering and self-reported health and happiness. Soc Sci Med. 2008; 66(11): 2321–2334. PubMed Abstract | Publisher Full Text\n\nBowler DE, Buyung-Ali LM, Knight TM, et al.: A systematic review of evidence for the added benefits to health of exposure to natural environments. BMC Public Health. 2010; 10(1): 456. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nBradburn NM: The structure of psychological well-being. BOOK, Aldine, Oxford, England. 1969. Reference Source\n\nBrooks A: Does giving make us prosperous? J Econ Finance. 2007; 31(3): 403–411. Publisher Full Text\n\nButler J, Kern ML: The PERMA-Profiler: A brief multidimentional measure of flourishing. International Journal of Wellbeing. 2016; 6(3): 1–48. Reference Source\n\nCattell R: The scree test for the number of factors. Multivariate Behav Res. 1966; 1(2): 245–276. PubMed Abstract | Publisher Full Text\n\nCortina JM: What is coefficient alpha? An examination of theory and applications. J Appl Psychol. 1993; 78(1): 98–104. Publisher Full Text\n\nCronbach L: Coefficient alpha and the internal structure of tests. Psychometrika. 1951; 16(3): 297–334. Publisher Full Text\n\nCsikszentmihalyi M: Beyond boredom and anxiety. Book, Jossey-Bass Publishers, San Francisco; 1975. Reference Source\n\nCsikszentmihalyi M: Flow: the psychology of optimal experience. Book, HarperPerennial, New York; 1991. Reference Source\n\nDiener E: Subjective well-being. Psychol Bull. 1984; 95(3): 542–575. PubMed Abstract | Publisher Full Text\n\nDiener Ed: Assessing subjective well-being: Progress and opportunities. Soc Indic Res. 1994; 31(2): 103–157. Publisher Full Text\n\nDiener E: Subjective well-being. The science of happiness and a proposal for a national index. Am Psychol. 2000; 55(1): 34–43. PubMed Abstract | Publisher Full Text\n\nDiener Ed, Suh EM, Lucas RE, et al.: Subjective well-being: Three decades of progress. Psychol Bull. 1999; 125(2): 276–302. Publisher Full Text\n\nDziuban CD, Shirkey EC: When is a correlation matrix appropriate for factor analysis? Some decision rules. Psychol Bull. 1974; 81(6): 358–361. Publisher Full Text\n\nEvans C, Abrams E, Reitsma R, et al.: The Neighborhood Nestwatch Program: participant outcomes of a citizen-science ecological research project. Conserv Biol. 2005; 19(3): 589–594. 
Publisher Full Text\n\nForgeard MJC, Jayawickreme E, Kern ML, et al.: Doing the Right Thing: Measuring Well-Being for Public Policy. International Journal of Wellbeing. 2011; 1(1): 79–106. Publisher Full Text\n\nFrumkin H: Beyond toxicity: human health and the natural environment. Am J Prev Med. 2001; 20(3): 234–240. PubMed Abstract | Publisher Full Text\n\nGimenez-Nadal JI, Molina JA: Voluntary Activities and Daily Happiness in the US. Discussion Paper No. 8764. Bonn: IZA, 2015; 53(4): 1735–1750. Publisher Full Text\n\nGooch M: Voices of the Volunteers: an Exploration of the Experiences of Catchment Volunteers in Coastal Queensland, Australia. Local Environ. 2005; 10(1): 5–19. Publisher Full Text\n\nGreenfield EA, Marks NF: Formal volunteering as a protective factor for older adults’ psychological well-being. J Gerontol B Psychol Sci Soc Sci. 2004; 59(5): S258–264. PubMed Abstract | Publisher Full Text\n\nGuiney MS, Oberhauser KS: Conservation volunteers’ connection to nature. Ecopsychology. 2009; 1(4): 187–197. Publisher Full Text\n\nHarlow RE, Cantor N: Still participating after all these years: a study of life task participation in later life. J Pers Soc Psychol. 1996; 71(6): 1235–1249. PubMed Abstract | Publisher Full Text\n\nHo HC: Elderly volunteering and psychological well-being. Int Soc Work. In Press, 2015. Publisher Full Text\n\nHorn JI: A rationale and test for the number of factors in factor analysis. Psychometrika. 1965; 30(2): 179–185. PubMed Abstract | Publisher Full Text\n\nHu L, Bentler PM: Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Struct Equ Modeling. 1999; 6(1): 1–55. Publisher Full Text\n\nIwata Y, Dhubháin ÁN, Brophy J, et al.: Benefits of Group Walking in Forests for People with Significant Mental Ill-Health. Ecopsychology. 2016; 8(1): 16–26. Publisher Full Text\n\nJenkinson CE, Dickens AP, Jones K, et al.: Is volunteering a public health intervention? 
A systematic review and meta-analysis of the health and survival of volunteers. BMC Public Health. 2013; 13(1): 773. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKaiser HF: The Application of electronic computers to factor analysis. Educ Psychol Meas. 1960; 20(1): 141–151. Publisher Full Text\n\nKaplan R: The role of nature in the urban context. In. I Altman and JF Wohlwill, editors. Behavior and the Natural Environment. Journal Article, Plenum Press, New York. 1983; 6: 127–161. Publisher Full Text\n\nKellert SR, Wilson EO: The Biophilia hypothesis. Book, Island Press, Washington, D.C. 1993. Reference Source\n\nKeyes CL: The Mental Health Continuum: From Languishing to Flourishing in Life. J Health Soc Behav. 2002; 43(2): 207–222. PubMed Abstract\n\nKonrath S, Fuhrel-Forbis A, Lou A, et al.: Motives for volunteering are associated with mortality risk in older adults. Health Psych. 2012; 31(1): 87–96. PubMed Abstract | Publisher Full Text\n\nKorpela KM, Stengård E, Jussila P: Nature Walks as a Part of Therapeutic Intervention for Depression. Ecopsychology. 2016; 8(1): 8–15. Publisher Full Text\n\nKoss RS, “Yotti” Kingsley J: Volunteer health and emotional wellbeing in marine protected areas. Ocean Coast Manag. 2010; 53(8): 447–453. Publisher Full Text\n\nKragh G, Stafford R, Curtin S, et al.: Dataset 1 in: Environmental volunteer well-being: Managers’ perception and actual well-being of volunteers. F1000Research. 2016a. Data Source\n\nKragh G, Stafford R, Curtin S, et al.: Dataset 2 in: Environmental volunteer well-being: Managers’ perception and actual well-being of volunteers. F1000Research. 2016b. Data Source\n\nKragh G, Stafford R, Curtin S, et al.: Dataset 3 in: Environmental volunteer well-being: Managers’ perception and actual well-being of volunteers. F1000Research. 2016c. Data Source\n\nLikert R: A technique for the measurement of attitudes. Arch Psychol. 1932; 22(140): 5–55. 
Reference Source\n\nMacCallum RC, Browne MW, Sugawara HM: Power analysis and determination of sample size for covariance structure modeling. Psychol Methods. 1996; 1(2): 130–149. Publisher Full Text\n\nMarselle MR, Irvine KN, Warber SL: Examining Group Walks in Nature and Multiple Aspects of Well-Being: A Large-Scale Study. Ecopsychology. 2014; 6(3): 134–147. Reference Source\n\nMatsunaga M: How to Factor-Analyze Your Data Right: Do’s, Don’ts, and How-To’s. Int J Psychol Res. 2010; 3(1): 98–111. Reference Source\n\nMorrow-Howell N, Hinterlong J, Rozario PA, et al.: Effects of volunteering on the well-being of older adults. J Gerontol B Psychol Sci Soc Sci. 2003; 58(3): S137–145. PubMed Abstract | Publisher Full Text\n\nMuirhead S: Nature and well-being: building social and emotional capital through environmental volunteering. Thesis (PhD): University of Dundee; 2011. Reference Source\n\nMusick MA, Wilson J: Volunteers: a social profile. Book, Bloomington: Indiana University Press; 2008. Reference Source\n\nNatural England: Monitor of Engagement with the Natural Environment: Annual Report from the 2013–14 survey. Natural England, Peterborough; 2015. Reference Source\n\nNunnally JC: Psychometric theory. 2nd edition. Book, McGraw-Hill, New York; London: 1978. Reference Source\n\nO’Brien L, Burls A, Townsend M, et al.: Volunteering in nature as a way of enabling people to reintegrate into society. Perspect Public Health. 2011; 131(2): 71–81. PubMed Abstract | Publisher Full Text\n\nO’Brien L, Townsend M, Ebden M: “Doing Something Positive”: Volunteers’ Experiences of the Well-Being Benefits Derived from Practical Conservation Activities in Nature. Voluntas: International Journal of Voluntary and Nonprofit Organizations. 2010; 21(4): 525–545. Publisher Full Text\n\nOnyx J, Warburton J: Volunteering and health among older people: A review. Australas J Ageing. 2003; 22(2): 65–69. 
Publisher Full Text\n\nPiliavin J: Volunteering across the life span: Doing well by doing good. In S Stürmer and M Snyder, editors. The psychology of prosocial behavior: Group processes, intergroup relations, and helping. Wiley-Blackwell, Chichester, 2009; 157–172. Publisher Full Text\n\nPiliavin JA, Siegl E: Health Benefits of Volunteering in the Wisconsin Longitudinal Study. J Health Soc Behav. 2007; 48(4): 450–464. PubMed Abstract | Publisher Full Text\n\nPillemer K, Fuller-Rowell TE, Reid MC, et al.: Environmental volunteering and health outcomes over a 20-year period. Gerontologist. 2010; 50(5): 594–602. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPretty J, Peacock J, Sellens M, et al.: The mental and physical health outcomes of green exercise. Int J Environ Health Res. 2005; 15(5): 319–337. PubMed Abstract | Publisher Full Text\n\nRaiche G: nFactors: an R package for parallel analysis and non graphical solutions to the Cattell scree test. 2010. Reference Source\n\nReis HT, Sheldon KM, Gable SL, et al.: Daily well-being: The role of autonomy, competence, and relatedness. Pers Soc Psychol B. 2000; 26(4): 419–435. Publisher Full Text\n\nRevelle W: psych: Procedures for Personality and Psychological Research. Northwestern University, Evanston, Illinois, USA; 2016. Reference Source\n\nRosseel Y: lavaan: An R package for structural equation modeling. J Stat Softw. 2012; 48(2): 1–36. Publisher Full Text\n\nRStudio Team: RStudio: Integrated Development for R. RStudio, Inc., Boston, MA. 2015.\n\nRyan RM, Deci EL: On happiness and human potentials: a review of research on hedonic and eudaimonic well-being. Annu Rev Psychol. 2001; 52: 141–166. PubMed Abstract | Publisher Full Text\n\nRyff CD: Happiness Is Everything, or Is It? Explorations on the Meaning of Psychological Well-Being. J Pers Soc Psychol. 1989; 57(6): 1069–1081. Publisher Full Text\n\nRyff CD: Psychological Well-Being in Adult Life. Curr Dir Psychol Sci. 1995; 4(4): 99–104. 
Publisher Full Text\n\nRyff CD: Psychological well-being revisited: advances in the science and practice of eudaimonia. Psychother Psychosom. 2014; 83(1): 10–28. PubMed Abstract | Publisher Full Text\n\nSchneider L, Schimmack U: Self-Informant Agreement in Well-Being Ratings: A Meta-Analysis. Soc Indic Res. 2009; 94(3): 363–376. Publisher Full Text\n\nSeligman ME: Flourish: A New Understanding of Happiness and Well-Being - and How To Achieve Them. Nicholas Brealey Publishing, London. 2011. Reference Source\n\nSmith DH: Altruism, Volunteers, and Volunteerism. Nonprofit Volunt Sect Q. 1981; 10(1): 21–36. Publisher Full Text\n\nSon J, Wilson J: Volunteer Work and Hedonic, Eudemonic, and Social Well-Being. Sociol Forum. 2012; 27(3): 658–681. Publisher Full Text\n\nStone A, Shiffman S, DeVries M: Ecological momentary assessment. In D Kahneman, E Diener, and N Schwarz, editors. Well-being: The foundations of hedonic psychology. Russell Sage Foundation, New York, 1999; 26–39. Reference Source\n\nStukas AA, Hoye R, Nicholson M, et al.: Motivations to Volunteer and Their Associations With Volunteers’ Well-Being. Nonprofit Volunt Sect Q. 2016; 45(1): 112–132. Publisher Full Text\n\nStukas AA, Snyder M, Clary EG: The Effects of “Mandatory Volunteerism” on Intentions to Volunteer. Psychol Sci. 1999; 10(1): 59–64. Publisher Full Text\n\nThoits PA, Hewitt LN: Volunteer work and well-being. J Health Soc Behav. 2001; 42(2): 115–131. PubMed Abstract | Publisher Full Text\n\nTownsend M: Feel blue? Touch green! Participation in forest/woodland management as a treatment for depression. Urban For Urban Gree. 2006; 5(3): 111–120. Publisher Full Text\n\nUnger LS: Altruism as a motivation to volunteer. J Econ Psychol. 1991; 12(1): 71–100. Publisher Full Text\n\nVan Willigen M: Differential benefits of volunteering across the life course. J Gerontol B Psychol Sci Soc Sci. 2000; 55(5): S308–S318. 
PubMed Abstract | Publisher Full Text\n\nVelicer WF: Determining the number of components from the matrix of partial correlations. Psychometrika. 1976; 41(3): 321–327. Publisher Full Text\n\nWheeler JA, Gorey KM, Greenblatt B: The beneficial effects of volunteering for older volunteers and the people they serve: a meta-analysis. Int J Aging Hum Dev. 1998; 47(1): 69–79. PubMed Abstract | Publisher Full Text\n\nWickham H: ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag, New York, USA. 2009. Publisher Full Text\n\nWilson J: Volunteering. Annu Rev Sociol. 2000; 26: 215–240. Publisher Full Text\n\nWilson J: Volunteerism Research: A Review Essay. Nonprofit Volunt Sect Q. 2012; 41(2): 176–212. Publisher Full Text\n\nWindsor TD, Anstey KJ, Rodgers B: Volunteering and psychological well-being among young-old adults: how much is too much? Gerontologist. 2008; 48(1): 59–70. PubMed Abstract | Publisher Full Text\n\nWyles KJ, Pahl S, Holland M, et al.: Can Beach Cleans Do More Than Clean-Up Litter? Comparing Beach Cleans to Other Coastal Activities. Environ Behav. 2016; 1–27. Publisher Full Text"
}
|
[
{
"id": "17678",
"date": "29 Nov 2016",
"name": "Sarah Elizabeth West",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe abstract results section could be clearer, in particular the sentence starting ‘ Even remembering’.\nI think it would be useful in the introduction to give the geographical context for your work, and figures about the size of the environmental volunteering sector in that country. I assumed UK, and it seems like the bulk of responses were from the UK, but I note that your survey was completed by people in 11 countries. It also needs some definition of environmental volunteering I think. I guess this includes things like practical conservation, environmental CS surveys, but what about someone delivering leaflets promoting Friends of the Earth activities for example? This example highlights why definition is important. And in your results, you talk about Biodiversity monitoring volunteers – is this your definition of environmental volunteers?\nSome justification of why PERMA was used as opposed to other multidimensional well-being measures would be useful.\nSome more info on why managers’ perceptions of their volunteers’ motivations is important is needed, I think this is missing.\n‘Worldwide responses’ – how do you know that any difference in responses is due to the factors you are interested in, not due to the fact that they are in a different part of the world? Some justification for including these (relatively small number of responses) would be useful.\nThe results text is very dense, and it is hard for those not very familiar with factor analysis (like me!) 
to understand what the key parts of the text are. I guess it’s the bottom of page 9 is it? I think some explanatory text at the beginning of results about what factor analysis is would be helpful. The 'External factors and volunteer well-being' section is clearer as you’ve said what the results are and then gone into the detail of how you came to that result, and means that people who are not au fait with statistics (as I guess will be many of your readers) can skip over it.\nDiscussion – how did your volunteers and non volunteers compare to others using your well-being index? Or compared to other well-being indices? This would help to give your results more context.\n\nSome of your sentences are a little long which makes them a bit hard to read, for example, the one starting However, this positive…on page 19.\n\nShould your figures be in the discussion section, or would they be better placed in the results? It breaks the text up a bit too much I feel.",
"responses": []
},
{
"id": "19742",
"date": "30 Jan 2017",
"name": "Sabine Pahl",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nTitle and Abstract: This is fine. I have some comments on the comparisons and causality below that the authors should consider.\n\nArticle content/ Conclusions: The article is well written and overall clearly structured. Using the PERMA model is a good addition. The specific research questions are very helpful in communicating the research. Nevertheless I have picked up two issues that are worth considering, and a few minor comments.\n\nI wasn’t entirely convinced by the research question looking at managers’ perception of volunteer well-being. Why is it important that these correspond (p. 3)? Even if it is important, as far as I understand, the comparison is not straightforward. The volunteers rated by the managers are not the same people as rate their own well-being, are they? So the conclusion of non-correspondence is problematic, if you’re comparing the rated/perceived well-being of *different* people.\n\nMy second issue is around the language/interpretation in the article, specifically in the Implications section. You seem to assume these are causal effects i.e. the volunteering causes people’s improved well-being (and therefore it should be used more widely). But it’s not quite that straightforward, as you haven’t allocated people to different activities so there might be other differences between people who walk vs. volunteer for example, that could account for any effects you find. 
You can only make strong inferences about causality when you use a proper experimental research design. It would be good to note this in the discussion. (I think only the Wyles et al. article has tried this in the volunteering literature). You mention also that choice is important, which is a related consideration. This is where recommendations are a bit tricky, because you can’t (by definition) force people to ‘volunteer’ even it is good for them, and there may be selection effects that mean happier / healthier people are also the ones who do environmental volunteering. This is not a big problem but I feel should be acknowledged.\n\nMinor points:\nI think a lot of space is dedicated to the different factor analyses (on pages 7-11) to establish questionnaire structure. While this is important and good practice it is not linked to any of the main research questions. Therefore I was wondering if (some of) this should be presented in an Appendix rather than the main text, as it distracts from the key questions and findings.\n\nOn p. 18 literature on the amount of time spent volunteering is reviewed but this all seems to be published in gerontology journals so I’m assuming uses older samples. Please add in the text if that’s the case.\nData: Links to raw data are provided.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2679
|
https://f1000research.com/articles/5-2678/v1
|
16 Nov 16
|
{
"type": "Method Article",
"title": "Two-step feature selection for predicting survival time of patients with metastatic castrate resistant prostate cancer",
"authors": [
"Motoki Shiga"
],
"abstract": "Metastatic castrate resistant prostate cancer (mCRPC) is the major cause of death in prostate cancer patients. Even though some options for treatment of mCRPC have been developed, the most effective therapies remain unclear. Thus finding key patient clinical variables related with mCRPC is an important issue for understanding the disease progression mechanism of mCRPC and clinical decision making for these patients. The Prostate Cancer DREAM Challenge is a crowd-based competition to tackle this essential challenge using new large clinical datasets. This paper proposes an effective procedure for predicting global risks and survival times of these patients, aimed at sub-challenge 1a and 1b of the Prostate Cancer DREAM challenge. The procedure implements a two-step feature selection procedure, which first implements sparse feature selection for numerical clinical variables and statistical hypothesis testing of differences between survival curves caused by categorical clinical variables, and then implements a forward feature selection to narrow the list of informative features. Using Cox’s proportional hazards model with these selected features, this method predicted global risk and survival time of patients using a linear model whose input is a median time computed from the hazard model. The challenge results demonstrated that the proposed procedure outperforms the state of the art model by correctly selecting more informative features on both the global risk prediction and the survival time prediction.",
"keywords": [
"Survival analysis",
"Cox-proportional hazards model",
"feature selection"
],
"content": "Introduction\n\nProstate cancer is the most common malignant tumor among men and ranks third in terms of mortality after lung cancer and colorectal cancer. The major clinical treatment against prostate cancer is an anti-androgen therapy to inhibit male hormones providing to prostate cancer cells. However, the therapy cannot inhibit the cancer cell growth for long because these cells can develop the resistance against the androgen absence condition. This developed prostate cancer is called metastatic castrate resistant prostate cancer (mCRPC), which is the major cause of death in prostate cancer patients1,2. Even though some options for treatment of mCRPC have been developed, the most effective therapies remain unclear3. Finding key clinical variables related with mCRPC is an important first step for understanding the disease progression mechanism and clinical decision making for these patients. Halabi et al.4 identified key factors of mCRPC from a lot of clinical variables by feature selection based on a Cox’s proportional hazards model with a L1 penalty, i.e. a variant of Lasso for survival analysis6,7 and built a mCRPC prognostic model. This data-driven approach is important to correctly predict patient health status for treatment choices. To validate and improve such prediction models of mCRPC patients, larger scale clinical datasets collected from several clinical institutes are useful. The Prostate Cancer DREAM challenge in DREAM 9.5 (https://www.synapse.org/ProstateCancerChallenge) provided such datasets and an opportunity to tackle this essential challenge using the wisdom of the crowd, in which participating teams were required to submit prediction models based on clinical variables from the comparator arms of four phase III clinical trials with over 2,000 mCRPC patients treated with first-line docetaxel. 
My method for this challenge consists of a two-step feature selection procedure, which first performs both sparse feature selection7 and statistical hypothesis testing8, and then performs a forward feature selection9 to screen out non-informative features. The selected clinical variables were used to build a prognostic model to predict the global risks of patients. For survival time prediction, my method further used a linear model fitted to the median survival times5 computed by the established prognostic model. The final result of the DREAM challenge demonstrated that, in sub-challenge 1a, the proposed procedure outperformed Halabi's model4 on global risk prediction by correctly selecting more informative features. In sub-challenge 1b, my method, using these selected features, predicted the survival time more accurately and outperformed most of the other teams' methods.\n\n\nMethods\n\nData across the comparator arms of four phase III clinical trials were compiled, annotated, and cleaned, were made available through the Challenge, and remain available on the web site7. These datasets include over 150 clinical variables and over 2,000 mCRPC patients treated with first-line docetaxel. The output value to be predicted for unknown new patients is the survival time. The survival times of patients are not always observed, because some patients are still alive when they are lost to follow-up or when the study ends; thus the observed survival times are right-censored. For the training dataset, three of the clinical trial cohorts were provided, which include data for 476, 598, and 526 patients from the clinical trials ASCENT-2 (Novacea, provided by Memorial Sloan Kettering Cancer Center)10, VENICE (Sanofi)11, and MAINSAIL (Celgene)12, respectively. For the test dataset, data for 470 patients were provided from the clinical trial ENTHUSE-33 (AstraZeneca)13. The goal of this challenge was to correctly predict the global risk of death and the survival time of patients in the test dataset. 
In these datasets, clinical variables for some patients were missing. These missing values were imputed by the median of each numerical variable and by the most frequent value of each categorical variable.\n\nA Cox proportional hazards model is assumed for the relationship between the clinical variables (input variables) of a patient and the survival time (an output variable)5. Let x be the clinical variables of a patient. The hazard function of the patient at time t is given by\n\nh(t|x) = h_0(t) exp(β^T x),\n\nwhere h_0(t) is a baseline hazard function and β is a weight vector to be optimized from training data. When the weight value β_d of the d-th clinical variable is large, that clinical variable is informative for predicting the survival time. On the other hand, when β_d = 0, the d-th clinical variable is independent of the survival time. Thus correctly estimating β is the most important task in survival analysis. A common estimation is performed by maximizing the partial log-likelihood function of N patients, given by\n\nL(β) = Σ_{n=1}^{N} δ_n [ β^T x_n − log{ Σ_{j∈R_n} exp(β^T x_j) } ],\n\nwhere x_n is the vector of clinical variables of the n-th patient and δ_n is a binary event indicator: δ_n = 1 for patients who died and δ_n = 0 for patients right-censored at time t_n, where t_n is the observed time of the n-th patient. R_n is the risk set at time t_n. This estimation is of course affected by non-informative clinical variables (noise variables) because the size of the training data is limited: the number of clinical variables is large but the number of patients is small. Before estimating the weight vector β in the hazard function, my method implemented a two-step feature selection to screen out non-informative clinical variables.\n\nThe goal of feature selection is to divide the set of all clinical variables into a set of informative variables and a set of non-informative variables by optimizing the final scoring metric. However, this optimization is NP-hard, i.e. intractable in general. 
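The partial log-likelihood defined above is straightforward to evaluate directly. The following minimal sketch (in Python rather than the R used in this work, and assuming no tied event times) computes L(β) for a toy dataset:

```python
import math

def cox_partial_log_likelihood(beta, X, times, events):
    """Partial log-likelihood L(beta) of a Cox proportional hazards model.

    beta: weight vector; X: covariate vectors x_n; times: observed times t_n;
    events: delta_n (1 = death observed, 0 = right-censored).
    Assumes no tied event times (Breslow/Efron corrections omitted).
    """
    n = len(X)
    eta = [sum(b * x for b, x in zip(beta, xi)) for xi in X]  # beta^T x_n
    ll = 0.0
    for i in range(n):
        if events[i] == 1:
            # risk set R_i: all subjects still at risk at time t_i
            denom = sum(math.exp(eta[j]) for j in range(n) if times[j] >= times[i])
            ll += eta[i] - math.log(denom)
    return ll
```

With β = 0, each observed death contributes −log |R_n|, which gives a quick sanity check of an implementation. Scoring every one of the 2^D possible subsets of D clinical variables with such a model is infeasible, hence the heuristic described next.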
Thus my procedure implemented this task in a heuristic manner: (1) screening numerical features by an L1-penalized sparse regression and categorical features by statistical hypothesis testing, and then (2) a forward sequential feature selection that narrows the list of selected informative features by optimizing the final scoring metric. For the first step, my procedure used a variant of the LASSO for a Cox proportional hazards model7 provided by the R package glmpath11. This approach requires choosing the weight of the L1 penalty term. My method chose it automatically by minimizing Akaike's information criterion (AIC), which estimates the generalization error. Because the computational cost of this implementation with many clinical variables is high, my procedure applied this sparse feature selection only to the numerical variables. Categorical variables were evaluated using rank-based statistical hypothesis testing5,8. This method tests whether there is a significant difference between two or more survival curves corresponding to different values of a categorical variable. If the difference between the curves is statistically significant, the categorical variable might be related to the survival times of patients; therefore, such variables should be selected for a survival time prediction model.\n\nAmong the features selected as described above, my procedure further implemented a forward feature selection9 to narrow the list of clinical variables. At each step, the feature that maximally increases the integrated time-dependent AUC (iAUC)14, the final scoring metric of sub-challenge 1a, is added, one feature at a time, until all variables are selected. After that, the optimal set of clinical variables is chosen by maximizing the iAUC. The iAUCs were estimated by cross-validation (CV), performed by randomly splitting all training data into 90% training data and 10% test data. 
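The forward step can be sketched as a generic greedy search. In the snippet below (a Python illustration, not the author's code; score_fn stands in for the cross-validated iAUC), features are added one at a time and the best-scoring step overall is kept, as in Figure 2:

```python
def forward_select(features, score_fn):
    """Greedy forward feature selection.

    At each step, add the feature that maximizes score_fn(selected);
    once all features have been added, return the best-scoring prefix.
    score_fn stands in for the cross-validated iAUC used in the paper.
    """
    selected, remaining, history = [], list(features), []
    while remaining:
        best_f, best_s = None, None
        for f in remaining:
            s = score_fn(selected + [f])
            if best_s is None or s > best_s:
                best_f, best_s = f, s
        selected.append(best_f)
        remaining.remove(best_f)
        history.append((best_s, list(selected)))
    return max(history, key=lambda h: h[0])[1]  # step with maximal score
```

Any scoring function can be plugged in, so the same routine serves both a quick toy check and the CV-estimated iAUC.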
The iAUC was estimated as the median of ten calculated iAUC values.\n\nAfter selecting informative features, the parameter β of the Cox proportional hazards function was optimized using only the selected clinical variables. Next, the hazard function was used to predict the global risk of death for each patient5. The survival time of each patient can be predicted by the median time at which the estimated survival probability equals 0.5, computed from the hazard function5. However, the root mean squared error of this prediction method was still large, and the estimates were biased because of the right-censoring setting, as demonstrated experimentally below. To address this problem, my method used a linear model fitted from the computed median times to the observed survival times in the training dataset. The survival time was then predicted by this linear regression model, whose input is the estimated median time of each patient.\n\n\nResults\n\nMy method removed clinical variables with many missing values, retaining only 14 numerical and 56 categorical clinical variables with fewer missing values. Feature selection for the numerical clinical variables was first implemented using the L1-penalized approach7 via the function coxpath in the R package glmpath (https://cran.r-project.org/web/packages/glmpath/glmpath.pdf). This function computes the entire regularization path of the L1-penalized model by increasing the weight of the penalty; my procedure checked only the steps of the path at which the weight parameter of a clinical variable becomes greater than zero. Table 1 shows the first 20 steps and the sequence of added clinical variables. Figure 1 shows the AIC scores computed at these steps. The best feature set (step) was selected by minimizing the AIC score. 
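The AIC-based choice of a path step reduces to a one-liner once the partial log-likelihood and the number of nonzero coefficients at each step are available (glmpath computes such quantities in R; the sketch below is a generic Python illustration, not the author's code):

```python
def select_step_by_aic(loglik_per_step, df_per_step):
    """Pick the regularization-path step minimizing AIC = -2*loglik + 2*df.

    loglik_per_step: partial log-likelihood at each path step;
    df_per_step: number of nonzero coefficients at each path step.
    """
    aic = [-2.0 * ll + 2.0 * df for ll, df in zip(loglik_per_step, df_per_step)]
    return min(range(len(aic)), key=aic.__getitem__)  # index of smallest AIC
```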
This procedure chose the 14th step and thus selected nine clinical covariates (ENTRTPC, ALP, HB, AST, ECOGC, NEU, PLT, PSA and LDH) as informative clinical variables.\n\nOn the other hand, differences between survival curves induced by categorical clinical variables were statistically tested using the function survdiff in the R package survival (https://cran.r-project.org/web/packages/survival/survival.pdf). Table 2 shows the ranking of clinical variables with their p-values. The significance threshold was set to 0.05, and the procedure thus selected the categorical features ANALGESICS, MHGEN, MI, TURP, MHCARD, ACE_INHIBITORS, MHPSYCH and PROSTATECTOMY.\n\nFor the 17 clinical variables selected by these two feature selections, the forward feature selection described in the previous section was further implemented. Figure 2 shows the iAUC at each step of the forward feature selection. The step maximizing the iAUC is the sixth step, which includes the six clinical variables ALP, AST, ECOG_C, HB, MI and PLT. These clinical variables were finally selected to predict the global risks and survival times of patients.\n\n\nPrediction performance\n\nThe parameter vector β of the Cox proportional hazards model with the six selected clinical variables was optimized by maximizing the partial log-likelihood function. Then the global risks of death of the patients in the test dataset were predicted from the optimized model. The iAUC of the proposed method is 0.7671, whereas that of Halabi's model is 0.7429, as can be found in the ranking result of sub-challenge 1a on the web site of the Prostate Cancer DREAM Challenge (https://www.synapse.org/ProstateCancerChallenge). This result demonstrates that the proposed prediction outperforms Halabi's method by correctly selecting informative features.\n\nFurthermore, the survival times of patients were predicted using the median times computed from the optimized hazard model. 
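The correction of the median survival times described in the Methods amounts to an ordinary least-squares fit of observed times on predicted medians. The sketch below is a hypothetical Python illustration of that step (the author's actual implementation, in R, is linked under Data availability):

```python
def fit_linear_correction(pred_medians, observed_times):
    """Ordinary least-squares fit t = a*m + b mapping model-predicted
    median survival times m to observed survival times t (training set).

    Returns (a, b); corrected predictions are then a*m + b.
    """
    n = len(pred_medians)
    mx = sum(pred_medians) / n
    my = sum(observed_times) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(pred_medians, observed_times))
    sxx = sum((x - mx) ** 2 for x in pred_medians)
    a = sxy / sxx
    return a, my - a * mx
```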
Figure 3(a) shows predicted versus observed values in the training dataset. The estimation variance is large, and the center of the plotted data lies to the upper left of the diagonal line, meaning that the predicted values are biased. To reduce these prediction errors, the median survival times were transformed by a linear model. Figure 3(b) shows the prediction result after this transformation. These figures demonstrate that the proposed prediction reduces both the estimation bias and the variance. As a result, the root mean squared error (RMSE) between the true values and the predictions improved drastically, from 281.3 for the median survival times to 198.7 for the proposed method. This prediction result in sub-challenge 1b of the Prostate Cancer DREAM Challenge was ranked in the group of top performers, even though the global risk prediction result in sub-challenge 1a fell outside the best 10 performers.\n\n\nConclusions\n\nThis paper outlines a method for predicting the global risks of mCRPC patients for sub-challenge 1a and their survival times for sub-challenge 1b of the Prostate Cancer DREAM Challenge. The challenge result in sub-challenge 1b demonstrated that this procedure, which is based on the two-step feature selection and the correction of naïve survival time predictions from the optimized hazard model, outperformed the other teams' methods. In particular, for survival time prediction, this correction method, based on centering and reducing the estimation variance, works well to improve the RMSE, the scoring metric of sub-challenge 1b. This analysis demonstrates that a naïve prediction from a basic model (the Cox proportional hazards model) is not always optimal for an evaluation metric; a suitable transformation is thus necessary to optimize the metric.\n\nThis paper also provides a two-step feature selection procedure, because using only a single feature selection method leaves a lot of non-informative features. 
By carefully selecting features with this two-step procedure, the global risk prediction outperformed Halabi's model4 in sub-challenge 1a. This result demonstrates that multiple feature selection procedures are necessary to screen out non-informative features. Future work includes the validation of the informative clinical variables selected not only by the method proposed here but also by other top-performing methods. Table 3 compares our selected clinical variables with Halabi's selected variables4. Both models selected ALP, ECOG_C and HB, while each of the other eight clinical variables was selected by only one of the two models. Although selection results depend on the datasets used, we should further investigate the importance of these clinical variables using knowledge from clinical and biological research areas.\n\n\nData availability\n\nThe Challenge datasets can be accessed at: https://www.projectdatasphere.org/projectdatasphere/html/pcdc\n\nChallenge documentation, including the detailed description of the Challenge design, overall results, scoring scripts, and the clinical trials data dictionary can be found at: https://www.synapse.org/ProstateCancerChallenge\n\nThe code and documentation underlying the method presented in this paper can be found at: http://dx.doi.org/10.7303/syn422926615",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work is partially supported by JSPS KAKENHI 25870322.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgement\n\nDatasets were kindly provided by Celgene, Sanofi, Memorial Sloan Kettering Cancer Center, AstraZeneca and compiled in the Project Data Sphere® platform. I acknowledge Sage Bionetworks and the organizers of Prostate Cancer DREAM Challenge to provide this interesting clinical data analysis throughout this challenge.\n\nThis publication is based on research using information obtained from www.projectdatasphere.org, which is maintained by Project Data Sphere, LLC. Neither Project Data Sphere, LLC nor the owner(s) of any information from the web site have contributed to, approved or are in any way responsible for the contents of this publication.\n\n\nReferences\n\nJemal A, Siegel R, Ward E, et al.: Cancer statistics, 2009. CA Cancer J Clin. 2009; 59(4): 225–249. PubMed Abstract | Publisher Full Text\n\nRyan CJ, Smith MR, de Bono JS, et al.: Abiraterone in metastatic prostate cancer without previous chemotherapy. N Engl J Med. 2013; 368(2): 138–148. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWu JN, Fish KM, Evans CP, et al.: No improvement noted in overall or cause-specific survival for men presenting with metastatic prostate cancer over a 20-year period. Cancer. 2014; 120(6): 818–23. PubMed Abstract | Publisher Full Text\n\nHalabi S, Lin CY, Kelly WK, et al.: Updated prognostic model for predicting overall survival in first-line chemotherapy for patients with metastatic castration-resistant prostate cancer. J Clin Oncol. 2014; 32(7): 671–677. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKleinbaum DG, Klein M: Survival Analysis: A Self-Learning Text, Third Edition. Springer. 2012. 
Publisher Full Text\n\nZhang HH, Lu W: Adaptive Lasso for Cox’s proportional hazards model. Biometrika. 2007; 94(3): 691–703. Publisher Full Text\n\nPark MY, Hastie T: L1-regularization path algorithm for generalized linear models. J R Statist Soc. 2007; 69(4): 659–677. Publisher Full Text\n\nHarrington DP, Fleming TR: A class of rank test procedures for censored survival data. Biometrika. 1982; 69(3): 553–566. Publisher Full Text\n\nHastie T, Tibshirani R, Friedman J: The Elements of Statistical Learning. Springer. 2009. Publisher Full Text\n\nScher HI, Jia X, Chi K, et al.: Randomized, open-label phase III trial of docetaxel plus high-dose calcitriol versus docetaxel plus prednisone for patients with castration-resistant prostate cancer. J Clin Oncol. 2011; 29(16): 2191–2198. PubMed Abstract | Publisher Full Text\n\nTannock IF, Fizazi K, Ivanov S, et al.: Aflibercept versus placebo in combination with docetaxel and prednisone for treatment of men with metastatic castration-resistant prostate cancer (VENICE): a phase 3, double-blind randomised trial. Lancet Oncol. 2013; 14(8): 760–768. PubMed Abstract | Publisher Full Text\n\nPetrylak DP, Vogelzang NJ, Budnik N, et al.: Docetaxel and prednisone with or without lenalidomide in chemotherapy-naive patients with metastatic castration-resistant prostate cancer (MAINSAIL): a randomised, double-blind, placebo-controlled phase 3 trial. Lancet Oncol. 2015; 16(4): 417–425. PubMed Abstract | Publisher Full Text\n\nFizazi K, Higano CS, Nelson JB, et al.: Phase III, randomized, placebo-controlled study of docetaxel in combination with zibotentan in patients with metastatic castration-resistant prostate cancer. J Clin Oncol. 2013; 31(14): 1740–1747. PubMed Abstract | Publisher Full Text\n\nHung H, Chiang CT: Estimation methods for time-dependent AUC models with survival data. Can J Stat. 2010; 38(1): 8–26. Publisher Full Text\n\nShiga M: Write-up for DREAM 9.5 Prostate Cancer DREAM Challenge, Synapse Storage. 2016. 
Publisher Full Text"
}
|
[
{
"id": "17683",
"date": "25 Nov 2016",
"name": "Niels Richard Hansen",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis paper offers methods to calculate patient risk scores and predict survival times from proportional hazard models in the context of the Prostate Cancer DREAM Challenge. The author used a two-step feature selection procedure by first using a combination of the LASSO and significance testing and then using a forward selection method.\nThe challenge consisted of two parts. In one part the contestants were to assign global risk scores to patients and in the other they were to predict survival times. The author states that the results of the methods in question for the former outcome did not make it into the top-10 of the challenge. However, the paper seems to conclude that the two-step feature selection is superior to one-step feature selection. This is possibly based on a comparison with the DREAM benchmark model only. In this case, the paper would benefit from a more specific statement.\nFor the feature selection it seems unclear if the LASSO variable selection was done conditionally on the categorical predictors (without penalizing their coefficients) or marginally on only the continuous predictors.\nCross-validation seems to have been carried out incorrectly in the sense that only the second step (the forward selection) and not the first step was cross-validated. 
Whether this has consequences for the quality of the selection is unclear, but the estimated iAUC-values reported in Figure 2 are suspiciously large – and they definitely overestimate the validation iAUC.\nFor predicting survival times, the author first used a fitted proportional hazards model to estimate median survival times. Then observed survival times were regressed linearly on the predicted medians. This estimated a linear transformation, which could be used to transform predicted medians to means. The paper would benefit from a brief discussion of the motivation behind this approach. It is stated that the linear transformation “reduces both the estimation bias and variance”, which is unclear as it is not stated what we’re aiming to estimate. Arguably, estimating the means from the medians should improve the performance as the RMSE is used to score the predictions.\nMinor comments: p. 5: 1-b→1b\nTable 3: Please add information to the caption about what the symbols mean. It is clear from reading the paper that “open circle” means “selected”, but that is not self-evident.",
"responses": []
},
{
"id": "17685",
"date": "28 Nov 2016",
"name": "Ka Yee Yeung",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis paper is generally well-written, with a clear and concise description of the problem and challenge. The author adopted a two-step feature selection procedure: a penalized L1 regression for Cox PH model (R package \"glmpath\") in the first step, and forward selection in the second step. Features are selected to optimize the iAUC (integrated time-dependent AUC) in 10-fold cross validation.\nMajor comments:\nI am confused about how the two-step feature selection procedure works. The author mentioned the following\n\"Among selected features described above, my procedure further implemented a forward feature selection to narrow the list of clinical variables.\"\n\n\"This figure shows that the step maximizing AUC is the sixth step which includes six clinical variables ALP, AST, ECOG_C, HB, MI and PLT. These clinical variables were finally selected to predict global risks and survival times of patients.\"\nTherefore, I assume the second step starts with the features selected from the first step. However, the features shown in Table 2 don't appear to be a subset of the features shown in Table 1. Also, the feature \"MI\" doesn't appear to be in Table 1.\nMinor comments:\nThe difference between sub-challenge 1a and sub-challenge 1b is not documented in the Introduction. 
Please explain that in sub-challenge 1a, the submissions consist of the risk scores, while in sub-challenge 1b, the submissions consist of the predicted survival time.\n\nUnder Results and \"Selected clinical variables\", the author mentioned that \"My method removed clinical variables having a lot of missing values and then it used only 14 numerical clinical variables and 56 categorical clinical variables with less number of missing values.\". What are the exact criteria for filtering clinical variables given that there are 150+ clinical variables to start with?\n\nPlease explain what the clinical variables mean (e.g. ENTRTPC, ALP, HB, AST, ECOGC, NEU, PLT, PSA and LDH in Table 1).\n\nPlease expand the captions for Table 1 and Table 2 to put these tables in the context of the 2-step feature selection procedure.\n\nIn Table 3, I assume the circle means \"yes\" and the cross means \"no\". Please add a legend to the caption.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2678
|
https://f1000research.com/articles/5-2677/v1
|
16 Nov 16
|
{
"type": "Method Article",
"title": "An ensemble-based Cox proportional hazards regression framework for predicting survival in metastatic castration-resistant prostate cancer (mCRPC) patients",
"authors": [
"Richard Meier",
"Stefan Graw",
"Joseph Usset",
"Rama Raghavan",
"Junqiang Dai",
"Prabhakar Chalise",
"Shellie Ellis",
"Brooke Fridley",
"Devin Koestler",
"Richard Meier",
"Stefan Graw",
"Joseph Usset",
"Rama Raghavan",
"Junqiang Dai",
"Prabhakar Chalise",
"Shellie Ellis",
"Brooke Fridley"
],
"abstract": "From March through August 2015, nearly 60 teams from around the world participated in the Prostate Cancer Dream Challenge (PCDC). Participating teams were faced with the task of developing prediction models for patient survival and treatment discontinuation using baseline clinical variables collected on metastatic castrate-resistant prostate cancer (mCRPC) patients in the comparator arm of four phase III clinical trials. In total, over 2,000 mCRPC patients treated with first-line docetaxel comprised the training and testing data sets used in this challenge. In this paper we describe: (a) the sub-challenges comprising the PCDC, (b) the statistical metrics used to benchmark prediction performance, (c) our analytical approach, and finally (d) our team’s overall performance in this challenge. Specifically, we discuss our curated, ad-hoc, feature selection (CAFS) strategy for identifying clinically important risk-predictors, the ensemble-based Cox proportional hazards regression framework used in our final submission, and the adaptation of our modeling framework based on the results from the intermittent leaderboard rounds. Strong predictors of patient survival were successfully identified utilizing our model building approach. Several of the identified predictors were new features created by our team via strategically merging collections of weak predictors. In each of the three intermittent leaderboard rounds, our prediction models scored among the top four models across all participating teams and our final submission ranked 9th place overall with an integrated area under the curve (iAUC) of 0.7711 computed in an independent test set. While the prediction performance of teams placing between 2nd- 10th (iAUC: 0.7710-0.7789) was better than the current gold-standard prediction model for prostate cancer survival, the top-performing team, FIMM-UTU significantly outperformed all other contestants with an iAUC of 0.7915. 
In summary, our ensemble-based Cox regression framework with CAFS resulted in strong overall performance for predicting prostate cancer survival and represents a promising approach for future prediction problems.",
"keywords": [
"Ensemble-based modeling",
"prostate cancer",
"DREAM challenge",
"mCRPC",
"survival analysis"
],
"content": "Introduction\n\nToday, prostate cancer is one of the most prevalent cancers afflicting men in the Western world. In addition to the prevalence of this disease, the mortality rates for prostate cancer ranked fifth among the most common causes of cancer death worldwide in 2012 (http://www.cancerresearchuk.org/). In the US alone, approximately 137.9 out of 100,000 men were diagnosed with prostate cancer each year from 2008–2012, with an average annual mortality rate of 21.4 out of 100,000 men. (http://www.seer.cancer.gov/statfacts/html/prost.html). According to the Cancer Prevalence and Cost of Care Projections, the total annual cost of prostate cancer in 2016 has been estimated at 14.3 billion dollars (http://www.costprojections.cancer.gov/).\n\nOver the course of the last decade in the US, approximately 15% of prostate cancer cases were initially diagnosed with metastatic disease (stage IV). Androgen deprivation therapy (ADT) is the established treatment for these cases, but one third of patients develop resistance and their disease progresses to metastatic castrate-resistant prostate cancer (mCRPC) (https://www.synapse.org/ProstateCancerChallenge). Treatment of mCRPC has been historically challenging, and while docetaxel – the current front-line therapy for mCRPC – has been effective at improving mCRPC survival at the population level, a significant fraction of patients do not respond to treatment or prematurely discontinue treatment due to adverse events (AE)1, leading to substantial variation in the individual outcomes between mCRPC patients. 
For this reason, and because of the tremendous personal, societal, and economic burden associated with this disease, there is significant interest both in the identification of individual predictors of mCRPC prognosis and in the development of prognostic models that can be used to identify high-risk mCRPC patients.\n\nIn a recent publication2, Halabi et al. utilized data from a phase III trial consisting of over one thousand mCRPC patients to develop and test a prognostic model for overall survival among patients receiving first-line chemotherapy. The time dependent area under the curve (tAUC) was > 0.73 in both testing and independent validation data sets, suggesting strong performance of the Halabi et al. model for identifying low- and high-risk mCRPC patients. Notwithstanding the significant advances made by Halabi et al. and others toward the development of accurate prognostic models for mCRPC outcomes2–4, there remains ample room for improved prediction performance.\n\nMotivated by the potential to further improve existing risk-prediction tools along with the growing worldwide burden of prostate cancer, the Prostate Cancer Dream Challenge was launched in March 2015 and included the participation of nearly 60 teams from around the world. The Prostate Cancer Dream Challenge was composed of two distinct sub challenges; in sub challenge 1, teams competed in the development of prognostic models for predicting overall survival based on baseline clinical variables, whereas the objective of sub challenge 2 involved the development of models to predict short-term treatment discontinuation of docetaxel (< 3 months) due to adverse events (AE). To assist in the development and testing of prediction models, approximately 150 variables collected on over 2,000 mCRPC patients treated with first-line docetaxel in one of four different phase III clinical trials were used. 
Three of the four trials were combined to generate the training data set, which was used for model-building and development, while data from the remaining trial were withheld from challenge participants and used as an independent test set to evaluate each of the submitted models5.\n\nIn the present manuscript, we focus exclusively on our methodological approach to sub challenge 1. Broadly speaking, the first step of our team’s approach to sub challenge 1 involved an initial screening of the data: data cleaning and processing, creation of new variables from existing data, imputation and/or exclusion of variables with missing values, and normalization to standardize the data across trials. The final “cleaned and standardized” training data was then used to fit an ensemble of multiple Cox proportional hazards regression models whose constituent models were developed using curated, ad-hoc, feature selection (CAFS). Models developed by our team were subjected to internal cross-validation within the training data set to identify instances of model overfitting and to assist in further refinements to our prediction models. The source code utilized for our approach can be accessed via the Team Jayhawks Prostate Cancer Dream Challenge project web page (https://www.synapse.org/#!Synapse:syn4214500/wiki/231706) or directly via the GitHub repository webpage (https://github.com/richard-meier/JayhawksProstateDream).\n\n\nMaterials and methods\n\nA detailed description of the datasets used in this challenge can be found on the Prostate Cancer Dream Challenge web page (https://www.synapse.org/ProstateCancerChallenge). Briefly, the training set originated from the ASCENT-2 (Novacea, provided by Memorial Sloan Kettering Cancer Center), MAINSAIL (Celgene) and VENICE (Sanofi) trials6–8. For the 1600 patients in the training data, baseline covariate information and clinical outcomes (i.e. 
time to death and time to treatment discontinuation) were provided to participating teams for the purposes of model development and training. Although baseline covariate information for a subset of patients in the ENTHUSE-33 (AstraZeneca) trial9 scoring set was provided to participating teams (n = 157), the clinical outcomes for each of these patients were censored and withheld from teams throughout the duration of the challenge. Specifically, the ENTHUSE-33 data set (n = 470) was split into two disjoint sets that consisted of 157 and 313 patients. Whereas an undisclosed randomly selected subset of the 157 patients was used for model evaluation in each intermittent leaderboard round, the remaining 313 patients were withheld completely from participating teams and used only in the final scoring round.\n\nAll aspects of our approach, from data preprocessing to model development and cross-validation, were implemented using R version 3.2.1 (2015-06-18) (https://www.r-project.org/). Baseline covariate information on subjects comprising the training data was reformatted and normalized according to the type of variable (i.e., categorical, ordinal, numeric) and feature type (i.e., medical history, laboratory values, etc). Cleaned and normalized baseline features were then used to derive additional novel risk predictors. (https://github.com/richard-meier/JayhawksProstateDream/blob/master/dataCleaningMain.R)\n\nSeveral groups of binary variables representing patient-specific clinical information and prior medical history reported on patients were merged into new features. Three different merging types were explored: “logical or”, regular summation, and z-score weighted summation. 
For the latter, each individual feature in the training set was fit against survival time with a Cox proportional hazards model, and the resulting z-scores were used to derive weights that were proportional to each variable’s strength of association with survival (https://github.com/richard-meier/JayhawksProstateDream/blob/master/deriveHardcodedWeights.R). Summation variables were created for 3 main categories: medical history information, prior medication information and metastasis information. For each of these categories, new variables generated by merging specific subcategories (i.e. protective, harmful, total, visceral, etc.) were created.\n\nA participant’s target lesion volume (TLV) was generated by multiplying each target lesion by its size, followed by summing over all lesions within that participant (https://github.com/richard-meier/JayhawksProstateDream/blob/master/src/lesion_volume.R). To impute the TLV for the ASCENT-2 trial, we calculated the average TLV per lesion within individuals who survived or died in the MAINSAIL or VENICE trials, and multiplied these separate averages by the number of non-bone lesions found in the ASCENT-2 data. To classify a feature within each category as “protective” or “harmful”, we used its z-score from an individual fit against the outcome. A feature was labeled \"protective\" if its z-score was greater than 1.64 and \"harmful\" if its z-score was smaller than -1.64.\n\nPrincipal component analysis (PCA) was used to split numerical laboratory values into components that best explained their variation (see above: “deriveHardcodedWeights.R”). The top PCs were then treated as new features. In order to address issues or findings involving some specific variables, additional features were created: The ECOG performance status score was included as both a continuous and a categorical variable. 
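The merging strategies and the ±1.64 subcategory cutoffs described above are implemented in the linked R scripts; purely as an illustration, they can be sketched in Python. All function names here are hypothetical, and the per-feature Cox z-scores are assumed to have been computed beforehand.

```python
# Illustrative sketch only: the paper's pipeline is in R (see the linked
# deriveHardcodedWeights.R). Each row is one patient's binary indicators
# for the features in a category (e.g. medical history flags).

def merge_logical_or(rows):
    """'logical or' merge: 1 if any binary indicator in the group is set."""
    return [int(any(r)) for r in rows]

def merge_sum(rows):
    """Regular summation merge: count of set indicators per patient."""
    return [sum(r) for r in rows]

def merge_z_weighted(rows, z_scores):
    """z-score weighted summation: weights proportional to each feature's
    strength of association with survival (z-scores precomputed from
    single-feature Cox fits)."""
    return [sum(z * x for z, x in zip(z_scores, r)) for r in rows]

def label_subcategory(z_score):
    """Subcategory labeling with the cutoffs stated in the text; 'neutral'
    is a placeholder for features the paper assigns to neither group."""
    if z_score > 1.64:
        return "protective"
    if z_score < -1.64:
        return "harmful"
    return "neutral"
```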
Age groups were also recoded as an ordinal age risk variable for which 0 represented patients older than 75 years, 1 represented patients younger than 65 years and 2 represented patients with ages between 65 and 75 years. The latter was motivated by our observation of a non-linear trend between age and survival time.\n\nRace was recoded into a binary variable where 1 referred to patients labeled as “white” or “other” and 0 represented patients that did not fall into one of those two categories (e.g. \"black\", \"asian\", etc.). The features “harm_pro” and “harm_pro2” were created by fitting the summation variables of the medical history subgroups “harmful” and “protective” against the outcome and obtaining the z-scores of these subgroup summation variables. The difference between the two features was that harm_pro exclusively fitted the two summation variables, whereas harm_pro2 also utilized a set of important predictor variables for the initial fit. The two z-score weighted sums (corresponding to the two sets of features utilized for the previously mentioned fit) of these summation variables then correspond to the two new features. (https://github.com/richard-meier/JayhawksProstateDream/blob/master/src/add_additional_features.R)\n\nOur methodological framework utilized an ensemble of Cox proportional hazards regression models that were found to be individually competitive in predicting survival. For each patient, the ensemble-based risk scores were generated as a weighted sum of the individually estimated risk scores from separate Cox-regression models, fit using the “coxph” function in the “survival” R-package10 (Figure 1C). Feature selection among the competitive risk-prediction models that constituted our ensemble was undertaken by a method we call curated, ad-hoc, feature selection (CAFS). This method attempts to maximize the prediction performance of a given model by iteratively including and excluding features from a baseline initial model. 
The method is greedy in the sense that in each step of the algorithm, only the model candidates that achieve the current \"local best\" performance are selected. Each iteration started with a group of experts making two executive decisions based on a set of possible model candidates for which performance was evaluated in prior iterations. First, one model was nominated as the best current model and a decision was made whether to expand or shrink the model, or terminate the procedure and keep the model in its current form (Figure 1A). Choosing the current best model was guided by a candidate’s estimated performance, performance of the previous best model, as well as knowledge of the researchers as to whether the form and components of a given model were reasonable in the context of the problem at hand. An example of the latter case would be that a newly introduced interaction term between completely unrelated features might be rejected after evaluation, even though it technically achieved the current best performance.\n\n(1A) Competitive prediction models were built individually by a curated, ad-hoc feature selection procedure. In each step researchers picked a new best model from the set of current models based on an optimization criterion and decided how it would be processed. (1B) Models were optimized by either forward selection, in which a new feature was added, or backward selection, in which a feature that had become obsolete was removed. Both selection methods generated a set of new models for which performance was predicted via in-depth cross-validation. (1C) Once a variety of competitive prediction models had been created, models were combined into an ensemble, which averaged their individual predictions in order to increase performance.\n\nModel reduction was done via ad-hoc backward selection (Figure 1B). In this procedure a set of new models was generated by individually excluding each parameter or feature present in the current model. 
For each of these models, performance was evaluated based on a previously chosen optimization criterion, i.e., integrated time-dependent area under the curve (iAUC). The criterion was estimated via a cross-validation procedure in which the training set was repeatedly split into two random subsets of a fixed size. The first subset was used to estimate parameters of a given model, whereas the second subset was used to predict the outcome using the previously estimated parameters and to calculate the optimization criterion based on comparing the prediction with the true outcome. In our study, we utilized two-thirds for the parameter estimation subset, i.e., first subset, while the remaining one third comprised the second subset. The average of the calculated optimization criterion values, obtained from all random splits, served then as a performance estimate. We used 10,000 cross-validation steps for each model in our study to ensure stability of the average performance. The new models and performance estimates were then used as the basis for subsequent iterations.\n\nExpansion of a model was accomplished using an ad-hoc forward selection procedure (Figure 1B). In this procedure several new models were created for each feature within the feature space. Each subset of new models contained one base model that included only main effect terms for new features, i.e., no interaction terms included. All other models in the subset further expanded this base model by individually introducing an interaction term with each element already in the previous best model.\n\nPerformance of each new model was again assessed via the cross-validation procedure. Since this step iterated over the feature space, it created a large amount of different models. To make this step computationally feasible, the number of cross-validation iterations had to be reduced. In our study, 500 cross-validation steps per new model were utilized. 
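The repeated-random-split estimator described above can be sketched as follows. This is illustrative Python only (the actual implementation is the team's linked R code); here `fit` and `criterion` are placeholders for fitting a Cox model on the first subset and computing the optimization criterion (iAUC in the paper) on the second.

```python
# Sketch of the repeated-split cross-validation used to estimate a model's
# performance criterion: repeatedly split the training set 2/3 : 1/3, fit on
# the first subset, score on the second, and average over all splits.
import random

def estimate_performance(data, fit, criterion, n_splits=10000,
                         train_frac=2/3, seed=0):
    """Average `criterion` over `n_splits` random train/test splits."""
    rng = random.Random(seed)
    scores = []
    for _ in range(n_splits):
        shuffled = data[:]                     # random split of the training set
        rng.shuffle(shuffled)
        cut = int(len(shuffled) * train_frac)
        train, test = shuffled[:cut], shuffled[cut:]
        model = fit(train)                     # estimate parameters on subset 1
        scores.append(criterion(model, test))  # score predictions on subset 2
    return sum(scores) / len(scores)           # performance estimate
```

In the paper, 10,000 such splits were used when refining a single model and 500 per candidate during the (much larger) forward-selection search, with the top 30 candidates re-estimated at 10,000 splits.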
(https://github.com/richard-meier/JayhawksProstateDream/blob/master/src/modelTuning.R)\n\nFinally, since the variances of these performance estimates were much higher than in the shrinkage step, the top 30 performing models were chosen and performance was re-estimated via 10,000 fold cross-validation. This set of new models and performance estimates was then used in the next iteration. Once iterations provided only marginal performance increases, the procedure was terminated and a final model was declared. Different models for the ensemble were found either by choosing different intermediate models as the current best and branching off a certain path, or by choosing different initial models.\n\nEach of the sub challenges in the Prostate Cancer Dream Challenge had its own prediction scoring metrics. In sub challenge 1A, participants were asked to submit a global risk score and time dependent risk scores, optimized for 12, 18 and 24 months. These risk scores were evaluated utilizing two scoring metrics: a concordance index (cIndex), and an integrated time dependent area under the curve (iAUC; 6–30 months). The time specific risk scores were assessed using AUC’s computed using Hung and Chiang’s estimator of cumulative AUC11. In sub challenge 1B, participants were asked to predict the time to event (death). The predictions of time to event were scored utilizing the root mean squared error (RMSE), using patients with known days to death.\n\nWhen applying CAFS, we utilized the iAUC calculated from the predicted risk scores as an optimization criterion. This measure was also used by the challenge organizers for performance assessment in the scoring rounds for sub challenge 1A. While participants were asked to predict the risk score for overall survival based on patients' clinical variables, they were also tasked to predict the time to event (TTE) in sub challenge 1B. 
We used the risk score for each patient to model the TTE:\n\nTTEi = f(riskScorei) + εi\n\nwhere riskScorei corresponds to the risk score calculated in sub challenge 1A for the ith patient and f is an unknown smoothing function. We estimated f using a Generalized Additive Model (GAM) via the “gam” function within the “mgcv” package in R12. When regressing TTEs on risk we used only the subset of individuals who died.\n\n\nResults\n\nThe principal component analysis with all laboratory values revealed that the first principal component was highly correlated with patient survival. Furthermore, across all laboratory values, only a subset of six features (baseline levels of: albumin, alkaline phosphatase, aspartate aminotransferase, hemoglobin, lactate dehydrogenase and prostate specific antigen) contributed significantly to explaining the variation in said first component. Thus, in the first PC only these six laboratory values were used during model building and development. In addition to the first principal component, several other newly created metavariables were identified as clinically relevant predictors by our model building procedure. Three z-score weighted sums merging metastases locations, medical history and prior medication were included in our prediction models. The “logical or” merged variable, whether or not a patient had any known medical history issues, was also utilized. The protective versus harmful subcategorization was only included in the models in the form of the sum of protective medical history features. However, this category only included a single feature, vascular disorders (yes/no).\n\nWe developed 5 competitive prediction models (M1 – M5) that were used in our Cox proportional hazards regression ensemble (Figure 2). All models were developed by either refining a previous model via CAFS or by building a model from the bottom up via CAFS. M1 used the best model found by manually selecting promising features as its initial model. 
M2 used an intermediate model from the CAFS procedure of M1 to deliberately branch off and provide a similar, yet different model. M3 and M5 were both built by using an initial model solely utilizing the strong predictors target lesion volume and principal component 1, but branching off in early iterations. M4 was built by using an initial model utilizing target lesion volume and the alkaline phosphatase level under the restriction that principal component 1 was excluded from the feature space.\n\n(2A) The ensemble consisted of five different models, M1 to M5, which ended up sharing many feature types even though they were individually generated under different conditions. (2B) All models made use of a similar number of parameters and achieved comparable performance in cross-validation. Performance further increased when using the model ensemble.\n\nWhile no single feature was utilized in every model M1–M5, five different features were shared between four models, six features between three models, four features between two models and eight features were unique to a model (Figure 2A). Each model had at least one unique feature. Between two and four interaction terms (two-way interaction terms) were present in all of the observed models (Figure 2B). One interaction was shared between the models M3, M4 and M5, while two interactions were shared between models M1 and M2. Including components of newly derived features, eight features that were included in the original model by Halabi et al. in some form were also utilized in the model ensemble. In total, the ensemble contained 38 coefficients, out of which 11 were pairwise interaction terms across all models.\n\nThe estimated iAUC during performance assessment was found to be stable up to approximately three decimals when using 10,000 fold cross-validation. 
Similar estimated performance within the range of 0.005 iAUC difference was achieved between the competitive prediction models, the highest total iAUC being 0.745. Optimal weights were chosen based on randomly initializing weights 100 times and estimating performance. Performance tended to improve as the maximum pairwise difference between the ensemble weights decreased. The best possible performance was estimated when choosing equal weights for all models. This ensemble was chosen as the best model. Utilizing the ensemble led to an estimated performance increase of 0.012 iAUC.\n\nDuring the three leaderboard rounds the team explored and submitted various methodologies. Top performing submissions were always Cox proportional hazards models that outperformed more sophisticated approaches such as generalized boosted regression models and random survival forests. From scoring round 2 onward, single models utilizing CAFS were also submitted. In all intermittent leaderboard rounds, at least one of our submitted entries ranked among the top 4 performing models of sub challenge 1A (Figure 3A). In sub challenge 1B, at least one submission was within the top 3 performing models, with the exception of the second leaderboard round, where our best model ranked number 12. Our models achieved performances ranging from 0.792 to 0.808 iAUC in 1A and from 172.51 to 196.25 RMSE in 1B. In the final scoring round, team FIMM-UTU5 significantly outperformed all other contestants with an iAUC of 0.7915 (Figure 3B). Our submission for 1A that utilized the CAFS ensemble achieved rank 9 with an iAUC of 0.7711. The performances of teams ranking from 2nd to 10th were very similar. While the difference in performance between rank 1 and 2 was 0.0126 iAUC, the difference in performance between our method and rank 2 was only 0.0078 iAUC. 
Our submitted model ensemble also successfully outperformed the previous best model by Halabi et al.2, which was placed at rank 36 with an iAUC of 0.7429. Sub challenge 1B was won simultaneously by 6 teams out of which our method achieved rank 3.\n\n(3A) Submitted models were consistently ranked at the top of the leaderboards during the scoring rounds before the final submission. Models built via the CAFS procedure were submitted starting with the second leaderboard round. (3B) The final challenge submission made use of the described model ensemble approach and was placed at rank 9 in sub challenge 1A and at rank 3 in sub challenge 1B.\n\n\nDiscussion\n\nMany feature types present in the original model by Halabi et al.2 were also independently picked up and retained by CAFS. This solidifies the idea that these might be key components influencing survival. That five out of these eight were also involved in the first principal component, one of the strongest predictors, further supports this. Another set of potentially interesting predictors is those shared between three or more models.\n\nIt is debatable whether the substantial overlap between the various sub-models points towards the validity of selected features and the developed approach, or a potential bias in the feature selection procedure. However, the former appears more likely in the light of the approach’s good performance on new data in the competition.\n\nThe included interaction terms are difficult to interpret. There is no guarantee that an interaction is modeling a direct relationship and some terms might be artifacts of higher order interactions or confounding issues. Also, when solely including terms into the model based on the optimization criterion in each step of CAFS, there is a bias to include interaction terms. 
Since they introduce more parameters into the model than a main effect, they have more opportunity to improve the model within each step, even though including two different main effects in a row might be more beneficial. While our team was aware of this issue and cautious with the selection of sub-models, this still leaves potential for making suboptimal choices. This weakness could potentially be addressed in the future by switching to a parameter-count-based iteration rather than a feature-type-based iteration.\n\nThe recoding of the age groups remains problematic. Intuitively, it does not make sense that the order “oldest, youngest, in-between” would be related to the outcome when disease progression usually worsens with age. A possible explanation might be that the oldest patient group is confounded with a subset of people that are resistant to the disease and have already survived for a long time. Further research is required to validate this effect.\n\nOverall the presented method successfully built a robust predictor for the target outcome. Evidence for this is provided by the fact that the estimated performance via in-depth cross validation (iAUC = 0.757) was close to the reported performance on the larger, final leaderboard set (iAUC = 0.771) and the fact that our models were among the top performing candidates throughout the entire challenge. It should also be highlighted that the required human intervention in each selection step gives the team of researchers a lot of control, which can be very useful for introducing knowledge about the feature space into the selection process. An example of this benefit is that, despite the weakness in the implementation pointed out above, the team was able to account for it by rejecting inclusions of interactions that did not have a great enough impact. 
If desirable, early branches of the selection process can be tailored towards features with a known connection to the outcome, when multiple feature inclusions provide similar performance benefits.\n\n\nConclusion\n\nThe presented method generated a model ensemble that was able to outperform the previous best efforts to predict survival in prostate cancer patients. The developed model ensemble also successfully competed with the top performing research teams in the Prostate Cancer Dream Challenge and was among the winning teams in sub challenge 1B. We attribute this success to careful data cleaning, our efforts to derive novel features and the fact that skeptical human decision-making is integral to each iteration of the curated ad-hoc feature selection. Due to its general applicability to model building, especially in exploratory settings, the approach holds promise for researchers around the world. Future studies will need to validate the presented, potentially disease-associated features, and potential weaknesses in the CAFS procedure should be investigated and addressed.\n\n\nData availability\n\nThe Challenge datasets can be accessed at: https://www.projectdatasphere.org/projectdatasphere/html/pcdc\n\nChallenge documentation, including the detailed description of the Challenge design, overall results, scoring scripts, and the clinical trials data dictionary can be found at: https://www.synapse.org/ProstateCancerChallenge\n\nThe code and documentation underlying the method presented in this paper can be found at: http://dx.doi.org/10.5281/zenodo.4906313",
"appendix": "Author contributions\n\n\n\nPC thoroughly investigated the use of LASSO, elastic-net, and ridge regression as an alternative for both model selection and improving prediction. JD helped with overall code cleaning and documentation and also built predictive models for treatment discontinuation (sub challenge 2). SE acted as a liaison between the Urology Department at KUMC and the Department of Biostatistics at KUMC. She provided indispensable advice about variables to consider and those that could be combined in the creation of novel risk predictors. BF looked into non-missing at random imputation schemes for the lab values, proposed new variables/models to consider for sub challenge 1, and assisted in the development of prediction models for treatment discontinuation. SG helped with model building, code organization and presentation, wrote data cleaning function for medication history, and created new treatment history variables. DK created code to identify novel interactions between risk predictors and identified several variables to improve cross-validation performance on the training data. RM led organization of the overall code, generated code to perform intensive stepwise selection based on iAUC, and assisted in the cleaning and transforming lab data. RR helped with model building, code organization and presentation, wrote data cleaning function for medical history, and created new medical history variables. 
JU focused on initial model building, cleaning and deriving metastases variables, PCA for lab values, sub challenge 1b, and write-up of the synapse wiki.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by the National Institutes of Health (NIH) grants: 1KL2TR000119 Kansas IDeA Network of Biomedical Research Excellence (K-INBRE) Bioinformatics Core supported in part by the National Institute of General Medical Science award P20GM103418, and the Biostatistics and Informatics Shared Resource supported in part by the National Cancer Institute Cancer Center Shared Grant award P30 CA168524.\n\n\nAcknowledgements\n\nWe would like to offer special thanks to the organizing committee of the Prostate Cancer Dream Challenge and the participants of the ASCENT-2 (Novacea, provided by Memorial Sloan Kettering Cancer Center), MAINSAIL (Celgene), VENICE (Sanofi), and ENTHUSE-33 (AstraZeneca) trials. This publication is based on research using information obtained from www.projectdatasphere.org, which is maintained by Project Data Sphere, LLC. Neither Project Data Sphere, LLC nor the owner(s) of any information from the web site have contributed to, approved or are in any way responsible for the contents of this publication.\n\n\nReferences\n\nSchallier D, Decoster L, Braeckman J, et al.: Docetaxel in the treatment of metastatic castration-resistant prostate cancer (mCRPC): an observational study in a single institution. Anticancer Res. 2012; 32(2): 633–41. PubMed Abstract\n\nHalabi S, Lin CY, Kelly WK, et al.: Updated prognostic model for predicting overall survival in first-line chemotherapy for patients with metastatic castration-resistant prostate cancer. J Clin Oncol. 2014; 32(7): 671–7. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nChang K, Kong YY, Dai B, et al.: Combination of circulating tumor cell enumeration and tumor marker detection in predicting prognosis and treatment effect in metastatic castration-resistant prostate cancer. Oncotarget. 2015; 6(39): 41825–41836. PubMed Abstract | Publisher Full Text | Free Full Text\n\nvan Soest RJ, Templeton AJ, Vera-Badillo FE, et al.: Neutrophil-to-lymphocyte ratio as a prognostic biomarker for men with metastatic castration-resistant prostate cancer receiving first-line chemotherapy: data from two randomized phase III trials. Ann Oncol. 2015; 26(4): 743–9. PubMed Abstract | Publisher Full Text\n\nGuinney J, Wang T, Laajala TD, et al.: Prediction of overall survival for patients with metastatic castration-resistant prostate cancer: development of a prognostic model through a crowdsourced challenge with open clinical trial data. Lancet Oncol. 2016. Publisher Full Text\n\nScher HI, Jia X, Chi K, et al.: Randomized, open-label phase III trial of docetaxel plus high-dose calcitriol versus docetaxel plus prednisone for patients with castration-resistant prostate cancer. J Clin Oncol. 2011; 29(16): 2191–2198. PubMed Abstract | Publisher Full Text\n\nPetrylak DP, Vogelzang NJ, Budnik N, et al.: Docetaxel and prednisone with or without lenalidomide in chemotherapy-naive patients with metastatic castration-resistant prostate cancer (MAINSAIL): a randomised, double-blind, placebo-controlled phase 3 trial. Lancet Oncol. 2015; 16(4): 417–425. PubMed Abstract | Publisher Full Text\n\nTannock IF, Fizazi K, Ivanov S, et al.: Aflibercept versus placebo in combination with docetaxel and prednisone for treatment of men with metastatic castration-resistant prostate cancer (VENICE): a phase 3, double-blind randomised trial. Lancet Oncol. 2013; 14(8): 760–768. 
PubMed Abstract | Publisher Full Text\n\nFizazi K, Higano CS, Nelson JB, et al.: Phase III, randomized, placebo-controlled study of docetaxel in combination with zibotentan in patients with metastatic castration-resistant prostate cancer. J Clin Oncol. 2013; 31(14): 1740–1747. PubMed Abstract | Publisher Full Text\n\nTherneau TM: A Package for Survival Analysis in S. version 2.38, 2015. Reference Source\n\nHung H, Chiang CT: Estimation methods for time-dependent AUC models with survival data. Can J Stat. 2010; 38(1): 8–26. Publisher Full Text\n\nWood SN: Fast stable restricted maximum likelihood and marginal likelihood estimation of semiparametric generalized linear models. J R Stat Soc Series B Stat Methodol. 2011; 73(1): 3–36. Publisher Full Text\n\nChalise P, Dai J, Ellis S, et al.: JayhawksProstateDream: First release (PCDC submission). Zenodo. 2016. Data Source"
}
|
[
{
"id": "17706",
"date": "06 Dec 2016",
"name": "Stephen R. Piccolo",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors describe their participation in Subchallenge 1 of the Prostate Cancer DREAM Challenge. Their model performed well, even though it was not considered a top performer. They were creative in the way they designed their approach and tried many different options, which helped to provide insights into this particular problem as well as general strategies for model selection and optimization. Overall I was pleased with the quality of the writing and the level of detail used in the descriptions of methods and results. In particular, I like that they mentioned specific software versions and provided direct links to the code that they used for specific tasks. I did have a few questions and noticed a few gaps, which I have outlined below.\nMajor points:\nThe manuscript provides context about the challenge as a whole. This was helpful. For example, the authors described how their approach performed in comparison to the other approaches. However, it would have been much more insightful if the authors had provided at least a brief description of the approach used by FIMM-UTU and how that approach compared to their own and what this team might have done to perform better. In hindsight, what can they learn from this?\n\nThe manuscript describes the CAFS approach in fairly vague terms. It makes sense that the authors used intuition to optimize the feature selection. Figure 2 also provides some insight into feature and model selection. 
However, it is difficult to understand much about the thought process that went into these decisions. If someone else wanted to repeat this approach, how would they go about it? Are there any general guidelines that they used in making these decisions? Maybe they could provide an example that illustrates this process. Because of this, I am hesitant to accept the claim that \"the approach is promising in being useful for researchers around the world.\"\n\nThe manuscript mentions imputation and dealing with missing values in a couple of places. But very little, if anything, is stated in the methods (or results) about how missing values were actually handled. The authors should be more explicit in describing this.\nMinor points:\nIn some cases, features may have been correlated strongly with each other. For example, the z-score weighted sum values and “logical or” merged variables were derived from the same underlying data. Did the authors account for these dependencies in their models in any way? If so, how?\n\nThe authors used the class labels extensively in the training set to optimize their models. For example, their z-scores were generated based on the class labels, and they trained a large number of different models on the same data set. Thus it is impressive that their iAUC values generalized as well as they did on the validation set. However, it was unclear (or perhaps I missed it) whether the authors set aside any part of the training set as a pseudo-validation set. Figure 1C suggests that they did, but I didn't see any explicit explanation of this.\n\nFor the \"weighted sum\" approach, it was a bit unclear exactly how the weights were calculated. 
In addition, the manuscript states that, \"Optimal weights were chosen based on randomly initializing weights 100 times and estimating performance.\" What range of weights were used and how were they varied?\n\nThe authors state that, \"Different models for the ensemble were found either by choosing different intermediate models as the current best and branching off a certain path, or by choosing different initial models.\" At an abstract level, this makes sense, but it is hard to know exactly what this means. It would help to be more explicit on this part.\n\nIn the Discussion, it says, \"Another set of potentially interesting predictors are those shared between three or more models.\" But it is unclear what these predictors are (or perhaps I missed it). Mentioning these predictors explicitly would be helpful.\n\nIt's a little confusing to have source code in two different locations (Zenodo and GitHub). I'd suggest just pointing people to Zenodo since the data files are there, in addition to the code. Or maybe the two are integrated? But again, if that is the case, I would suggest just using one or the other.\n\nI am not sure you really need to mention the top-performing team in the abstract. My recommendation would be to focus the abstract more on your solution rather than on the challenge results.\n\nThe authors use URLs as citations in the Introduction (e.g., http://www.cancerresearchuk.org/ and http://www.seer.cancer.gov/statfacts/html/prost.html and https://www.synapse.org/ProstateCancerChallenge). It seems that some of these should instead be references to peer-reviewed publications.\n\nThe authors state that, \"suggesting strong performance of the Halabi et al. 
model for identifying low- and high-risk mCRPC patients.\" What does this mean, more specifically, from a clinical standpoint?\n\nThe author contributions section states, \"assisted in the development of prediction models for treatment discontinuation.\" This doesn't seem relevant to this paper.\n\nI tried to install the R package dependencies that are described in the README file. However, it gave me an error message saying that some of the packages could not be found. To solve this, I had to specify a repository in the code (\"repos\" parameter of install.packages). It would be helpful if the authors changed this part of the code so that it will run out of the box.",
"responses": []
},
{
"id": "20536",
"date": "27 Feb 2017",
"name": "Ka Yee Yeung",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors documented the strategies they developed and used to predict the risk scores (Sub-challenge 1a) and survival times (Sub-challenge 1b) of prostate cancer patients as part of their participation in the DREAM 9.5 Prostate Cancer Challenge. The paper was well-written with ample of details for each step of the pipeline, with very nice figures and tables. The models performed well and ranked well in the challenge. However, there are still some areas that need further elaborations:\n\nThe authors mentioned missing data in the “Introduction” section. However, the data imputation techniques used to replace the missing data, such as the lesion volume in the ASCENT-2 trial, are not described in “Materials and Methods”. The reviewers would like to request the authors to explain what have been done to replace the missing data in all the clinical trial studies with missing data and the rationale of their strategies.\n\nThe authors developed a “curated, ad-hoc, feature selection” (CAFS) strategy to identify predictors. The reviewers would like to request additional details on how this method selects the features, and especially on how the weights of different features are computed.\n\nFigure 2 showed the representation of each model and the variables involved. If the 6 considered highly correlated variables with patients survival (ALB, ALP, etc.) 
are already included in the first principal component (PC1) (which is used in 4/5 of the cross-validation models), wouldn't it be redundant to have the same features included in the model? What considerations went into this decision?\n\nAlso related to the previous question, model M4 was built with the restriction of not including PC1, which is claimed to be a significant predictor. However, as seen in Fig 2B, this model still performed reasonably well -- any observations or comments on this?",
"responses": []
},
{
"id": "17707",
"date": "20 Mar 2017",
"name": "Russell Greiner",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors describe the model they submitted to the recent “Prostate Cancer DREAM Challenge - Sub-Challenge - 1”, which was ranked among the top models in the challenge. In particular, they describe their feature selection process, which they mostly credit for their success in the challenge. The manuscript summarized their approach and the results from the challenge in an adequate manner. However, we have some concerns:\nMain Critique: Their CAFS feature selection (FS) process seems to related to the wrapper feature selection methods, but includes a human expert in the loop. This greatly reduces the reproducibility of this work. Also, the authors have not clearly listed the guideline followed by the experts when deciding on features, which appears to further reduce the usefulness of this approach in general.\nMinor Points:\nThe final selected model from the FS process is not explicit from Figure 1B. It would also be better if the authors explicate CAFS’s boundary between expert intervention vs data-driven selection.\n\nThe basic algorithm embodied several assumptions -- eg, p6 mentions 6 features of PC1. Why 6? Also why just use sum of \"protective medical history features\"? Why not include harmful features? 
We think we understand these decisions, but the paper would be improved if it better motivated them.\n\nTheir approach of combining the ensemble with randomly selected weights seems to introduce instability into the final prediction across different runs. It would be worthwhile to give more details on this step, describing how the proposed method compares with a simple mean (or sum) and listing the motivations for this choice -- and to relate this to the claim that equal weights for all models give the best performance.\n\nIt was great that the authors listed the results from the winning models to give the reader a good idea about the challenge itself.\n\nFigure 2A was a well thought-out table, which gives the reader insight into the selected features.\n\nThe flow diagram in Figure 1A was difficult to process as it does not show a single flow (e.g. left-to-right or top-down). Perhaps it would be improved with a more streamlined flow diagram of the FS process.\n\nThe authors mention they experimented with other survival prediction models, such as random survival forests and generalized boosted regression models. It would be useful to show results from those models, for a better comparison.\n\nThe authors have used links to certain web pages in the text. It would be more in line with academic publications if proper citations were used.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2677
|
https://f1000research.com/articles/5-2675/v1
|
16 Nov 16
|
{
"type": "Method Article",
"title": "Using complex networks for refining survival prognosis in prostate cancer patient",
"authors": [
"Massimiliano Zanin"
],
"abstract": "Complex network theory has been used, during the last decade, to understand the structures behind complex biological problems, yielding new knowledge in a large number of situations. Nevertheless, such knowledge has remained mostly qualitative. In this contribution, I show how information extracted from a network representation can be used in a quantitative way, to improve the score of a classification task. As a test bed, I consider a dataset corresponding to patients suffering from prostate cancer, and the task of successfully prognosing their survival. When information from a complex network representation is added on top of a simple classification model, the error is reduced from 27.9% to 23.8%. This confirms that network theory can be used to synthesize information that may not readily be accessible by standard data mining algorithms.",
"keywords": [
"Prostate cancer",
"survival prognosis",
"complex networks",
"classification"
],
"content": "Introduction\n\nConstructing prognostic models for different types of cancers is a problem that is attracting increasing attention, due to the high impact that these models may have in the clinical treatment. This is clearly related to the movement of personalized medicine (Jain, 2005; Samani et al., 2010; Van't Veer & Bernards, 2008). As more and more data describing human biology are available, both for healthy and pathological conditions, coming from heterogeneous sources (e.g. from all the -omics fields), there is a well-founded hope that such data may be of help to improve the treatment of individual patients, personalizing the way drugs and therapies are provided.\n\nWhen one ought to extract a model from a collection of data, the customary solution is to resort to data mining algorithms. In the case of cancer prognosis, this has resulted in the development of numerous models - see, for instance, Alexe et al., 2006; Gupta et al., 2011; Halabi et al., 2003; Halabi et al., 2014; Mangasarian & Wolberg, 2000 and Quaranta et al., 2005 for a few examples. Data mining nevertheless presents some drawbacks, the most important of which is the way features are analyzed. Elements are considered individually, or by being pairwise combined; yet, data mining does not provide a way to create a global picture of the available data.\n\nIn the last decade, a novel solution has been proposed. The complex network theory provides an elegant way for representing the structure created by the interactions between the elements of a complex system (Boccaletti et al., 2006; Strogatz, 2001). The result is encoded in an adjacency matrix, which can then be analyzed by means of multiple metrics (Costa et al., 2007). 
Applications range from the characterization of social networks to the internet and the human brain (Costa et al., 2011).\n\nIn this contribution, I explore the possibility of using complex networks as an instrument for improving a model of survival prognosis for patients with metastatic castration-resistant prostate cancer (mCRPC) treated with docetaxel. In order to achieve this, I compare two models. The first is a classification model, i.e. one classifying between surviving and non-surviving patients, which only uses raw features like baseline lab results and patient vital signs. The second combines such information with structural metrics extracted from a network representation of the same data. The hypothesis tested here is that complex networks should synthesize information present in the raw data in a new way, which should be reflected in an improved classification score (Zanin et al., 2014b).\n\nThe paper is organized as follows: first, I describe the main methods of the analysis, with a special focus on the network reconstruction methodology, the metrics used to characterize the resulting networks, and the dataset considered here; afterwards, the results obtained are presented, i.e. the comparison of the two classification models; finally, some conclusions are drawn.\n\n\nMethods\n\nReconstructing a network representation of a given system entails two steps. First, one needs to define the elements of such a system. This is usually constrained by the type of available data; thus, in this case, the nodes of the network correspond to the different available biomarkers.\n\nSecond, one should detect when two such elements are connected by some kind of relationship. If a priori knowledge is available, e.g. information about how different metabolites or proteins are connected in a pathway, such information can directly be mapped into the network. Alternatively, if a time evolution (i.e. 
a time series) is available for each element, functional links can be established between them, by means of metrics like correlations or causalities. Note that this last option entails two important problems: first, a time evolution must be available, which is not straightforward in the case of biomedical analyses; and second, functional links represent the “co-evolution” of factors, while in some cases, and specifically in the diagnosis of a disease, it is more interesting to detect “deviations” from the expected (healthy) behavior.\n\nRecently, a new methodology for network reconstruction has been proposed, which solves the two aforementioned problems (Zanin & Boccaletti, 2011; Zanin et al., 2014). Starting with a set of scalar values, pairs of elements are analyzed by first detecting whether a standard relation is present between them in a set of control subjects; afterwards, data corresponding to new subjects are compared with that relation, and a link is created between two nodes if they present an abnormal deviation. The resulting object is called a parenclitic network, named after the Greek term for “deviation”, originally used by the Greek philosopher Epicurus to designate the spontaneous and unpredictable swerving of free-falling atoms (Zanin et al., 2014).\n\nIn mathematical terms, suppose n healthy subjects are described by a vector of features, such that the i-th of them is represented by f_i = (f_{i,1}, f_{i,2}, … , f_{i,n_f}). All the n_f features are mapped into nodes of the network, which is now described by an adjacency matrix A of size n_f × n_f. As the final aim is to construct a network for each subject under study, suppose a new subject j, with its corresponding vector f_j, is introduced into the system. The reconstruction process should analyze each pair of features, denoted by k and l, to determine whether they deviate from the expected (healthy) behavior. 
For the sake of simplicity, in this work we consider that the healthy relation can be obtained as a linear regression between both features:\n\nf_{.,l} = α_{k,l} + β_{k,l} f_{.,k} + ε_{k,l}.\n\nHere, f_{.,k} represents the vector of values of feature k for all healthy subjects, and α_{k,l} and β_{k,l} are the two parameters of the best linear fit. Additionally, ε_{k,l} is the vector of fit residuals; note that a linear relation may not describe the relationship between k and l well, and that this vector will be key to understanding its statistical significance. Now, suppose a new subject h is available, whose health condition is unknown, and for whom one wants to create the corresponding network representation. A link between nodes k and l is then created, with a weight equal to its distance from the previously detected normal relation:\n\nw_{k,l} = (f_{h,l} - (α_{k,l} + β_{k,l} f_{h,k})) / σ_{k,l},\n\nwhere σ_{k,l} is the standard deviation of ε_{k,l}. In other words, w_{k,l} represents the Z-score of the distance of subject h from the normal behavior of features k and l - large values of w_{k,l}, both positive and negative, indicate that the subject under analysis presents an abnormal behavior, which may be symptomatic of a disease. When the process is repeated for all pairs of features, the result is a parenclitic network for each patient.\n\nIntuitively, healthy subjects should be associated with random-like networks, as strong links may appear due to the intrinsic noise of biological processes, but should not form coherent structures; on the other hand, patients should present networks with non-trivial topologies. Also, the more a network differs from a random structure, the more severe the pathology is expected to be.\n\nIn order to transform the obtained networks into a representation suitable for a data mining (classification) algorithm, they have first been binarized, i.e. links with a weight |w_{k,l}| ≤ 0.5 have been discarded. 
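As an illustration, the reconstruction just described can be sketched as follows. This is a minimal sketch rather than the paper's code: the array names and the use of numpy's polyfit for the per-pair linear fits are assumptions.

```python
import numpy as np

def parenclitic_weights(healthy, subject):
    """Z-score deviation w[k, l] of one subject from the linear relation
    fitted between every pair of features on the healthy/control group."""
    n_features = healthy.shape[1]
    w = np.zeros((n_features, n_features))
    for k in range(n_features):
        for l in range(n_features):
            if k == l:
                continue
            # Healthy relation: f_l ~ alpha + beta * f_k
            beta, alpha = np.polyfit(healthy[:, k], healthy[:, l], 1)
            residuals = healthy[:, l] - (alpha + beta * healthy[:, k])
            sigma = residuals.std()
            # Distance of the new subject from the fitted relation,
            # in units of the residual standard deviation
            w[k, l] = (subject[l] - (alpha + beta * subject[k])) / sigma
    return w

def binarize(w, threshold=0.5):
    """Keep only links whose absolute weight exceeds the threshold."""
    return (np.abs(w) > threshold).astype(int)
```

Repeating `parenclitic_weights` for every subject, with the control group as baseline, yields one network per patient, which `binarize` then reduces to the thresholded adjacency matrix used in the analysis.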
The threshold of 0.5 has been set manually, in order to obtain structures dense enough to support the subsequent analysis while still discarding statistically insignificant connections. Afterwards, two topological (i.e. structural) properties have been considered:\n\nLink density, defined as the number of links present in the network, divided by the number of all possible links. The higher the link density, the more pairs of features present an abnormal behavior.\n\nInformation content (Zanin et al., 2014). This metric assesses the presence of mesoscale structures, i.e. structures created by small groups of nodes, by evaluating the information lost when pairs of nodes are iteratively merged together. Low values of Information Content indicate a random-like structure; conversely, high values suggest a non-trivial topology, potentially a fingerprint of a severe condition.\n\nIn order to evaluate the performance of the complex network representation with respect to a baseline, a classification between the two groups of patients (i.e. surviving vs. not surviving patients) is performed, and the resulting scores are compared. This classification is based on a support vector machine (SVM) model with a linear kernel (Noble, 2006; Wang, 2005).\n\nSVMs are binary linear classifiers that model concepts by creating hyperplanes in a multidimensional space, which can be used for both classification and regression (Cortes & Vapnik, 1995). A good separation is achieved by the hyperplane that has the largest distance to the nearest training-data point of any class, as this minimises the error. The SVM model has been chosen for two reasons: its good performance and widespread use in biomedical classification problems; and its simplicity: only linear relationships are mined, allowing a better identification of the contribution of the complex network representation.\n\nThe validation of the results has been performed using 10-fold cross-validation (Friedman et al., 2001). 
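Of the two metrics, link density reduces to a one-line computation on the binarized adjacency matrix; a minimal sketch (illustrative names, counting directed pairs and excluding self-loops):

```python
import numpy as np

def link_density(adj):
    """Number of links present divided by the number of possible links
    (directed pairs, self-loops excluded)."""
    n = adj.shape[0]
    off_diagonal = adj.sum() - np.trace(adj)
    return off_diagonal / (n * (n - 1))
```

Information Content is not sketched here, as its iterative node-merging procedure is specific to Zanin et al., 2014.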
The original sample of subjects is randomly partitioned into 10 equal-sized subsamples. A single subsample is retained as the validation data for testing the model, and the remaining 9 subsamples are used as training data. The cross-validation process is then repeated 10 times, with each of the 10 subsamples used exactly once as the validation data. The average of the errors obtained in the 10 executions is used as the error estimate.\n\nThe dataset considered here is part of the Prostate Cancer DREAM Challenge, including information from the prostate cancer clinical trials ASCENT-2 (Novacea, provided by Memorial Sloan Kettering Cancer Center) (Scher et al., 2011), VENICE (Sanofi) (Tannock et al., 2013), MAINSAIL (Celgene) (Petrylak et al., 2015), and ENTHUSE-33 (AstraZeneca) (Fizazi et al., 2013). Only the data included in the CoreTable have been considered, representing the core patient-level data. They cover information about demographics, co-existing disease conditions, prior treatment of the tumor and other co-existing conditions, important baseline lab results and vital signs, lesion measures and early response to therapy. More information on the dataset can be found at https://www.synapse.org/ProstateCancerChallenge.\n\nOne of the limitations of the network reconstruction process previously described is that it can only handle numerical features. Thus, only those features fulfilling this condition have been selected. Additionally, binary features have been transformed into numbers, i.e. 1 for “yes” and 0 for “no”. The final data set included 92 features for each patient.\n\nAfterwards, 2000 patients have been randomly selected, half of whom did not survive the cancer - as coded by the DEATH flag in the dataset. 
The rationale for selecting only a subset of patients is two-fold: first, to reduce the computational cost, and thus allow a more detailed analysis of results; and second, to ensure that the data set used in the classification task is balanced, i.e. it includes the same number of subjects in both classes. All other patients have been discarded.\n\n\nResults\n\nFigure 1 presents the results obtained in the classification of patients using only raw features. As previously introduced, this classification will be the baseline against which the benefits of using complex networks will be evaluated. In order to reduce the computational cost of the analysis, and to reduce the risk of overfitting, a greedy feature selection algorithm has been executed. The three selected features were: LDH (lactate dehydrogenase level), TURP (prior transurethral resection of the prostate, binary value) and MHGEN (presence of general disorders, binary value). The probability distributions for the three features are presented in Figure 1 (top and bottom left).\n\nFigure 1. Probability distributions of the LDH feature for surviving and not surviving patients (top left). Appearance probability of the features TURP and MHGEN, for surviving and not surviving patients (top right and bottom left). Classification score when considering LDH, LDH + TURP, and all three features (bottom right).\n\nBy using these three selected features, the classification score reaches 72.1% (Figure 1, bottom right). Adding more features does not yield substantial improvements.\n\nIn the second case, I consider the same original raw features, plus the two features synthesized from the complex network representation, as previously described. A network has been created for each subject, using the information of surviving patients as the baseline - in other words, surviving patients have been considered as healthy, following the convention previously described. 
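The greedy feature selection mentioned above, combined with the linear-kernel SVM and the 10-fold cross-validation from Methods, can be sketched as a forward-selection loop. This is a hedged illustration using scikit-learn, not the author's actual implementation; the function name, stopping rule, and accuracy scoring are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def greedy_select(X, y, max_features=3):
    """Forward selection: at each step, add the feature that most improves
    the 10-fold cross-validated accuracy of a linear SVM."""
    selected, best_score = [], 0.0
    remaining = list(range(X.shape[1]))
    while remaining and len(selected) < max_features:
        scores = {f: cross_val_score(SVC(kernel="linear"),
                                     X[:, selected + [f]], y, cv=10).mean()
                  for f in remaining}
        f, score = max(scores.items(), key=lambda kv: kv[1])
        if score <= best_score:  # stop when no candidate improves the score
            break
        selected.append(f)
        remaining.remove(f)
        best_score = score
    return selected, best_score
```

In the baseline case, such a loop would terminate after features like LDH, TURP and MHGEN, once further additions no longer improve the cross-validated score.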
In order to avoid overfitting, a new baseline has been calculated in each one of the 10 cross-validation rounds, ensuring no patient was included both in the training and in the classification steps. Finally, a greedy feature selection algorithm has been executed on the complete feature set, following the same process described previously.\n\nFigure 2 presents the results obtained, both in terms of the network features probability distributions (top), and the classification score (bottom). It can be appreciated that the classification score improves, from 72.1% up to 76.2%; this corresponds to a decrease of 15% in the classification error.\n\n(Top) Probability distributions of the link density and Information Content features, for surviving and not surviving patients. See main text for definitions. (Bottom) Classification score when considering LDH, LDH + link density, and all three features.\n\n\nConclusions\n\nWhile complex networks have by and large been used to describe biomedical problems (Costa et al., 2011), much less attention has been devoted to their relation with prediction, i.e. to how the information they provide could be used in the construction of diagnosis models. In this contribution, I take a first step in this direction, by studying the following question: can the precision of a predictive model be improved, if information extracted from a complex network representation is fed to a data mining algorithm along with raw features?\n\nI used, as a test bed, a data set describing patients suffering from prostate cancer, and a classification task in which patients are discriminated according to the expected prognosis (surviving vs. not surviving). 
The inclusion of complex network features, obtained through a parenclitic representation (Zanin & Boccaletti, 2011; Zanin et al., 2014), resulted in a small but significant reduction of the classification error (from 27.9% to 23.8%).\n\nWhen comparing these results with the state of the art, as for instance (Halabi et al., 2003; Halabi et al., 2014), it is clear that they are still far from representing an efficient prognostic instrument. Within the Prostate Cancer DREAM Challenge, the proposed method ranked 50th out of 51 in Subchallenge 1a (iAUC of 0.6171, against a reference of 0.7429 of the Halabi et al. method and 0.7915 of the winning team); and 27th out of 49 in Subchallenge 1b (RMSE of 214.39, against 194.41 of the winning team). Additionally, an error of 23.8% in the survival probability is clearly intolerable for clinical applications.\n\nIt is also important to note that complex networks introduce a “black box” element in the analysis. As features are represented and analyzed in a topological way, i.e. focusing on the structure created by their relationships, it is not possible to identify which element(s) contribute the most to the final model. 
This complicates direct comparisons with standard prognostic models, and the design of therapeutic solutions.\n\nIn spite of the discussed drawbacks, I believe that the results reported here shed light on the importance of using complex networks in future prognostic models, as a way of synthesizing complex relationships in simple and numerical metrics.\n\n\nData availability\n\nThe Challenge datasets can be accessed at: https://www.projectdatasphere.org/projectdatasphere/html/pcdc\n\nChallenge documentation, including the detailed description of the Challenge design, overall results, scoring scripts, and the clinical trials data dictionary can be found at: https://www.synapse.org/ProstateCancerChallenge\n\nThe code and documentation underlying the method presented in this paper can be found at: http://dx.doi.org/10.7303/syn4732239 (Zanin, 2016)",
"appendix": "Competing interests\n\n\n\nThe author declares no competing interests.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgments\n\nThis publication is based on research using information obtained from www.projectdatasphere.org, which is maintained by Project Data Sphere, LLC. Neither Project Data Sphere, LLC nor the owner(s) of any information from the web site have contributed to, approved or are in any way responsible for the contents of this publication.\n\nThe author further acknowledges Sage Bionetworks, the DREAM organization, and Project Data Sphere for developing and supplying data for the Challenge.\n\n\nSupplementary material\n\nAdditional software, e.g. the MATLAB code for calculating the Information Content of a network, can also be found at: www.mzanin.com\n\n\nReferences\n\nAlexe G, Alexe S, Axelrod DE, et al.: Breast cancer prognosis by combinatorial analysis of gene expression data. Breast Cancer Res. 2006; 8(4): R41. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBoccaletti S, Latora V, Moreno Y, et al.: Complex networks: Structure and dynamics. Phys Rep. 2006; 424(4–5): 175–308. Publisher Full Text\n\nCortes C, Vapnik V: Support-vector networks. Mach Learn. 1995; 20(3): 273–297. Publisher Full Text\n\nCosta LD, Rodrigues FA, Travieso G, et al.: Characterization of complex networks: A survey of measurements. Adv Phys. 2007; 56(1): 167–242. Publisher Full Text\n\nCosta LD, Oliveira ON Jr, Travieso G, et al.: Analyzing and modeling real-world phenomena with complex networks: a survey of applications. Adv Phys. 2011; 60(3): 329–412. Publisher Full Text\n\nFizazi K, Higano CS, Nelson JB, et al.: Phase III, randomized, placebo-controlled study of docetaxel in combination with zibotentan in patients with metastatic castration-resistant prostate cancer. J Clin Oncol. 2013; 31(14): 1740–1747. 
PubMed Abstract | Publisher Full Text\n\nFriedman J, Hastie T, Tibshirani R: The elements of statistical learning. Springer, Berlin: Springer series in statistics. 2001: 1.\n\nGupta S, Kumar D, Sharma A: Data mining classification techniques applied for breast cancer diagnosis and prognosis. Indian Journal of Computer Science and Engineering (IJCSE). 2011; 2(2): 188–195. Reference Source\n\nHalabi S, Small EJ, Kantoff PW, et al.: Prognostic model for predicting survival in men with hormone-refractory metastatic prostate cancer. J Clin Oncol. 2003; 21(7): 1232–1237. PubMed Abstract | Publisher Full Text\n\nHalabi S, Lin CY, Kelly WK, et al.: Updated prognostic model for predicting overall survival in first-line chemotherapy for patients with metastatic castration-resistant prostate cancer. J Clin Oncol. 2014; 32(7): 671–677. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJain KK: Personalised medicine for cancer: from drug development into clinical practice. Expert Opin Pharmacother. 2005; 6(9): 1463–1476. PubMed Abstract | Publisher Full Text\n\nLee YJ, Mangasarian OL, Wolberg WH: Breast cancer survival and chemotherapy: a support vector machine analysis. In Discrete Mathematical Problems with Medical Applications: DIMACS Workshop Discrete Mathematical Problems with Medical Applications, December 8–10, 1999, DIMACS Center. American Mathematical Soc. 2000; 55: 1. Reference Source\n\nNoble WS: What is a support vector machine? Nat Biotechnol. 2006; 24(12): 1565–1567. PubMed Abstract | Publisher Full Text\n\nPetrylak DP, Vogelzang NJ, Budnik N, et al.: Docetaxel and prednisone with or without lenalidomide in chemotherapy-naive patients with metastatic castration-resistant prostate cancer (MAINSAIL): a randomised, double-blind, placebo-controlled phase 3 trial. Lancet Oncol. 2015; 16(4): 417–425. PubMed Abstract | Publisher Full Text\n\nQuaranta V, Weaver AM, Cummings PT, et al.: Mathematical modeling of cancer: the future of prognosis and treatment. 
Clin Chim Acta. 2005; 357(2): 173–179. PubMed Abstract | Publisher Full Text\n\nSamani NJ, Tomaszewski M, Schunkert H: The personal genome--the future of personalised medicine? Lancet. 2010; 375(9725): 1497–1498. PubMed Abstract | Publisher Full Text\n\nScher HI, Jia X, Chi K, et al.: Randomized, open-label phase III trial of docetaxel plus high-dose calcitriol versus docetaxel plus prednisone for patients with castration-resistant prostate cancer. J Clin Oncol. 2011; 29(16): 2191–2198. PubMed Abstract | Publisher Full Text\n\nStrogatz SH: Exploring complex networks. Nature. 2001; 410(6825): 268–276. PubMed Abstract | Publisher Full Text\n\nTannock IF, Fizazi K, Ivanov S, et al.: Aflibercept versus placebo in combination with docetaxel and prednisone for treatment of men with metastatic castration-resistant prostate cancer (VENICE): a phase 3, double-blind randomised trial. Lancet Oncol. 2013; 14(8): 760–768. PubMed Abstract | Publisher Full Text\n\nvan't Veer LJ, Bernards R: Enabling personalized cancer medicine through analysis of gene-expression patterns. Nature. 2008; 452(7187): 564–570. PubMed Abstract | Publisher Full Text\n\nWang L (Ed.): Support vector machines: theory and applications. Springer Science & Business Media. 2005; 177. Publisher Full Text\n\nZanin M, Boccaletti S: Complex networks analysis of obstructive nephropathy data. Chaos. 2011; 21(3): 033103. PubMed Abstract | Publisher Full Text\n\nZanin M, Alcazar JM, Carbajosa JV, et al.: Parenclitic networks: uncovering new functions in biological data. Sci Rep. 2014; 4: 5112. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZanin M, Sousa PA, Menasalvas E: Information content: Assessing meso-scale structures in complex networks. EPL (Europhys Lett). 2014; 106(3): 30001. Publisher Full Text\n\nZanin M, Menasalvas E, Boccaletti S, et al.: Analysis of complex data by means of complex networks. In Technological Innovation for Collective Awareness Systems. Springer Berlin Heidelberg. 
2014; 423: 39–46. Publisher Full Text\n\nZanin M: “Using networks representations to improve the prognosis of Prostate Cancer patients”. Synapse Storage, 2016. Publisher Full Text"
}
|
[
{
"id": "17941",
"date": "24 Nov 2016",
"name": "Osvaldo Anibal Rosso",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn the present manuscript the author explores the possibility of using complex networks as a tool for evaluation of the improving a model of survival prognosis of patients with metastatic castrate resistant prostate cancer treated with docetaxel. The main hypothesis tested in the manuscript is that complex networks should synthesize information present in raw data in a new way that should reflect an improved classification score, proposed by the author and co-workers previously1. The manuscript is clearly and well written, and the conclusions are supported by the obtained results. I share the opinion of the author that the results of present research shed light on the importance of using complex networks in the future prognostic models. I recommend the indexing of the manuscript in the present form.",
"responses": []
},
{
"id": "18888",
"date": "27 Feb 2017",
"name": "Diego Raphael Amancio",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe manuscript describes the application of complex networks concepts to the problem of identifying survival patients suffering from prostate cancer. The manuscript is scientifically sound, and the conclusions are supported by the results.\nThe author, however, should address the following issues raised below:\n\nHow performance depends on the weight chosen to binarize your data? Note that this step may affect considerably the performance.\n\nWhy traditional topological structures are not considered?\n\nThe author should further motivate the combination of traditional and network features, as similar approaches have been applied in other contexts1,2.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2675
|
https://f1000research.com/articles/5-2673/v1
|
16 Nov 16
|
{
"type": "Research Article",
"title": "Predicting discontinuation of docetaxel treatment for metastatic castration-resistant prostate cancer (mCRPC) with random forest",
"authors": [
"Daniel Kristiyanto",
"Kevin E. Anderson",
"Ling-Hong Hung",
"Ka Yee Yeung",
"Daniel Kristiyanto",
"Kevin E. Anderson",
"Ling-Hong Hung"
],
"abstract": "Prostate cancer is the most common cancer among men in developed countries. Androgen deprivation therapy (ADT) is the standard treatment for prostate cancer. However, approximately one third of all patients with metastatic disease treated with ADT develop resistance to ADT. This condition is called metastatic castrate-resistant prostate cancer (mCRPC). Patients who do not respond to hormone therapy are often treated with a chemotherapy drug called docetaxel. Sub-challenge 2 of the Prostate Cancer DREAM Challenge aims to improve the prediction of whether a patient with mCRPC would discontinue docetaxel treatment due to adverse effects. Specifically, a dataset containing three distinct clinical studies of patients with mCRPC treated with docetaxel was provided. We applied the k-nearest neighbor method for missing data imputation, the hill climbing algorithm and random forest importance for feature selection, and the random forest algorithm for classification. We also empirically studied the performance of many classification algorithms, including support vector machines and neural networks. Additionally, we found using random forest importance for feature selection provided slightly better results than the more computationally expensive method of hill climbing.",
"keywords": [
"Predictive Model",
"Multivariate Feature Selection",
"Hill Climbing",
"Random Forest"
],
"content": "Motivation & background\n\nProstate cancer is the most common cancer affecting men. It is also one of the main causes of cancer mortality1. In addition to radiotherapy, androgen deprivation therapy (ADT) is a standard treatment. However, approximately one third of all patients with metastatic disease treated with ADT develop resistance to ADT. This condition is called metastatic castrate-resistant prostate cancer (mCRPC)1,2. Patients who do not respond to hormone therapy are often treated with a chemotherapy drug called docetaxel. The Prostate Cancer DREAM Challenge is a crowd-sourcing effort that aims to improve the prediction of survival and toxicity of docetaxel treatment in patients with mCRPC2. Specifically, there are two sub-challenges: 1) to predict overall survival of mCRPC patients using clinical trial data, and 2) to predict discontinuation of the docetaxel treatment due to adverse event at early time points. This paper reports our team’s effort contributing to sub-challenge 2.\n\nThe data for the challenge were provided by Project Data Sphere3, consisting of four clinical trials (ASCENT-24, VENICE5, MAINSAIL6, and ENTHUSE-337) for patients with mCRPC treated by docetaxel. The training data made available to the challenge participants consisted of 1600 patients from three clinical trials (ASCENT-2, VENICE, MAINSAIL). The clinical data from ENTHUSE-33 serve as the scoring set to test the prediction accuracy of submissions and hence, were not available to participants. Each team was allowed a maximum of two online submissions. In our two submissions, we used the same methods but varied the hold-out data8,9. After the challenge window, the submission portal was re-opened, allowing participants to continue their effort in refining and exploring alternative methods.\n\nAs a part of the Prostate Cancer DREAM Challenge, we developed data-driven models to predict patient outcomes in mCRPC with subsequent discontinuation of docetaxel therapy. 
We contributed to sub-challenge 2, which aims to predict discontinuation of docetaxel treatment due to adverse events. We empirically studied and assessed the performance of various machine learning algorithms and feature selection methods using cross validation on the provided training data. We assessed our predictive models using the area under the curve (AUC)10, which is the scoring metric adopted by the Prostate Cancer DREAM Challenge sub-challenge 2. This paper reports the predictive models we developed for the Prostate Cancer DREAM Challenge as well as further improvements we made after the challenge was closed. The methods and our challenge submission are available online from Synapse8,9, and our post-challenge efforts are available on the GitHub repository11.\n\n\nObjective\n\nThe training data consist of clinical variables across 1,600 mCRPC patients in three clinical trials, namely ASCENT-2 (Novacea, provided by Memorial Sloan Kettering Cancer Center, with 476 patients)4, VENICE (Sanofi, with 598 patients)5, and MAINSAIL (Celgene, with 526 patients)6. Specifically, longitudinal data from five tables were summarized into a core table consisting of 131 variables. The five tables of raw longitudinal data at patient-level include PriorMed (prior medication table records), Med-History (medical history table, recording patient-reported diagnoses at the time of patient screening), LesionMeasure (lesion table, recording target and non-target lesion measurements), LabValue (lab test table, including all lab data), and VitalSign (vital sign table, recording patient vital signs such as height and weight)2. 
We used the training data in the core table to build models predictive of treatment discontinuation (binary) for patients in a fourth clinical trial (test data), ENTHUSE-33 (AstraZeneca, with 470 patients)7.\n\n\nData & methods\n\nOur approach consists of four main steps: (1) data cleansing and pre-processing, (2) feature selection, (3) classification, and (4) assessment, as shown in Figure 1.\n\nOur analysis focused on the core data only. From the 1,600 patients, we removed 111 patients without clear discontinuation status, leaving a total of 1,489 patients across three clinical trials: ASCENT-2 with 476 patients, MAINSAIL with 420 patients, and VENICE with 593 patients. Data cleansing was performed separately within each clinical trial, and the data were later concatenated back together. Some features were only available in certain clinical trials, for instance, smoking frequency, which was only available in ASCENT-2. In contrast, features such as sodium, phosphorus, and albumin were available in the two clinical trials other than ASCENT-2. The heterogeneity in the data also manifested in different interpretations of the values. To name a few, features such as lesion locations HEAD_AND_NECK or STOMACH only contained positive responses. In this case, we assigned all of the missing values as unknown, instead of negative values.\n\nWe kept variables that are available from all three clinical trials in the training data. We also performed data imputation using the impute package version 1.42.0 from Bioconductor12. Data imputation was applied to the missing data in the baseline lab values (such as alkaline phosphatase, etc.) using the k nearest neighbor (KNN) algorithm. We varied the value of k from 1 to the number of patients, and evaluated the performance against a naive Bayes classifier13 in 10-fold cross validation. k = 40 was found to be optimal. Missing values for the patients’ weight, height and BMI were replaced by the average value of all patients. 
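As an illustration only (the analysis itself used the Bioconductor impute package in R), KNN imputation of a missing lab value can be sketched as follows, assuming Euclidean distance computed over the columns both patients have observed:

```python
import math

def knn_impute(rows, k):
    """Replace each missing entry (None) with the mean of that column over
    the k nearest donor rows, by Euclidean distance on shared columns."""
    def dist(a, b):
        shared = [(x, y) for x, y in zip(a, b) if x is not None and y is not None]
        return math.sqrt(sum((x - y) ** 2 for x, y in shared) / len(shared))

    filled = [row[:] for row in rows]
    for i, row in enumerate(rows):
        for j, value in enumerate(row):
            if value is None:
                donors = sorted((r for r in rows if r is not row and r[j] is not None),
                                key=lambda r: dist(row, r))
                filled[i][j] = sum(r[j] for r in donors[:k]) / k
    return filled

# toy usage: the last patient's missing lab value is imputed from the
# two most similar patients (rows 0 and 1): (10.0 + 11.0) / 2 = 10.5
patients = [[1.0, 10.0], [1.1, 11.0], [5.0, 50.0], [1.05, None]]
imputed = knn_impute(patients, k=2)
```

The choice of k trades variance against bias, which is why the study tuned k (to 40) against a held-out classifier rather than fixing it a priori.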
For the discrete variables, we replaced the missing values with a new variable (‘unknown’ value). Data augmentation was also performed by converting selected multi-label variables into binary variables, such as ‘smoking frequency’. Figure 2 shows how the split and reconstruction was performed.\n\nTo avoid over-fitting, we performed data cleansing and data pre-processing for the testing data (ENTHUSE-33) separately.\n\nIn step 2 (Remove Sparse Features), considering each dataset in turn, we removed features with too much missing data (>79%) for KNN imputation (illustrated as red blocks). In step 3, also considering each clinical trial as distinct, we used KNN to impute missing data for the features that contained missing data (illustrated as grey blocks). Finally, in step 4 all datasets were concatenated, and the features removed in step 2 (illustrated now as yellow blocks) were replaced by the mean values of that feature calculated from the other two clinical trials (shown using vertical arrows).\n\nWe observed that univariate feature selection methods did not perform well in this case. We used cross validation to guide us in the search for relevant features (or clinical variables) in the data. Specifically, we assessed our models using the area under the precision-recall curve (AUC) using the ROCR R package version 1.0-710.\n\nWe adopted the multivariate hill-climbing14 approach that optimized the AUC using 10-fold cross validation of the training data to search for relevant features among 131 features (clinical variables). The algorithm started with a random set of features against the model and returned the AUC. Depending on the AUC, the algorithm removed, kept, or added other features, and iterated until it converged. This method was used in both of our submissions to sub-challenge 2. Hill-climbing aims to maximize accuracy and is a greedy approach. 
However, hill-climbing also has its limitations: it is computationally intensive and prone to getting stuck at a local optimum. In addition, it also tends to converge to different sets of features within each cross validation run, which makes it difficult to determine which factors contribute to the discontinuation of the treatment. As one of the goals of the challenge is to identify prognostic markers in patients with mCRPC who will discontinue the treatment, hill-climbing may not be the ideal approach.\n\nHalabi et al. reported a list of strong predictors for overall survival of mCRPC patients15,16. These predictors include race, age, BMI, prior radiotherapy, prior analgesics, and patient performance status. Lab results for albumin, lactate dehydrogenase, white blood cells, aspartate aminotransferase, total bilirubin, platelet count, hemoglobin, alanine transaminase, prostate specific antigen and alkaline phosphatase were also reported to be strong predictors of overall survival for patients with mCRPC. We hypothesize that the underlying molecular mechanisms that drive overall survival (the goal of sub-challenge 1) and treatment discontinuation (the goal of sub-challenge 2) are related. In addition, after the challenge was closed and the winners were announced, we examined the winning teams' strategies. In particular, we were inspired by Team Jayhawks from University of Kansas Medical Center, who stated that they \"made use of the variables we derived for sub-challenge 1a and also the overall risk score for survival\". Therefore, we experimented with the set of features for overall survival reported by Halabi et al. and used the results as the baseline. 
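The hill-climbing feature search can be sketched as below. This is a toy Python version with a synthetic, separable scoring function standing in for the cross-validated AUC; the actual study searched over 131 clinical variables with an R pipeline:

```python
import random

def hill_climb_features(n_features, score, iters=200, seed=123):
    """Start from a random feature subset; repeatedly toggle one feature
    and keep the change only when the score strictly improves (greedy,
    so it can get stuck in a local optimum)."""
    rng = random.Random(seed)
    current = {f for f in range(n_features) if rng.random() < 0.5}
    best = score(current)
    for _ in range(iters):
        candidate = current ^ {rng.randrange(n_features)}  # add or drop one
        s = score(candidate)
        if s > best:
            current, best = candidate, s
    return current, best

# toy scoring function: rewards features 0 and 2, penalizes all others,
# so the search should settle on the subset {0, 2}
target = {0, 2}
subset, top = hill_climb_features(6, lambda s: len(s & target) - 0.1 * len(s - target))
```

With a real cross-validated AUC as `score`, each evaluation is expensive, which is why the text describes this approach as computationally intensive.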
Additionally, we also performed random forest importance from the FSelector package17 to look for additional features to improve the prediction accuracy for treatment discontinuation (sub-challenge 2).\n\nRandom forest measures the importance of a variable by estimating the prediction error when that variable is permuted and the rest of the variables remain unchanged. We evaluated the importance of the remaining variables using random forest. By varying the number of features and evaluating the results, medical history (neoplasms benign, malignant or unspecified), smoke, and glucose were identified as the contributing factors. In modeling this feature selection, we addressed the issue of imbalanced data by performing random sampling using the training data, which consisted of a proportion of 0.31 positive samples (patients known to have discontinued the treatments) and 0.69 negative samples.\n\nWe applied various classification algorithms to the selected features, including support vector machine (SVM)18, decision trees19, neural networks20, random forest21,22, and ensemble methods23. We observed comparable performance across different classification methods, and subsequently selected random forest as the classifier in our final submissions due to its robustness on heterogeneous datasets. As an ensemble method, random forest splits the training data into a number of subsets and constructs decision trees as the classification model. Although the random forest package from CRAN21 usually comes with a set of sensible default parameters, we performed cross validation to optimize the classifier models. Subsequently, we adopted the following set of parameters: pick 9 random variables on each split (parameter mtry), and set the number of trees to 6,300 (parameter ntree). During all of the model tuning, 123 was used as the random seed.\n\n\nResults\n\nFor sub-challenge 2, each team was allowed 2 submissions. In our two submissions, we used the same methods and varied the hold-out data8,9. 
See Table 1 for detailed results.\n\nEach column represents a different feature selection technique and/or hold-out dataset. Columns labeled ‘ENTHUSE-33’ were the test set provided by the Challenge organizers, and hence, were scored via the DREAM 9.5 submission system, while others were scored using the hold-out study as the testing dataset. BL = baseline (features selected based on the Halabi paper only16), HC = hill-climbing, RFI = combining features identified by Halabi and random forest importance.\n\nWe applied hill-climbing to select features from the model space and random forest as the classification method by varying the hold-out data (see Table 1). We performed 10-fold cross validation by randomly selecting 10% of the training data across all three clinical trials as the hold-out data, and repeated the procedures 10 times. This 10-fold cross validation procedure using hill-climbing yielded an average AUC of 0.532. However, the AUC dropped substantially (to 0.129) when we applied this model to the scoring data (470 patients from the ENTHUSE-33 clinical trial). This model served as our 1st submission and ranked 35th on the leaderboard.\n\nWe went on to conduct additional empirical experiments to investigate the difference between the AUC from 10-fold cross validation and the AUC from the scoring set. We hypothesized that this difference in AUCs from cross validation and from the scoring data is due to heterogeneity in the data collected in different clinical trials. Therefore, we studied the heterogeneity of the three clinical trials in the training data by using each of the three clinical trials as hold-out data. Table 1 shows that using the VENICE clinical trial as the hold-out data resulted in AUCs that are comparable to what we observed in our 1st submission. In particular, we produced an AUC of 0.171 by holding out the VENICE clinical trial in the training data. 
Our 2nd submission resulted from applying hill-climbing and random forest to the ASCENT-2 and MAINSAIL clinical trials, and achieved an AUC of 0.132 on the scoring data (ENTHUSE-33). Our 2nd submission ranked 34th out of 61 submissions on the leaderboard. The AUC achieved by the top performer was 0.19024.\n\nFigure 4 shows that albumin was consistently selected by hill-climbing as a strong predictor regardless of the hold-out data. In total, there are 27 clinical variables selected in more than one hill-climbing model, including: Na (sodium), OTHER (other lesion), ALB (albumin), ORCHIDECTOMY, CEREBACC (cerebrovascular accident, either hemorrhagic or ischemic), AST (aspartate aminotransferase), HB (hemoglobin), Mg (magnesium), LYMPH_NODES (lesion), PROSTATE (lesion), BILATERAL_ORCHIDECTOMY, CORTICOSTEROID (medication), CREAT (creatinine), PSA (prostate specific antigen), CREACL (creatinine clearance).\n\nAfter the challenge ended, we continued to fine-tune our models and submitted predictions to be scored. We adopted the Halabi model16 and combined it with random forest importance (RFI) to improve prediction. Random forest was kept as the classification method, and we again varied the hold-out data for assessment. Random forest importance was computed by excluding features from the Halabi model, which resulted in different sets of features depending on the hold-out data (see Figure 5).\n\nNext, we compared the features chosen by hill-climbing, RFI and the Halabi model. We observed that SMOKE and REGION overlapped among the hill-climbing and RFI results, as shown in Figure 6. By varying the number of top features ranked by random forest importance combined with the Halabi model, the 4 top-ranking clinical variables from random forest importance yielded the best average accuracy (see Figure 3). Applying this predictive model in 10-fold cross validation resulted in an average AUC of 0.275. 
We also repeated the process by selecting each individual clinical trial as the hold-out data, which yielded AUCs of 0.106 (VENICE as the hold-out), 0.303 (ASCENT-2), and 0.263 (MAINSAIL). Compared to the hill-climbing method, this model produced better and more consistent AUCs across clinical trials. Using each of the trained models to predict the discontinuation of docetaxel treatments for 470 patients from the ENTHUSE-33 (AstraZeneca) clinical trial resulted in AUCs of 0.140 (VENICE), 0.132 (ASCENT-2), 0.124 (MAINSAIL), as also shown in Table 1.\n\nRandom forest was used as the classification method, and 10% of randomly selected data across all samples as the hold-out data.\n\nRandom forest was used as the classification model.\n\nFeatures identified by the Halabi model were excluded during the computation.\n\nRandom forest was used as the classification method. RFI: the union of all features selected by random forest importance by varying the hold-out data after excluding features described by the Halabi model. HC: the union of all features selected by hill-climbing by varying the hold-out data.\n\n\nDiscussion\n\nA major challenge of the Prostate Cancer DREAM Challenge was the unbalanced class sizes and the heterogeneity of the clinical trials. Subsequent to data cleansing, there remained only 197 positive samples (patients who discontinued the treatments) and 1,292 negative samples. We observed that data cleaning and augmentation improved the AUC and F1 (before we augmented the data, our AUCs were in the range of 0.2 in 10-fold cross validation). Lastly, we were delighted to find that our computationally intensive hill-climbing algorithm, designed to find an optimal feature set, provided strong results when tested on the datasets provided to us. 
When scored using the ENTHUSE-33 dataset, hill-climbing performed weakly against the combination of the Halabi model and random forest importance.\n\nWe experimented with various feature selection and classification methods. We observed that some clinical variables were selected more consistently across feature selection methods, see Figure 6. These variables include: AGEGRP2 (Age Group), ALB (Albumin), ALP (Alkaline Phosphatase), ANALGESICS, AST (Aspartate Aminotransferase), BLADDER (Lesion), COPD (Blood and Lymphatic System), CREACL (Creatinine Clearance), HB (Hemoglobin), LIVER (Lesion), LUNGS (Lesion), LYMPH_NODES (Lesion), NA. (Sodium), OTHER (Other Lesion), PLEURA (Pleura), PROSTATE (Lesion), PSA (Prostate Specific Antigen), RACE_C, REGION_C, SMOKE, SPINCOMP (Spinal Cord Compression), TBILI (Total Bilirubin), and WBC (White Blood Cells).\n\nIn this study, we only looked at the core table that was precompiled from detailed longitudinal tables. Given more time and resources, a close look at the raw longitudinal data may yield additional insight into the clinical variables that predict the discontinuation of treatment for mCRPC patients.\n\n\nData availability\n\nThe Challenge datasets can be accessed at: https://www.projectdatasphere.org/projectdatasphere/html/pcdc Challenge documentation, including the detailed description of the Challenge design, overall results, scoring scripts, and the clinical trials data dictionary can be found at: https://www.synapse.org/ProstateCancerChallenge\n\nThe code and documentation underlying the method presented in this paper can be found at: http://dx.doi.org/10.7303/syn46018488 and http://dx.doi.org/10.7303/syn47297619. The method and results are also presented as a poster25.",
"appendix": "Author contributions\n\n\n\nDK served as team captain of team ‘yoda’ in the DREAM9.5 Prostate Challenge, wrote the first draft of the manuscript and was responsible for code consolidation and submission. KA contributed data cleansing scripts, created figures and assisted in the writing of this paper. KYY supervised the project through its duration. All authors contributed to the writing of the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nLing Hong Hung and Ka Yee Yeung are supported by NIH grant U54-HL127624. Daniel Kristiyanto is sponsored by the U.S. Department of State and American-Indonesian Exchange Foundation (AMINEF) through Fulbright Scholarship, and gratefully acknowledges funding provided by the University of Washington in the form of full tuition waivers.\n\n\nAcknowledgements\n\nThis publication is based on research using information obtained from www.projectdatasphere.org, which is maintained by Project Data Sphere, LLC. Neither Project Data Sphere, LLC nor the owner(s) of any information from the web site have contributed to, approved or are in any way responsible for the contents of this publication.\n\nThis project used computing resources provided by Microsoft Azure.\n\nWe also would like to thank all students in TCSS 588 Bioinformatics in the Spring Quarter of 2015 at University of Washington Tacoma who contributed to this project, and extend our acknowledgments to Sage Bionetworks, the DREAM organization, and Project Data Sphere for developing and supplying data for the challenge.\n\n\nReferences\n\nGupta E, Guthrie T, Tan W: Changing paradigms in management of metastatic Castration Resistant Prostate Cancer (mCRPC). BMC Urol. 2014; 14(1): 55. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDream 9.5 Prostate Cancer dream challenge. 2015. Publisher Full Text\n\nHome | share, integrate & analyze cancer research data | project data sphere. 2016. 
Reference Source\n\nScher HI, Jia X, Chi K, et al.: Randomized, open-label phase III trial of docetaxel plus high-dose calcitriol versus docetaxel plus prednisone for patients with castration-resistant prostate cancer. J Clin Oncol. 2011; 29(16): 2191–2198. PubMed Abstract | Publisher Full Text\n\nTannock IF, Fizazi K, Ivanov S, et al.: Aflibercept versus placebo in combination with docetaxel and prednisone for treatment of men with metastatic castration-resistant prostate cancer (VENICE): a phase 3, double-blind randomised trial. Lancet Oncol. 2013; 14(8): 760–768. PubMed Abstract | Publisher Full Text\n\nPetrylak DP, Vogelzang NJ, Budnik N, et al.: Docetaxel and prednisone with or without lenalidomide in chemotherapy-naive patients with metastatic castration-resistant prostate cancer (MAINSAIL): a randomised, double-blind, placebo-controlled phase 3 trial. Lancet Oncol. 2015; 16(4): 417–425. PubMed Abstract | Publisher Full Text\n\nFizazi K, Higano CS, Nelson JB, et al.: Phase III, randomized, placebo-controlled study of docetaxel in combination with zibotentan in patients with metastatic castration-resistant prostate cancer. J Clin Oncol. 2013; 31(14): 1740–1747. PubMed Abstract | Publisher Full Text\n\nAnderson K, Sina Khankhajeh S, Kristiyanto D, et al.: Prostate cancer 9.5 submission 1 - syn4601848. 2015. Accessed on 02/22/2016. Publisher Full Text\n\nAnderson K, Sina Khankhajeh S, Kristiyanto D, et al.: Prostate cancer 9.5 submission 2 - syn4729761. 2015. Accessed on 02/22/2016. Publisher Full Text\n\nSing T, Sander O, Beerenwinkel N, et al.: ROCR: visualizing classifier performance in R. Bioinformatics. 2005; 21(20): 3940–3941. PubMed Abstract | Publisher Full Text\n\nKristiyanto D, Anderson K: Predicting discontinuation of docetaxel treatment for metastatic castration-resistant prostate cancer (mCRPC). GitHub repository. 2016. Reference Source\n\nHastie T, Tibshirani R, Narasimhan B, et al.: impute: Imputation for microarray data. 
R package version 1.44.0. 2016. Reference Source\n\nDimitriadou E, Hornik K, Leisch F, et al.: Misc Functions of the Department of Statistics (e1071), TU Wien. R package. 2008; 1: 5–24.\n\nRomanski P: FSelector: Selecting attributes. Vienna: R Foundation for Statistical Computing. 2009.\n\nHalabi S, Lin CY, Kelly WK, et al.: Updated prognostic model for predicting overall survival in first-line chemotherapy for patients with metastatic castration-resistant prostate cancer. J Clin Oncol. 2014; 32(7): 671–677. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHalabi S, Small EJ, Kantoff PW, et al.: Prognostic model for predicting survival in men with hormone-refractory metastatic prostate cancer. J Clin Oncol. 2003; 21(7): 1232–1237. PubMed Abstract | Publisher Full Text\n\nRomanski P, Kotthoff L: FSelector: Selecting attributes. R package version 0.20. 2014. Reference Source\n\nHearst MA, Dumais ST, Osman E, et al.: Support vector machines. Intelligent Systems and their Applications, IEEE. 1998; 13(4): 18–28. Publisher Full Text\n\nRipley B: tree: Classification and Regression Trees. R package version 1.0-37. 2016. Reference Source\n\nFritsch S, Guenther F: Training of neural networks. R package version 1.32. 2012. Reference Source\n\nLiaw A, Wiener M: Classification and regression by randomForest. R News. 2002; 2(3): 18–22. Reference Source\n\nBreiman L: Random forests. Mach Learn. 2001; 45(1): 5–32. Publisher Full Text\n\nDietterich TG: Ensemble methods in machine learning. In Multiple classifier systems. Springer. 2000; 1857: 1–15. Publisher Full Text\n\nDream9.5 - prostate cancer dream challenge final scoring round - syn2813558. 2016. Reference Source\n\nKristiyanto D, Anderson K, Sina Khankhajeh S, et al.: Predicting discontinuation of docetaxel treatment for metastatic castration-resistant prostate cancer (mCRPC) with hill-climbing and random forest. F1000Research. 2015; 4:1383 (poster). Publisher Full Text"
}
|
[
{
"id": "19549",
"date": "20 Jan 2017",
"name": "Vishakh Hegde",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nVishakh Hegde: The title, abstract and introduction reflects the core content of the article. The authors clearly specify the problem and provide a sound technical overview of their approach and solution, with diagrams clearly illustrating it. However, we would like to see the following:\nWhile they authors provide the AUC for baseline (BL) scored on hold-out data, we would like to see the same for the test data as well (ENTHUSE-33). This will provide a metric to compare their algorithms (HC and RFI) with respect to BL\n\nIn the ‘Classification’ subsection, we would like to see how the AUC compare across various classification algorithms they claim to have tried.\n\nKaren Sachs: The authors present an exploration of a feature selection and classification problem in prostate cancer from multiple clinical trials. Overall an interesting exploration. I did find a few points confusing:\nThe hill climbing features selection was described as nonoptimal because it does not concur from iteration to iteration – a fair point, also described in Figure 4. I did not understand why it was nonetheless employed in the results? It was a bit confusing which of the fs procedures described were used for which result.\n\nAlso, it was not clear to me if the hill climbing feature selection was done with the entire dataset? If so it will overfit and reduce test performance. In fact I wonder if this is the reason that performance degraded for Enthuse-33. Can the authors comment on this/clarify this point? 
Was the entire pipeline (feature selection through classifier) performed on a subset of the data, such that the hold-out (test) data had not been used in any part of the process before it was used to assess AUC?\n\nMinor point – the text in the paper “Figure 4 shows that…disregard to..” should instead read “irrespective of”.",
"responses": []
},
{
"id": "19698",
"date": "26 Jan 2017",
"name": "John E. Mittler",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors summarize the work they have done to improve computational methods for predicting which prostate cancer patients will discontinue the cancer drug docetaxel. The methods were tested using clinical data given in one of the DREAM competitions, a series of competitions in which computational biologists are challenged to submit predictions for quantitative biological and biomedical questions for which answers have been withheld from the competitors during the competition phase.\n\nI am marking this as “Approved with Reservations” because of oversimplification in the abstract and page 7 concerning the performance of Hill Climbing (HC) and Random Forest Importance (RFI). (The journal specifically asks us to comment on whether the “abstract represents a suitable summary of the work”). Although RFI arguably did better than HC on the ENTHUSE-33 dataset, HC did substantially better in the hold-out experiments in Table 1. On page 7 they write: “Compared to the hill-climbing method, [RFI] produced better and more consistent AUCs across clinical trials.” Having higher AUC in 2/4 ENTHUSE-33 tests doesn’t strike me as consistently better.\n\nTo get me to “Approve” this, I would like full attention to my main point above (this shouldn’t be too hard) and some subset of the comments below (some of which are optional).\nWith respect to my main comment above, it would be nice if the authors could give some measure (e.g., 95% CIs) of the variation in their AUC estimates. 
Is 0.146 (AUC for RFI on the ENTHUSE-33 dataset) significantly higher than 0.129 (AUC for HC)?\n\nA bit more discussion about the lessons that you learned from this exercise would be helpful. Based on your research, what advice, if any, would you give to someone entering a similar competition next year? If you feel uncomfortable providing more advice, please explain what it is that makes you uncomfortable.\n\nWith regard to comment #2, one of the key points I derived from this paper was the importance of properly cleaning and augmenting the data. However, the quantitative data that support this conclusion were tucked into a parenthetical remark in the discussion. These data belong in the results.\n\nAlso, in the discussion, please provide whatever thoughts you may have as to why HC didn’t do as well on the ENTHUSE-33 dataset as it did in the other experiments.\n\nThe data in Figure 3 are highly erratic. As far as I can tell, 3, 5, 10, 20, or 24 features would be indistinguishable from 4. Please comment.\n\nI concur with one of the previous reviewers about the value of adding the AUC score for BL on the ENTHUSE-33 dataset. Also, consider adding a row (the first three entries of which would be blank) giving AUCs for ENTHUSE-33 for models trained on the full dataset.\n\nI wonder if the statement in the abstract “We also empirically studied the performance of many classification algorithms, including support vector machines and neural networks” could be converted into some kind of result (which would need to be supported in the results section with AUC values). Maybe cut back on the background to make room for this.\n\nMinor: “random” came out as “andom” on page 5. “cutoff” should be under the x-axis in Figure 3. The bold entry in Table 1 isn’t explained.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2673
|
https://f1000research.com/articles/5-2672/v1
|
16 Nov 16
|
{
"type": "Method Article",
"title": "Predicting survival time for metastatic castration resistant prostate cancer: An iterative imputation approach",
"authors": [
"Detian Deng",
"Yu Du",
"Zhicheng Ji",
"Karthik Rao",
"Zhenke Wu",
"Yuxin Zhu",
"R. Yates Coley",
"Yu Du",
"Zhicheng Ji",
"Karthik Rao",
"Zhenke Wu",
"Yuxin Zhu"
],
"abstract": "In this paper, we present our winning method for survival time prediction in the 2015 Prostate Cancer DREAM Challenge, a recent crowdsourced competition focused on risk and survival time predictions for patients with metastatic castration-resistant prostate cancer (mCRPC). We are interested in using a patient's covariates to predict his or her time until death after initiating standard therapy. We propose an iterative algorithm to multiply impute right-censored survival times and use ensemble learning methods to characterize the dependence of these imputed survival times on possibly many covariates. We show that by iterating over imputation and ensemble learning steps, we guide imputation with patient covariates and, subsequently, optimize the accuracy of survival time prediction. This method is generally applicable to time-to-event prediction problems in the presence of right-censoring. We demonstrate the proposed method's performance with training and validation results from the DREAM Challenge and compare its accuracy with existing methods.",
"keywords": [
"Iterative imputation",
"multiple imputation",
"Ensemble learning",
"Survival Time Prediction"
],
"content": "1 Introduction\n\nPredicting overall survival for cancer patients remains central to studying new treatment options. Given a patient’s covariates and preferences, doctors can anticipate prognosis and likely treatment effects and make clinical recommendations accordingly. For example, docetaxel is a standard treatment for patients with metastatic prostate cancer who have developed resistance to conventional androgen deprivation therapy. Using data from the docetaxel arm of four recent phase III trials of experimental interventions, the 2015 Prostate Cancer DREAM Challenge1 aims to amass community-based efforts to develop, apply, and validate prognostic models for overall patient survival under this standard treatment.\n\nA frequently encountered problem in survival analysis is data censoring, in which exact survival times are not observed for all patients. The most common type of censoring is right censoring, in which the survival time is only observed up to a certain censoring time; event times are not observed for individuals after censoring occurs. Many state-of-the-art statistical and machine learning tools cannot be directly applied to censored data while most standard methodologies that do allow for censoring assume independence between censoring and survival time; this assumption is frequently inappropriate.\n\nAmong survival analysis methods that accommodate censoring, many approaches focus on maximizing the partial likelihood, which depends only on the order of events rather than the time at which they occur. One of the most widely used methods, the proportional hazards model (also known as the Cox regression model) parameterizes this partial likelihood through a baseline hazard function and a multiplicative scaling term that depends on covariates2,3. Other methods in this class often seek different formulations of the hazard function. 
For instance, proportional hazard models based on artificial neural networks4,5 and the gradient boosting proportional hazard model6 have been developed to model more complex forms of the non-linear hazard function.\n\nAlternate objective functions have also been developed for survival analysis with censored data. Support vector regression techniques can be adapted to survival time prediction by considering censored outcomes as interval targets and forming a new maximum margin loss function directly with log-transformed survival time7. In random survival forests (RSF)8,9, a tree-based ensemble model that relies on bagging, each survival tree split is determined by maximizing the survival difference10 between child nodes. More recently, a gradient boosting-based model with direct optimization of Harrell’s concordance index has been developed11,12.\n\nAs an alternative to the above methods that directly accommodate right-censored survival data, multiple imputation13 methods treat the censored observations as missing data. To overcome the obstacle posed by censoring, these methods randomly generate missing survival outcomes many times in order to permit complete-data inferences. Taylor et al. (2002)14 propose two nonparametric imputation methods that enable estimation of the survival distribution for right-censored survival data without covariates. One approach, risk set imputation (RSI), replaces an individual’s censored time with a random draw of observed event times among those at risk (beyond the particular censoring time), starting from the smallest and proceeding toward the largest censored time. With an infinite number of imputations, RSI survival point estimates are equivalent to the Kaplan-Meier estimator, E{S^RSI(t)}=S^KM(t), where the expectation is taken with respect to the distribution of all possible random imputations. 
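Risk set imputation as just described can be sketched in a few lines. This is an illustrative Python sketch, not Taylor et al.'s implementation; as simplifying assumptions, donors are drawn only from observed event times (not previously imputed ones), and a censored time whose risk set is empty is left unchanged:

```python
import random

def risk_set_impute(times, events, seed=0):
    """Replace each censored time (event == 0) with a random draw from the
    observed event times beyond it, smallest censored time first."""
    rng = random.Random(seed)
    imputed = list(times)
    censored = sorted((i for i, d in enumerate(events) if d == 0),
                      key=lambda i: times[i])
    for i in censored:
        # risk set: observed event times strictly beyond this censoring time
        risk_set = [times[j] for j, d in enumerate(events)
                    if d == 1 and times[j] > times[i]]
        if risk_set:  # assumption: leave the time unchanged if the set is empty
            imputed[i] = rng.choice(risk_set)
    return imputed
```

Averaging Kaplan-Meier estimates over many such imputed datasets recovers the Kaplan-Meier estimator on the original censored data, as the text notes.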
This imputation technique does not use the covariate data, which, if modeled jointly with survival times, can improve the accuracy of survival time predictions.\n\nConditional survival estimates are more informative for individual survival time predictions. Unbiased conditional survival estimation, i.e., E{S^RSI(t;x)}=S^KM(t;x), ensures unbiased population-averaged survival curve estimation, E{S^RSI(t)}=S^KM(t)=EX[S^KM(t;x)], while the reverse does not hold. Given a covariate-specific survival distribution estimate, Pr^(Ti>t|Xi), ∀t > 0, it remains open as to how to predict an individual’s exact survival time (Ti). Our method approaches this problem from another perspective by directly modeling survival times.\n\nIn this paper, we propose a new method for exact survival time prediction that relies on strategically imputing censored times and, then, building an ensemble prediction model based on the “complete” dataset. In so doing, we are able to exploit the predictive power of many state-of-the-art regression technologies. This imputation algorithm first multiply imputes censored survival times in order to construct a complete dataset without using covariates. Then, the algorithm iterates between 1) predicting the completed survival times using covariates and 2) adjusting the imputed values.\n\nIn the following, we first describe the data for training, testing, and validating our proposed survival time prediction model and, then, summarize the statistical methods that we used to construct the ensemble model. We conclude by discussing potential directions for future research and further improvements.\n\n\n2 Data\n\nData from the control arm of four phase III clinical trials of experimental therapies for mCRPC were made available to participants in the Prostate Cancer DREAM Challenge. The trials are ASCENT-2 (conducted by Memorial Sloan Kettering Cancer Center)15, VENICE (Sanofi)16, MAINSAIL (Celgene)17, and ENTHUSE-33 (AstraZeneca)18. 
Training data include survival outcomes (time of death or censored survival time) and 131 clinical covariates from the ASCENT-2, MAINSAIL, and VENICE trials. Only covariate data were available for the ENTHUSE-33 trial; survival outcomes were blinded for scoring. Clinical covariates included patient demographics, vital signs, lab results, medical history, medication use, and tumor measurements.\n\n2.1.1 Data consolidation: A primary dataset, referred to here as the “CoreTable”, was provided by the DREAM Challenge organizers and summarized many relevant baseline covariates at the patient level. An additional five raw datasets containing more detailed baseline and follow-up data were also provided. We summarized additional baseline information from these secondary tables to augment the CoreTable. For example, medications were grouped according to drug type or use, including opioid analgesics, anti-depressants, and vitamin supplements. Tumor data were also summarized across disease sites, including the number, average size, and maximum size of lesions. Continuous lab values were log-transformed; non-transformed values were also kept in the data. Covariate data from secondary tables that duplicated or were highly correlated with existing variables in the CoreTable were excluded from the analysis.\n\nThe resulting dataset had 2070 observations and 256 covariates, among which 78 covariates were continuous variables and 177 were categorical variables.\n\n2.1.2 Splitting data for 10-fold cross-validation: In order to maintain consistent groupings for cross-validation, we evenly split the training data into ten groups by randomly generating a uniform 10-fold index for each observation. As a result, we were able to maintain the same hold-out datasets as we employed different prediction methods. 
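Generating a reproducible uniform 10-fold index amounts to one seeded random draw per observation. A minimal Python sketch (the analysis itself was done in R; the seed value here is arbitrary):

```python
import random

def make_fold_index(n_obs, n_folds=10, seed=0):
    """Assign each observation an independent uniform fold label,
    reproducibly given the seed."""
    rng = random.Random(seed)
    return [rng.randrange(n_folds) for _ in range(n_obs)]
```

Because the index depends only on the seed, every prediction method can be evaluated against identical hold-out groups.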
When generating the random 10-fold index, we set a random number generation seed for reproducibility (DOI: 10.7303/syn4732982).\n\n2.1.3 Multiply imputing covariates: Missingness was common in the combined dataset. Figure 1 shows the missingness patterns for covariates (columns) within each study (row block). As suggested by the heat map, the missingness is largely study-dependent and likely due to differences in study protocol and data collection procedures.\n\nDarker color indicates a higher missing value percentage.\n\nThe ten continuous covariates with the most missingness are listed in Table 1. Since a considerable proportion of categorical variables were created by categorizing continuous variables (e.g., labeling lab values as low, normal, or high), these categorical variables have a rate of missingness similar to their continuous counterparts. Other categorical covariates with a large proportion of missingness include a categorization of baseline weight and height (77.6% and 77.3% missing, respectively) and an indicator for a history of smoking (77.1% missing).\n\nMissing covariate data for the combined dataset were then imputed using multiple imputation13 via the fastpmm function in the R package mice (R 3.2.1). Multiple imputation was performed using covariate data from both training (ASCENT-2, VENICE, MAINSAIL) and validation (ENTHUSE-33) studies and was repeated to obtain five completed datasets.\n\n2.1.4 Covariate standardization: We standardized continuous data by applying the Box-Cox transformation (with power parameter 0.2) to all continuous covariates, followed by mean-variance standardization.\n\n2.1.5 Survival summaries: Figure [X of the main paper] shows the Kaplan-Meier estimates of survival curves along with the 95% confidence band for each of the three studies in the DREAM Challenge training data. 
The three studies have similar survival curves up to 17 months from baseline.\n\n\n3 Methods\n\nIn this section, we describe an iterative imputation procedure that can be used in tandem with ensemble learning methods to predict survival times given possibly many covariates. This method constitutes our winning algorithm for the Prostate Cancer DREAM Challenge’s sub-challenge 1b for predicting exact survival times. Throughout our presentation, we use the integrated area under the curve (iAUC) to evaluate predictive accuracy and select optimal values for tuning parameters19.\n\nLet (Yi,Δi) be the pair of observed or censored survival times and the censoring indicator for patient i = 1, … ,N. Δi = 1 if Yi is the observed survival time and 0 if censored. Let Xi be the vector of covariates. We describe our prediction algorithm below in three steps.\n\nI. Initial survival time imputation without covariates\n\nFor individuals with censored survival times, I0 = {i | Δi = 0}, add independent exponential random numbers to the right-censored survival times, i.e., Yi,new(0)=Yi+Ei, where Ei ~ Exp(α), for i ∈ I0. For individuals with observed survival times, no imputation is necessary; keep the observed Yi.\n\nNote that α is a tuning parameter for this initial step (as well as throughout the prediction algorithm). We select a value for α with a grid search that seeks to maximize the 10-fold cross-validated iAUC. In the initial imputation step, the value of α is set to be study-specific but constant across covariates within a study (given exploratory analysis showing heterogeneity across trials). As a result, the values of α chosen are: 400 (ASCENT-2), 420 (MAINSAIL), and 460 (VENICE).\n\nII. Adjust imputed survival times using covariates\n\nWe then use covariates to build a predictive model for the completed survival times. 
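The initial imputation in step I, together with the adjustment rule applied to censored times in step IIb, can be sketched as follows. This is an illustrative Python version (the original implementation was in R); it treats α as the mean of the exponential distribution, which is an assumption since the parameterization (rate versus mean) is not stated:

```python
import random

def initial_impute(times, events, alpha, seed=0):
    """Step I: add Exp(alpha)-distributed noise to censored times (event == 0).
    alpha is treated as the exponential mean (an assumption)."""
    rng = random.Random(seed)
    return [t if d == 1 else t + rng.expovariate(1.0 / alpha)
            for t, d in zip(times, events)]

def adjust_imputed(times, events, predicted, alpha_star=80, seed=0):
    """Step IIb: accept the model's averaged prediction when it exceeds the
    censoring time; otherwise re-impute as censoring time plus fresh
    Exp(alpha*) noise."""
    rng = random.Random(seed)
    out = []
    for t, d, p in zip(times, events, predicted):
        if d == 1:
            out.append(t)      # observed event times are never altered
        elif p > t:
            out.append(p)      # adjusted prediction beyond the censoring time
        else:
            out.append(t + rng.expovariate(1.0 / alpha_star))
    return out
```

Alternating model fitting with `adjust_imputed` keeps every imputed survival time above its observed censoring time while pulling it toward the covariate-based prediction.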
Specifically, we iterate between two processes: training an ensemble prediction model (step IIa) and adjusting the survival times (step IIb) for iterations k = 1, … , K.\n\nIIa) Select features and train prediction models:\n\nFeature selection Feature selection proceeds using the following three models to identify salient predictors of (log-transformed) survival time: regularized random forest (RRF) with two predictors sampled for splitting at each node (regularization parameter = 0.95); support vector machine (SVM) regression with radial kernel (bandwidth = 0.02, center = 0.15); and partial least squares (PLS) regression with two components.\n\nEach model returns a vector of variable importance (VI), which is calculated by the R package caret and lies within the range of 0 – 100. VI vectors are averaged across the three models to obtain a mean VI vector. We then choose \"important variables\", which we define as those with a final VI greater than the tuning parameter γ = 24 (chosen to maximize cross-validated iAUC). Covariates with the highest VI are discussed in section 4.4\n\nEnsemble model training and predicting Using selected features, we train five prediction models (listed in Table 2). Tuning parameters for each model were chosen by 10-fold cross-validation to maximize iAUC.\n\nTrained prediction models are then used to obtain out-of-sample predictions for survival time. In the case of 10-fold cross-validation, covariate and outcome data on 90% of patients are used for training prediction models, which then, in turn, provide out-of-sample survival time predictions for the remaining 10% of patients.\n\nIIb) Adjust imputed survival times:\n\nFor each censored individual (Δi = 0), predicted survival times from each prediction model (Table 2) are averaged to obtain Yi,adj(k), where k is the iteration number for step II. We adjust predicted survival times as follows: Yi,new(k)=Yi,adj(k) if Yi,adj(k)>Yi; otherwise, Yi,new(k)=Yi+Ei(k), where Ei(k)∼Exp(α*=80). 
(Here, α* is a tuning parameter whose value is determined by a grid search to maximize the 10-fold cross-validated iAUC.) This adjustment serves to increase under-estimated imputed values to a random quantity greater than the observed censoring time.\n\nUsing these imputed survival times, ensemble survival time prediction (IIa) is repeated. The training and adjustment process is repeated until the incremental increase in cross-validated iAUC is smaller than a pre-set threshold. In our application, we used a relatively large threshold (0.2) to avoid over-fitting, and the algorithm converges after just three iterations.\n\nMore generally, steps IIa and IIb are repeated several times, say K, in order to obtain the adjusted survival imputations {Yi,new(K),i∈I0} produced by the last iteration. We combine these values with the observed (uncensored) survival times and use them as the complete outcome vector for constructing a final prediction model.\n\nIII. Final predictions for patients in the validation dataset\n\nIndividual model. We trained five prediction models (Table 2) using log-transformed Yi,new(K) and Box-Cox transformed features selected in the final (Kth) iteration of step IIa above. We chose tuning parameters in order to maximize 10-fold cross-validated iAUC; tuned parameter values are listed in Table 2. In this application, we used the same five modeling approaches for both the imputation and prediction steps, though using the same models is not necessary.\n\nSuper learner. Because we have I = 5 multiply-imputed covariate datasets (see Data cleaning), the prediction procedure described above can be used to produce distinct sets of survival time predictions for all combinations of I = 5 datasets and M = 5 survival time prediction models. For each prediction model, we average the resulting out-of-sample (10-fold) predictions for each of the I = 5 imputed datasets. 
Finally, we fit a LASSO regression model with log-transformed survival time as the outcome to determine the optimal weights for combining predicted survival times from the M = 5 models. The final output is a predicted survival time based on patient covariate data.\n\nThis algorithm is summarized in Figure 2.\n\n\n4 Results\n\nFigure 3 displays Kaplan-Meier (KM) estimates for observed survival data and several stages of survival time prediction. The black curve shows the KM estimate for the observed survival data assuming independent censoring. The density function for censoring is given by the dashed black line and indicates that most censoring occurred between six and 20 months.\n\nThe red, green, and blue curves show the KM estimates for survival predictions after initial random imputation (step I) and k = 1 and k = 2 iterations of the covariate-based, adjusted survival time predictions (step II), respectively. All imputed survival time curves closely track the observed survival curve until 16 months follow-up, at which time survival decreases more rapidly than expected under the assumption of independent censoring.\n\nWe note that it is possible for survival estimates after initial imputation (red curve) to lie above or below the observed KM curve (black) depending on larger or smaller choices of α, respectively. Here, we see that cross-validation favors larger values of α suggesting that censored individuals likely experience shorter-than-average survival after censoring. Survival estimates of model-based predictions (green and blue curves) also suggest that patients censored earlier are expected to have an event around 13–23 months. 
The green and the blue curves are very similar, indicating that the imputation algorithm converges very quickly.\n\nThe left hand panel of Figure 4 shows a plot of the observed times against the out-of-sample predicted times Yi,adj(1) made in the first predictive iteration (k=1) in step IIa, prior to adjusting predictions in step IIb. Under the proposed imputation algorithm, we keep a patient’s survival time (Yi,new = Yi) if an event was observed and censoring did not occur (Δi = 1); for patients with censored survival times (Δi = 0), we impute the survival time as Yi,new = Yi + Ei whenever Yi,adj < Yi. The right hand plot shows that, after multiple iterations of this algorithm (k=3), the final imputed values show greater risk stratification for censored patients (blue circles). Because we use observed event times instead of predicted event times for uncensored patients (red diamonds), these observations lie directly on the line of equality (black dashed line).\n\nThe left panel in Figure 4 also indicates regression to the mean, i.e., the initial imputations tend to overestimate earlier survival times (Yi < 16 months) and underestimate later survival times (Yi > 16 months), resulting in a horizontal cloud of points. Our imputation algorithm deals with the underestimation at later survival times by forcing the imputed times to be larger than the observed censoring time, i.e., by the Yi,new = Yi + Ei step. On the other hand, overestimation at earlier survival times is controlled by tuning the rate parameters of the exponential distributions (α,α*) in steps I and IIb. 
The right panel of Figure 4 shows that the patients with earlier censoring times (circles toward the lower left) have larger differences between the imputed survival time and the observed censoring time (y − x) in comparison to patients who survive longer (circles toward the upper right).\n\nAlthough iAUC was used for evaluating the prediction performance in the training stage to make better use of the censored data, root mean squared error (RMSE) based on uncensored observations is used as the scoring metric for survival time prediction accuracy. Based on the training set, the 10-fold cross-validated RMSE of our ensemble predictive model is 246.5. (In the following section, we compare our method with other benchmarks with respect to the cross-validated RMSE using the same data-splitting index.)\n\nIn the final scoring round of the DREAM Challenge, our model was trained on the entire training set and then tested on the validation dataset from an independent clinical trial (ENTHUSE-33). Our final ensemble predictive model yielded an RMSE of 198.1 and was one of the top performing algorithms. Our predictions ranked sixth overall in accuracy and were not significantly different from the most accurate survival time predictions (Bayes factor < 3 compared with the top-ranked model) [placeholder for main challenge paper].\n\nWe also compared the RMSE of the proposed method to that of an off-the-shelf method: survival random forest (SRF). Ishwaran et al. (2008)8 proposed a popular SRF method which outputs ensemble cumulative hazard function predictions Λ^(t|xi), enabling one to specify the survival function S^(t|xi)=exp{−Λ^(t|xi)} for subject i = 1, … , n at time t. We predicted the exact survival time using the q% quantile of the estimated survival curve, with q common to all subjects and selected by 10-fold cross-validation to minimize RMSE. Survival random forest is distinguished from the usual random forest methods by the criterion for choosing and splitting a node. 
In our implementation, we used a log-rank splitting rule that splits nodes by maximizing the log-rank test statistic10,20. We increased the speed of training using a randomized log-rank splitting rule, meaning that, at each splitting step of growing a tree, we randomly select a small number of candidate split points for each covariate and choose the covariate and split point pair that maximize the log-rank statistic. This randomized scheme is recommended to avoid overly favoring splits on continuous covariates when both continuous and categorical variables exist.\n\nWe generated 1,000 bootstrap samples from the original training data (compiled and completed as detailed in section 2). We grew one survival tree for each bootstrap sample. The survival random forest produces the final ensemble survival function prediction by averaging over predictions obtained from these trees. To split a node in each tree, we tried a maximum of 10 random splits to determine which variable to split on and where to split it. Averaged over the five imputed datasets, we obtained a 10-fold cross-validated RMSE of 344.8 with q = 37%. Thus, our proposed algorithm performed considerably better (RMSE = 246.5).\n\nVia ensemble prediction modeling, we also identified the most salient predictors of survival time in this population. The strongest predictors of survival time included lab values indicating cancer activity and other measures of overall health. For example, alkaline phosphatase (ALP), the most predictive covariate, is typically elevated in individuals with metastatic disease. ALP was included as a covariate in the Halabi et al. (2014) benchmark model21. Other lab measurements in the benchmark model – lactate dehydrogenase (LDH), hemoglobin (HB), prostate specific antigen (PSA), and albumin (ALB) – were also among the most predictive covariates in our model. 
The Eastern Cooperative Oncology Group (ECOG) performance status (a standard measure of daily living abilities) and use of opiate medication were also included in the Halabi et al. (2014) nomogram and were found to be highly predictive of survival in our approach. Disease site, the remaining predictor in the benchmark model, was not among the strongest predictors of survival in our model.\n\n\n5 Discussion\n\nIn this paper, we have introduced a survival time prediction method based on multiple imputation and ensemble learning. It is designed for right-censored survival data with many covariates. The proposed method operates by iterating through two stages: iterative imputation of right-censored outcomes and building an ensemble predictive model of survival time. Compared to the existing methods for survival time prediction, the second phase of this algorithm is particularly effective in leveraging covariates to guide imputation of the censored survival times. By imputation, we have transformed the difficult problem of time-to-event prediction with censoring to a standard predictive regression problem. The results of the Prostate Cancer DREAM Challenge 1b have empirically validated the predictive performance of our algorithm. Further research is needed to explore theoretical characteristics of the proposed algorithm. Conceptually, the iterative imputation algorithm achieves strong predictive performance by first generating model-based imputations (which makes use of the covariate information) and, then, correcting survival time predictions based on observed outcomes.\n\nFor future work, we will compare our method with other methods such as risk set imputation (RSI)14 and recursively imputed survival trees (RIST)22 using more extensive simulation studies. We will also seek to establish the MSE optimality behind this algorithm and further improve its imputation and prediction performance. 
In particular, we will further study the impact of the initialization strategy in step I on the final predictive accuracy to explore whether using model-based initialization (such as RIST) performs better than the current cross-validation-based random initialization. Finally, obtaining reliable confidence intervals around predicted survival times is also crucial for this method to be more clinically useful.\n\n\n5.1 Data availability\n\nThe Challenge datasets can be accessed at: https://www.projectdatasphere.org/projectdatasphere/html/pcdc\n\nChallenge documentation, including the detailed description of the Challenge design, overall results, scoring scripts, and the clinical trials data dictionary, can be found at: https://www.synapse.org/ProstateCancerChallenge\n\nThe code and documentation underlying the method presented in this paper can be found at: http://dx.doi.org/10.7303/syn473298231",
"appendix": "Author contributions\n\n\n\nYC, DD, YD, ZJ, KR, ZW, YZ conceived the study. YC, DD, YD, ZJ, ZW, YZ cleaned the datasets. DD established and implemented the prediction algorithm. YC incorporated the algorithm into the super learner for challenge 1a. YC, DD, YD compiled the code for final submission. YC, DD, YD, ZJ, KR, ZW, YZ wrote the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nThis publication is based on research using information obtained from www.projectdatasphere.org, which is maintained by Project Data Sphere, LLC. Neither Project Data Sphere, LLC nor the owner(s) of any information from the web site have contributed to, approved or are in any way responsible for the contents of this publication.\n\nWe thank Sage Bionetworks, the DREAM organization, and Project Data Sphere for developing and supplying data for the Challenge.\n\nWe thank John Muschelli for helpful discussions on super learner methodology. We also thank Scott Zeger and the Patrick C. Walsh Cancer Research Fund for supporting our team’s work.\n\n\nReferences\n\nwww.synapse.org: DREAM9.5 - Prostate Cancer DREAM Challenge. 2015; [Online; accessed 29-January-2016]. Publisher Full Text\n\nCox DR: Regression models and life-tables. In Breakthroughs in Statistics. 1992; 527–541. Publisher Full Text\n\nCox DR: Partial likelihood. Biometrika. 1975; 62(2): 269–276. Publisher Full Text\n\nFaraggi D, Simon R: A neural network model for survival data. Stat Med. 1995; 14(1): 73–82. PubMed Abstract | Publisher Full Text\n\nRipley RM, Harris AL, Tarassenko L: Non-linear survival analysis using neural networks. Stat Med. 2004; 23(5): 825–842. PubMed Abstract | Publisher Full Text\n\nRidgeway G: The state of boosting. Computing Science and Statistics. 1999; 31: 172–181. 
Reference Source\n\nShivaswamy PK, Chu W, Jansche M: A support vector approach to censored targets. In Seventh IEEE International Conference on Data Mining (ICDM 2007). IEEE, 2007; 655–660. Publisher Full Text\n\nIshwaran H, Kogalur UB, Blackstone EH, et al.: Random survival forests. Ann Appl Stat. 2008; 2(3): 841–860. Publisher Full Text\n\nHothorn T, Bühlmann P, Dudoit S, et al.: Survival ensembles. Biostatistics. 2006; 7(3): 355–373. PubMed Abstract | Publisher Full Text\n\nSegal MR: Regression trees for censored data. Biometrics. 1988; 44(1): 35–47. Publisher Full Text\n\nHarrell FE, Califf RM, Pryor DB, et al.: Evaluating the yield of medical tests. JAMA. 1982; 247(18): 2543–2546. PubMed Abstract | Publisher Full Text\n\nChen Y, Jia Z, Mercola D, et al.: A gradient boosting algorithm for survival analysis via direct optimization of concordance index. Comput Math Methods Med. 2013; 2013: 873595. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRubin DB: Multiple Imputation for Nonresponse in Surveys. John Wiley & Sons. 2004; 81. Reference Source\n\nTaylor JMG, Murray S, Hsu CH: Survival estimation and testing via multiple imputation. Stat Probab Lett. 2002; 58(3): 221–232. Publisher Full Text\n\nScher HI, Jia X, Chi K, et al.: Randomized, open-label phase III trial of docetaxel plus high-dose calcitriol versus docetaxel plus prednisone for patients with castration-resistant prostate cancer. J Clin Oncol. 2011; 29(16): 2191–2198. PubMed Abstract | Publisher Full Text\n\nTannock IF, Fizazi K, Ivanov S, et al.: Aflibercept versus placebo in combination with docetaxel and prednisone for treatment of men with metastatic castration-resistant prostate cancer (VENICE): a phase 3, double-blind randomised trial. Lancet Oncol. 2013; 14(8): 760–768. 
PubMed Abstract | Publisher Full Text\n\nPetrylak DP, Vogelzang NJ, Budnik N, et al.: Docetaxel and prednisone with or without lenalidomide in chemotherapy-naive patients with metastatic castration-resistant prostate cancer (MAINSAIL): a randomised, double-blind, placebo-controlled phase 3 trial. Lancet Oncol. 2015; 16(4): 417–425. PubMed Abstract | Publisher Full Text\n\nFizazi K, Higano CS, Nelson JB, et al.: Phase III, randomized, placebo-controlled study of docetaxel in combination with zibotentan in patients with metastatic castration-resistant prostate cancer. J Clin Oncol. 2013; 31(14): 1740–1747. PubMed Abstract | Publisher Full Text\n\nHeagerty PJ, Zheng Y: Survival model predictive accuracy and ROC curves. Biometrics. 2005; 61(1): 92–105. PubMed Abstract | Publisher Full Text\n\nLeBlanc M, Crowley J: Survival trees by goodness of split. J Am Stat Assoc. 1993; 88(422): 457–467. Publisher Full Text\n\nHalabi S, Lin CY, Kelly WK, et al.: Updated prognostic model for predicting overall survival in first-line chemotherapy for patients with metastatic castration-resistant prostate cancer. J Clin Oncol. 2014; 32(7): 671–677. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZhu R, Kosorok MR: Recursively imputed survival trees. J Am Stat Assoc. 2012; 107(497): 331–340. PubMed Abstract | Publisher Full Text | Free Full Text\n\nvan der Laan MJ, Polley EC, Hubbard AE: Super learner. Stat Appl Genet Mol Biol. 2007; 6(1): Article25. PubMed Abstract | Publisher Full Text\n\nHung H, Chiang CT: Estimation methods for time-dependent AUC models with survival data. Can J Stat. 2010; 38(1): 8–26. Publisher Full Text\n\nBair E, Hastie T, Paul D, et al.: Prediction by supervised principal components. J Am Stat Assoc. 2006; 101(473): 119–137. Publisher Full Text\n\nBinder H, Schumacher M: Allowing for mandatory covariates in boosting estimation of sparse high-dimensional survival models. BMC Bioinformatics. 2008; 9(1): 14. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nFriedman J, Hastie T, Tibshirani R: The Elements of Statistical Learning. Volume 1, Springer Series in Statistics. Berlin: Springer, 2001. Reference Source\n\nRipley BD: Pattern Recognition and Neural Networks. Cambridge University Press. 1996. Publisher Full Text\n\nKumar N, Andreou AG: Heteroscedastic discriminant analysis and reduced-rank HMMs for improved speech recognition. Speech Communication. 1998; 26(4): 283–297. Publisher Full Text\n\nhttps://jhpce.jhu.edu/: Joint HPC Exchange. 2016; [Online; accessed 14-February-2016]. Reference Source\n\nColey Y: Bmore Dream Team Files. Synapse Storage. 2016. Data Source"
}
|
[
{
"id": "17699",
"date": "07 Dec 2016",
"name": "Devin C. Koestler",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved - The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations - A number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit\n\nNot approved - Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn the manuscript entitled, “Predicting survival time for metastatic castration resistant prostate cancer: an iterative imputation approach”, Deng and colleagues describe a generalizable algorithm for iteratively imputing event-times for censored observations and apply their methodology to data collected as part of the Prostate Cancer DREAM Challenge. The approach itself is very interesting, and its application within an ensemble-based framework as a means toward informing survival predictions is quite creative. The Introduction provides a nice appraisal of existing methodologies and their limitations, and in the opinion of this reviewer, adequately motivates the methodology being proposed. Overall, the manuscript is well written and likely to be of interest to the prediction and machine learning communities. Some suggestions for improvement are given in the space that follows:\n\nMajor comments:\nAugment the Results section with a table or figure that captures the results generated in the training phase of the authors' algorithm, i.e., a scatterplot of observed versus predicted survival time based on the 10-fold cross-validation procedure or a Bland-Altman plot. It would also be useful to know what features were selected to build the final prediction model that was applied to the validation data set. 
Lastly, what were the optimal weights for combining the predicted survival times from the M = 5 models?\n\nMinor comments:\nAbstract - “…a recent crowd-sourced competition focused on risk and survival time predictions for patients with…”. I would be careful about the use of the term “risk” here since the competition did not consist of predicting one’s risk of mCRPC, but rather “risk of early treatment discontinuation”.\n\nAbstract – “We are interested in using a patient’s covariates to predict his or her time until death after initiating standard therapy”. I would recommend removing “her” since the study population is men diagnosed with mCRPC. Alternatively, you can just replace “his or her” with “their”.\n\nIntroduction – “Many state-of-the-art statistical and machine learning tools cannot be directly applied to censored data while most standard methodologies that do allow for censoring assume independence between censoring and survival time; this assumption is frequently inappropriate”. It would be helpful to include reference(s) to support the statement that the assumption of independence of censoring and survival time is inappropriate. In addition, describing the potential inappropriateness of this assumption (and its consequences) in the context of the data set(s) considered here would help further reinforce this point.\n\nResults – “Our predictions ranked sixth overall in accuracy and were not significantly different from the most accurate survival time predictions (Bayes factor > 3)”. My suggestion would be to replace the last part of this sentence with, “…not significantly different from the model that achieved the most accurate survival time predictions (Bayes Factor < 3 compared to the top-ranked model in this subchallenge). 
Data 2.1.5 Survival Summaries – Might be helpful if you could briefly summarize the censoring rates and median survival times across the 4 clinical trial data sets.\n\nMethods – “We then use covariates to build a predictive model for the completed survival times”. I am struggling with the term “completed” here. Do you mean the “imputed” survival times? Perhaps a better way to say this is, “We then used covariates to build a prediction model using the imputed survival times for censored subjects”.\n\nMethods IIb) Adjust imputed survival times – For the purposes of clarity it would be helpful to denote the predicted survival times with hat notation.\n\nResults – What are the units for the RMSE? Days? In other words, the average difference between observed and predicted survival time based on your methodology was 198.1 days (in the independent ENTHUSE 33 data set)?",
"responses": []
},
{
"id": "18892",
"date": "09 Jan 2017",
"name": "Ruoqing Zhu",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe paper is nicely written, and the method is clearly described.\n\nI have only one comment regarding the initial imputation step: why was an exponential distribution chosen? Does that affect the results? Can the authors provide a brief discussion on this choice? In the literature, both RSI and RIST use a model-based imputation value.",
"responses": []
},
{
"id": "19040",
"date": "20 Feb 2017",
"name": "C Jason Liang",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThis article provides a clear summary of how the team's prognostic model was created. Biomedical prognostic models are frequently built with survival data, but in practice often do not fully address or utilize the complexity of the data (e.g. dichotomizing the time-to-event outcome out of convenience rather than scientific motivation), so it was encouraging to read about a method thoughtfully developed for survival data, and a competition that embraces performance measures tailored for survival data. We have no major comments but will provide some minor comments for the authors' consideration.\nOther methods The authors' method involves an initial iterative imputation method that is attractive in that it opens up a richer suite of continuous outcome models for use with the completed data. However, as the authors mention in their discussion, the theoretical aspects of this imputation procedure are unclear. To that end, we are curious if the authors considered existing methods that try to formally account for censoring while retaining loss functions that reduce to \"typical\" loss functions when censoring is absent (e.g. mean-squared error). For example, Steingrimsson et al (2016)1 and Molinaro et al (2004)2 study random forest models with loss functions that 1) can accommodate censored outcomes; and 2) reduce to squared error loss when censoring is absent.\nCovariate imputation The authors provided a useful graphic summarizing the covariate missingness. 
Given that there was a nontrivial amount of missingness, a sensitivity analysis might be helpful to ensure that the results are not qualitatively different when perturbing certain aspects of the imputation procedure. Alternatively, for each of the most salient predictors, examining what proportion of observations is missing for that variable may also be useful.\nRequests for clarification\nThe authors mentioned that three datasets were used for training (ASCENT-2, MAINSAIL, and VENICE) and a fourth dataset was used for scoring (ENTHUSE-33). However, in the \"data cleaning\" section and the \"super learner\" section there is also reference to five different datasets. It was unclear how to reconcile the two descriptions. In Step I of the Methods section, how do you calculate iAUC in the initial cross-validation step to determine \\alpha, where no covariates are used? Don't you need a score - presumably derived from the covariates - to calculate iAUC?\n\nChoice of iAUC and other performance measures by the DREAM challenge organizers We commend the competition organizers for embracing prognostic performance measures that are specifically tailored for survival data, such as the concordance index and cumulative AUC. However, we are puzzled by the decision to use iAUC as the primary performance measure.\niAUC is not a standard performance measure and, to our knowledge, is not documented in the literature. While this would not necessarily preclude iAUC from being used as the primary performance measure, it would be helpful to understand the justification for its choice. There does not appear to be an immediately obvious interpretation for iAUC. According to the DREAM website, iAUC is the average of the different cumulative AUC values over all times t. 
While the cumulative AUC for a single t is easily interpretable, it is unclear what the interpretive value of iAUC is.\nNote that Heagerty and Zheng (2005; Section 2.2.1)3 state: \"[Cumulative AUC is] most appropriate when a specific time t' (or a small collection of times t'_1, t'_2, ..., t'_m) is important and scientific interest lies in discriminating between subjects who die prior to a given time t' and those that survive beyond t'.\"\nIf one were unable to choose a specific time or small collection of times, the concordance index offers a reasonable \"global in time\" alternative. Incidentally, Heagerty and Zheng (2005; Section 2.4)3 note that when the incident/dynamic AUC (related to but different than the cumulative AUC) is averaged over time (and subject to a specific time-weighting), the result is the concordance index.\nTypos\nSection 3, Paragraph 1, Sentence 2: \"winning\" is misspelled. Section 4.4, Sentence 4: \"...included as a covariate...\" Section 4.2, last sentence: \"...overall in accuracy and were not...\"",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2672
|
https://f1000research.com/articles/5-2671/v1
|
16 Nov 16
|
{
"type": "Method Article",
"title": "Three general concepts to improve risk prediction: good data, wisdom of the crowd, recalibration",
"authors": [
"Ivan Kondofersky",
"Michael Laimighofer",
"Christoph Kurz",
"Norbert Krautenbacher",
"Julia F. Söllner",
"Philip Dargatz",
"Hagen Scherb",
"Donna P. Ankerst",
"Christiane Fuchs",
"Ivan Kondofersky",
"Michael Laimighofer",
"Christoph Kurz",
"Norbert Krautenbacher",
"Julia F. Söllner",
"Philip Dargatz",
"Hagen Scherb",
"Donna P. Ankerst"
],
"abstract": "In today's information age, the necessary means exist for clinical risk prediction to capitalize on a multitude of data sources, increasing the potential for greater accuracy and improved patient care. Towards this objective, the Prostate Cancer DREAM Challenge posted comprehensive information from three clinical trials recording survival for patients with metastatic castration-resistant prostate cancer treated with first-line docetaxel. A subset of an independent clinical trial was used for interim evaluation of model submissions, providing critical feedback to participating teams for tailoring their models to the desired target. Final submitted models were evaluated and ranked on the independent clinical trial. Our team, called \"A Bavarian Dream\", utilized many of the common statistical methods for data dimension reduction and summarization during the trial. Three general modeling principles emerged that were deemed helpful for building accurate risk prediction tools and ending up among the winning teams of both sub-challenges. These principles included: first, good data, encompassing the collection of important variables and imputation of missing data; second, wisdom of the crowd, extending beyond the usual model ensemble notion to the inclusion of experts on specific risk ranges; and third, recalibration, entailing transfer learning to the target source. In this study, we illustrate the application and impact of these principles applied to data from the Prostate Cancer DREAM Challenge.",
"keywords": [
"prostate cancer",
"survival prediction",
"crowdsourcing",
"community challenge",
"DREAM",
"recalibration",
"validation",
"model averaging"
],
"content": "Introduction\n\nGovernment-funded clinical and research trials are currently experiencing increased pressure to publish comprehensive anonymized data in order to maximize scientific output, ushering in new challenges and opportunities for data scientists1. In an era of personalized medicine, scientists analyzing the results of large population-based clinical and prevention trials are further encouraged to translate results to clinical practice. With the patient as the consumer, this push has led to an explosion of easy-to-use online clinical risk prediction tools for nearly all types of clinical outcomes2,3. In the past, single-study prediction models dominated out of convenience. In the current climate, multiple studies are available that can be combined, increasing accuracy through the wisdom-of-the-crowd philosophy, and providing more realistic estimates of variability for decision-making. Ensembles or collections of models have been shown to outperform top-nominated models4.\n\nFollowing efforts by Project Data Sphere to coordinate the release of comparative arm data from multiple pharmaceutical companies and academic medical centers, and in cooperation with the Dialogue for Reverse Engineering Assessments and Methods (DREAM) initiative, the Prostate Cancer DREAM Challenge sought to facilitate the development of survival prediction models to assist patients with metastatic castration-resistant prostate cancer (mCRPC) treated with first-line docetaxel5,6. Baseline and follow-up data were available from 1600 patients who had received first-line docetaxel as part of their participation on the comparator arms of three clinical trials, which formed the training set; see Guinney et al.7 and https://www.synapse.org/ProstateCancerChallenge for a detailed description. This article focuses on the challenge of predicting overall patient survival (sub-challenge 1). 
Here, data from 157 patients from an independent trial were made available for calibration to the target, and the final model based on the training and calibration data was validated on 313 patients from the target. An open online competition format with multiple deadlines attracted researchers from around the world, encouraging efficiency and fast-paced targeted research towards a common goal of optimizing predictive accuracy of a tool on an external test set.\n\nThere is no uniform prescription for building a universally optimal risk prediction tool. In the past, researchers often focused on a small set of standard risk factors for data cleaning and inclusion in their models, either for statistical reasons or on grounds of content; see Kattan et al.8 for the American Joint Committee on Cancer (AJCC)’s criteria for a prognostic model. The ever more commonly performed indiscriminate data-dumps from multiple clinical trials bring forth additional challenges of signal discovery, data cleaning, and missing data adjustment. Today’s data scientist has to decide which datasets to use for training the models versus which to hold out for testing, as well as how to use initial information from the target population to fine-tune the model. The Prostate Cancer DREAM Challenge provided participating teams with hands-on experience in these critical areas. Through our participation in the challenge, we experimented with hundreds of models and options for data inclusion and missing-value adjustment. By the end of the process, three general principles stood out that proved crucial to success: good data, wisdom of the crowd, and recalibration. Herein, we illustrate these principles and quantify their impact.\n\n\nPreliminaries\n\nThe goal of sub-challenge 1 was to develop a survival prediction model using data from three different clinical trials, which was to be validated on data from a fourth independent trial. 
Random subsets of data from the fourth validation trial were provided at multiple interim points to guide model construction. After trying several machine learning and statistical models, the combined Cox proportional hazards and lasso model was chosen as it performed optimally on the interim validation sets9. The Cox proportional hazards model specifies the mortality hazard rate for an individual with covariate vector x as:\n\nλ(t|x)=λ0(t)exp(x′β),\n\nwhere β is the vector of log hazard ratios for respective covariates comprising x, and λ0(t) is a baseline hazard function that is left unspecified, making the model semi-parametric and more flexible than fully-specified parametric survival models. The model follows proportional hazards since the ratio of hazards for an individual with a unit increase in a single covariate relative to another individual, with all other covariates fixed, equals exp(β), which is constant for all times t. The non-parametric Kaplan-Meier estimator shows the empirical distribution of the observed failure times subject to censoring. Inspection of whether the curves stratified by different covariate values remain separated across the length of follow-up can be used to informally assess whether the proportional hazards assumption holds.\n\nThe standard method for estimating β in the Cox model is based on the partial likelihood that specifies for each individual their relative probability of failure compared to other individuals at risk:\n\nL(β) = ∏r∈D [ exp(β′xr) / ∑j∈Rr exp(β′xj) ].\n\nIn this formulation, D is the group of distinct death times observed in the study, and Rr denotes the risk set of all individuals still alive and on-study at the rth death time. 
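For concreteness, the partial likelihood above can be evaluated numerically. The following is a minimal sketch (not the authors' code) that returns the negative log partial likelihood −ℓ(β), assuming no tied death times:

```python
import numpy as np

def neg_log_partial_likelihood(beta, X, times, events):
    """Cox negative log partial likelihood, assuming no tied death times."""
    X = np.asarray(X, dtype=float)          # n x p covariate matrix
    times = np.asarray(times, dtype=float)  # event/censoring times
    events = np.asarray(events, dtype=int)  # 1 = death observed, 0 = censored
    eta = X @ beta                          # linear predictors beta'x_i
    nll = 0.0
    for r in np.where(events == 1)[0]:      # each observed death r in D
        at_risk = times >= times[r]         # risk set R_r: still alive and on-study
        nll -= eta[r] - np.log(np.exp(eta[at_risk]).sum())
    return nll

# toy example: three subjects, all deaths observed, beta = 0
val = neg_log_partial_likelihood(np.zeros(1),
                                 [[1.0], [2.0], [3.0]],
                                 [1.0, 2.0, 3.0],
                                 [1, 1, 1])
```

With β = 0, each death contributes the log of its risk-set size, so the toy example returns log 3 + log 2 + log 1 = log 6.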
If multiple individuals have the same death time, modifications are needed for the likelihood, which are implemented using a choice of algorithms.\n\nInstead of finding the β that maximizes the log likelihood ℓ(β) = log L(β) itself, the lasso (least absolute shrinkage and selection operator) maximizes it subject to the constraint that Σj |βj| < s, where s is a user-selected tuning parameter. This modification heuristically keeps model dimensionality low, with unnecessary parameters shrunk to zero as necessary.\n\nSub-challenge 1 was again divided into two tasks: In sub-challenge 1a, participants were asked to predict risks of death. Sub-challenge 1b asked for the prediction of exact times until death. For evaluating the proposed prediction models on the withheld test data, two criteria were used, corresponding to sub-challenges 1a and 1b, respectively. The evaluation criterion for 1a focused on discrimination, that is how well the risk prediction model differentiated a patient about to experience mortality versus not. This criterion only compared the ranks of risk scores among groups of patients, with no further regard to accuracy in terms of actual values of risk scores. The second criterion of calibration focused on accuracy in terms of how close the exact time to event (death) was to the predicted time to event.\n\nReceiver-operator-characteristics (ROC) curves have their origin in radar technology and signal processing and remain the standard of choice for determining the discrimination capability of a diagnostic test10. They have been most widely used for evaluating prediction models for binary disease outcomes based on retrospective case-control studies. In this context, the idea is that risk prediction tools return a probability between 0 and 1 of an individual having a disease, and any value, say c, could be used as a threshold for making a yes/no decision concerning whether the person is diseased, warranting further diagnostic work-up. 
A person with predicted risk exceeding c is labeled as testing positive for disease and a person with risk less than or equal to c as negative. Given a set of diseased cases and non-diseased controls, each with a predicted risk pr, for every threshold c there exist two measures of correct prediction, one for the cases and one for the controls, respectively:\n\nsensitivity(c) = P(pr > c | case) = (number of cases with pr > c)/(number of cases),\n\nspecificity(c) = P(pr ≤ c | control) = (number of controls with pr ≤ c)/(number of controls).\n\nThe ROC curve displays the sensitivity, also termed the true positive rate, against 1−specificity, also termed the false positive rate, for all possible choices of c. The area under the ROC curve (AUC) can therefore be used as a metric for model evaluation and comparison. It may be interpreted as a concordance index, where a value of 1 (100%) represents perfect accuracy (i.e. sensitivity and specificity of 1) and a value of 0.5 corresponds to random guessing.\n\nFor extension to prediction of survival up until fixed time periods that accommodates censored observations, Heagerty et al.11 proposed time-dependent ROC curves using time-specific versions of sensitivity and specificity that were based on whether individuals still on study were alive (controls) versus not (cases) at each time t, yielding a plot of AUC values versus time t. Hung et al.12 provided non-parametric estimators for the time-dependent AUC and Blanche et al.13 provided the R package timeROC, which was used for evaluation in sub-challenge 1a. To arrive at a single measure, integrated AUCs from 6 to 30 months were calculated and referred to as iAUCs.\n\nCalibration measures the accuracy of numerical predictions, answering the question of how close estimates are to the truth. 
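For the simple binary-outcome case (ignoring censoring), the concordance interpretation of the AUC described above can be sketched with a hypothetical helper that counts, over all (case, control) pairs, how often the case receives the higher risk score:

```python
def auc_concordance(risk_cases, risk_controls):
    """AUC as a concordance index over all (case, control) pairs;
    ties in the risk scores count one half."""
    wins = sum(
        1.0 if rc > rn else 0.5 if rc == rn else 0.0
        for rc in risk_cases
        for rn in risk_controls
    )
    return wins / (len(risk_cases) * len(risk_controls))
```

Perfectly separated scores yield 1.0, identical score distributions yield 0.5, matching the interpretation given above.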
For sub-challenge 1b, which aimed at predicting the time to event (actual day of mortality), the root mean squared error (RMSE) was used:\n\nRMSE=√((1/∑i=1nDi) ∑i=1nDi(ŷi−yi)²),\n\nwhere ŷ is the vector of n predictions for all patients in the test set, y is the vector of n observed values (which equals NA in case death is not observed), Di is a binary variable equal to one if death is reported and zero otherwise, and subscripts denote individual predictions and observed values on the test set. Thus, the RMSE was only calculated on patients with observed death times on study, and ∑i=1nDi referred to the number of death event times in the test set.\n\n\nMethods\n\nFigure 1 gives an overview of the Prostate Cancer DREAM Challenge data after some cleaning (see low-cost strategy in paragraph below) but before inclusion of additional variables. There were six data tables available: one core table (the basis of Figure 1), containing baseline clinical covariates at patient level, and five longitudinal data tables, containing additional information at event level. We refer to the four trials as ASCENT-2 (Novacea, provided by Memorial Sloan Kettering Cancer Center14), VENICE (Sanofi15), MAINSAIL (Celgene16), and ENTHUSE-33 (AstraZeneca17). The majority of the variables (73.95%) in the core table were measured in all four studies. Eight variables (albumin, magnesium, sodium, total protein, phosphorus, region and presence of target and non-target lesions) were exclusive to MAINSAIL, ENTHUSE-33 and VENICE while two (red blood cells and lymphocytes) were only assessed in ENTHUSE-33 and MAINSAIL. Lactate dehydrogenase was only measured in ASCENT-2, ENTHUSE-33 and MAINSAIL but not in VENICE. The presence of neoplasms and creatinine clearance were only present in VENICE and ENTHUSE-33. Unfortunately, the interesting variable Gleason score was only reported in the ASCENT-2 study. 
With a p-value of 0.0017 it proved to be highly significant in a univariate Cox model for those patients where the variable was available, but we removed it due to its missingness in the test dataset ENTHUSE-33. The significance of other variables which were missing in at least one trial is presented in Table 1.\n\nIn this section we compare two strategies to secure as many data elements as possible: a relatively straightforward low-cost minimal adaptation approach versus a high-cost strategy that incorporates subject-matter knowledge into the procedure. The minimal adaptation approach followed recommendations typically provided in statistical packages. We excluded variables with more than 10% missing values in either the training or test set, while for variables with less than 10% missing values, we used imputation, replacing the missing values with the mean value among observations that were not missing. For the second, more intensive strategy, we performed subject-matter informed data cleaning, such as incorporating additional information from the event tables, preprocessing the data, and adding new variables such as principal components, a toxicity score and interaction effects. The extra effort for the second approach paid off by substantially increasing validation accuracy on the external test set, as shown in Table 3. Details of the second approach are provided below.\n\nHigh-cost data cleaning and preprocessing. An essential component for developing the final predictions for both sub-challenges 1a and 1b was a comprehensive interdisciplinary exploration of the data. We built a cleaned and preprocessed dataset comprising information from the provided covariate and event tables as described in this section.\n\nCleaning of core table. In a first data cleaning, we identified incomplete (e. g. more than 70% missing values in either the training data or the test data), inconsistent (e. g. 
different levels between trials for categorical data) or irrelevant (e. g. the same value for all or almost all patients) covariables in the core table and modified the datasets accordingly: We unified categories for height, weight, race and region and removed variables with either very large fractions of missing values, redundant information or hardly any variability.\n\nEvent tables. We derived baseline patient information from the event tables as follows: The PriorMed table contained information about the medication that patients received prior to their participation in the clinical trials. Categorical assignments for medications were often missing, sometimes erroneous, and categories differed between trials. Based on our clinical expertise, we assigned appropriate categories to each medication. We then introduced new variables counting, for each patient, the number of medications in each category. Studies substantially differed in the distributions of numbers of prior medications. We suspected that this was due to reporting biases. We hence scaled the new variables such that they had identical mean and variance across studies. The MedHistory table contained information about medical diagnoses that patients received prior to their participation in the clinical trials. For each patient, we counted the number of diagnoses in the various categories. We excluded categories which we assumed not to be clinically relevant for death or treatment discontinuation. We also deleted categories where diagnoses were reported for less than 2% of the training or test patients. From the LesionMeasure table, we extracted information such as the number of target and non-target lesions, counts per tissue and maximum target size. We noticed systematic differences in the numbers of reported lesions between studies. We suspected that these differences were due to different reporting behaviour rather than different patient properties. 
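A hypothetical sketch of the within-study rescaling applied above to the prior-medication counts, which suggests itself equally for such reporting-biased lesion counts, standardizes each derived count variable separately per study:

```python
import statistics

def standardize_within_study(values, study):
    """Rescale a derived count variable separately per study so that every
    study ends up with mean 0 and (population) variance 1 -- one way to
    remove study-specific reporting effects (a hypothetical sketch)."""
    out = [0.0] * len(values)
    for s in set(study):
        idx = [i for i, lab in enumerate(study) if lab == s]
        m = statistics.fmean(values[i] for i in idx)
        sd = statistics.pstdev([values[i] for i in idx])
        for i in idx:
            out[i] = (values[i] - m) / sd
    return out
```

Afterwards every study contributes the variable on a common scale, so between-study reporting levels no longer masquerade as patient differences.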
In compliance with the guidelines by Eisenhauer et al.18, we only used the five largest target lesions for covariable generation and limited the number of target lesions per tissue to two. From the VitalSign table, we used patient-specific information about pulse and blood pressure. From the LabValue table, we derived covariables with additional lab test results. Difficulties included differing units and truncated lab values.\n\nPreprocessing. A number of values appeared to be outliers in the statistical sense. However, though extreme, many of these values were not clinically impossible. To avoid discarding important information, we only removed values where hemoglobin was less than five or where the prostate specific antigen or platelet count equaled zero. For ASCENT-2, there was no event data on lesions. Hence, we set the variables for the presence of target or non-target lesions to NA (\"no information\") rather than NO (\"no lesions found\"). We log-transformed the most skewed continuous variables (prostate specific antigen, alkaline phosphatase, aspartate aminotransferase, lactate dehydrogenase and testosterone). We included selected interactions of covariables in the model, based on the results of all pairwise Cox models with two main effects and an interaction. If the coefficient of the interaction term was larger than 0.1 in absolute value, and the p-value of the coefficient was less than 0.05 after multiple testing correction, the combination was included in the list. From the final dataset, we removed variables such that afterwards all pairwise Pearson correlations were below 0.95 in absolute value.\n\nSeveral covariables were generally observed in one or several of the studies but missing for single patients. We imputed these missing values with 5-fold multivariate imputations by chained equations (MICE) using the R package mice19 with default settings, R version 3.2.1. 
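To convey the chained-equations idea behind MICE, here is a deliberately crude, two-variable, single-imputation sketch (the actual analysis used the R package mice with default settings and five imputations; `chained_imputation` and `ols` are hypothetical helpers):

```python
import statistics

def ols(x, y):
    """Least-squares fit y ≈ a + b*x; returns (a, b)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    b = sum((p - mx) * (q - my) for p, q in zip(x, y)) / sum((p - mx) ** 2 for p in x)
    return my - b * mx, b

def chained_imputation(x, y, cycles=10):
    """Two-variable sketch of chained-equations imputation: mean-initialize
    the missing entries (None), then alternately re-impute each variable
    from a regression on the other. Returns completed copies of x and y."""
    miss_x = [i for i, v in enumerate(x) if v is None]
    miss_y = [i for i, v in enumerate(y) if v is None]
    mean_x = statistics.fmean(v for v in x if v is not None)
    mean_y = statistics.fmean(v for v in y if v is not None)
    x = [mean_x if v is None else v for v in x]
    y = [mean_y if v is None else v for v in y]
    for _ in range(cycles):
        a, b = ols(y, x)          # regress x on y, re-impute missing x
        for i in miss_x:
            x[i] = a + b * y[i]
        a, b = ols(x, y)          # regress y on x, re-impute missing y
        for i in miss_y:
            y[i] = a + b * x[i]
    return x, y
```

Missing entries start at the observed mean and, over the cycles, converge towards the regression predictions from the other variable.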
This imputation approach has proven to be successful for a variety of cancer specific data20–23.\n\nNew variables. We introduced a number of additional newly-derived variables to the set already described above: First, we aimed to represent the information from the large number of newly derived covariables from the event data tables by a smaller number. To that end, we performed a principal component analysis (PCA) once on the new variables from MedHistory and once on the new variables from LesionMeasure. We included the most important principal components as additional covariables until 95% of the variance was explained. The original variables derived from the event tables remained in the dataset as well. As a second measure, we introduced a toxicity score for each patient based on lab value information. In this variable, we combined all toxicity grades which were either provided in the LabValue table or which we derived from literature research, using databases from the U.S. Department of Health and Human Services, Food and Drug Administration (http://www.fda.gov/downloads/BiologicsBloodVaccines/GuidanceComplianceRegulatoryInformation/Guidances/Vaccines/ucm091977.pdf), The International Clinical Studies Support Center (ICSSC, http://www.icssc.org/Documents/Resources/AEManual2003AppendicesFebruary_06_2003 final.pdf), and HSeT - Health Teaching Portal (http://hset.bio-med.ch/cms/Default.aspx?Page=12173). Third, as the reference method by Halabi et al.24 was successful, we included their risk score as an additional covariable.\n\nWisdom of the crowd philosophically asserts that a prediction aggregated across a group of experts will be more accurate than any single prediction; the readable book by Surowiecki provides tantalizing historical and contemporary examples25. 
Wisdom of the crowds underpins the Sage Bionetworks DREAM challenge efforts behind crowdsourcing and citizen science, the opening of challenges to large numbers of competing teams on the internet or to active members of the public, which has brought about improvements in breast cancer prognostic modeling among other efforts26,27. Wisdom of the crowds also underpins the accepted notion that ensembles of models confer better predictive accuracy than single models, are more robust than single methods, and have the added advantage of appropriately accounting for uncertainty28. The ability to test models on parts of the withheld test set influenced the choice of which models should be contained in the ensemble; one could term this supervised ensemble construction. We herein describe the approach.\n\nModel averaging. In the first concept we described our multiple imputation approach for missing values. However, we noticed that distributions of variables differed between trials. Hence, we decided to only impute within the trials, not across. In other words, values were imputed based on covariable information only from patients within the same study.\n\nOur question was then how to deal with variables that were (almost) completely missing in one entire training study but measured in other training studies. Our solution was to estimate seven different models, each taking into account a different subset of the three studies ASCENT-2, MAINSAIL and VENICE (see Figure 2). Depending on the set of studies, different covariables could be included in the model. For example, lactate dehydrogenase was completely missing in VENICE, but not in ASCENT-2 and MAINSAIL. It was hence excluded in every model based on VENICE but not otherwise. Table 2 contains two more examples. 
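The two layers of averaging just described — five imputed datasets within each submodel, then the seven study-subset submodels — reduce to a few lines (a hypothetical sketch; the model fitting itself is omitted):

```python
def combine_predictions(preds):
    """Two-stage averaging: preds[m][i][p] is submodel m's risk score for
    test patient p under imputed dataset i. Average first over the
    imputations of each submodel, then over the submodels."""
    per_model = [
        [sum(imp[p] for imp in model) / len(model) for p in range(len(model[0]))]
        for model in preds
    ]
    n_models = len(per_model)
    return [sum(pm[p] for pm in per_model) / n_models
            for p in range(len(per_model[0]))]
```

Each submodel may be fit on a different covariate subset; only its final risk-score vector for the common test patients enters the average.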
Once we had fixed the model and corresponding data, we jointly scaled the explanatory variables of all training and test studies to mean zero and variance one.\n\nRecalibration of a model encompasses any manner of change to the model using data or information from the target population. In the Prostate Cancer DREAM Challenge, patient-level data from the test set did not include the target variables. Recalibration was still possible as described in the following.\n\nHigh-risk and low-risk recalibration (sub-challenge 1a). With the averaged Cox model described in the previous section, we expected to predict the risks for \"average patients\" satisfactorily well. For high-risk or low-risk patients, however, we aimed to further improve the predictions. Hence, we adapted the scores by estimating two more models: (i) a high-risk model, where we modified the target variable DEATH (indicating whether death had been observed) such that it only counted events that happened prior to 14 months, and (ii) a low-risk model, where we only considered events that occurred after 18 months. We then recalibrated the risk scores for the following patients: (i) For patients with risk score above the median, we calculated the average between the initial prediction and the high-risk score and considered this as the new risk score, and (ii) for those patients whose risk score was below the 25th percentile, we calculated the average between the initial model and the low-risk model. In both cases, we made sure that the modifications only altered the ranks of patients within the defined ranges, i.e. above the median and below the 25th percentile with respect to the initial risk score. Figure 3 shows the old rank (x-axis) vs. the new rank (y-axis) for each patient, where a low rank means a low risk of dying.\n\nQuantile recalibration (sub-challenge 1b). As described above, we estimated a Cox model with lasso regularization. 
Based on the estimated coefficients from the training datasets, we predicted a survival curve for each of the patients in the test data. From each survival curve we derived a point estimate for the time of death as follows: A typical estimate would have been the median. However, in the training data this estimate was not optimal with respect to RMSE. We hence determined from the training data the value of α such that\n\n∑i=1nDi(Qαi−yi)²\n\nwas minimized. In this formula, n is the number of patients in the training data, Qαi denotes the α · 100%-quantile of the survival curve for patient i, yi is the observed time of death for patient i (can be any value if death is not observed), and Di = 1 is the indicator that death of patient i has been observed (otherwise Di = 0). The resulting value of α was 0.69. We hence derived the 69%-quantiles from the survival curves as final prediction as illustrated in Figure 4.\n\nValidation by calibration (sub-challenge 1b). In addition to the above calibration of times to event, we also applied the validation-by-calibration method by Van Houwelingen29 in sub-challenge 1b. This method adjusts the original predictions by rescaling them to the range of the observed outcomes using linear regression. The original method splits the training data into two subsets for model building and validation. For computational reasons, we omitted this step. Adapted to the context here, validation-by-calibration works as follows:\n\n1. Estimate a Cox-lasso model based on the three training trials (ASCENT-2, MAINSAIL, VENICE) as described above. From this model, compute survival curves and estimate the times to event for each patient in the training data. Let ŷ be the predictions for those patients where death was observed, and y be the corresponding observed times of death. From the same model, estimate the times to event for all patients in the test set (ENTHUSE-33) and denote them by ẑ.\n\n2. 
Plot y versus ŷ and decide whether a linear relationship between the two variables can be assumed. If so, proceed.\n\n3. Estimate a linear model y = β0 + β1ŷ + ε with ε ~ N(0, σ²). Let β̂0 and β̂1 be the estimated intercept and slope coefficient.\n\n4. Recalibrate the predictions for the test patients to ẑc = β̂0 + β̂1ẑ.\n\nFigure 5 illustrates the procedure on our training data.\n\n\nResults\n\nWe applied the three general concepts to the prediction problems of sub-challenges 1a and 1b. The benefit of applying each of the principles, in terms of the iAUC and RMSE in the Prostate Cancer DREAM Challenge, is quantified in Figure 7 and Figure 8, respectively. Details are given in the following.\n\nIn order to assess the gain of the elaborate data preprocessing as compared to the low-cost minimal adaptation approach, we predicted the risk of death (sub-challenge 1a) and the time to death (sub-challenge 1b) for both data preparations. Table 3 shows the respective validation measures iAUC and RMSE when a Cox model with lasso regularization is applied as described above. For sub-challenge 1b, we used median survival times from the estimated survival curves. Prediction improved substantially for the high-cost data preparation with respect to both measures: The iAUC (sub-challenge 1a) increased by more than 0.01 units from 0.7535 to 0.7642. The RMSE (sub-challenge 1b) decreased by more than 10 units from 304.79 to 292.15.\n\nWe estimated the seven models described in the model averaging section above (see also Figure 2) on the training data and obtained a risk prediction for the test data from each of these. We then took the average of the seven predictions (each of which was again an average over five imputed datasets) to arrive at a final risk score. Compared to the standard approach (no splitting into submodels), this model averaging approach yielded improvements in terms of iAUC and RMSE measures. 
This is shown in Table 4 where both the standard approach and the model averaging approach employ the improved data as described in the first concept. While the increase in iAUC is again around 0.01 units, from 0.7642 to 0.7733, the improvement of the prediction of time to event is considerably more dramatic as it decreases the RMSE by almost 30 units from 292.15 to 263.37.\n\nWe applied the three proposed recalibration techniques to our predictions for risk of death (sub-challenge 1a) and time of death (sub-challenge 1b) and validated the effects of these measures on the test data (ENTHUSE-33).\n\nSub-challenge 1a. For the risks of dying, we applied in turn the low-risk calibration only, the high-risk calibration only, and both measures simultaneously. Table 5 summarizes the results. It shows that neither the low-risk nor the high-risk calibration had a substantial effect on the prediction performance in terms of iAUC: The low-risk calibration led to a small increase of the iAUC by approximately 0.003 units from 0.7642 to 0.7668. The high-risk calibration did not improve the prediction accuracy at all, although the ranks of patients changed.\n\nSub-challenge 1b. Recalibration of times to event caused a substantial improvement in prediction accuracy. Table 6 shows RMSE values for the 69%-quantile recalibration only, for Van Houwelingen’s validation-by-calibration approach only, and for the two measures combined (i. e. first applying quantile recalibration and then the validation-by-calibration method). All recalibration approaches decreased the RMSE substantially, by as much as around 100 days, as compared to the non-calibrated predictions.\n\nThe choice of α = 0.69 for the α-quantile recalibration had resulted from the training data only. As a further post-challenge analysis, we investigated whether this was also a good choice for the test dataset. 
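Both recalibration steps for sub-challenge 1b can be sketched with hypothetical helpers: `quantile_time` reads the α·100%-quantile off a survival curve (stored as (t, S(t)) pairs with S non-increasing), `best_alpha` scans a grid of α values as in this post-challenge analysis, and `validation_by_calibration` performs the linear rescaling of the method's final steps:

```python
import statistics

def quantile_time(curve, alpha):
    """First time at which the survival curve S(t) drops to 1 - alpha or
    below; `curve` is a list of (t, S(t)) pairs with S non-increasing."""
    for t, s in curve:
        if s <= 1.0 - alpha:
            return t
    return curve[-1][0]               # never dropped: last follow-up time

def best_alpha(curves, y, d, grid):
    """Choose alpha on a grid by minimizing the squared error of the
    alpha-quantile predictions over patients with an observed death (d=1)."""
    def sse(alpha):
        return sum((quantile_time(c, alpha) - yi) ** 2
                   for c, yi, di in zip(curves, y, d) if di)
    return min(grid, key=sse)

def validation_by_calibration(y_hat, y, z_hat):
    """Fit y = b0 + b1 * y_hat by least squares on the training patients with
    observed deaths, then rescale the test predictions z_hat with that line."""
    mx, my = statistics.fmean(y_hat), statistics.fmean(y)
    b1 = sum((p - mx) * (q - my) for p, q in zip(y_hat, y)) / \
         sum((p - mx) ** 2 for p in y_hat)
    b0 = my - b1 * mx
    return [b0 + b1 * z for z in z_hat]
```

On the training data such a scan returned α = 0.69; the subsequent linear rescaling largely removes the remaining sensitivity to the choice of α.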
Figure 6 shows RMSEs for the α-quantile recalibration as well as the combination of quantile recalibration and validation-by-calibration for a grid of α values between 0.6 and 0.8. On this grid, α = 0.72 was the optimal choice when applying the quantile recalibration only, but α = 0.69 was also reasonable. When followed by validation-by-calibration, the effect of α was hardly visible anymore. This makes validation-by-calibration an appealing approach for the prediction of times to event.\n\n\nConclusion\n\nAs data, computation, and statistical methods reach new horizons for the clinical risk prediction dreamers, this study reminds us of some timeless basics we should not forget: good data, wisdom of the crowds and recalibration. In this study we translated and enhanced these principles for use in developing survival risk prediction tools based on multiple heterogeneous clinical trials with large and non-overlapping sets of covariates. The impact of individual components of our proposed strategy can be quantified by their incremental influence on the assessment criteria.\n\nThe AUC is the most widely used endpoint for measuring the discrimination capability of a biomarker or risk prediction tool. However, being defined as a comparative probability of ranks for pairs of patients, it lacks clinical relevance for the individual patient; and, being based on ranks, it lacks statistical power, which makes it insensitive (invariant to monotonic transformations) and notoriously difficult to budge30. Accordingly, Figure 7 shows small gains of 0.0105 points for improved data, 0.0091 additional points for model averaging, and 0.0004 additional points for recalibration, taking the best-performing option for each principle. 
The bottom line of implementing the three principles was an increase in the iAUC from 0.7535 to 0.7768, a minor improvement, but comparable to laudable improvements in published risk prediction tools given the robust nature of the iAUC.\n\nThe RMSE measures the accuracy of a risk prediction, in other words how close a projected risk is to what actually happened to the patient, and, being based on continuous measures of risk, has greater statistical power to detect differences due to technical improvements. Accordingly, larger gains are readily apparent in the RMSE in Figure 8, with 12.64 points for improved data, 28.78 additional points for model averaging and 99.01 additional points for recalibration, resulting in a net reduction of 111.57 points on the square-root prediction scale after implementation of all three principles. Missing a large gain such as this by hastily fitting a single model without regard to the data, without averaging and without recalibration, would have cost us the challenge. But, most importantly, skipping these time-consuming basics would result in a less accurate prognosis for the individual patient.\n\nOur data preparation included the generation of additional clinical variables. Post-challenge analyses showed that the newly introduced toxicity score was especially beneficial for good predictions in all sub-challenges, and so were the variables derived from the event data tables on lesion measures. We thus propose to generally capture such information in any clinical trials on prostate cancer. As more data become publicly available as a resource for expanding clinical risk tools, it becomes tempting to think that the art of risk prediction can be automated, eliminating the need for interdisciplinary scientists to work together. 
This study concludes that interdisciplinary subject-matter knowledge remains essential and that building optimal risk prediction tools remains as much an art as a process.\n\n\nData availability\n\nThe Challenge datasets can be accessed at: https://www.projectdatasphere.org/projectdatasphere/html/pcdc\n\nChallenge documentation, including the detailed description of the Challenge design, overall results, scoring scripts, and the clinical trials data dictionary can be found at: https://www.synapse.org/ProstateCancerChallenge\n\nThe code and documentation underlying the method presented in this paper can be found at: http://dx.doi.org/10.7303/syn559240531",
"appendix": "Author contributions\n\n\n\nCF, IK, CK and JS preprocessed the data. CF, IK, NK, CK, ML, HS and JS established first analyses. IK, NK, CK and ML performed in-depth analysis. CF, IK and ML proposed novel modeling refinements. PD assisted the team with respect to clinical questions. DPA, CF and HS advised the team with respect to statistical questions. IK was responsible for code integration. DPA and CF wrote the manuscript, with contributions from IK, NK, CK, ML and JS. All members proofread the manuscript. CF supervised and guided the work. All authors contributed to discussions and decision-making processes.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nCF and IK are supported by the European Union within the ERC grant LatentCauses. CF is supported by the German Research Foundation (DFG) within the Collaborative Research Centre 1243, Subproject A17.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThis publication is based on research using information obtained from www.projectdatasphere.org, which is maintained by Project Data Sphere, LLC. Neither Project Data Sphere, LLC nor the owner(s) of any information from the web site have contributed to, approved or are in any way responsible for the contents of this publication. We are grateful to Sage Bionetworks, the DREAM organization, and Project Data Sphere for developing and supplying data for the Challenge. We thank Fabian Theis and Rolf Holle for supporting us to participate in the Challenge.\n\n\nReferences\n\nKoenig F, Slattery J, Groves T, et al.: Sharing clinical trial data on patient level: opportunities and challenges. Biom J. 2015; 57(1): 8–26. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHalabi S, Small EJ, Kantoff PW, et al.: Prognostic model for predicting survival in men with hormone-refractory metastatic prostate cancer. 
J Clin Oncol. 2003; 21(7): 1232–1237. PubMed Abstract | Publisher Full Text\n\nThompson IM, Ankerst DP, Chi C, et al.: Assessing prostate cancer risk: Results from the prostate cancer prevention trial. J Natl Cancer Inst. 2006; 98(8): 529–534. PubMed Abstract | Publisher Full Text\n\nChen M, Shi L, Kelly R, et al.: Selecting a single model or combining multiple models for microarray-based classifier development?--a comparative analysis based on large and diverse datasets generated from the MAQC-II project. BMC Bioinformatics. 2011; 12(Suppl 10): S3. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHede K: Project data sphere to make cancer clinical trial data publicly available. J Natl Cancer Inst. 2013; 105(16): 1159–60. PubMed Abstract | Publisher Full Text\n\nRozengauz DE: [Tumor of the left temporal lobe of the brain simulating an otogenic abscess]. Zh Ushn Nos Gorl Bolezn. 1965; 25(3): 83–4. PubMed Abstract\n\nGuinney J, Wang T, Laajala TD, et al.: Prediction of overall survival for patients with metastatic castration-resistant prostate cancer: development of a prognostic model through a crowdsourced challenge with open clinical trial data. Lancet Oncol. 2016; published online Nov 15. Publisher Full Text\n\nKattan MW, Hess KR, Amin MB, et al.: American Joint Committee on cancer acceptance criteria for inclusion of risk models for individualized prognosis in the practice of precision medicine. CA Cancer J Clin. 2016. PubMed Abstract | Publisher Full Text\n\nTibshirani R: The lasso method for variable selection in the Cox model. Stat Med. 1997; 16(4): 385–95. PubMed Abstract | Publisher Full Text\n\nMetz CE: Basic principles of ROC analysis. Semin Nucl Med. 1978; 8(4): 283–298. PubMed Abstract | Publisher Full Text\n\nHeagerty PJ, Zheng Y: Survival model predictive accuracy and ROC curves. Biometrics. 2005; 61(1): 92–105. PubMed Abstract | Publisher Full Text\n\nHung H, Chiang CT: Estimation methods for time-dependent AUC models with survival data. 
Can J Stat. 2010; 38(1): 8–26.\n\nBlanche P, Dartigues JF, Jacqmin-Gadda H: Estimating and comparing time-dependent areas under receiver operating characteristic curves for censored event times with competing risks. Stat Med. 2013; 32(30): 5381–5397.\n\nScher HI, Jia X, Chi K, et al.: Randomized, open-label phase III trial of docetaxel plus high-dose calcitriol versus docetaxel plus prednisone for patients with castration-resistant prostate cancer. J Clin Oncol. 2011; 29(16): 2191–2198.\n\nTannock IF, Fizazi K, Ivanov S, et al.: Aflibercept versus placebo in combination with docetaxel and prednisone for treatment of men with metastatic castration-resistant prostate cancer (VENICE): a phase 3, double-blind randomised trial. Lancet Oncol. 2013; 14(8): 760–768.\n\nPetrylak DP, Vogelzang NJ, Budnik N, et al.: Docetaxel and prednisone with or without lenalidomide in chemotherapy-naive patients with metastatic castration-resistant prostate cancer (MAINSAIL): a randomised, double-blind, placebo-controlled phase 3 trial. Lancet Oncol. 2015; 16(4): 417–425.\n\nFizazi K, Higano CS, Nelson JB, et al.: Phase III, randomized, placebo-controlled study of docetaxel in combination with zibotentan in patients with metastatic castration-resistant prostate cancer. J Clin Oncol. 2013; 31(14): 1740–1747.\n\nEisenhauer EA, Therasse P, Bogaerts J, et al.: New response evaluation criteria in solid tumours: revised RECIST guideline (version 1.1). Eur J Cancer. 2009; 45(2): 228–247.\n\nvan Buuren S, Groothuis-Oudshoorn K: mice: Multivariate imputation by chained equations in R. J Stat Softw. 2011; 45(3).\n\nClark TG, Altman DG: Developing a prognostic model in the presence of missing data: an ovarian cancer case study. J Clin Epidemiol. 2003; 56(1): 28–37.\n\nRoyston P, Parmar MK, Sylvester R: Construction and validation of a prognostic model across several studies, with an application in superficial bladder cancer. Stat Med. 2004; 23(6): 907–926.\n\nBarosi G, Bergamaschi G, Marchetti M, et al.: JAK2 V617F mutational status predicts progression to large splenomegaly and leukemic transformation in primary myelofibrosis. Blood. 2007; 110(12): 4030–4036.\n\nFernandes AS, Fonseca JM, Jarman IH, et al.: Evaluation of missing data imputation in longitudinal cohort studies in breast cancer survival. Int J Knowl Eng Soft Data Paradig. 2009; 1(3): 257.\n\nHalabi S, Lin CY, Kelly WK, et al.: Updated prognostic model for predicting overall survival in first-line chemotherapy for patients with metastatic castration-resistant prostate cancer. J Clin Oncol. 2014; 32(7): 671–677.\n\nSurowiecki J: The wisdom of crowds: Why the many are smarter than the few and how collective wisdom shapes business, economies, societies, and nations. Doubleday. 2004.\n\nBain R: Citizen science and statistics: Playing a part. Significance. 2016; 13(1): 16–21.\n\nMcCarthy N: Prognostic models: rising to the challenge. Nat Rev Cancer. 2013; 13(6): 378.\n\nHoeting JA, Madigan D, Raftery AE, et al.: Bayesian model averaging: A tutorial. Stat Sci. 1999; 14(4): 382–417.\n\nvan Houwelingen HC: Validation, calibration, revision and combination of prognostic survival models. Stat Med. 2000; 19(24): 3401–3415.\n\nWare JH: The limitations of risk factors as prognostic tools. N Engl J Med. 2006; 355(25): 2615–2617.\n\nKondofersky I, Laimighofer M, Kurz C, et al.: A Bavarian Dream: Methods for Challenges 1a, 1b and 2. Synapse Storage, 2016."
}
|
[
{
"id": "17698",
"date": "28 Nov 2016",
"name": "Sebastian Pölsterl",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nKondofersky and co-authors discuss three important aspects when developing risk prediction models: good data, model averaging, and recalibration. In their paper, they evaluate the added value of each of these concepts in the context of the Prostate Cancer DREAM challenge. Their results demonstrate that data pre-processing and deriving additional variables resulted in the largest improvement, followed by combining predictions from a diverse set of models, whereas the benefit of recalibration was modest.\nThe paper addresses many questions researchers are facing with when building a risk prediction model from data collected from distinct clinical trials. They elegantly overcome the problem of trial-specific variables by employing an ensemble of models, where each model is specific to a particular trial (or a subset of multiple trials), which allows to leverage the full set of variables collected for a particular trial despite there being only a small set of variables that is common to all trials. Overall, the paper is well written, easy to follow, and features an extensive set of experiments.\n\nMajor concerns:\nPreprocessing: Some aspects of preprocessing and missing value imputation are not well explained and should be addressed in a revision of the paper. 
It is not clear whether “high-cost data cleaning and preprocessing” was used as an alternative to the “low-cost minimal adaptation approach” or in addition to it.\n\nIn the paragraph on the low-cost approach, the authors say that they “excluded variables with more than 10% missing values in either the training or test set”, but later in the paragraph on the high-cost method, they state “we identified incomplete (e. g. more than 70% missing values in either the training data or the test data)”. If the high-cost approach follows the low-cost approach, there should not be any variable with 70% missing values any more.\n\nThe paragraph on preprocessing on page 6 mentions that highly correlated variables were removed, but it is unclear whether this step occurred before or after missing value imputation using MICE. If redundant variables were removed before imputation, this could significantly increase the error of imputed values, because in contrast to model building, highly correlated variables can be very helpful for the purpose of imputation.\n\nIn the description of the low-cost approach, the authors mention that missing values were imputed by the mean. How were categorical variables, which have no mean, imputed?\n\nAssuming the low-cost approach was the baseline method denoted as “standard data” in figure 7, it is difficult to judge whether the improvement due to the high-cost approach was due to a less biased imputation approach (MICE instead of mean) or the addition of further variables. 
It would be very interesting to see what the improvement would be if the same imputation method was used and only the additional derived variables had been added.\n\nSince the standard data was augmented by several derived variables, an important question is to determine which of the derived variables had the largest impact, or if it was their combined effect that resulted in the observed improvement.\n\nModel averaging:\n\nBased on the results of figures 7 and 8, model averaging resulted in a considerable improvement in risk prediction; however, few technical details were provided. The authors should provide a more elaborate description or refer to a previously published work describing the details.\n\nIn particular, the authors mention that each of the 7 combinations of trials in figure 2 was imputed independently using MICE and subsequently used to construct a penalized Cox model. In this setting, two levels of averaging are necessary, first across the multiple imputed data sets, and second across the combinations of trials. The former is usually achieved by applying Rubin’s rule and the latter can be achieved using several methods ranging from simple averaging to learning an additional meta-model on top of the individual models’ risk scores. A description of the approach selected by the authors would be highly appreciated.\n\nRecalibration:\n\nKondofersky and co-authors describe an additional ensemble approach where they combine a low-risk and high-risk model with an average model. It seems that the authors assumed that the proportional hazards assumption does not hold for a subset of patients and tried to address this problem by model averaging. Although this is an interesting idea, the authors unfortunately did not provide an explanation – besides their empirical evaluation – of why this approach might be useful. In fact, results in figure 7 show that this approach has little benefit. 
It would be very interesting to investigate whether this was due to the proposed recalibration approach itself or the choice of evaluation measure, which failed to measure the improvement. The authors state that using the iAUC score to measure the benefit of their methods is flawed, because it is invariant to monotone transformations. Maybe more insight could be obtained if the iAUC score were replaced by the time-dependent Brier score, which does not suffer from this problem.\n\nFew details on the construction of the low- and high-risk models have been provided. The authors write that they “modified the target variable DEATH […] such that it only counted events that happened prior to 14 months” regarding the high-risk model and that they “only considered events that occurred after 18 months” regarding the low-risk model. Does this mean that some patients were excluded entirely to construct the low- and high-risk models, which would lead to right and left truncation, respectively? A Cox model trained on the truncated data would thus be biased if truncation is ignored.\n\nThe cut-offs for the low-risk and high-risk groups are based on the 25th percentile and the median, respectively. Are the risk groups defined based on the training data, or are new risk groups constructed each time when predicting previously unseen data? In addition, are the same risk groups used for all 7 models in the ensemble?\n\nFinally, the authors state that “modifications only altered the ranks of patients within the defined ranges”. Limiting recalibration by a hard threshold means that risk scores at the limits of the respective intervals are only allowed to move in one direction, whereas risk scores located in the middle have much more freedom to move around. 
It would be interesting to investigate whether a soft threshold that pulls the recalibrated risk scores towards the original threshold without establishing hard boundaries would be beneficial.\n\nMinor concerns:\nPreliminaries (Paragraph 1):\n\nThe hazard ratio should be written as exp(βj), not exp(β), because β has been used as a vector before.\n\nIt should be mentioned that the traditional Kaplan-Meier estimator is only applicable to right-censored data, not any type of censored data.\n\nTo check the proportional hazards assumption, it is not sufficient that estimated Kaplan-Meier curves do not intersect; they should be parallel to each other. In addition, other methods such as goodness-of-fit tests can be employed as well.\n\nPreliminaries (Paragraph 2):\n\nFor the sake of completeness it would be good to reference methods used for accounting for ties in event times.\n\nMethods (First concept: good data)\n\nIt is not explicitly stated which trials comprised the training data, although this information can be inferred from table 1.\n\nMethods (Table 1):\n\nThe table should list exact p-values, especially for the range 0.001 to 0.05\n\nMethods (Preprocessing)\n\nWere only redundant numerical features identified and removed (using Pearson correlation) or was a similar approach applied to categorical data too? If yes, which measure was used to assess whether they are redundant?\n\nMethods (High-risk and low-risk recalibration)\n\n“we calculated the average between the initial model and the low-risk model”: meaning averaging the model’s predictions as for the high-risk model?\n\nMethods (Quantile recalibration)\n\nHow was the weight of the lasso penalty determined?\n\nResults (Impact of wisdom of the crowd)\n\nWhich trials were used to obtain the “standard approach”? 
ASCENT-2, MAINSAIL and VENICE, or only a subset of these?\n\nGrammar:\nIntroduction (Paragraph 3): “either for statistical reasons or grounds content”\n\nMethods (Cleaning of core table): “In a first data cleaning” → add word “step” at the end\n\nMethods (Second concept wisdom of the crowd):\n\n“the readable book by Surowiecki”: no need to emphasise that books can be read\n\n“the opening of challenges to mass numbers of competitive team” → massive numbers\n\nMethods (Quantile recalibration): “survive curve for patient i” → survival curve",
"responses": []
},
{
"id": "18283",
"date": "19 Dec 2016",
"name": "Riccardo de Bin",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn this paper the authors discuss three general concepts to improve the construction of a risk prediction model, namely data pre-processing, ensemble modeling and recalibration. The idea is to take advantage of the Prostate Cancer DREAM challenge to show them applied in a case study example.\nI really like the idea, because in the literature there is a clear need for papers which can provide the practitioners with useful guidelines and suggestions (see, e.g., Sauerbrei et al., 20141). I was therefore very positive about this paper, especially after the first part, in which the authors show in a clear and effective way the importance and possible implementations of data pre-processing. Unfortunately, in the following the standard drops quite abruptly, and the other two concepts, ensemble modeling and recalibration, are treated in a rather superficial and sometimes misleading way.\nMain concerns The actualization of the ensemble modeling concept, that in the paper is called \"model averaging'', is discussed in a single paragraph, basically giving to the reader no details about its implementation. This is particularly unsatisfying in light of its good results in improving the predictions both in terms of iAUC (discrimination) and RMSE (calibration). Actually, there is a further sentence in the Result section which says something about the implementation, but it is again very cryptic. 
All in all, this part left me with the feeling that a reader cannot learn much about ensemble modeling from this paper.\nMy strongest concerns, however, are related to the recalibration part. I agree that recalibration is important to be able to profitably use a model derived on a dataset to predict occurrences on a different dataset. Nevertheless, I am not sure that the proposed solutions are sensible.\nIn the first approach (\"high-risk and low-risk recalibration''), two further Cox models are fitted on modifications of the dataset in which the effective sample size is reduced by considering all observations either smaller or larger than a threshold as censored. I am not sure that this procedure makes sense and, in any case, I cannot see how it can help under the proportional hazards assumption. Either this assumption is valid, and this method is useless, or it is not valid, and the Cox model should not be used in the first place. I am not sure; this procedure may help in checking the validity of the proportional hazards assumption, but not for recalibration. I was therefore not surprised to see, in Figure 7, that there is no substantial improvement in using this method (assuming that the proportional hazards assumption holds). As a further note, I think that there is some confusion with the terms \"average patient\", \"high-risk or low-risk patient\" in the text.\nI was also pretty surprised to see van Houwelingen (2000)'s paper2 associated with the authors' third approach (``validation by calibration''). Despite the name used, the two methods do not have much in common. The main idea of van Houwelingen (2000)'s procedure is to use data from the target population to calibrate the Cox model in order to have better predictions in the new dataset. There is no artificial split in training and validation sets, as it seems to be suggested by the authors, but two separate datasets. 
And, in contrast to what is stated in this paper, this division cannot be avoided, otherwise it makes no sense to apply the method. In the current problem, the authors should have used the ``random subsets of data from the fourth validation trial'' in their steps 2 and 3. Moreover, these two steps should have been suitably modified to work for the Cox model as in formula 3 of van Houwelingen (2000). In the context of survival analysis, indeed, the ``validation by calibration'' cannot be performed through a linear model.\nI have not been totally able to understand the idea behind the algorithm proposed in the paper, but it seems to me that the authors are simply replicating their previous procedure \"quantile recalibration'', just working directly on the time axis instead of through the survival curve.\nFurther comments\nI find it quite dangerous to write \"After trying several machine learning and statistical models, the combined Cox proportional hazards and lasso model was chosen as it performed optimally on the validation sets''. A statistical model (or machine learning procedure) should clearly not be chosen in this way, but after rational considerations, especially when the method relies on a pretty strong assumption like the Cox model. I somehow understand that here the goal is prediction and it may not be necessary to focus on the procedure to obtain the results, but such a sentence encourages practitioners to try whatever and pick the method which, maybe by chance, seems good on the particular case. 
Even here, the danger of having a model which works well only for the specific subgroups of the target population and not for the whole one (a sort of \"overfitting'' in a broader sense) is high.\n\nSomehow related, it is important to note that having non-overlapping Kaplan-Meier curves is a necessary but not sufficient condition for assuming the validity of the proportional hazards assumption.\n\nI would be curious to see how much of the improvement related to pre-processing is due to the data cleaning and how much is due to the presence of new variables. In particular, whether the new variables created with the first method (PCA) are useful at all. In contrast to the second method, in which external information is included from literature research, the first method is a simple transformation of existing data, and I do not see how it can improve the prediction. The original covariates, indeed, are kept in the model, and it seems to me that the same information is simply included twice. This may even be counter-productive, as lasso is well known to have problems in dealing with correlated variables.\n\nMinor concerns\nWhen mentioning methods to deal with ties in the Cox model, references should be provided and the choice implemented in the paper (if relevant) specified.\n\nSomething about the selection of the lasso tuning parameter \"s'' should be included, at least by describing the procedure implemented in the paper.\n\nThe definition of discrimination reported in the paper is specific for binary classification problems. It would be better to (also) have a specific formulation for the survival analysis context (see, e.g., De Bin et al., 20143).\n\nTables 4 and 5 do not provide any information in addition to that provided by Figures 7 and 8. 
They should be removed.\n\n\"The second criterion of calibration focused on accuracy in terms of how \\dots'' --> ``The second criterion focused on calibration, i.e., how \\dots''.\n\nThe sentence \"For extension to prediction of survival up until fixed time periods that accommodate censored observations'' is not clear to me, it may need to be rephrased.\n\nVariables are sometimes called \"covariates'', sometimes ''covariables'', the notation should be consistent throughout the paper.\n\n\"survive curve'' --> ``survival curve''.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2671
|
https://f1000research.com/articles/5-1851/v1
|
27 Jul 16
|
{
"type": "Research Article",
"title": "Characterization of 5-(2-18F-fluoroethoxy)-L-tryptophan for PET imaging of the pancreas",
"authors": [
"Ahmed Abbas",
"Christine Beamish",
"Rebecca McGirr",
"John Demarco",
"Neil Cockburn",
"Dawid Krokowski",
"Ting-Yim Lee",
"Michael Kovacs",
"Maria Hatzoglou",
"Savita Dhanvantari",
"Ahmed Abbas",
"Christine Beamish",
"Rebecca McGirr",
"John Demarco",
"Neil Cockburn",
"Dawid Krokowski",
"Ting-Yim Lee",
"Michael Kovacs",
"Maria Hatzoglou"
],
"abstract": "Purpose: In diabetes, pancreatic beta cell mass declines significantly prior to onset of fasting hyperglycemia. This decline may be due to endoplasmic reticulum (ER) stress, and the system L amino acid transporter LAT1 may be a biomarker of this process. In this study, we used 5-(2-18F-fluoroethoxy)-L-tryptophan (18F-L-FEHTP) to target LAT1 as a potential biomarker of beta cell function in diabetes. Procedures: Uptake of 18F-L-FEHTP was determined in wild-type C57BL/6 mice by ex vivo biodistribution. Both dynamic and static positron emission tomography (PET) images were acquired in wild-type and Akita mice, a model of ER stress-induced diabetes, as well as in mice treated with streptozotocin (STZ). LAT1 expression in both groups of mice was evaluated by immunofluorescence microscopy. Results: Uptake of 18F-L-FEHTP was highest in the pancreas, and static PET images showed highly specific pancreatic signal. Time-activity curves showed significantly reduced 18F-L-FEHTP uptake in Akita mice, and LAT1 expression was also reduced. However, mice treated with STZ, in which beta cell mass was reduced by 62%, showed no differences in 18F-L-FEHTP uptake in the pancreas, and there was no significant correlation of 18F-L-FEHTP uptake with beta cell mass. Conclusions: 18F-L-FEHTP is highly specific for the pancreas with little background uptake in kidney or liver. We were able to detect changes in LAT1 in a mouse model of diabetes, but these changes did not correlate with beta cell function or mass. Therefore, 18F-L-FEHTP PET is not a suitable method for the noninvasive imaging of changes in beta cell function during the progression of diabetes.",
"keywords": [
"pancreas",
"beta cell",
"insulin",
"diabetes",
"PET",
"LAT1",
"Akita mice",
"ER stress",
"biomarker"
],
"content": "Introduction\n\nIn both Type 1 and Type 2 diabetes, the ability of the beta cells in the pancreatic islets of Langerhans to produce insulin is disrupted. There is an extensive preclinical period of time during which beta cell mass is significantly reduced prior to the onset of fasting hyperglycemia1, and therefore, there have been several efforts to detect these changes non-invasively, with the hypothesis that disease onset may be delayed and/or halted. Such efforts have focused on engineering transgenic mice in which uptake of an imaging contrast agent is genetically enhanced in beta cells using the mouse insulin promoter2 or on identifying biomarkers on the beta cell membrane that can be targeted by specific ligands that carry image contrast3,4.\n\nWe5 and others6,7 have developed transgenic mouse models in which changes in beta cell mass can be imaged by positron emission tomography (PET) or bioluminescence. While using bioluminescence to image these same changes in beta cell mass is a more sensitive and cost-effective approach8, PET has the advantage of being a clinical imaging modality, and our study showed that PET had the sensitivity to track changes in beta cell mass before the onset of fasting hyperglycemia. For a more clinically applicable approach, PET probes that target the vesicular monoamine transporter (VMAT2)3,9 and the glucagon-like peptide-1 receptor (GLP-1R) on the surface of the beta cell10,11 are available. While VMAT2 targeting had some initial success, its expression was subsequently shown to not be specific to the beta cell12. Targeting the GLP-1R using peptide analogs of GLP-1 and exendin-4 have been shown to be useful in imaging benign insulinomas and transplanted islets10,13,14, but have very limited capacity to image beta cells in the native rodent pancreas15,16. 
This is due largely to unfavourable pharmacokinetics that result in accumulation of PET signal in the kidneys and liver17, thus obscuring any signal that might have been emitted from pancreatic beta cells.\n\nAlthough targeting these specific cell surface proteins for the molecular imaging of the beta cells has not been successful, it remains that such proteins would be useful targets for clinical PET imaging of beta cell mass or function. One possible biomarker that may reflect changes in beta cell function during the development of diabetes is the large neutral amino acid transporter, LAT1 (SLC7A5). LAT1 is a member of the solute carrier (SLC) transporter family of proteins that controls the uptake and efflux of solutes such as metabolites, ions, toxins and drugs. It is a Na+-independent exchanger of amino acids with large neutral side chains, such as leucine, isoleucine, valine, tyrosine, tryptophan and methionine (reviewed in 18). LAT1 is expressed in the brain, testis and placenta, and is highly abundant in the blood-brain barrier19, where it mediates the transport of metabolites and drugs into the CNS. Its expression is up-regulated in a variety of cancers, where it is thought to enhance the transport of amino acids required for nutritional support and signaling molecules for proliferation, and a number of PET imaging agents targeting LAT1 have been developed and tested for a range of human cancers20–23. One such tracer, [18F]-L-FEHTP, has been shown to specifically target LAT1 in cancer cells24; remarkably, it also showed high uptake in the pancreas in mice, leading us to investigate its use in imaging beta cells.\n\nLAT1 is expressed on the surface of pancreatic beta cells25. Recently, alterations in islet LAT1 expression at the level of transcription have been linked to the development of beta cell dysfunction in a mouse model of diabetes in which a loss of regulation of amino acid transport leads to beta cell apoptosis26. 
We therefore used the previously synthesized LAT1-targeted PET tracer, [18F]-L-FEHTP, and characterized its potential for the molecular imaging of beta cell function in diabetes.\n\n\nMaterials and methods\n\n18F-L-FEHTP was prepared by 2-step radiolabeling modeled after similar methods previously described24,27. 18F-fluoride was produced by 18O(p,n)18F nuclear reaction bombarding enriched 18O-water in a cyclotron (PETtrace; GE). The fluoride was loaded onto an automated synthesis unit (Tracerlab FXF-N; GE) where it was trapped on an anion-exchange cartridge (QMA Light; Waters). The activity was eluted with a 1 mL solution of 1:4 water/ACN with Cryptand 222 (15 mg; ABX) and potassium carbonate (5 mg; Aldrich). The activity was dried under a stream of nitrogen with vacuum at 95°C. Acetonitrile (0.5 mL) was added and dried twice to remove residual water azeotropically. A 1 mL solution of ethylene di(p-toluenesulfonate) (10 mg, Aldrich) in acetonitrile was added and the mixture was heated for 6 min at 85°C. The reaction mixture was cooled and the 18F-labeled product was isolated by semi-preparative HPLC. The collected fraction was diluted in water and loaded onto a C-18 cartridge (Waters). The product was eluted in 2 mL of dimethyl sulfoxide (DMSO) and passed off of the automated synthesis unit into an adjoining hot cell for the second stage of the reaction.\n\n5-Hydroxy-L-tryptophan disodium salt was prepared by combining 5-hydroxy-L-tryptophan (10 mg; Aldrich) with 2 equivalents of sodium methoxide in methanol (Aldrich) at room temperature. The methanol was removed by rotary evaporator (V-10; Biotage) and the salt was brought up in DMSO. The solution was transferred to a reaction vial under nitrogen. An aliquot of the 18F-labeled tosylate was added to the reaction vial and heated at 105 °C for 15 min. The reaction mixture was diluted with water and purified by semi-preparative HPLC, and the mobile phase from the collected fraction was removed by rotary evaporator. 
18F-L-FEHTP product was dissolved in phosphate buffered saline with 8% ethanol and sterile filtered prior to use. Synthesis of the 18F-labelled tosylate on the automated synthesis unit yielded approximately 30% (decay corrected). The second reaction produced approximately 50% (decay corrected) isolated 18F-L-FEHTP in final buffered solution. The radiochemical purity of the final product was greater than 98%.\n\nAll mice were treated in accordance with the ethical guidelines set out by the Animal Use Subcommittee of the Canadian Council on Animal Care at Western University (protocol #2012-020). Both wild-type (wt) C57BL/6J male mice and C57BL/6J-Ins2C96Y (Akita) male mice were obtained from Jackson Labs at 5 weeks of age, and imaging experiments were conducted at 6–7 weeks of age. For streptozotocin-treated mice, 8 female C57BL/6J mice were obtained from Jackson Labs and randomly assigned to control (n = 4) or streptozotocin (STZ) treatment groups (single i.p. injection of 200 mg/kg, n = 4). Seven days post-injection, blood glucose readings were taken after fasting for 4 h using a One Touch Ultra glucometer (Lifescan Inc, Milpitas, CA). Diabetes was determined by glycemia ≥11 mmol/L. Imaging was conducted 8 days post-STZ injection.\n\nAll mice were fasted for 4 h prior to the imaging session. Akita mice and corresponding controls were anesthetized with 0.5–2% isoflurane by inhalation, administered 6.45–15.8 MBq [18F]-L-FEHTP via tail vein injection and immediately placed on the scanner bed of a GE Healthcare Explore Vista DR PET scanner. STZ-treated mice and their corresponding controls were anesthetized, administered 9.5–11.1 MBq [18F]-L-FEHTP and immediately scanned in an Inveon preclinical PET scanner (Siemens Medical Solutions). PET image acquisition was conducted as described previously5, with a dynamic scan for 1 h and a subsequent static scan for 30 min. 
Images were reconstructed and standardized uptake values (SUVs) were calculated as described previously5, using regions of interest (ROIs) drawn through six image slices corresponding to the areas of the kidneys and pancreas.\n\nTo provide a pharmacokinetic profile of [18F]-L-FEHTP uptake, select organs, as well as urine and blood, were removed and weighed immediately after cessation of the imaging session, and counted in a high-purity Ge gamma counter, as described previously5,11. Data were calculated as % injected dose/g (%ID/g) tissue and all activity was decay-corrected to the time of injection.\n\nImmediately after counting, all pancreata from both wt and Akita mice were embedded in frozen tissue embedding gel (OCT, Fisher) for immunofluorescence microscopy analysis of LAT1. Serial sections were cut at 8 μm thickness, and three sections from each pancreas were selected for immunohistochemistry as described previously5. Primary antibodies against insulin (polyclonal anti-guinea pig, 1:1000, Bachem, Cat # ab7842) and SLC7A5 (polyclonal anti-rabbit, 1:100, Sigma Chemical, Product# HPA052673) and respective Alexa secondary antibodies (Alexa 597, goat anti-guinea pig and Alexa 488, goat anti-rabbit) were used to visualize insulin and LAT1. Sections were also stained with Hoechst 33342 (Sigma) for nuclei visualization. Two to 10 fields of view per section were acquired at 20× magnification using Nikon NIS Elements v. BR 4.50.00 software and imported into ImageJ (Fiji) v.1.49v for analysis.\n\nFor image analysis, one ROI was drawn within each islet, two ROIs were drawn in the non-islet, non-ductal pancreatic areas and two ROIs were drawn in background regions of each field of view using the circle tool in ImageJ with an area of approximately 15000 pixels, as we have done previously28–30. 
Fluorescence intensities were calculated as corrected total cell fluorescence (CTCF) as follows:\n\nCTCF = raw integrated density – (area selected × mean background fluorescence)\n\nImmediately following the static imaging session, STZ-treated mice were euthanized by CO2, and the dissected pancreata immediately fixed in 4% paraformaldehyde (PFA, Electron Microscopy Sciences, Hatfield, PA) for 24 h. Pancreata were prepared according to Beamish et al.31. Seven-micron-thick cryosections were cut sequentially (Leica CM 1850 cryostat) from at least 3 layers, with an interval between each layer > 150 µm. Immunochemical staining followed Chamson-Reig et al.32 using a human anti-mouse insulin primary antibody (1/200, Sigma Chemical, St Louis, MO), horse anti-mouse secondary antibody (Vector Laboratories), and DAB chromogen (Biogenics Laboratories, Fremont, CA) according to manufacturer’s instructions. Three sections from different layers of pancreas were immunostained and analyzed. The entire pancreas section was imaged at 2.5× magnification, and insulin-positive cells were imaged at 40× magnification using Northern Eclipse software (v. 6.0, Empix Imaging, Mississauga ON Canada). Pancreas- and insulin-positive areas were measured by tracing using ImageJ v. 1.50b. Beta cell mass was calculated by dividing insulin-positive area by total pancreas area, then multiplying by pancreas weight.\n\nFor biodistribution data, statistical significance between the wt and Akita organs was determined using one-way ANOVA followed by a Tukey post-hoc test. Differences in % ID/g between corresponding organs in the wt and Akita were compared with an unpaired one-tailed Student’s t-test. For PET images, differences in SUVs were analysed using an unpaired one-tailed Student’s t-test. For microscopy images, differences in fluorescence values were determined using the Mann-Whitney U test. For correlation of SUV and beta cell mass, linear regression analysis was used. 
All statistical analyses were carried out on GraphPad Prism v. 6.01. Significance was set at p<0.05 for all experiments.\n\n\nResults\n\nTo determine 18F-L-FEHTP uptake by various tissues, ex vivo organ biodistribution was assessed in wt mice immediately after PET imaging (Figure 1). Uptake in the pancreas was significantly greater (p<0.05) than in all other tissues except blood (p = 0.0647), heart (p = 0.485) and liver (p = 0.2029).\n\nEx vivo biodistribution was calculated for the indicated organs as % injected dose/g tissue (%ID/g tissue) at 1.5 h after injection. Values are given as mean ± SEM (n=6). * p< 0.05 compared to all other organs except blood, heart, and liver.\n\nWe then compared uptake of 18F-L-FEHTP in the pancreas of wt and Akita mice, which were previously reported to have higher LAT1 mRNA expression in pancreatic beta cells. There was notable variability in 18F-L-FEHTP uptake in the pancreata of Akita mice (Figure 2A). The pancreas of the Akita group showed no significant difference in 18F-L-FEHTP uptake compared with the wt pancreas (Figure 2A). Ex vivo biodistribution of all organs in Akita mice showed no significant differences in 18F-L-FEHTP uptake except in the liver (p = 0.0144) (Table 1).\n\na) Ex vivo tracer uptake was calculated for the pancreas of wt and Akita mice as % ID/g tissue at 1.5 h after injection. b) PET imaging of the pancreas after injection of 18F-L-FEHTP. One hour after injection, static images were acquired for 30 min. Representative images of wt (n=6) and Akita (n=5) mice show highly specific uptake in the pancreas with little detectable background uptake. Arrowheads indicate position of the pancreas.\n\nValues are expressed as %ID/g tissue, and are means ± SEM (n=6). ** p< 0.01 compared to wt.\n\nIn order to determine if 18F-L-FEHTP PET could detect changes in pancreatic LAT1 activity due to diabetes, static images were acquired for 30 min 1 h after 18F-L-FEHTP injections for both wt and Akita mice. 
The pancreas was clearly visualized with little detectable background uptake (Figure 2B and Supplementary Figure 1). The SUVs (means ± SEM) calculated from pancreatic ROIs were 1.5 ± 0.04 and 1.3 ± 0.04 for wt and Akita mice, respectively, and these values were not significantly different.\n\nTo investigate possible changes in LAT1 protein in pancreatic beta cells, immunofluorescence microscopy for LAT1 was conducted on pancreas sections from mice that had been imaged. Figure 3A shows representative immunofluorescence images for LAT1 and insulin in wt and Akita mice. Image analysis showed that there was significantly less LAT1 immunofluorescence in pancreatic islets from Akita mice, as well as in non-islet, non-ductal tissue (Figure 3B). Additionally, there were no differences in total cell fluorescence between islets and the rest of the pancreas in either wt or Akita mice. This indicates that, at the protein level, LAT1 expression decreased in both islets and the rest of the pancreas in 6-week-old male Akita mice. Insulin immunofluorescence decreased by 62 ± 3% (p = 0.0009) in Akita mice; however, total islet area did not differ between the two groups (12204 ± 1246 μm2 vs. 11447 ± 671 μm2, n = 6, p = 0.3).\n\na) Representative fluorescence images of islets and surrounding tissue show the distribution of LAT1 (green) and insulin (red). Nuclei are highlighted with DAPI staining (blue). b) Quantification of total corrected cell fluorescence of LAT1 immunoreactivity in islets and the rest of the pancreas. White bars represent values from wt mice, and grey bars represent values from Akita mice. Values are means ± SEM (n=6). * p<0.05 compared to corresponding wt values.\n\nTime-activity curves for 18F-L-FEHTP activity in the kidneys and pancreata of wt and Akita mice are shown in Figure 4. ROIs were drawn around the kidneys and the pancreas on the first and final frames (corresponding to 5 min and 60 min), respectively. 
These frames were chosen because the tracer distributed to the kidneys initially and then accumulated in the pancreas (Figure 4A). Activity in the Akita group was consistently less than in the wt group for both organs (Figure 4B). In the pancreas, the Akita mice had significantly less 18F-L-FEHTP activity at 25 min (p = 0.0433), 30 min (p = 0.0245), 40 min (p = 0.0296), 50 min (p = 0.0192), and 60 min (p = 0.0348) (Figure 4B).\n\nTime-activity curves (TACs) for 18F-L-FEHTP uptake in the pancreas of a) wt and b) Akita mice. TACs are expressed as SUVs from regions of interest drawn on six image slices covering the areas of the kidneys and pancreas. Values are means ± SEM (n=6 for wt, n=5 for Akita). * p<0.05 compared to wt.\n\nImaging data in Akita mice suggested that the decrease in 18F-L-FEHTP uptake correlated with a decrease in pancreatic LAT1 overall and was not specific to islets. We then examined 18F-L-FEHTP uptake in a known model of decreased beta cell mass to test the sensitivity of the tracer. Fasting blood glucose levels in STZ-treated mice were significantly higher than in control (non-diabetic) animals (17.8 ± 3.7 mM vs 7.6 ± 0.2 mM, n = 4, p = 0.0161), which was commensurate with a 62% decrease in beta cell mass in STZ vs control mice (0.26 ± 0.04 mg vs 0.68 ± 0.08 mg, p < 0.0005). However, pancreatic uptake of 18F-L-FEHTP as assessed by both static (0.9 ± 1.7 vs 0.8 ± 0.1, p = 0.3) and dynamic PET scans (Figure 5) was not different between the two groups, and there was no significant correlation (r = 0.2416) between pancreatic static SUVs and beta cell mass (Figure 6).\n\nTACs are expressed as SUVs from regions of interest drawn on six image slices covering the pancreas. Values are means ± SEM (n=4). There were no significant differences at any of the time points.\n\nSUVs from static scans from both wt and STZ-treated mice were plotted as a function of beta cell mass. 
Each point represents values from individual mice. Linear regression analysis showed no significant correlation between SUV and beta cell mass.\n\n\nDiscussion\n\nThe PET tracer 18F-L-FEHTP is known to specifically target the amino acid transporter LAT1. Since the expression of LAT1 mRNA shows dramatic changes in pancreatic islets during the development of diabetes in the Akita mouse model26, we used 18F-L-FEHTP in an effort to track these changes in vivo, and hypothesized that 18F-L-FEHTP uptake could be a surrogate measure for beta cell function. We also tested the hypothesis that 18F-L-FEHTP uptake could be an in vivo measure of beta cell mass by examining uptake in STZ-treated mice, a model of beta cell depletion. Interestingly, PET images of 18F-L-FEHTP uptake showed a strong signal in the pancreas, with minimal background uptake in the kidneys or liver. In male Akita mice, time-activity curves showed significantly reduced uptake of 18F-L-FEHTP, and immunofluorescence microscopy revealed a significant reduction in LAT1 in islets and the rest of the pancreas. However, there was no correlation of 18F-L-FEHTP uptake with beta cell function, nor with beta cell mass in STZ-treated mice. We showed that LAT1 was expressed throughout the pancreas, which may lead to challenges in detecting changes in LAT1 only in the islets, as they comprise only about 1% of total pancreatic volume.\n\nSeveral 18F-labelled amino acid analogues have been developed for the targeting of LAT1 in tumors. These tracers are based on analogues of L- and D-tyrosine20,21, or a racemic mixture of fluoropropyl tryptophan22, as well as the fluoro-ethoxy analogue used in the present study, 18F-L-FEHTP24. We chose 18F-L-FEHTP as a PET tracer that targets LAT1 since the biodistribution data in mice indicated that the highest uptake in non-tumor tissue was in the pancreas, and the PET images in this study did not show appreciable background signal. 
Indeed, our own images show accumulation of the PET signal specifically in the pancreas, with little background uptake in the kidneys or liver, although some accumulation of activity in the spleen cannot be ruled out. As well, the uptake, efflux and metabolism of 18F-L-FEHTP have been very well characterized. 5-hydroxy-L-tryptophan is decarboxylated by amino acid decarboxylase, but 18F-L-FEHTP was shown to be very stable in cells with high amino acid decarboxylase activity, with no evidence of metabolite production in vivo24. Therefore, we chose 18F-L-FEHTP due to its extensive characterization in vitro and in vivo.\n\nThe C96Y mutation in the mouse Ins2 gene that characterizes the Akita mouse produces a mutant proinsulin that fails to exit the ER, inducing ER stress and beta cell apoptosis33, and this same mutation has been implicated in the development of permanent neonatal diabetes in humans34. As proposed by Krokowski et al.26, prolonged ER stress caused by induction of protein synthesis in beta cells involves increased amino acid flux through a network of transporters that leads to increased tRNA charging with LAT1 substrates such as leucine and tryptophan. In particular, LAT1 mRNA levels were significantly up-regulated during translational recovery in islets from 6-week-old male Akita mice, leading to the hypothesis that changes in LAT1 in islets could serve as a diagnostic biomarker for the early development of diabetes. Therefore, we reasoned that 18F-L-FEHTP would be a surrogate marker for the decline in beta cell function resulting from prolonged ER stress. In the present study, we first conducted PET imaging studies using the Akita mouse model. However, our PET imaging results suggest that 18F-L-FEHTP uptake was decreased in Akita mice, and immunofluorescence microscopy suggests that islet LAT1 also decreased.\n\nIt is possible that the decrease in 18F-L-FEHTP uptake may be a function of a decline in beta cell mass. 
However, our results show no decrease in islet area, although there is a significant decrease in insulin immunofluorescence within islets. There is some conflicting evidence on changes in beta cell mass in Akita mice: in 8-week-old male mice, there is roughly a 50% decrease35, while in 8-week-old female mice, there is no change, despite the sharp reduction in islet insulin content36. Our results indicate that, in 6-week-old Akita male mice in our facility, ongoing ER stress is diminishing insulin synthesis, but has not yet affected islet size. Additionally, our analysis shows that there is no correlation between blood glucose levels and 18F-L-FEHTP uptake, indicating that imaging changes in LAT1 is not a surrogate measure for beta cell function.\n\nSince islet area did not change in our Akita mice, we tested 18F-L-FEHTP uptake in a model of diabetes in which a dramatic decrease in beta cell mass is well documented. When treated with a single injection of 200 mg/kg STZ, female C57BL/6 mice showed a > 60% decrease in beta cell mass. However, there was no corresponding decrease in 18F-L-FEHTP uptake as assessed by either dynamic or static SUV analysis. Therefore, despite the loss of beta cells and associated LAT1 expression, no change in 18F-L-FEHTP uptake could be detected, leading us to conclude that 18F-L-FEHTP uptake is not a surrogate measure of changes in beta cell mass during the progression of diabetes.\n\nIs 18F-L-FEHTP uptake a function of LAT1 expression? Our results show that both 18F-L-FEHTP uptake and LAT1 immunofluorescence are decreased in pancreatic tissue from Akita mice, so it appears that imaging with 18F-L-FEHTP can report LAT1 activity. However, LAT1 mRNA levels are increased in Akita islets26, indicating that there may be an additional level of control in the expression of LAT1 in ER stress-dependent diabetes. There is evidence that LAT1 and other transporters are regulated differentially at the transcriptional and post-translational levels. 
Glucose deprivation increases LAT1 mRNA and protein expression and [3H]Leu transport activity in retinal endothelial cells37. Interestingly, high glucose levels did not change LAT1 mRNA levels, indicating that hyperglycemia may regulate LAT1 at a post-translational level. Another study suggests a mechanism for post-translational regulation, as LAT1 localization to the plasma membrane is impaired by hyperoxia in alveolar epithelial cells38, indicating that stress conditions can induce improper LAT1 trafficking, resulting in loss of transporter activity. Finally, another amino acid transporter, system A/SNAT2 (SLC38A2), is regulated differentially at the transcriptional and post-translational levels. Amino acid starvation and hypertonicity increase SNAT2 mRNA; however, stress conditions promote the proteasome-dependent degradation of SNAT239,40. It is possible that LAT1 in beta cells could be differentially regulated at the transcriptional and post-translational level in a similar manner, with nutrient availability and/or signaling determining the trafficking or degradation of LAT1 protein.\n\nSince LAT1 is expressed in both the endocrine and exocrine compartments of the pancreas, the signal is likely due to uptake of 18F-L-FEHTP in both compartments. It was tempting for us to test the sensitivity of 18F-L-FEHTP in a model of pancreatic cancer. However, although the expression of many amino acid transporters does change in caerulein-induced pancreatitis, slc7A5/LAT1 expression/immunoreactivity does not41,42. Thus, although 18F-L-FEHTP is taken up and retained in certain types of cancers24, it is not an appropriate imaging probe for the detection of pancreatic cancer.\n\nIn order to differentiate islet uptake from acinar uptake, it has been estimated that the signal of any imaging probe must be 1000-fold higher in beta cells than in the surrounding acinar/ductal cells43, due to the fact that beta cells comprise <1% of pancreatic volume. 
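The dilution problem described above can be made concrete with a back-of-envelope calculation (illustrative Python with assumed round numbers, not measured values): if islets contribute on the order of 1% of pancreatic volume and the exocrine tissue takes up the tracer comparably, even a complete loss of islet uptake barely moves the whole-organ signal.

```python
def whole_organ_signal(islet_fraction, islet_uptake, exocrine_uptake):
    """Volume-weighted mean uptake of a two-compartment organ."""
    return islet_fraction * islet_uptake + (1.0 - islet_fraction) * exocrine_uptake

# Assume islets are 1% of pancreatic volume and per-volume uptake
# is initially equal in both compartments (both normalized to 1.0).
baseline = whole_organ_signal(0.01, 1.0, 1.0)
no_islet_uptake = whole_organ_signal(0.01, 0.0, 1.0)  # total loss of islet signal
print(baseline, no_islet_uptake)  # whole-organ signal drops by only ~1%
```

This is why a probe needs far higher beta-cell-to-acinar contrast before islet-specific changes become detectable against the whole-pancreas background.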
Our study clearly demonstrates that this is not the case for LAT1. The data for other presumptive beta cell imaging targets, such as VMAT2, GLP-1R, and the 5-HT metabolic pathway in human islets, are limited. A recent study showed that mRNA levels of VMAT2 in human pancreas were 500-fold higher in islets compared with exocrine cells44; however, there was still significant uptake of [18F]FP-(+)-DTBZ in patients with T1D and no residual beta cell function. Using autoradiography, it has been reported that there are about twice as many GLP-1 receptors in human islets as in acinar cells45. We have shown the presence of GLP-1R in glucagon- and amylase-positive cells11 and there is evidence for low-level expression of GLP-1R in exocrine pancreas. Finally, uptake of [11C]-hydroxytryptophan, postulated to be a surrogate measure of beta cell mass by targeting the serotonergic pathway, is 14 times higher in human islets than in exocrine cells46, and there is a small but significant decrease in uptake in patients with T1D. Therefore, there is still some promise of a non-invasive method to image changes in beta cell function and mass during the progression of diabetes. Further characterization of 18F-L-FEHTP uptake and retention in human islets, and not in rodent-derived cell lines or islets, may be required.\n\n\nConclusions\n\nIn the present study, we have shown specific accumulation of 18F-L-FEHTP in the pancreata of mice, with minimal background signal from the kidneys, liver or intestine. Such clear visualization of the pancreas using a targeted PET agent has not previously been achieved in mice. The PET signal represents the total integrated tracer uptake in the pancreas, and we could not detect changes in beta cell function or mass using our mouse models of Type 1 diabetes.\n\n\nData availability\n\nF1000Research: Dataset 1. Uptake of 18F-L-FEHTP in wt and Akita mice, 10.5256/f1000research.9129.d13093247\n\nF1000Research: Dataset 2. 
Immunofluorescence microscopy of LAT1, 10.5256/f1000research.9129.d12935648\n\nF1000Research: Dataset 3. Time-activity values for 18F-L-FEHTP uptake in the pancreas and kidney of wt and Akita mice, 10.5256/f1000research.9129.d12935749\n\nF1000Research: Dataset 4. Time-activity values for 18F-L-FEHTP uptake in the pancreas of wt and STZ-treated mice, 10.5256/f1000research.9129.d12935850\n\nF1000Research: Dataset 5. Beta cell mass and 18F-L-FEHTP uptake, 10.5256/f1000research.9129.d12935951",
"appendix": "Author contributions\n\n\n\nSD and MH conceived the study. SD designed the experiments. AA, RM, NC, TYL and MK conducted the biodistribution and imaging experiments and analyses. CB did the beta cell mass measurements, and JD conducted fluorescence microscopy and analysis. All authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe research was supported by a grant from the Lawson Health Research Institute Internal Research Fund to SD and National Institutes of Health R01 DK53307 and DK60596 to MH.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThe authors would like to thank Ms. Jennifer Hadway for technical assistance with animal imaging, and Dr. Jonathan Thiessen for assistance with the image analysis.\n\n\nSupplementary material\n\nSupplementary Video 1.\n\n3-D renderings of the representative images from (a) wt (n=6) and (b) Akita (n=5) mice which show highly specific uptake in the pancreas with little background uptake.\n\nClick here to access the data.\n\n\nReferences\n\nCnop M, Welsh N, Jonas JC, et al.: Mechanisms of pancreatic beta-cell death in type 1 and type 2 diabetes: many differences, few similarities. Diabetes. 2005; 54(Suppl 2): S97–107. PubMed Abstract | Publisher Full Text\n\nDhanvantari S: Imaging Functional Beta Cell Mass: Can we See Islets Clearly Now? Curr Mol Imaging. 2012; 1(1): 44–54. Publisher Full Text\n\nSouza F, Simpson N, Raffo A, et al.: Longitudinal noninvasive PET-based beta cell mass estimates in a spontaneous diabetes rat model. J Clin Invest. 2006; 116(6): 1506–1513. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWang P, Yoo B, Yang J, et al.: GLP-1R-targeting magnetic nanoparticles for pancreatic islet imaging. Diabetes. 2014; 63(5): 1465–1474. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcGirr R, Hu S, Yee SP, et al.: Towards PET imaging of intact pancreatic beta cell mass: a transgenic strategy. Mol Imaging Biol. 2011; 13(5): 962–972. PubMed Abstract | Publisher Full Text\n\nVirostko J, Radhika A, Poffenberger G, et al.: Bioluminescence imaging in mouse models quantifies beta cell mass in the pancreas and after islet transplantation. Mol Imaging Biol. 2010; 12(1): 42–53. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYong J, Rasooly J, Dang H, et al.: Multimodality imaging of β-cells in mouse models of type 1 and 2 diabetes. Diabetes. 2011; 60(5): 1383–1392. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVirostko J, Henske J, Vinet L, et al.: Multimodal image coregistration and inducible selective cell ablation to evaluate imaging ligands. Proc Natl Acad Sci U S A. 2011; 108(51): 20719–20724. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGoland R, Freeby M, Parsey R, et al.: 11C-dihydrotetrabenazine PET of the pancreas in subjects with long-standing type 1 diabetes and in healthy controls. J Nucl Med. 2009; 50(3): 382–389. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChrist E, Wild D, Forrer F, et al.: Glucagon-like peptide-1 receptor imaging for localization of insulinomas. J Clin Endocrinol Metab. 2009; 94(11): 4398–4405. PubMed Abstract | Publisher Full Text\n\nBehnam Azad B, Rota V, Yu L, et al.: Synthesis and evaluation of optical and PET GLP-1 peptide analogues for GLP-1R imaging. Mol Imaging. 2015; 14: 1–16. PubMed Abstract\n\nSaisho Y, Harris PE, Butler AE, et al.: Relationship between pancreatic vesicular monoamine transporter 2 (VMAT2) and insulin expression in human pancreas. J Mol Histol. 2008; 39(5): 543–551. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWu Z, Todorov I, Li L, et al.: In vivo imaging of transplanted islets with 64Cu-DO3A-VS-Cys40-Exendin-4 by targeting GLP-1 receptor. Bioconjug Chem. 
2011; 22(8): 1587–1594. PubMed Abstract | Publisher Full Text\n\nBrom M, Oyen WJ, Joosten L, et al.: 68Ga-labelled exendin-3, a new agent for the detection of Insulinomas with PET. Eur J Nucl Med Mol Imaging. 2010; 37(7): 1345–1355. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSelvaraju RK, Velikyan I, Johansson L, et al.: In vivo imaging of the glucagonlike peptide 1 receptor in the pancreas with 68Ga-labeled DO3A-exendin-4. J Nucl Med. 2013; 54(8): 1458–63. PubMed Abstract | Publisher Full Text\n\nBandara N, Zheleznyak A, Cherukuri K, et al.: Evaluation of Cu-64 and Ga-68 Radiolabeled Glucagon-Like Peptide-1 Receptor Agonists as PET Tracers for Pancreatic β cell Imaging. Mol Imaging Biol. 2016; 18(1): 90–98. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVegt E, Melis M, Eek A, et al.: Renal uptake of different radiolabelled peptides is mediated by megalin: SPECT and biodistribution studies in megalin-deficient mice. Eur J Nucl Med Mol Imaging. 2011; 38(4): 623–632. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFotiadis D, Kanai Y, Palacín M: The SLC3 and SLC7 families of amino acid transporters. Mol Aspects Med. 2013; 34(2–3): 139–158. PubMed Abstract | Publisher Full Text\n\nDuelli R, Enerson BE, Gerhart DZ, et al.: Expression of large amino acid transporter LAT1 in rat brain endothelium. J Cereb blood flow Metab. 2000; 20(11): 1557–1562. PubMed Abstract | Publisher Full Text\n\nNobusawa A, Kim M, Kaira K, et al.: Diagnostic usefulness of 18F-FAMT PET and L-type amino acid transporter 1 (LAT1) expression in oral squamous cell carcinoma. Eur J Nucl Med Mol Imaging. 2013; 40(11): 1692–700. PubMed Abstract | Publisher Full Text\n\nOhshima Y, Hanaoka H, Tominaga H, et al.: Biological evaluation of 3-[18F]fluoro-α-methyl-D-tyrosine (D-[18F]FAMT) as a novel amino acid tracer for positron emission tomography. Ann Nucl Med. 2013; 27(4): 314–324. 
PubMed Abstract | Publisher Full Text\n\nChiotellis A, Mu L, Müller A, et al.: Synthesis and biological evaluation of 18F-labeled fluoropropyl tryptophan analogs as potential PET probes for tumor imaging. Eur J Med Chem. 2013; 70: 768–780. PubMed Abstract | Publisher Full Text\n\nIkotun OF, Marquez BV, Huang C, et al.: Imaging the L-type amino acid transporter-1 (LAT1) with Zr-89 immunoPET. PLoS One. 2013; 8(10): e77476. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKrämer SD, Mu L, Müller A, et al.: 5-(2-18F-Fluoroethoxy)-L-tryptophan as a substrate of system L transport for tumor imaging by PET. J Nucl Med. 2012; 53(3): 434–442. PubMed Abstract | Publisher Full Text\n\nFukushima D, Doi H, Fukushima K, et al.: Glutamate exocrine dynamics augmented by plasma glutamine and the distribution of amino acid transporters of the rat pancreas. J Physiol Pharmacol. 2010; 61(3): 265–271. PubMed Abstract\n\nKrokowski D, Han J, Saikia M, et al.: A self-defeating anabolic program leads to β-cell apoptosis in endoplasmic reticulum stress-induced diabetes via regulation of amino acid flux. J Biol Chem. 2013; 288(24): 17202–17213. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLi R, Wu SC, Wang SC, et al.: Synthesis and evaluation of L-5-(2-[18F]fluoroethoxy)tryptophan as a new PET tracer. Appl Radiat Isot. 2010; 68(2): 303–308. PubMed Abstract | Publisher Full Text\n\nMcGirr R, Guizzetti L, Dhanvantari S: The sorting of proglucagon to secretory granules is mediated by carboxypeptidase E and intrinsic sorting signals. J Endocrinol. 2013; 217(2): 229–240. PubMed Abstract | Publisher Full Text\n\nGuizzetti L, McGirr R, Dhanvantari S: Two dipolar α-helices within hormone-encoding regions of proglucagon are sorting signals to the regulated secretory pathway. J Biol Chem. 2014; 289(21): 14968–14980. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nDouglas GA, McGirr R, Charlton CL, et al.: Characterization of a far-red analog of ghrelin for imaging GHS-R in P19-derived cardiomyocytes. Peptides. 2014; 54: 81–88. PubMed Abstract | Publisher Full Text\n\nBeamish CA, Strutt BJ, Arany EJ, et al.: Insulin-positive, Glut2-low cells present within mouse pancreas exhibit lineage plasticity and are enriched within extra-islet endocrine cell clusters. Islets. 2016; 8(3): 65–82. PubMed Abstract | Publisher Full Text\n\nChamson-Reig A, Arany EJ, Summers K, et al.: A low protein diet in early life delays the onset of diabetes in the non-obese diabetic mouse. J Endocrinol. 2009; 201(2): 231–9. PubMed Abstract | Publisher Full Text\n\nLiu M, Hodish I, Rhodes CJ, et al.: Proinsulin maturation, misfolding, and proteotoxicity. Proc Natl Acad Sci U S A. 2007; 104(40): 15841–15846. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRajan S, Eames SC, Park SY, et al.: In vitro processing and secretion of mutant insulin proteins that cause permanent neonatal diabetes. Am J Physiol Endocrinol Metab. 2010; 298(3): E403–10. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMatsuda T, Kido Y, Asahara S, et al.: Ablation of C/EBPbeta alleviates ER stress and pancreatic beta cell failure through the GRP78 chaperone in mice. J Clin Invest. 2010; 120(1): 115–126. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBachar-Wikstrom E, Wikstrom JD, Ariav Y, et al.: Stimulation of autophagy improves endoplasmic reticulum stress-induced diabetes. Diabetes. 2013; 62(4): 1227–1237. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMatsuyama R, Tomi M, Akanuma S, et al.: Up-regulation of L-type amino acid transporter 1 (LAT1) in cultured rat retinal capillary endothelial cells in response to glucose deprivation. Drug Metab Pharmacokinet. 2012; 27(3): 317–324. 
PubMed Abstract | Publisher Full Text\n\nBrahmajothi MV, Tinch BT, Wempe MF, et al.: Hyperoxia inhibits nitric oxide treatment effects in alveolar epithelial cells via effects on L-type amino acid transporter-1. Antioxid Redox Signal. 2014; 21(13): 1823–1836. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNardi F, Hoffmann TM, Stretton C, et al.: Proteasomal modulation of cellular SNAT2 (SLC38A2) abundance and function by unsaturated fatty acid availability. J Biol Chem. 2015; 290(13): 8173–8184. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKrokowski D, Jobava R, Guan BJ, et al.: Coordinated Regulation of the Neutral Amino Acid Transporter SNAT2 and the Protein Phosphatase Subunit GADD34 Promotes Adaptation to Increased Extracellular Osmolarity. J Biol Chem. 2015; 290(29): 17822–37. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRooman I, Lutz C, Pinho AV, et al.: Amino acid transporters expression in acinar cells is changed during acute pancreatitis. Pancreatology. 2013; 13(5): 475–485. PubMed Abstract | Publisher Full Text\n\nKowalik AS, Johnson CL, Chadi SA, et al.: Mice lacking the transcription factor Mist1 exhibit an altered stress response and increased sensitivity to caerulein-induced pancreatitis. Am J Physiol Gastrointest Liver Physiol. 2007; 292(4): G1123–32. PubMed Abstract | Publisher Full Text\n\nZhang L, Thurber GM: Quantitative Impact of Plasma Clearance and Down-regulation on GLP-1 Receptor Molecular Imaging. Mol Imaging Biol. 2016; 18(1): 79–89. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFreeby MJ, Kringas P, Goland RS, et al.: Cross-sectional and Test-Retest Characterization of PET with [18F]FP-(+)-DTBZ for β Cell Mass Estimates in Diabetes. Mol Imaging Biol. 2016; 18(2): 292–301. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKörner M, Stöckli M, Waser B, et al.: GLP-1 receptor expression in human tumors and human normal tissues: potential for in vivo targeting J Nucl Med. 
2007; 48(5): 736–743. PubMed Abstract | Publisher Full Text\n\nEriksson O, Espes D, Selvaraju RK, et al.: Positron emission tomography ligand [11C]5-hydroxy-tryptophan can be used as a surrogate marker for the human endocrine pancreas. Diabetes. 2014; 63(10): 3428–3437. PubMed Abstract | Publisher Full Text\n\nAbbas A, Beamish C, McGirr R, et al.: Dataset 1 in: Characterization of 5-(2-18F-Fluoroethoxy)-L-Tryptophan for PET Imaging of the Pancreas. F1000Research. 2016. Data Source\n\nAbbas A, Beamish C, McGirr R, et al.: Dataset 2 in: Characterization of 5-(2-18F-Fluoroethoxy)-L-Tryptophan for PET Imaging of the Pancreas. F1000Research. 2016. Data Source\n\nAbbas A, Beamish C, McGirr R, et al.: Dataset 3 in: Characterization of 5-(2-18F-Fluoroethoxy)-L-Tryptophan for PET Imaging of the Pancreas. F1000Research. 2016. Data Source\n\nAbbas A, Beamish C, McGirr R, et al.: Dataset 4 in: Characterization of 5-(2-18F-Fluoroethoxy)-L-Tryptophan for PET Imaging of the Pancreas. F1000Research. 2016. Data Source\n\nAbbas A, Beamish C, McGirr R, et al.: Dataset 5 in: Characterization of 5-(2-18F-Fluoroethoxy)-L-Tryptophan for PET Imaging of the Pancreas. F1000Research. 2016. Data Source"
}
|
[
{
"id": "15266",
"date": "29 Jul 2016",
"name": "John Virostko",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThis manuscript describes the use of the PET tracer [18F]-L-FEHTP in an attempt to image beta cell mass. Pancreatic uptake of the tracer is high. However, while experiments in Akita mice show a slight (but highly variable) decline in pancreatic [18F]-L-FEHTP uptake, there is no change in STZ-treated animals. The authors demonstrate that the target receptor, LAT1, is expressed throughout the pancreas, and is not confined to islets.\n\nComments\nAs biodistribution studies were performed immediately following the cessation of PET imaging, why does the biodistribution data (Figure 2A) not match the end of the TAC curve (Figure 4A)? This biodistribution data does not support the conclusion of ‘significantly reduced 18F-L-FEHTP uptake in Akita mice.’\n\nIn Figure 2 is all activity coming from the pancreas? It appears that there may be PET signal from other organs – is the spleen visible on these images?\n\nFigure 4 caption states: ‘Time-activity curves (TACs) for 18F-L-FEHTP uptake in the pancreas of a) wt and b) Akita mice.’ However, I believe the figures show the pancreas TAC in panel A and the kidney TAC in panel B. The text under the heading ‘Dynamic scan analysis’ needs to be edited to correctly reference these figures.\n\nIt is interesting that the kidney of Akita mice also exhibits a statistically significant decline in tracer uptake. 
Is there known LAT expression in the kidney that would explain this finding?\n\nThe authors state that: ‘Such clear visualization of the pancreas using a targeted PET agent has not previously been achieved.’ Does the level of pancreatic uptake exceed that previously achieved with C11-acetate, another tracer with high pancreas uptake?\n\nFigure 5: Connecting the points in the TAC would improve consistency with previously presented TACs.",
"responses": [
{
"c_id": "2266",
"date": "14 Nov 2016",
"name": "Ahmed Abbas",
"role": "Author Response",
"response": "This is an excellent question and one which everyone who characterizes a new PET tracer should think about. First, one difference between calculation of biodistribution and SUV data is that SUV is normalized to body weight, while biodistribution is normalized to organ weight. Second, SUVs calculated in the context of a dynamic scan give an idea of tracer flux through the tissue, while bioD is a measure of the activity in each organ in the absence of blood circulation. These differences may account for the mismatch in the final time point of the TAC and the biodistribution results. These differences may also explain why liver uptake appears high in the bioD, but the liver signal cannot be discerned in the PET images. The supplementary video shows that there was uptake in other organs (which do not appear in the images shown in Figure 2). We agree that accumulation of activity in the spleen cannot be ruled out, and have inserted this comment in the Discussion (paragraph 2).\n\nWe thank Dr. Virostko for alerting us to this error, and we have corrected the section under Dynamic Scan Analysis.\n\nLiterature evidence (Fotiadis, et al. Reference #18) does not support LAT1 renal expression.\n\nWe have inserted the words “in mice” to reflect the fact that we are comparing our results to probes that target specific biomarkers (GLP-1R, VMAT2) in mouse pancreas. The key word in our statement is “targeted”. 11C-acetate does not target a specific mechanism, as its means of uptake and retention are not known. Studies of 11C-acetate have been done exclusively on humans and it is difficult to compare SUVs of 18F-FEHTP in mice and 11C-acetate in humans. The mean SUV of 11C-acetate activity in normal human pancreas is reported at 11.9±4.3 (Song et al. 2009, Nucl. Med. Commun. 30:462-465, DOI 10.1097/MNM.0b013e32832aa7ce), and our mean reported activity of 18F-FEHTP in WT C57Bl/6 male mice is 1.5±0.04.\n\nWe thank Dr. 
Virostko for this suggestion and have connected the points in the TAC in Figure 5."
}
]
},
{
"id": "15268",
"date": "16 Aug 2016",
"name": "Christopher Phenix",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe authors are developing a PET method using the tracer 18F-L-FEHTP to detect changes in beta cell mass and/or function through imaging the expression of the LAT1 transporter within the pancreas, to identify those individuals at high risk of developing diabetes. Although they find that the tracer is capable of revealing lower LAT1 expression in Akita mice (but not STZ-treated mice), high uptake in other types of pancreatic cells may preclude the use of 18F-L-FEHTP to image beta cell mass and function.\n\nIn the introduction, the authors mention another study that they published where PET was used to successfully image changes in beta cell mass prior to fasting hyperglycemia. I think it's important, for the context and rationale of this work, to explain why a different tracer was needed for this work since as written it appears they already have a good tracer (although reading the reference would make it clear that they actually imaged a reporter gene).\n\nIn the introduction, the sentence that discusses VMAT2 expression in other tissues is incomplete. Many PET tracers that image receptor expression have targets that are expressed in a variety of tissues, and this does not prevent their use. A bit more information should be added at the end of the sentence to help the reader understand why VMAT2 expression in other tissues resulted in such poor PET images.\n\nThe second image in Figure 2B has the arrow pointing into a location without any tracer uptake. 
I don’t think that is intentional, and it is close to an area with high uptake. Although slight, the arrow should be moved a bit to clearly point out the pancreas in that specific image. In addition, the figure would be generally improved by labelling all organs that demonstrate high tracer uptake in the PET images.\n\nWould it have been reasonable to perform statistical analysis to determine if there is a potential outlier in the Akita mouse data (with very high pancreatic uptake) as shown in Figure 2A?\n\nIn paragraph 1 of the discussion, use of the word “amazingly” seems inappropriate in the paper. It is established that LAT1 is highly expressed in the pancreas, and high uptake was previously reported for this tracer by Kramer et al. (2012), so I wasn’t really “amazed” by the high uptake. The uptake in the pancreas is indeed impressive, however.\n\nIn agreement with Dr. Virostko, I am a bit confused why there are dramatic changes in the % injected dose/g values measured in pancreas vs the kidney from the biodistribution data but much smaller relative changes in the SUV values obtained from the PET data. Can the authors explain this, as I could be missing something? At least a comment in the text would be valuable.\n\nAre there really no significant differences in uptake between the pancreas and blood shown in Figure 1?",
"responses": [
{
"c_id": "2265",
"date": "14 Nov 2016",
"name": "Ahmed Abbas",
"role": "Author Response",
"response": "We apologize if the rationale of the study was not clear. We have added “in the context of a reporter gene” to line 14 of the Introduction to clarify the need to produce a clinically applicable PET tracer. We have clarified our presentation of the VMAT2 findings. We have adjusted the arrowhead to indicate a region of high uptake and indicated the location of the spleen. We have tested for the presence of outliers in the Akita pancreas biodistribution data (Figure 2A) as suggested. Based on Grubbs’ test for outliers (alpha = 0.05), there are no outliers. Also, all values are within 2 standard deviations of the mean. We have replaced “amazingly” with “interestingly”. See response to Dr. Virostko’s comment. We have run the statistical test again and confirmed that there is no significant difference in tracer uptake between pancreas and blood."
}
]
}
] | 1
|
https://f1000research.com/articles/5-1851
|
https://f1000research.com/articles/5-2546/v1
|
20 Oct 16
|
{
"type": "Opinion Article",
"title": "Zika mosquito vectors: the jury is still out",
"authors": [
"Walter S. Leal"
],
"abstract": "After a 40-year hiatus, the International Congress of Entomology (ICE 2016) convened in Orlando, Florida (September 25-30, 2016). One of the symposia at ICE 2016, the Zika Symposium, covered multiple aspects of the Zika epidemic, including epidemiology, sexual transmission, genetic tools for reducing transmission, and particularly vector competence. While there was a consensus among participants that the yellow fever mosquito, Aedes aegypti, is a vector of the Zika virus, there is growing evidence indicating that the range of mosquito vectors might be wider than anticipated. In particular, three independent groups from Canada, China, and Brazil presented and discussed laboratory and field data strongly suggesting that the southern house mosquito, Culex quinquefasciatus, also known as the common mosquito, is highly likely to be a vector in certain environments.",
"keywords": [
"Zika",
"Aedes aegypti",
"Culex quinquefasciatus",
"Chikungunya",
"microcephaly",
"Guillain-Barré syndrome"
],
"content": "Background\n\nThe International Congresses of Entomology (ICEs) are the “Olympics of Entomology,” which started in 1910 in Brussels, Belgium. It is only the third time that ICE has been held in the United States of America (USA), the last conference being 40 years ago when delegates convened in Washington, DC for XV ICE. This time delegates from 102 countries convened in Orlando, Florida (September 25–30, 2016) for a historic event titled “Entomology Without Borders.” With 6,682 delegates, ICE 2016 was undoubtedly the largest gathering of scientists in the history of entomology.\n\nThis Opinion article is based mainly on the current literature and the Zika Symposium at ICE 2016, which was organized by Dr. Constância Ayres, Oswaldo Cruz Foundation (FIOCRUZ-PE), Recife, Brazil, and Dr. Adriana Costero, National Institutes of Health, Bethesda, Maryland. The symposium featured the following speakers (Figure 1): Dr. Celina Martelli, Centro de Pesquisas Aggeu Magalhães, FIOCRUZ-PE, Brazil; Dr. Stephen Higgs, Kansas State University, Manhattan, Kansas; Dr. Brian D. Foy, Colorado State University, Fort Collins, Colorado; Dr. Constância Ayres, FIOCRUZ-PE; Dr. Duschinka Guedes, FIOCRUZ-PE; Dr. Luciano Moreira, FIOCRUZ-MG; Dr. Anthony A. James, University of California, Irvine; Dr. Fiona F. Hunter, Brock University, Canada; and Dr. Tang-yan Zhao, Institute of Microbiology and Epidemiology, Beijing, China; and was attended by hundreds of delegates.\n\nFrom the left: Dr. Adriana Costero; Dr. Celina Martelli; Dr. Duschinka Guedes (back row); Dr. Tang-yan Zhao; Dr. Anthony A. James (back row); Dr. Stephen Higgs; Dr. Constância Ayres; the author (back row); Dr. Brian D. Foy; Dr. Luciano Moreira; and Dr. Fiona F. 
Hunter.\n\nAt the time we were preparing to submit a bid on behalf of the Entomological Society of America to host ICE 2016 in Orlando and during the first years of preparation for the conference, Zika was not part of the vocabulary of a large majority of entomologists, myself included. The Zika virus (ZIKV) was isolated first from a febrile monkey and later from the mosquito Aedes africanus about the time our predecessors were preparing to host VIII ICE in Stockholm1. After Sweden in 1948, ICE convened in Amsterdam, Montreal, Vienna, London, Moscow, Canberra, Washington, Kyoto, Hamburg, Vancouver, Beijing, Florence, Iguaçu Falls, Brisbane, Durban, and Daegu (South Korea), and yet ZIKV was not part of our jargon. During these years, while ICE delegates were travelling throughout the world to advance the field of entomology, ZIKV was silently making headway out of the Zika forest (Entebbe, Uganda), hitchhiking on humans, and conquering new habitats. As it reached new environments, the virus was likely being transmitted locally by native species of mosquitoes and/or other “illegal immigrants” as, for example, the notorious yellow fever mosquito, Aedes aegypti. ZIKV was isolated from humans for the first time in 1954 during an outbreak of jaundice suspected of being yellow fever. The virus was isolated from one of the three patients examined, and the other two exhibited high titers of serum antibodies against the virus2. As Dr. Stephen Higgs pointed out in his presentation, outbreaks occurred in Asia between 1954 and 2007, and up to the point when the virus reached Yap Island, only 14 human cases had been identified, so no one paid much attention to the virus. However, on Yap Island, Federated States of Micronesia, it was estimated that 73% of residents 3 years of age or older had been infected with ZIKV3; the infection is characterized by rash, conjunctivitis, and arthralgia. 
In an attempt to identify the mosquito vector, investigators collected adults from the field and found that Aedes hensilli and Culex quinquefasciatus were the predominant species, at 41.2 and 28.1%, respectively. Because the virus was not found in any field-collected Aedes mosquitoes, they conducted laboratory studies to determine the vector competence of Ae. hensilli to transmit ZIKV. About 80% of the mosquitoes fed through the Hemotek® feeding system on ZIKV-containing sheep blood became infected, but only 13–23% developed disseminated infections. By contrast, 60% of the mosquitoes fed blood containing Chikungunya virus (CHIKV) became infected and 80% disseminated. Their findings supported the possibility that Ae. hensilli served as a vector during the Zika outbreak on Yap Island.\n\nUp until 2013, ZIKV infection was considered a mild infection, but in French Polynesia the symptoms were complicated by Guillain-Barré syndrome4, a rare neurological disorder identified 100 years ago. It has been estimated that between September 2013 and March 2014 as many as 28,000 patients were affected, i.e., ca. 11% of the Polynesian population5. The next and most notorious ZIKV outbreak occurred in Brazil, the locale from which the virus has been moving rapidly up toward North America and back to Africa. Apparently, ZIKV went under the “radar screen” soon after the arrival of CHIKV. Brazil was expecting an outbreak of CHIKV; primary care physicians (PCPs) had been trained at least two years in advance on how to distinguish CHIKV symptoms from those elicited by the Dengue virus (DENV). The first local transmission of CHIKV in Brazil was reported on September 16, 20146. Epidemiologists and PCPs created a network using WhatsApp, “Chickv – the mission,” to share information and get better prepared for the epidemic. While working on CHIKV, they started observing cases sometimes referred to as “dengue fraca” (weak dengue), which was characterized by low fever and an intense allergic reaction. 
The detective work of medical doctors prompted epidemiologists to consider the possibility of another virus outbreak. ZIKV was detected in Brazil for the first time in April/May 2015. According to an investigative account by professor of bioethics, Debora Diniz6, the first publication on the occurrence of ZIKV in Brazil7 was authored by the second group to isolate the virus. The first group to report their findings to the media (on April 29, 2015) published their findings in a peer-reviewed journal8 five months after the first scientific report7.\n\nDuring the Zika Symposium, Dr. Celina Martelli reported that in August–September 2015, neurologists, neonatologists, and epidemiologists became more aware of neonates with microcephaly, and in October clinical data and brain imaging suggested a congenital infection. Then, they noticed an important correlation between the major circulation of ZIKV in the Northeast region of Brazil and the time of the first gestational months of mothers. Thereafter, ZIKV was detected in amniotic fluid9, and it is now well established that ZIKV causes microcephaly10. It was this teratogenic effect, unique for a flavivirus11 and arboviruses in general, coupled with its rapid dissemination in Latin America that led the World Health Organization on February 1, 2016 to declare ZIKV an international public health emergency.\n\nZIKV is the only arbovirus currently known to also be sexually transmitted. Speaking at ICE 2016, Dr. Brian Foy suggested that sexual transmission is underestimated in epidemiological data. He pointed out that there are two main causes for this underestimation. It is difficult to decouple sexual from mosquito-borne transmission in outbreak areas and questionnaires regarding patients’ sex life tend to be inaccurate for various reasons. The case study of Dr. Foy and his wife in the United States was the first report to suggest sexual transmission of ZIKV. Five to six days after returning from field research in Senegal, Dr. 
Foy had typical symptoms of arbovirus infection, and a few days later his wife started developing similar symptoms, although her headaches and photophobia were more intense and arthralgia remained for months. There were no infected mosquito vectors in the area, their children did not get sick, and Dr. Foy had no reason to “lie on a questionnaire.” He immediately suspected sexual transmission, and the serological data supported his hypothesis12. It was the first documented case of sexual transmission of ZIKV, but it garnered little attention from authorities, because no one could envision that ZIKV would cause such an epidemic eight years later. Dr. Foy suggested that sexual transmission is a matter of major concern, particularly now that there is growing evidence that ZIKV remains active in seminal fluid for six months13 and localizes to and replicates very well in tissues of the urogenital tract14 after both mosquito bite and sexual transmission. In short, the epidemic is unlikely to be controlled only by interrupting/terminating transmission by mosquito vectors.\n\nDr. Luciano Moreira discussed his laboratory findings indicating that a bacterium isolated from the southern house mosquito, Cx. quinquefasciatus, Wolbachia sp. (wMel_Br strain), blocks transmission of ZIKV by the yellow fever mosquito, Ae. aegypti, and could be a useful tool for decreasing ZIKV transmission. Dr. Anthony James gave an overview of all genetic tools currently available in vector control. He explained that the strategy for population suppression is a genetic tool analogous to the development of an insecticide in the sense that the ultimate goal is to reduce or eliminate populations of vectors. 
Another strategy is population alteration (formerly known as population replacement) in which the ability of the mosquito to transmit a virus is changed, i.e., a gene engineered into the mosquito’s genome impacts the vector competence and, consequently, the virus does not replicate and is not transmitted. This is analogous to the Wolbachia strategy reported by Dr. Moreira. The two techniques have quite different approaches for an ultimate goal of reducing transmission of vector-borne diseases. The former is aimed at reducing mosquito populations and, consequently, reducing mosquito bites and transmission of viruses. On the other hand, with population alteration mosquitoes still bite, but no longer transmit the target virus. Lastly, Dr. James discussed a technique called gene drive, which allows a gene to move quickly into a population. This would have a long-term effect, so he emphasized the need for laboratory tests and contained facility experiments before full implementation in a vector control program. He concluded by suggesting that the best scenario is cases involving one single pathogen causing a disease and one single mosquito vector. Later, in the discussion he gave a specific example when answering questions from Dr. Thomas Scott, University of California-Davis. He suggested that these genetic tools might not be the best strategies for ZIKV given that at this point there seem to be multiple vectors not only at the species but also at the population level. The current genetic technologies would not be appropriately applied to such complex systems. He would not recommend any specific genetic strategy for ZIKV at this point. By contrast, he indicated that DENV might be a good candidate, particularly in areas where clearly there is only one vector species. Dr. James noted that in the Aedes-DENV case replacement might be the best alternative given that it is difficult to achieve population suppression of Aedes mosquitoes.\n\nDr. 
Fiona Hunter examined closely the phylogeny of ZIKV-related viruses and showed that ZIKV belongs to a transition point between classical Culex-associated and Aedes-associated viruses. Her analysis suggested that ZIKV belongs to a clade (supported 99%) of neurotropic viruses, including West Nile Virus (WNV) and Saint Louis Encephalitis (SLE) virus, which are typically transmitted by Culex mosquitoes. ZIKV does not belong to a clade of hemorrhagic viruses, such as DENV and yellow fever, which are typically transmitted by Aedes mosquitoes. In short, there is apparently a dichotomy between the mosquito vector species vis-à-vis the taxonomy of the virus per International Committee on Taxonomy of Viruses. She suggested that we should keep an open mind, because ZIKV might have a larger range of vectors. In her field studies in the Dominican Republic, she collected one ZIKV-infected Culex mosquito, but was not able to identify the mosquito to the species level because of the damage caused by her trapping system. Additionally, she reported preliminary data on vector competence of Cx. pipiens collected in Canada by RT-qPCR and plaque assays showing at least 2% ZIKV transmission with her ongoing analysis. In summary, her taxonomic analysis along with field and laboratory findings support her hypothesis for a wider range of ZIKV vectors.\n\nDr. Constância Ayres suggested that Cx. quinquefasciatus is being held to a different standard than Ae. aegypti with regard to ZIKV transmission and its potential role as a ZIKV vector has been overlooked. She started her presentation by stressing the textbook15 criteria for incrimination of arthropods as vectors of humans and other animals. 
In particular, she emphasized that in multiple reported cases no data were available in the urban environment regarding criteria #3 and #4, i.e., “repeated demonstration that suspected vectors, collected under natural conditions, harbor the identifiable, infective stage of the pathogen” and “a biological association between clinical cases and infected mosquitoes in time and space,” respectively. For example, no infected mosquitoes were collected from Yap State and French Polynesia during the outbreaks. Researchers studied vector competence under laboratory conditions and assumed that Aedes species were vectors, although they never fulfilled the above criteria. She stressed the fact that there are significant populations of Cx. quinquefasciatus in these locations, but they were not analyzed for ZIKV infection, and vector competence was not studied under laboratory conditions. Given this scenario, Dr. Ayres wrote a position paper16 arguing the need to determine unambiguously whether Cx. quinquefasciatus is a ZIKV vector. Ever since, various studies have suggested that Culex species are not ZIKV vectors, whereas evidence from other studies (see below) strongly suggests that Cx. quinquefasciatus might be a significant vector. Of note, Aliota and collaborators17 fed mosquitoes from laboratory colonies on mice previously infected with Asian lineage ZIKV strain PRVABC59. All samples from Cx. pipiens mosquitoes and all replicates were negative for ZIKV by plaque assay, 14 days post-infection (dpi). By contrast, at least one replicate each of Ae. aegypti and Ae. albopictus showed at least a 22% transmission rate. Huang et al.18 studied infection and dissemination rates at 7 and 14 dpi and demonstrated that Cx. pipiens and Cx. quinquefasciatus (Vero Beach strain) were refractory to ZIKV, although they did not report a positive control. Fernandes and collaborators19 achieved a remarkable feat. In a short period of time, they were able to capture Cx. 
quinquefasciatus from various suburbs in Rio de Janeiro and conduct vector competence studies under laboratory conditions using F1 generations. Although survival rate, yield of blood meals, and other parameters were not reported, their data showed zero transmission rates for Cx. quinquefasciatus at 7, 14, and 21 dpi. Their publication was widely highlighted in the press and social media, but a report from Guo et al.20 that appeared the next day got absolutely no coverage. As discussed (see also below), this report provides clear evidence of infection, dissemination, replication in salivary glands, and transmission to infant mice by Cx. pipiens quinquefasciatus. As it stands now, we should take Dr. Hunter’s advice and keep in mind that the jury is still out regarding ZIKV vectors. It may well be that virus strains and/or mosquito populations account for the discrepancies. Dr. Ayres pointed out in her presentation that one should not forget the socioeconomic and ecologic factors, environment, and behavior of Cx. quinquefasciatus in Recife when considering the full range of vectors.\n\nDr. Duschinka Guedes presented solid evidence demonstrating that ZIKV was detected in midgut, salivary glands, and saliva of Cx. quinquefasciatus from Recife, Brazil, which were artificially infected with a strain of the ZIKV isolated from a local patient. In addition to conventional vector competence studies under laboratory conditions, Dr. Guedes showed that both Cx. quinquefasciatus and Ae. aegypti expectorated the virus into honey-soaked filter papers, 9–12 dpi. Additional evidence of replication in the salivary glands was demonstrated by transmission electron microscopy data. ZIKV-infected acinar cells showed signs of cytopathic disruptions, and mature ZIKV particles were clearly observed. Lastly, she reported that multiple pools of Cx. quinquefasciatus mosquitoes collected from urban areas with a high incidence of microcephaly in Recife were infected with ZIKV21. Dr. 
Guedes’ presentation strongly suggests that in Recife, Brazil, both Ae. aegypti and Cx. quinquefasciatus are ZIKV vectors. Working independently in China, Dr. Tang-yan Zhao reached similar conclusions. Dr. Zhao performed conventional vector competence studies, which demonstrated ZIKV replication in midgut and salivary glands20. Additionally, she placed nine one-day-old infant mice in a cage with Cx. quinquefasciatus mosquitoes previously infected with ZIKV. The infant mice developed red blotches on the skin, and at 10 dpi eight of the nine mice had high titers of viral RNA in their brains. These two independent studies discussed at the Zika Symposium are complementary in nature, and they both support the notion that Cx. quinquefasciatus might be a ZIKV vector. It is, therefore, advisable that vector management programs aimed at mitigating ZIKV transmission do not ignore Cx. quinquefasciatus, unless new and unambiguous evidence shows that a target population of the common mosquito is not a vector. For the time being, it is prudent to consider that vector competence may vary among different populations of the same species and/or the strains of the virus.\n\nDuring the discussion at the end of the symposium, the forum was opened for questions and comments. “Is anyone looking for the virus in birds?” asked Dr. Scott Ritchie, James Cook University, Australia. This question, like many others raised, was thought provoking, and it captures the sentiment that we still do not have many of the answers when it comes to ZIKV. Hopefully, we will be better prepared when convening in Finland for ICE 2020. Wouldn’t it be wonderful to report in Helsinki that mosquito vector populations have been reduced or eliminated, the Zika and other epidemics have been contained, vaccines have been made available, and entomologists are ready to further improve the human condition by tackling problems other than the Zika epidemic?
"appendix": "Competing interests\n\n\n\nW.S.L. has a long-term collaboration with FIOCRUZ-PE and is a co-author of the paper by Guedes et al.21, currently under consideration for publication.\n\n\nGrant information\n\nMy research on vector biology is supported by the National Institute of Allergy and Infectious Diseases of the National Institutes of Health under award numbers R01AI095514 and R21AI128931.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nThe author would like to thank Drs. Constância Ayres and Adriana Costero for organizing the symposium, all speakers for sharing the most recent data, and ICE 2016 delegates who attended the Zika Symposium, particularly those who asked question and shared their thoughts.\n\n\nReferences\n\nDick GW, Kitchen SF, Haddow AJ: Zika virus. I. Isolations and serological specificity. Trans R Soc Trop Med Hyg. 1952; 46(5): 509–20. PubMed Abstract | Publisher Full Text\n\nMacnamara FN: Zika virus: a report on three cases of human infection during an epidemic of jaundice in Nigeria. Trans R Soc Trop Med Hyg. 1954; 48(2): 139–45. PubMed Abstract | Publisher Full Text\n\nDuffy MR, Chen TH, Hancock WT, et al.: Zika virus outbreak on Yap Island, Federated States of Micronesia. N Engl J Med. 2009; 360(24): 2536–43. PubMed Abstract | Publisher Full Text\n\nOehler E, Watrin L, Larre P, et al.: Zika virus infection complicated by Guillain-Barre syndrome--case report, French Polynesia, December 2013. Euro Surveill. 2014; 19(9): pii: 20720. PubMed Abstract | Publisher Full Text\n\nJouannic JM, Friszer S, Leparc-Goffart I, et al.: Zika virus infection in French Polynesia. Lancet. 2016; 387(10023): 1051–2. PubMed Abstract | Publisher Full Text\n\nDiniz D: Zika - do sertao Nordestino a ameaca global. Rio de Janeiro: Civilizacao Brasileira; 2016. 
Reference Source\n\nZanluca C, Melo VC, Mosimann AL, et al.: First report of autochthonous transmission of Zika virus in Brazil. Mem Inst Oswaldo Cruz. 2015; 110(4): 569–72. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCampos GS, Bandeira AC, Sardi SI: Zika virus outbreak, Bahia, Brazil. Emerg Infect Dis. 2015; 21(10): 1885–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCalvet G, Aguiar RS, Melo AS, et al.: Detection and sequencing of Zika virus from amniotic fluid of fetuses with microcephaly in Brazil: a case study. Lancet Infect Dis. 2016; 16(6): 653–60. PubMed Abstract | Publisher Full Text\n\nde Araújo TV, Rodrigues LC, de Alencar Ximenes RA, et al.: Association between Zika virus infection and microcephaly in Brazil, January to May, 2016: preliminary report of a case-control study. Lancet Infect Dis. 2016; pii: S1473-3099(16)30318-8. PubMed Abstract | Publisher Full Text\n\nFauci AS, Morens DM: Zika Virus in the Americas--Yet Another Arbovirus Threat. New Engl J Med. 2016; 374(7): 601–4. PubMed Abstract | Publisher Full Text\n\nFoy BD, Kobylinski KC, Chilson Foy JL, et al.: Probable non-vector-borne transmission of Zika virus, Colorado, USA. Emerg Infect Dis. 2011; 17(5): 880–2. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNicastri E, Castilletti C, Liuzzi G, et al.: Persistent detection of Zika virus RNA in semen for six months after symptom onset in a traveller returning from Haiti to Italy, February 2016. Euro Surveill. 2016; 21(32). PubMed Abstract | Publisher Full Text | Free Full Text\n\nNicastri E, Castilletti C, Balestra P, et al.: Zika Virus Infection in the Central Nervous System and Female Genital Tract. Emerg Infect Dis. 2016; 22(12). PubMed Abstract | Publisher Full Text\n\nEldridge BF, Edman JD: Medical Entomology: A Textbook on Public Health and Veterinary Problems Caused by Arthropods. Rev. ed. Dordrecht; Boston: Kluwer Academic Publishers. 2003; x, 659. 
Publisher Full Text\n\nAyres CF: Identification of Zika virus vectors and implications for control. Lancet Infect Dis. 2016; 16(3): 278–9. PubMed Abstract | Publisher Full Text\n\nAliota MT, Peinado SA, Osorio JE, et al.: Culex pipiens and Aedes triseriatus Mosquito Susceptibility to Zika Virus. Emerg Infect Dis. 2016; 22(10): 1857–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHuang YJ, Ayers VB, Lyons AC, et al.: Culex Species Mosquitoes and Zika Virus. Vector Borne Zoonotic Dis. 2016; 16(10): 673–6. PubMed Abstract | Publisher Full Text\n\nFernandes RS, Campos SS, Ferreira-de-Brito A, et al.: Culex quinquefasciatus from Rio de Janeiro Is Not Competent to Transmit the Local Zika Virus. PLoS Negl Trop Dis. 2016; 10(9): e0004993. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGuo XX, Li CX, Deng YQ, et al.: Culex pipiens quinquefasciatus: a potential vector to transmit Zika virus. Emerg Microbes Infect. 2016; 5(9): e102. PubMed Abstract | Publisher Full Text\n\nGuedes DR, Paiva MH, Donato MM, et al.: Zika virus replication in the mosquito Culex quinquefasciatus in Brazil. bioRxiv. 2016. Publisher Full Text"
}
|
[
{
"id": "17310",
"date": "31 Oct 2016",
"name": "Pedro Lagerblad de Oliveira",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThis opinion article is essentially a meeting report, where most of the speakers showed data indicating that Culex mosquitoes could be vectors of Zika. However, as pointed out by Dr. Leal, several other groups published independent reports with negative results for Culex infection with Zika. While the major claim of this paper (that the vector competence of Culex for Zika is still an open question) seems a fair statement, reading the published papers does not provide an immediate explanation for the discrepant results from different groups. Although experimental errors cannot be excluded, the data can be reconciled, as differences in virus and/or mosquito strains (or even symbiont microbiota) can explain the distinct findings from each group. If Zika transmission were merely an academic question, I would say that time would solve it. However, this is a major threat to global public health, and transmission mediated by one or two species calls for very distinct control strategies. Therefore, I would add to this report that this debate urgently needs to move into the experimental field, ideally as collaborative research, but at least with exchange of virus and mosquito strains and rigorous comparisons of methodologies.",
"responses": []
},
{
"id": "17135",
"date": "07 Nov 2016",
"name": "Coby Schal",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nIn summarizing a timely and well-attended symposium at the ICE2016 conference, Leal’s opinion article also masterfully encapsulates the history of the Zika epidemic and glaring gaps in our understanding of Zika virus transmission. While still in its early phase, the Zika outbreak is a relatively “minor” public health event on a global scale, compared to other infectious and arboviral diseases, such as malaria. But because of its astonishingly rapid emergence, especially in Brazil, its unusually damaging effect on neurological development of the human neonatal brain, its spectacular evolution since it was first isolated, and its sexual transmission and stealthy existence in seminal fluid in humans, this virus has “gone viral”, especially during the 2016 summer Olympics in Rio de Janeiro. Leal’s summary highlights important needs in the research agenda to combat Zika. All public health interventions rally around “source/vector reduction”, but the full array of mosquito species that are competent to vector the Zika virus remains to be determined. 
Recent research has added genetic approaches to the tool-box for vector management, but it will be years before these powerful options can be broadly implemented.\nA couple of corrections and editorial points:\nConsider replacing “conquering” with “spreading into” in the sentence: “ZIKV was silently making headway out of the Zika forest (Entebbe, Uganda), hitchhiking on humans, and conquering new habitats” (p.2).\n\nChange “91-” to “nine one-” in the sentence: “Additionally, she placed 91-day-old infant mice in a cage” (p.5).",
"responses": []
},
{
"id": "17268",
"date": "08 Nov 2016",
"name": "Mario Alberto Cardoso da Silva-Neto",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nI have read the MS \"Zika mosquito vectors: the jury is still out\" by Walter S. Leal, Department of Molecular and Cellular Biology, University of California-Davis, Davis, USA. This MS nicely describes the scientific and academic atmosphere and major findings of the International Congress of Entomology (ICE 2016), held in Orlando, Florida (September 25-30, 2016). The MS goes far beyond a simple description of the common routine at meetings such as this. In fact, given our major and now global concern about arboviruses, the MS provides a precise account of the most relevant issues discussed at this meeting. It also poses an urgent perspective that should be pursued regarding the discussion of the true vectors of Zika. Such relevant points must be addressed immediately, as they bear on the final definition of the molecular and genetic strategies to be used for vector control, which were also discussed at the meeting and are reported in the present MS. I believe the scientific content and the present format of this MS are suitable for indexing.",
"responses": []
},
{
"id": "17437",
"date": "09 Nov 2016",
"name": "Andre Freire Furtado",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn this opinion article, Dr. Walter S. Leal presents not only the discrepant results in the literature concerning the vector or vectors mosquitoes of Zika (ZIKV) – (Aedes, Culex), but gives an up-to-date knowledge about the history of ZIKV infection and its consequence on the human health.\nDr. Leal clearly describes all the steps since the ZIKV isolation from a febrile monkey of Uganda forest, the isolation of ZIKV from human in 1954 and presents the results (opinions) of different groups of research, concerning the responsible vector of ZIKV transmission as well as the different concerns to explain the discrepancies.The paper also brings information about ZIKV epidemics in the world, the sexual transmission, and a very helpful discussion on the genetically based vector control strategies. A disease considered as a mild infection in the beginning is now well established as causing microcephaly through congenital infection. This teratogenic effect led the World Health Organization (WHO) to declare ZIKV an international public health emergency.\nI read this submission and my opinion is that it is worth to be indexed considering its high standard.",
"responses": []
},
{
"id": "17269",
"date": "09 Nov 2016",
"name": "Jeffery K. Tomberlin",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe title and abstract are very appropriate as they give the reader a quick overview of the subject matter discussed as related to Zika. This opinion piece provides a brief background of the International Congress of Entomology with an emphasis on a symposium exploring what is, and not, known about Zika. The article summarizes presentations given by a number of globally recognized researchers in the field of medical entomology and disease ecology. The article does an excellent job providing an overview of where society, and research, stands today with regards to Zika and its potential spread globally. However, as they discuss, additional research is needed to fully understand the ecology of the pathogen and vector capacity as related to disease manifestations and frequency of occurrence. I would like to stress the outbreak of Zika is a demonstration of the need to continue exploring the mechanisms regulating mosquito, and other arthropod vectors, attraction to, and blood-feeding on, hosts as the development of methods to disrupt such interactions could prove vital in protecting human populations and their associated livestock globally.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2546
|
https://f1000research.com/articles/5-2662/v1
|
10 Nov 16
|
{
"type": "Research Article",
"title": "Surgical proficiency decreases the rate of healing abnormalities using anterior transobturator mesh in cystocele women",
"authors": [
"Jin-Sung Yuk",
"Yong Jin Kim",
"Kyong Wook Yi",
"Jun-Young Hur",
"Jung-Ho Shin",
"Jin-Sung Yuk",
"Yong Jin Kim",
"Kyong Wook Yi",
"Jun-Young Hur"
],
"abstract": "Aims: The objective of this study is to report the outcomes of cystocele repair with anterior transobutrator mesh kits. Methods: 119 consecutive women with cystoceles were treated between January 2006 and November 2010 by a single surgeon at a university hospital using the anterior transobturator mesh kit procedure. Postoperative follow-up visits were scheduled at 1, 6, and 12 months after surgery. Results: A total of 114 women who were operated on with the anterior transobturator mesh kit completed 12 months of follow-up. The population had a mean age of 65.8 ± 7.0, a body mass index of 25.1 ± 3.0, and a parity average of 4.0 ± 1.7. An overall anatomic cure was reported for 108 patients (94.7%). The Ba point of the POP-Q exam used for grading cystoceles decreased significantly from 2.5 ± 1.6 cm to -2.8 ± 0.8 cm after 12 months (P < 0.01). One patient (0.9%) presented with bladder perforation, and five patients (4.4%) showed with healing abnormalities. Surgical case volume was negatively correlated with healing abnormalities after adjusting for age, body mass index, operation time, and parity (P = 0.15). Conclusion: The surgeon’s experience decreases the incidence of healing abnormalities using anterior transobturator mesh in cystocele women. The anatomical cure rate of anterior transobturator mesh is quite good.",
"keywords": [
"Cystocele",
"Surgical mesh",
"Treatment outcome"
],
"content": "Introduction\n\nAccording to the Women's Health Initiative, cystocele is a common condition affecting 34.4% of women1. Anterior colporrhaphy is the most common traditional surgical treatment for cystoceles. Although the reoperation rate for anterior colporrhaphy is low, it has a high rate of recurrence, varying from 30% to 70%2–6. The anterior transvaginal mesh kit is an alternative procedure to anterior colporrhaphy that has been used to repair cystoceles with a lower rate of recurrence7,8. Prospective observational studies indicate that the anterior transvaginal mesh kit has a success rate of 82.3%–95.8% over one to two years6,9.\n\nIn 2008, the U.S. Food and Drug Administration (FDA) warned of several complications associated with the transvaginal placement of surgical mesh used for pelvic organ prolapse10. Healing abnormalities are the complications of most concern10. Healing abnormality is a general term that includes erosion, rejection, infection, and exposure associated with the use of grafts11. Prior studies have reported that 10.4%–10.5% of surgeries involving an anterior transobturator mesh kit result in healing abnormalities9,12.\n\nThe objective of this study was to document our experience with the implantation of anterior transobturator mesh kits, including a report on the healing abnormalities we observed.\n\n\nMaterials and methods\n\nA retrospective chart review was carried out for 119 women who had undergone cystocele repair using the anterior transobturator mesh kit implantation at the Korea University Guro Hospital between January 2006 and November 2010. The inclusion criterion was second stage or greater cystocele based on Pelvic Organ Prolapse Quantification (POP-Q)13. 
Exclusion criteria were previous cancer of any pelvic organ, systemic glucocorticoid treatment, immunosuppressive disease, and previous pelvic radiation.\n\nOne of us (J-HS) performed all procedures as the only surgeon and had not performed the intravaginal mesh implantation procedure prior to this study. Preoperative evaluations, including taking the patient’s history and performing the POP-Q exam, were carried out by the surgeon. All patients were implanted with Seratom® (Serag-Wiessner KG, Naila, Germany) or a Gynecare Prolift® Anterior Pelvic Floor Repair System kit (Gynecare, Somerville, NJ, USA), as described by Reisenauer et al.14. The implantation was performed after at least two weeks of topical estrogen therapy. Our procedure differed slightly from the one described by Reisenauer et al. in three respects. First, 20–30 ml of normal saline was injected without pitressin into the vaginal wall to hydrodissect the pubocervical fascia from the vaginal adventitia; after an incision was created with a scalpel, blunt dissection of the paravesical fossa was conducted using the index finger. Second, the mesh was designed to fit Korean women and was trimmed into a round form (Figure 1). Third, the mesh was stitched to the pubocervical fascia 1 cm from its upper and lower margins, respectively. The Seratom® and Gynecare Prolift® Anterior system kits are similar, with only subtle differences in shape. Both are made with type I monofilament polypropylene mesh.\n\nOur unique mesh was designed to fit Korean women and trimmed into a round form. Unbroken line: designed mesh; broken line: discarded mesh.\n\nIn cases that required hysterectomy or when patients wanted a hysterectomy, a vaginal hysterectomy or total laparoscopic hysterectomy was done. In cases associated with stress urinary incontinence (SUI) symptoms, a urologist was consulted for diagnosis by urodynamic testing and for treatment. If an operation was needed, the urologist performed a transobturator tape procedure. 
In cases of second stage or greater rectocele, a posterior Gynecare Prolift® or posterior repair was conducted. In cases of second stage or greater uterine prolapse, a posterior Gynecare Prolift® or sacrospinous ligament fixation was performed.\n\nPostoperative follow-up visits were scheduled at 1, 6, and 12 months after surgery. The visits included an assessment of side effects and a POP-Q exam. Failure of the procedure was defined as stage two or greater cystocele on the postoperative POP-Q exam. In other words, cure of the cystocele was defined as a stage one or lesser cystocele on the postoperative POP-Q exam. The International Urogynecological Association (IUGA)/International Continence Society (ICS) scale was used to classify healing abnormalities15. Healing abnormalities were defined as category 1–3. Urinary frequency and SUI symptoms were confirmed by the patients’ self-reports.\n\nThe Korea University Guro Hospital’s Institutional Review Board approved this study. In accordance with Institutional Review Board guidelines, informed consent from the patients was not required because the anonymized data were analyzed retrospectively.\n\nThe Statistical Package for the Social Sciences (version 12.0; SPSS Inc., Chicago, IL, USA) was used for all statistical analyses. All statistical tests were two-tailed, and results were considered significant at P < 0.05. MetaAnalyst version 3.13 (http://tuftscaes.org/meta_analyst/) was used for meta-analysis.\n\nIn statistical quality control, the learning curve-cumulative summation (LC-CUSUM) test is a sequential analysis technique used to determine when an operator has reached proficiency in a procedure. The null hypothesis, H0, is “performance is inadequate.” The alternative hypothesis, H1, is “performance is adequate.” To conduct an LC-CUSUM analysis, four variables are needed: unacceptable failure rate (p1), acceptable failure rate (p0), type I error rate (α), and type II error rate (β). 
The limit h and the constant s (always a positive number) were calculated from these four variables. After each successful procedure, the constant s is subtracted from the cumulative sum, moving the curve in the negative direction of the y-axis; after each failure, the value (1-s) is added to the cumulative sum, moving the curve in the positive direction of the y-axis. LC-CUSUM curves start at zero and are plotted along the x-axis. Two holding barriers that cannot be crossed exist at zero and at the limit h. LC-CUSUM curves are therefore plotted with only negative y-axis values on a chart. If the LC-CUSUM curve reaches the limit h, the null hypothesis is rejected. In other words, the trainee attains proficiency. After reaching the limit h, the LC-CUSUM curve ends and the cumulative summation (CUSUM) curve starts thereafter.\n\nCUSUM is different from LC-CUSUM in several respects. First, CUSUM curves start at zero and are plotted along the x-axis from the x point at which the LC-CUSUM ends. Second, the null hypothesis of CUSUM is the opposite of that of LC-CUSUM: the null hypothesis of CUSUM is “performance is adequate” and the alternative hypothesis is “performance is inadequate.” CUSUM curves are plotted with only positive y-axis values on a chart. For the CUSUM analysis, the acceptable failure rate (p0) and unacceptable failure rate (p1) were defined as the median value and the upper confidence interval (CI), respectively, based on previous studies9,12,16. For the LC-CUSUM, the acceptable failure rate (p0) and unacceptable failure rate (p1) were defined arbitrarily as the upper CI + 10% and the upper CI, respectively. For these analyses, the type I and type II (α and β) error probabilities were set at 0.05 and 0.20, respectively.\n\n\nResults\n\nA total of 119 consecutive patients underwent operation for cystocele repair using the anterior transobturator mesh kit. A total of 114 of the 119 patients completed the 12-month follow-up assessment. 
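The LC-CUSUM and CUSUM scoring rules described in the Methods can be sketched in a few lines of Python. This is an illustrative sketch only: the constant s and the limits h used below are hypothetical placeholders, not the values this study derived from p0, p1, α, and β.

```python
def lc_cusum(outcomes, s, h):
    """LC-CUSUM as described in the Methods: start at 0, subtract s after a
    success, add (1 - s) after a failure, hold at the zero barrier, and
    declare proficiency once the score reaches the negative limit h."""
    score = 0.0
    for t, failure in enumerate(outcomes, start=1):
        score = min(score + ((1 - s) if failure else -s), 0.0)
        if score <= h:          # H0 ("performance is inadequate") rejected
            return t            # proficiency attained at procedure t
    return None                 # proficiency not yet demonstrated

def cusum(outcomes, s, h):
    """Monitoring CUSUM: mirror image of the LC-CUSUM, plotted with positive
    values; crossing the positive limit h signals inadequate performance."""
    score = 0.0
    for t, failure in enumerate(outcomes, start=1):
        score = max(score + ((1 - s) if failure else -s), 0.0)
        if score >= h:
            return t            # H0 ("performance is adequate") rejected
    return None                 # no alarm: quality control remains acceptable

# Hypothetical example: s = 0.25, LC-CUSUM limit h = -2.0, CUSUM limit h = 2.0
print(lc_cusum([False] * 12, s=0.25, h=-2.0))  # 8 straight successes reach -2.0 -> 8
print(cusum([True] * 5, s=0.25, h=2.0))        # 3 straight failures reach 2.25 -> 3
```

With real parameters, s and h would come from the p0/p1/α/β design described above (here the study reports h = -4.72 for the LC-CUSUM and 4.85 for the CUSUM), and the outcome sequence would be the per-procedure occurrence of healing abnormalities.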
Seventeen patients (13.0%) did not complete follow-up. Of the 114 patients, a total of 51 patients had undergone Seratom® (Serag-Wiessner KG) and 63 patients had undergone Prolift® (Gynecare) repair. The mean ± standard deviation age for the sample of 114 patients was 65.8 ± 7.0 years. The mean body mass index was 25.1 ± 3.0 kg/m2 and the median parity was 4.0 ± 1.7. Nine patients (7.9%) had undergone a hysterectomy prior to this study. No patients had undergone a vaginal mesh operation prior to this study.\n\nSixty-one patients (53.5%) had greater than second stage apical prolapse, and 21 patients (18.4%) had greater than second stage rectocele (Table 1). The mean operation time using anterior transobturator mesh kits without a concomitant procedure was 37.3 ± 17.3 minutes. The mean operation time using anterior transobturator mesh kits with a concomitant procedure was 73.9 ± 33.7 minutes. The types of concomitant procedures are reported in Table 2.\n\nAbbreviation: TVT-O, transobturator tape; VTH, vaginal total hysterectomy; TLH, total laparoscopic hysterectomy.\n\nAn overall anatomic cure was observed in 108 patients (94.7%). The Ba point of the POP-Q exam used to grade cystoceles decreased significantly from 2.5 ± 1.6 cm to -2.8 ± 0.8 cm after 12 months (P < 0.01). One patient (0.9%) presented with a postoperative bladder perforation, and five patients (4.4%) presented with healing abnormalities. One patient (0.9%) had an intraoperative hematoma and one patient (0.9%) reported dyspareunia. Sixteen patients (14.0%) presented with urinary frequency, whereas 25 patients (21.9%) presented with SUI. However, all side effects except postoperative bladder perforation and healing abnormality resolved without further treatment. The patient groups who had received Seratom® and Gynecare Prolift®, respectively, did not differ with regard to the incidence of healing abnormalities. 
Two patients (1.8%) underwent reoperation.\n\nMultivariate-adjusted odds ratios (ORs) for healing abnormality are shown in Table 3. The surgeon’s case volume was associated with decreased risk of healing abnormality (P = 0.01). In particular, there did not appear to be any additional healing abnormalities after the fifth healing abnormality (the 49th case) (Figure 2). The continuous healing abnormality rate showed a steady decreasing trend with increasing surgeon case volume (Figure 3).\n\nAbbreviation: OR, odds ratio; BMI, body mass index; CI, confidence interval.\n\n*ORs were adjusted for all variables in the table\n\nCox & Snell R2: 0.13\n\nThis figure shows the cumulative sum of healing abnormalities in our cystocele repair using anterior transobturator mesh kits. There did not appear to be any additional healing abnormalities after the fifth healing abnormality.\n\nThe graph presents the rate of healing abnormalities per procedure. The continuous healing abnormality rate showed a steady decreasing trend with increasing surgeon case volume. The rates of healing abnormalities in other studies are compared with our result. Triangle: Hinoul et al.8, circle: Abdel-Fattah and Ramsay11\n\nThe LC-CUSUM determines the time to gain proficiency, and the CUSUM monitors quality control after proficiency has been acquired. LC-CUSUM and CUSUM curves for healing abnormalities are shown in Figure 4. Based on the LC-CUSUM, proficiency was attained by the 31st procedure, after which the curve changed to the CUSUM curve. There was no unacceptable failure rate for healing abnormalities on the CUSUM curve.\n\nThe acceptable failure rate and unacceptable failure rate of LC-CUSUM are 27.2% (upper confidence interval [CI] + 10 percentage points) and 17.2% (upper CI), respectively, based on a meta-analysis (Hinoul et al.8, Abdel-Fattah and Ramsay11). 
The acceptable failure rate and unacceptable failure rate of CUSUM are 10.5% (median CI) and 17.2% (upper CI), respectively, based on a meta-analysis (Hinoul et al.8, Abdel-Fattah and Ramsay11). Dotted lines: limit h of LC-CUSUM (-4.72) and limit h of CUSUM (4.85).\n\n\nDiscussion\n\nIn this study, the anatomical cure rate (94.7%) was similar to those of other studies with reported rates ranging from 82.3% to 97.4%6,9,12. In other studies, the incidence of healing abnormalities was about 10.4%–10.5%9,12. In contrast, the incidence of healing abnormalities in our study was much lower at 4.4%. There are several possible explanations for this discrepancy. First, the surgeon’s experience may have affected the incidence of healing abnormalities. In the first 49 cases, the incidence of healing abnormalities was 10.2%. This is comparable to previously reported values (10.4%–10.5%)9,12. However, the incidence decreased to 4.4% at 114 cases because no healing abnormalities occurred in the later 65 cases. Although several limitations to this study exist, including the fact that a single surgeon carried out the procedures and that different manufacturers' kits were used, Table 3 supports the hypothesis that the surgeon’s experience may have an effect on the incidence of healing abnormalities. If the numbers of patients in previous studies (n = 76 and n = 48, respectively)9,12 had been closer to the number in this study, their incidences of healing abnormalities might have been lower and more similar to the rate in this study. Additionally, other studies have reported that the surgeon’s experience with prolapse repair reduces the risk of mesh exposure, including decreased healing abnormalities17. Furthermore, our modified procedure may have decreased the rate of occurrence of healing abnormalities. 
Sufficient hydrodissection increases the space between the pubocervical fascia and the vaginal adventitia, which makes it possible to preserve the blood supply in the vaginal wall. Pitressin causes vasoconstriction that decreases the blood supply and is used with normal saline in the hydrodissection of anterior colporrhaphy. Our results suggest that using hydrodissection with normal saline but without pitressin might decrease the risk of vasoconstriction.\n\nBecause the transobturator mesh kit is designed for use in a Caucasian population, it is generally too large for Koreans. We trimmed the edge of the transobturator mesh kit to anatomically fit the patients and to reduce the risk of mesh folding. To minimize blood supply interference, we chose to use continuous over-and-over sutures instead of continuous interlocking sutures. Only one other study has also successfully decreased mesh exposure with the posterior transobturator mesh kit, using a short incision, infra-fascial layer dissection, and no vaginal wall trimming18.\n\nThe LC-CUSUM for healing abnormalities indicated that procedural proficiency in terms of healing abnormalities was achieved at the 31st procedure, after which only three healing abnormalities were reported. However, there was no unacceptable failure rate for healing abnormalities on the CUSUM curve. Therefore, quality control was acceptable after proficiency was attained.\n\nConcomitant sacrospinous ligament fixations or posterior transvaginal mesh kits were used in 46.5% of patients, which could confound the anatomical outcomes. However, previous studies demonstrated that correction of the posterior compartment with mesh increased cystocele incidence19,20. Similarly, concomitant sacrospinous ligament fixations or posterior transvaginal mesh kits may increase the incidence of cystocele, or at least not decrease it. Also, 38.6% of patients had concomitant vaginal hysterectomy, which may be confounding. 
Indeed, one study reported that concurrent vaginal hysterectomy increases mesh erosion21. Therefore, the concomitant operations in this study may have increased the incidences of cystocele or healing abnormality, or at least did not decrease them.\n\nOur study has several limitations. First, we did not investigate the subjective cure rate. Second, our study is based on only a single surgeon’s experience, which limits the generalizability of these results to other surgeons. However, using data from one surgeon reduces confounding factors. Additional reports with a larger sample size are needed.\n\nIn conclusion, the surgeon’s experience decreases the incidence of healing abnormalities using anterior transobturator mesh operation in cystocele women. The anatomical cure rate of anterior transobturator mesh operation for cystocele is quite good.\n\n\nData availability\n\nF1000Research: Dataset 1. Pelvic organ prolapse characteristics of patients studied, 10.5256/f1000research.10012.d14155422.",
"appendix": "Author contributions\n\n\n\nStudy conception and design: JSY, JHS. Acquisition of data: JSY, KWY, JHS. Analysis and interpretation of data: JSY, YJK, JHS. Drafting of the manuscript: JSY, JHS. Critical revision: JYH.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nHendrix SL, Clark A, Nygaard I, et al.: Pelvic organ prolapse in the Women's Health Initiative: gravity and gravidity. Am J Obstet Gynecol. 2002; 186(6): 1160–6. PubMed Abstract | Publisher Full Text\n\nWeber AM, Walters MD, Piedmonte MR, et al.: Anterior colporrhaphy: a randomized trial of three surgical techniques. Am J Obstet Gynecol. 2001; 185(6): 1299–304; discussion 1304–6. PubMed Abstract | Publisher Full Text\n\nSivaslioglu AA, Unlubilgin E, Dolen I: A randomized comparison of polypropylene mesh surgery with site-specific surgery in the treatment of cystocoele. Int Urogynecol J Pelvic Floor Dysfunct. 2008; 19(4): 467–71. PubMed Abstract | Publisher Full Text\n\nNguyen JN, Burchette RJ: Outcome after anterior vaginal prolapse repair: a randomized controlled trial. Obstet Gynecol. 2008; 111(4): 891–8. PubMed Abstract | Publisher Full Text\n\nKapoor DS, Nemcova M, Pantazis K, et al.: Reoperation rate for traditional anterior vaginal repair: analysis of 207 cases with a median 4-year follow-up. Int Urogynecol J. 2010; 21(1): 27–31. PubMed Abstract | Publisher Full Text\n\nAltman D, Väyrynen T, Engh ME, et al.: Anterior colporrhaphy versus transvaginal mesh for pelvic-organ prolapse. N Engl J Med. 2011; 364(19): 1826–36. PubMed Abstract | Publisher Full Text\n\nNiesel A, Gramalla O, Rohne A: A preliminary report on the use of a partially absorbable mesh in pelvic reconstructive surgery. Pelviperineology. 2008; 27: 22–5. 
Reference Source\n\nDebodinance P, Berrocal J, Clavé H, et al.: [Changing attitudes on the surgical treatment of urogenital prolapse: birth of the tension-free vaginal mesh]. J Gynecol Obstet Biol Reprod (Paris). 2004; 33(7): 577–88. PubMed Abstract\n\nHinoul P, Ombelet WU, Burger MP, et al.: A prospective study to evaluate the anatomic and functional outcome of a transobturator mesh kit (prolift anterior) for symptomatic cystocele repair. J Minim Invasive Gynecol. 2008; 15(5): 615–20. PubMed Abstract | Publisher Full Text\n\nU.S. Food and Drug Administration: FDA warning to mesh. 2011. Reference Source\n\nDavila GW, Drutz H, Deprest J: Clinical implications of the biology of grafts: conclusions of the 2005 IUGA Grafts Roundtable. Int Urogynecol J Pelvic Floor Dysfunct. 2006; 17(Suppl 1): S51–5. PubMed Abstract | Publisher Full Text\n\nAbdel-Fattah M, Ramsay I; West of Scotland Study Group: Retrospective multicentre study of the new minimally invasive mesh repair devices for pelvic organ prolapse. BJOG. 2008; 115(1): 22–30. PubMed Abstract | Publisher Full Text\n\nBump RC, Mattiasson A, Bø K, et al.: The standardization of terminology of female pelvic organ prolapse and pelvic floor dysfunction. Am J Obstet Gynecol. 1996; 175(1): 10–7. PubMed Abstract | Publisher Full Text\n\nReisenauer C, Kirschniak A, Drews U, et al.: Anatomical conditions for pelvic floor reconstruction with polypropylene implant and its application for the treatment of vaginal prolapse. Eur J Obstet Gynecol Reprod Biol. 2007; 131(2): 214–25. PubMed Abstract | Publisher Full Text\n\nHaylen BT, Freeman RM, Swift SE, et al.: An International Urogynecological Association (IUGA) / International Continence Society (ICS) joint terminology and classification of the complications related directly to the insertion of prostheses (meshes, implants, tapes) & grafts in female pelvic floor surgery. Int Urogynecol J. 2011; 22(1): 3–15. 
PubMed Abstract | Publisher Full Text\n\nPapanna R, Biau DJ, Mann LK, et al.: Use of the Learning Curve-Cumulative Summation test for quantitative and individualized assessment of competency of a surgical procedure in obstetrics and gynecology: fetoscopic laser ablation as a model. Am J Obstet Gynecol. 2011; 204(3): 218.e1–9. PubMed Abstract | Publisher Full Text\n\nWithagen MI, Vierhout ME, Hendriks JC, et al.: Risk factors for exposure, pain, and dyspareunia after tension-free vaginal mesh procedure. Obstet Gynecol. 2011; 118(3): 629–36. PubMed Abstract | Publisher Full Text\n\nNeuman M, Lavy Y: Reducing mesh exposure in Posterior Intra-Vaginal Slingplasty (PIVS) for vaginal apex suspension. Pelviperineology. 2007; 26(3): 117–21. Reference Source\n\nWithagen MI, Vierhout ME, Milani AL: Does trocar-guided tension-free vaginal mesh (Prolift) repair provoke prolapse of the unaffected compartments? Int Urogynecol J. 2010; 21(3): 271–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCosma S, Preti M, Mitidieri M, et al.: Posterior intravaginal slingplasty: efficacy and complications in a continuous series of 118 cases. Int Urogynecol J. 2011; 22(5): 611–9. PubMed Abstract | Publisher Full Text\n\nSayasneh A, Johnson H: Risk factors for mesh erosion complicating vaginal reconstructive surgery. J Obstet Gynaecol. 2010; 30(7): 721–4. PubMed Abstract | Publisher Full Text\n\nYuk JS, Kim YJ, Yi KW, et al.: Dataset 1 in: Surgical proficiency decreases the rate of healing abnormalities using anterior transobturator mesh in cystocele women. F1000Res. 2016. Data Source"
}
|
[
{
"id": "17533",
"date": "01 Dec 2016",
"name": "Sara Lee",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis article is well constructed and analyzed properly. The new information of this article can offer gynecologists good information about the surgery. I recommend this article especially to Asian urogynecologist.\nThe title is appropriate.\n\nAbstract - summarized the results well.\n\nThe authors provide the proper explanantions of the design of this study.\n\nThe conclusion is based on the study results.\n\nThey provided the sufficient data.",
"responses": []
},
{
"id": "18461",
"date": "14 Dec 2016",
"name": "Sumita Mehta",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe title is appropriate for the content of the article.\n\nAbstract is concise and informative.\n\nThe study design and the data have been structured well.\n\nThe objective and conclusion of the study do not match though the title of the study is relevant to the conclusion.\n\nIt is not clear if women with previous failed cystocele repair were included in the present study.\n\nPage 3, line 3 states: 114 patients of 119 patients completed the 12 month follow up. But the next line states that 17 patients (13%) did not complete the follow up - contradictory statements.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2662
|
https://f1000research.com/articles/5-2044/v1
|
22 Aug 16
|
{
"type": "Research Note",
"title": "A research note regarding \"Variation in cancer risk among tissues can be explained by the number of stem cell divisions\"",
"authors": [
"Maxime Tarabichi",
"Vincent Detours",
"Maxime Tarabichi"
],
"abstract": "Tomasetti and Vogelstein argued that 2/3 of human cancers are due to ‘bad luck’ and that “primary prevention measures [against cancer] are not likely to be very effective”. We demonstrate that their calculations for hepatocellular carcinomas overlooked a major subset of these cancers proven to be preventable through vaccination. The problem, which is not limited to hepatocellular carcinoma, arises from the general reliance of their analysis on average incidences in the United States and the omission of incidences in specific risk groups.",
"keywords": [
"Cancer incidence",
"cancer prevention",
"cancer etiology",
"cancer risk"
],
"content": "Introduction\n\nTomasetti and Vogelstein1 claimed that for tumors of relatively low incidence arising in organs undergoing many stem cell divisions “primary prevention measures are not likely to be very effective” because they arise mostly from random mutations fixed during stem cell division, independently of specific genetic or environmental factors. This conclusion—which has received much press coverage2–6—has important implications for public health and environmental research and policies. Here we re-interpret the results in light of additional data.\n\n\nResults\n\nThe authors argued that 2/3 of the variation of cancer incidence among human organs could be explained by the total number of lifetime stem cell divisions (lscd) which, according to them, drives the stochastic accumulation of random mutations. Yet, the incidence variation among organs is not informative about incidence variation among different risk groups. For example, worldwide cancer incidence variations and their association with regional risk factors are well documented7. But the study by Tomasetti and Vogelstein rests mostly on current average USA incidence statistics and is therefore blind to population-specific risk factors.\n\nTomasetti and Vogelstein did, however, consider risk-group specific incidences for a few cancers. For example, they calculated the excess risk score (ERS) for hepatocellular cancer (HCC) for the USA subpopulation infected by the hepatitis C virus (HCV) and the non HCV-infected subpopulation. The risk was 5.36 for HCV and -6.08 in non HCV cancers, which corresponded to the D-tumor (deterministic) and R-tumors (replicative) classes, respectively. This seems to support the validity of the ERS. But what would have been the classification of HCC if, as for most other cancers in the study, only the USA average incidence would have been taken into account? 
The ERS would be -5.65, well within the range of R-tumors, leading to the conclusion that HCC is a less preventable cancer (Figure 1). This would be a dangerous distraction from the fact that 10 to 33% of them, depending on the world region, are caused by HCV infections that are both preventable and treatable when responsible health policies are implemented. Furthermore, is non-HCV HCC not preventable, as its classification suggests? Fifty-nine percent of HCC cases in the developing world are associated with hepatitis B virus (HBV) infection7, which greatly increases the probability of developing the disease (Figure 1). Universal HBV vaccination has resulted in a 65–75% reduction of HCC incidence in children aged 6–14 years in Taiwan8. Other overlooked preventable risk factors for HCC include obesity, alcoholic cirrhosis, exposure to aflatoxin B and schistosomiasis. We focused on HCC due to space constraints, but similar arguments could be made for most cancers analyzed in ref. 1.\n\nWe also included in Figure 1 the ERS for the overall Taiwanese population. It lies between the D- and R-tumors and is higher than for the USA population. This is consistent with the fact that HCC is more preventable in Taiwan, where HCV and HBV are more prevalent, and supports, it seems, the potential usefulness of the ERS. Importantly, however, the ERS for all HCC rests on the same lscd estimate, thus the incidence data alone would produce the same ranking of the HCC groups [ERS=log10(lscd)×log10(incidence)]. On a more fundamental level, the ERS does not provide an absolute quantification of determinism because we do not know the baseline ERS for cancers occurring in the proven absence of any risk factor. Is this baseline universal or is it organ-specific? If the latter is correct, then the ERS will not be comparable among organs and will not be more informative than incidence data alone, as we have noted for HCC. 
If not, the ERS scale will be universal and the lscd will add information useful for the comparison of cancer determinism between organs. The modalities of DNA repair vary across the stem cell compartments of different organs9, suggesting an organ-specific baseline.\n\nFigure 1. An enhanced version of Figure 2 in ref. 1 showing the adjusted ERS (aERS=ERS + 18.49) for human cancers. ‘Hepatocellular Average USA’ and ‘Hepatocellular Average TW’ denote the entire population of HCC patients, including both HCV and non-HCV cases, in the USA and in Taiwan, respectively. Incidence was taken from the SEER database. ‘Hepatocellular HBV’ denotes HCC patients who are also HBV carriers. HCC lifetime risk for HBsAg-positive patients was taken from ref. 11.\n\nTo our knowledge, a substantial variation of the lscd in the general population cannot be excluded. Hence the stratification problem encountered for incidence data may also arise because of lscd variation. The authors wrote that factors “such as those that affect height and weight” could play a role. We are not aware of any relation between cancer and height or weight (disregarding obesity), but we consider it highly plausible that tissue repair following chronic and possibly preventable damage may also significantly affect the lscd. Similarly, the relation between mutation rates and lscd is modulated by a range of factors, including DNA repair efficiency and activation of APOBEC DNA mutators10. All of these were averaged out, as were most known cancer risk factors.\n\nIn order to demonstrate the robustness of the correlation between the lscd and cancer lifetime risk, Tomasetti and Vogelstein randomly varied their lscd estimates over four orders of magnitude. We reproduced this calculation except that both lscd and incidences were varied randomly by two orders of magnitude (N=10,000). This calculation confirmed the robustness of the correlation (median ρ=0.54, 95% CI: 0.32–0.72; median p=0.002, 95% CI: 0.000005–0.08). 
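To make the score concrete, the calculation behind these comparisons can be sketched in a few lines of Python (the published analysis is an R script; this is only an illustrative re-expression, not the authors' code). The lscd value 2.709×10^11 comes from the Methods; the lifetime risk of 0.071 for HCV-positive HCC is an assumed value taken from the supplementary table of ref. 1; and `perturb` is a hypothetical helper reflecting one reading of the perturbation procedure described in the Methods:

```python
import math
import random

def aers(lscd, lifetime_risk, offset=18.49):
    """Adjusted excess risk score: aERS = log10(lscd) * log10(lifetime risk) + 18.49."""
    return math.log10(lscd) * math.log10(lifetime_risk) + offset

# lscd for all HCC is taken from the Methods; the lifetime risk of 0.071
# for HCV-positive HCC is an assumed value from the supplementary table of ref. 1.
print(round(aers(2.709e11, 0.071), 2))  # 5.36, the HCV score quoted above

def perturb(value, span=10.0, cap=None):
    """Hypothetical helper: rescale a value by a random factor between 1/span
    and span (two orders of magnitude for span=10), optionally capped."""
    shifted = value * span ** random.uniform(-1.0, 1.0)
    return min(shifted, cap) if cap is not None else shifted
```

With `perturb` applied independently to each lscd and incidence value (incidences capped at 1), Spearman's correlation can then be recomputed on the perturbed pairs, as in the robustness check described here.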
We also collapsed to a single data point cancers sharing the same lscd estimate to address statistical independence concerns. Again, the correlation remained strong (ρ=0.67, p=0.0009). The R script and the data to derive these results are given in the supplementary material.\n\n\nConclusion\n\nThe remarkable relation between cancer incidence and lscd uncovered by Tomasetti and Vogelstein is statistically robust. The ERS is typically high for known deterministic cancers. But we demonstrated that a cancer with a low ERS can include a sizable fraction of preventable diseases. This proves that their classification scheme, in its current form, is not suitable to gauge the likely effectiveness of prevention measures and to direct funding for research on cancer etiology. Many more risk factors for cancers will likely be discovered in the future. Hence, cancers ascribed to ‘bad luck’ today due to lack of proper risk stratification may someday become explainable and, hopefully, preventable.\n\n\nMethods\n\nData were retrieved online from Table S1 of the supplementary material of ref. 1.\n\nAdditional data were retrieved from ref. 11. Incidence for HCC with HBV was computed as the sum of incidences in women and men, divided by 2 (HCC with HBV incidence=0.17685%). Lscd for all HCC was set to 2.709×10^11, taken from ref. 1.\n\nData were then analyzed in R v3.1.312. As in ref. 1, both Spearman’s and Pearson’s correlation coefficients and p-values were computed. Here, we report values for Spearman’s correlations.\n\nFirst, we reproduced Figure 2 of ref. 1 after including data for HCC in Taiwan and with HBV. aERS values were computed as described in ref. 1, i.e. aERS=log10(lscd)×log10(incidence)+18.49.\n\nSecond, to assess the stability of the correlation between lscd and incidence upon measurement errors, both lscd and incidence data were randomly multiplied or divided by 10, spanning two orders of magnitude. Incidence values were capped at 1. 
This was repeated 100 times for each variable, amounting to 100×100=10,000 pairs of randomly shifted lscd and incidences. From these 10,000 comparisons, distributions of correlation coefficients and p-values were obtained, from which confidence intervals were derived at percentiles 0.025 and 0.975.\n\nFinally, we recomputed the correlation coefficient and p-value between lscd and incidence data after removing osteosarcomas and duplicates, i.e. cancers originating from the same tissue type, thus having the same lscd. These duplicates included Colorectal adenocarcinoma with FAP, Colorectal adenocarcinoma with Lynch syndrome, Hepatocellular carcinoma with HCV, Head & neck squamous cell carcinoma with HPV-16, and Lung adenocarcinoma (smokers).\n\n\nData availability\n\nF1000Research: Dataset 1. R script and data to reproduce the analysis, 10.5256/f1000research.9448.d13356413.",
"appendix": "Author contributions\n\n\n\nMT ran the analyses. MT and VD conceived the analyses and wrote the paper.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nMT was supported by an FNRS/FRIA grant. VD was funded by FNRS, grant J009714F.\n\n\nReferences\n\nTomasetti C, Vogelstein B: Cancer etiology. Variation in cancer risk among tissues can be explained by the number of stem cell divisions. Science. 2015; 347(6217): 78–81. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBoseley S: Two-thirds of adult cancers largely “down to bad luck” rather than genes. The Guardian. 2015. Reference Source\n\nKnapton S: Most cancers are caused by bad luck not genes or lifestyle, say scientists. 2015; [Accessed: 15-Aug-2016]. Reference Source\n\nNew Study: Two-Thirds of Cancer Cases Can Be Attributed to Bad Luck. 2016; [Accessed: 15-Aug-2016]. Reference Source\n\nDunham W: Biological bad luck blamed in two-thirds of cancer cases, researchers say. Washington Post. 2015; [Accessed: 15-Aug-2016]. Reference Source\n\nGrady D: Cancer’s Random Assault. The New York Times. 2015. Reference Source\n\nAmerican Cancer Society: Global Cancer, Facts and Figures. 2nd edition, Atlanta, 2011. Reference Source\n\nChang MH, Chen CJ, Lai MS, et al.: Universal Hepatitis B Vaccination in Taiwan and the Incidence of Hepatocellular Carcinoma in Children. Taiwan Childhood Hepatoma Study Group. N Engl J Med. 1997; 336(26): 1855–1859. PubMed Abstract | Publisher Full Text\n\nBlanpain C, Mohrin M, Sotiropoulou PA, et al.: DNA-damage response in tissue-specific and cancer stem cells. Cell Stem Cell. 2011; 8(1): 16–29. PubMed Abstract | Publisher Full Text\n\nAlexandrov LB, Nik-Zainal S, Wedge DC, et al.: Signatures of mutational processes in human cancer. Nature. 2013; 500(7463): 415–421. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nHuang YT, Jen CL, Yang HI, et al.: Lifetime risk and sex difference of hepatocellular carcinoma among patients with chronic hepatitis B and C. J Clin Oncol. 2011; 29(27): 3643–3650. PubMed Abstract | Publisher Full Text | Free Full Text\n\nR Development Core Team: R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing, 2009. Reference Source\n\nTarabichi M, Detours V: Dataset 1 in: Comment on \"Variation in cancer risk among tissues can be explained by the number of stem cell divisions\". F1000Research. 2016. Data Source"
}
|
[
{
"id": "16205",
"date": "23 Sep 2016",
"name": "Benjamin Roche",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nWe think that this article could be an interesting addition to the current debate about the 'bad luck' theory. Briefly, this paper shows that the Adjusted Extra Risk Score (aERS) is dependent on the risk group that has been considered and that calculating this score at a country-scale level, without considering precisely all risk-groups, could give biased estimates. This suggests that prevention measures cannot be discarded (at least based only on this aERS), especially when a pathogen could be involved in cancer pathogenesis.\nWe just have few minor remarks:\nIn the results section, it is said that there is no relation between cancer and height or weight. Some references, especially with the Boyd-Orr cohort, exist; 1 2\n\nWe think it could be also worth mentioning that many cancers are due to infections, which therefore calls for a prudent interpretation before withdrawing prevention strategies against some cancers.",
"responses": []
},
{
"id": "16419",
"date": "25 Oct 2016",
"name": "Maddalena Fratelli",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis research note adds on the current discussion on the results of Tomasetti and Vogelstein, demonstrating that stochastic effects associated with stem cell division contribute in a substantial way to cancer incidence. As this conclusion has important and possibly harmful implications in the public health decisions, this deserves attention.\nOn the one hand, this note confirms robustness of the T&V analysis of the correlation between lscd and cancer risk. On the other hand, it points out that cancers predicted to be replicative may include a proportion of preventable cases. This is demonstrated by the higher ERS index for hepatocellular carcinoma in Taiwan, where HCV and HBV prevalence, and therefore its preventability, is higher than in the USA.\nMinor points:\nIt would be interesting to show variation in ERS for other tumors such as, for example, esophageal and thyroid cancer. As discussed in Garattini & Tavani (2016), the incidence of these tumors show a great geographical variability. Even if it is not possible to associate these differences to preventable factors, this would add information on the global impact of geographical variation on ERS variability.\n\nThe discussion on lscd is also interesting. 
Perhaps the authors should make it explicit that more research is needed on the determination of the number of stem cell divisions and its dependence on factors such as different cell types in the same organ, damage, inflammation, individual height or weight and even ethnicity.\n\nFinally, the authors should better explain why they deemed it important to demonstrate the robustness of T&V analysis, as they reproduced it with minor changes.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2044
|
https://f1000research.com/articles/5-2659/v1
|
10 Nov 16
|
{
"type": "Opinion Article",
"title": "Should hydrogen therapy be included in a musculoskeletal medicine routine?",
"authors": [
"Sergej M. Ostojic"
],
"abstract": "Molecular hydrogen (H2) has recently been recognized as a potential novel therapeutic agent in biomedicine. Initially proposed to be a possible treatment for certain types of neuromuscular disorders, cardio-metabolic diseases and cancer, H2 improved clinical end-points and surrogate markers in several clinical trials, mainly acting as an anti-inflammatory agent and powerful antioxidant. In this paper, the medicinal properties of H2 in musculoskeletal medicine are discussed with the aim to provide an updated and practical overview for health professionals working in this field.",
"keywords": [
"FDA",
"Molecular hydrogen",
"Rheumatoid arthritis",
"Soft tissue injury"
],
"content": "Background\n\nAs the oldest and the most abundant molecule in the universe, molecular hydrogen (H2) has been traditionally recognized as a biologically inert gas. However, several trials in the past 10 years reported beneficial effects of H2 in the clinical environment, revealing its possible role as a novel therapeutic agent in medicine1–5. Usually administered orally or via inhalation, H2 improves both patient- and clinician-reported outcomes, and biomarkers of different pathologies and disorders, from metabolic diseases to chronic systemic inflammatory disorders to cancer [for detailed review see Ref. 6]. Its clinical relevance seems to be particularly notable in the musculoskeletal medicine, with several small-scale short-term studies7–9 reporting that H2 was able to restore the health and functional abilities of patients after acute injuries or chronic illnesses affecting the muscles and bones. Since musculoskeletal conditions account for a large proportion of a general practitioner's workload10, one might consider H2 as a promising medication or adjuvant that could alleviate these prevalent conditions. In this opinion paper, the medicinal properties of H2 in musculoskeletal medicine are discussed to provide an updated and practical overview for health professionals working in this field.\n\n\nPromising results from preliminary studies\n\nBeing prompted by the prominent effects of H2 on disuse muscle atrophy, cartilage trauma, and osteopenia in animal studies11–13, a number of clinical investigators from 2010 onwards evaluated the effectiveness of H2 in patients suffering from different muscle and bone ailments – from sprains and strains to chronic joint disorders to myopathies7–9. Typically, these studies were designed as single-blind pilot trials, with small sample sizes (< 40 participants) and of short duration (≤ 12 weeks). 
Although limited in size and scope, those studies can provide early support for specific therapeutic claims about H2 in musculoskeletal medicine. In a first trial, a combination of oral and topical H2 resulted in a faster return to normal joint flexibility in 36 young men who had suffered sports-related soft tissue injuries, when administered for 14 days as a complementary treatment to a traditional medical protocol for soft tissue injuries7. The H2 intervention (hydrogen-rich packs 6 times per day for 20 min and 2 g of oral H2 daily) was found to augment the decrease in plasma viscosity after an injury, while other biomarkers of inflammation (C-reactive protein, interleukin-6) and clinical outcomes (pain scores at rest and during walking, degree of limb swelling) were not affected by the intervention7. Another study in Japan reported that drinking 530 ml of a liquid containing 4 to 5 ppm of H2 every day for 4 weeks significantly reduced disease activity in 20 patients with rheumatoid arthritis, as evaluated by changes in the degree of tenderness and swelling in 28 joints and C-reactive protein levels8. H2 was administered as an adjuvant to regular disease-modifying anti-rheumatic drugs and biological drugs, with the efficacy of H2 found to be not inferior compared with abatacept, methotrexate or a combination of the two. In total, 47.4% of patients went into remission, with anti-citrullinated protein antibody (ACPA)-positive patients (ACPA levels above 300 U/mL; patients with worse prognosis and higher rates of erosive damage) responding best to the treatment. Finally, the consumption of water containing a high concentration of H2 (31% saturation) for up to 12 weeks improved surrogate markers of muscle pain and fatigability in 22 patients with inherited and acquired myopathies treated with low-dose prednisone9. 
Taken together, the above studies seem to pave the way for a future use of H2 therapy in musculoskeletal medicine.\n\nTake it with a grain of salt\n\nCompared with conventional treatment protocols in musculoskeletal medicine, based on drugs and methods that are well-described with respect to efficacy and safety14,15, H2 still has a long journey ahead before it can be recognized as a common remedy in this medical discipline. At the moment, H2 therapy is not adequately described in terms of approval, labeling, side effects, and pharmacovigilance information in musculoskeletal medicine. There are no dose escalation studies yet, and the optimal and safest dose range for H2 remains unknown; furthermore, no federal agency or industrial entity provides appropriate patient counseling information about H2. The US Food and Drug Administration (FDA) recently issued a notice (GRAS Notice No. 520)16 of a claim that the use of H2 solubilized in water (up to a concentration of 2.14%) is generally recognized as safe (GRAS) when it is added to beverages and beverage containers in order to prevent oxidation. Based on the information provided by the H2 gas-manufacturing company, as well as other information available to the FDA, the agency had no questions about the conclusion that hydrogen gas is GRAS under the intended conditions of use. However, the FDA has not made its own determination regarding the GRAS status of the subject use of H2 gas16. This seems to be the only formal information currently available concerning the use of H2 in food or medicine! Despite this lack of formal approval, there are many formulations and devices widely available in the market that claim to supply H2 for use in musculoskeletal disorders, from gas-producing machines to dietary supplements and beverages, with H2 amounts varying greatly across the different products. 
Consequently, consumers might be exposed to easy-to-acquire but questionable products containing H2.\n\nAmong other important medical issues that need to be addressed, including long-term safety or pharmacokinetics, the main question remains whether H2 should be considered as a dietary supplement or a medicine, since the FDA declares that a product intended for inhalation (such as H2) is not a dietary supplement17. Therefore, it might be more appropriate to subject H2, or at least some H2 forms, to much stricter assessment and regulation in the future by formally recognizing this promising bioactive gas as a drug. So, it will take many more studies and tighter regulation before H2 therapy can be endorsed as a routine protocol (or adjuvant to standard treatment) in musculoskeletal medicine. In the meantime, H2 should be regarded as an experimental agent and not be recommended to treat muscle or bone conditions in the general population.",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by the Science Foundation Serbia (Grant # 175037; Grant assigned to SM Ostojic).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nKajiyama S, Hasegawa G, Asano M, et al.: Supplementation of hydrogen-rich water improves lipid and glucose metabolism in patients with type 2 diabetes or impaired glucose tolerance. Nutr Res. 2008; 28(3): 137–43. PubMed Abstract | Publisher Full Text\n\nNakao A, Toyoda Y, Sharma P, et al.: Effectiveness of hydrogen rich water on antioxidant status of subjects with potential metabolic syndrome-an open label pilot study. J Clin Biochem Nutr. 2010; 46(2): 140–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNakayama M, Nakano H, Hamada H, et al.: A novel bioactive haemodialysis system using dissolved dihydrogen (H2) produced by water electrolysis: a clinical trial. Nephrol Dial Transplant. 2010; 25(9): 3026–33. PubMed Abstract | Publisher Full Text\n\nYoritaka A, Takanashi M, Hirayama M, et al.: Pilot study of H2 therapy in Parkinson’s disease: a randomized double-blind placebo-controlled trial. Mov Disord. 2013; 28(6): 836–9. PubMed Abstract | Publisher Full Text\n\nXia C, Liu W, Zeng D, et al.: Effect of hydrogen-rich water on oxidative stress, liver function, and viral load in patients with chronic hepatitis B. Clin Transl Sci. 2013; 6(5): 372–5. PubMed Abstract | Publisher Full Text\n\nOstojic SM: Molecular hydrogen: An inert gas turns clinically effective. Ann Med. 2015; 47(4): 301–4. PubMed Abstract | Publisher Full Text\n\nOstojic SM, Vukomanovic B, Calleja-Gonzalez J, et al.: Effectiveness of oral and topical hydrogen for sports-related soft tissue injuries. Postgrad Med. 2014; 126(5): 187–95. 
PubMed Abstract | Publisher Full Text\n\nIshibashi T, Sato B, Rikitake M, et al.: Consumption of water containing a high concentration of molecular hydrogen reduces oxidative stress and disease activity in patients with rheumatoid arthritis: an open-label pilot study. Med Gas Res. 2012; 2(1): 27. PubMed Abstract | Publisher Full Text | Free Full Text\n\nIto M, Ibi T, Sahashi K, et al.: Open-label trial and randomized, double-blind, placebo-controlled, crossover trial of hydrogen-enriched water for mitochondrial and inflammatory myopathies. Med Gas Res. 2011; 1(1): 24. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHunter DJ, Reddy KS: Noncommunicable diseases. N Engl J Med. 2013; 369(14): 1336–43. PubMed Abstract | Publisher Full Text\n\nFujita R, Tanaka Y, Saihara Y, et al.: Effect of molecular hydrogen saturated alkaline electrolyzed water on disuse muscle atrophy in gastrocnemius muscle. J Physiol Anthropol. 2011; 30(5): 195–201. PubMed Abstract | Publisher Full Text\n\nGuo JD, Li L, Shi YM, et al.: Hydrogen water consumption prevents osteopenia in ovariectomized rats. Br J Pharmacol. 2013; 168(6): 1412–20. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHanaoka T, Kamimura N, Yokota T, et al.: Molecular hydrogen protects chondrocytes from oxidative stress and indirectly alters gene expressions through reducing peroxynitrite derived from nitric oxide. Med Gas Res. 2011; 1(1): 18. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDerry S, Wiffen P, Moore A: Topical Nonsteroidal Anti-inflammatory Drugs for Acute Musculoskeletal Pain. JAMA. 2016; 315(8): 813–4. PubMed Abstract | Publisher Full Text\n\nStrehl C, Bijlsma JW, de Wit M, et al.: Defining conditions where long-term glucocorticoid treatment has an acceptably low level of harm to facilitate implementation of existing recommendations: viewpoints from an EULAR task force. Ann Rheum Dis. 2016; 75(6): 952–7. 
PubMed Abstract | Publisher Full Text\n\nThe Food and Drug Administration (FDA): Agency Response Letter GRAS Notice No. 520. (Accessed October 28, 2016). Reference Source\n\nThe Food and Drug Administration (FDA): Inspections, Compliance, Enforcement, and Criminal Investigations. (Accessed October 28, 2016). Reference Source"
}
|
[
{
"id": "18096",
"date": "08 Dec 2016",
"name": "Xiaoli Sun",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis opinion paper provides an undated and practical overview on the properties of molecular hydrogen in musculoskeletal medicine. The paper focuses on the preliminary studies of H2 on musculoskeletal medicine, and the concerns over the general use of products containing H2. I sympathize the author’s prudent attitudes, which toward the hydrogen should be regarded as an experimental agent and not recommended to general use provisionally. However, I think this paper should also mention the long-term diving practices which high pressure hydrogen inhalation involved to prove the possible safe use of H2 gas.",
"responses": [
{
"c_id": "2357",
"date": "09 Dec 2016",
"name": "Sergej Ostojic",
"role": "Author Response",
"response": "This comment is well taken. Several preliminary studies indeed reported a relative safety of H2 inhalation in humans. The mixture of hydrogen, helium and oxygen (Hydra 10) was safely used in the deepest recorded diving (701 m) in an on-shore hyperbaric chamber (Lafay et al. Undersea Hyperb Med 1995;22:51–60). Another study reported no physiological disturbances in patients with cerebral ischemia after exposed to inhalational H2 (Ono et al. Med Gas Res. 2012;2:21). Our group demonstrated no side effects of gaseous H2 in women with age-related cognitive decline (Ostojic et al. 2016, unpublished data). Nevertheless, more studies are needed to confirm the long-term safety of H2 gas in clinical environment, including musculoskeletal medicine."
}
]
},
{
"id": "17912",
"date": "08 Dec 2016",
"name": "Steven R. Brenner",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe title is appropriate with reference to the content of the article.\nThe article is a review of the literature with reference to utilizing molecular hydrogen to enhance sports related injuries.\nAfter a detailed review of the literature, the conclusion is there is not enough information to make any solid recommendation concerning utilizing molecular hydrogen to treat sports related injuries, so the implication is probably molecular hydrogen doesn't improve recovery from sports related injuries enough to make any difference.\nThis appears to be a good review of the related literature.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2659
|
https://f1000research.com/articles/5-2658/v1
|
10 Nov 16
|
{
"type": "Research Article",
"title": "Intensive nutritional counselling obviates a weight gain in kidney transplant recipients with long-term graft survival in Estonia",
"authors": [
"Liidia Kiisk",
"Mai Ots-Rosenberg",
"Liidia Kiisk"
],
"abstract": "Background. After kidney transplantation body weight gain (WG) in patients generally increases and this can be influenced by improved appetite and reversal of the uremic state. Because of a lack of evidence for effective interventions that prevent body WG after kidney transplantation, Ryan et al. recently published a randomized controlled trial design in BMC Nephrology to assess the effects of intensive nutrition interventions on WG and metabolic parameters. Here we are demonstrating a part of a study performed at our transplantation centre where we also performed an intensive nutritional counselling intervention on kidney recipients. We hypothesised that kidney transplant (KTx) patients who received intensive counselling may have better long-term weight control compared with patients who did not receive such counselling. Methods. During intensive counselling the dietician took into consideration the results of anthropometry, densitometry, biochemistry, food frequency questionnaire and an analysis of 3-day dietary records. Anthropometrical measurements were assessed at 18 months, 36 months, and, finally, 10 years after the kidney transplantation. Results. We noticed WG in all KTx patients (n=56) both in males and females, as follows: in the standard care group the mean WG in males (N=21) was 6.4 kg, females (N=15) 7.0 kg and among intensively counselled males (N=9) 4.6 kg, females (N=11) 1.1 kg. Statistically non-significant weight change was found among counselled living KTx patients with long-term graft survival both in males (p=0.0680) and females ( p=0.3166) which was registered 10 years after the KTx compared with the first measurement mean weight. Among standard care control KTx patients, weight change was statistically significant in both males and females (males: p=0.0016; females: p=0.0040) 10 years after the KTx. Conclusions. 
The long-term WG data clearly showed that, in our population of KTx patients, those who received intensive individual dietary counselling displayed much better-informed dietary behaviour in the long term, thereby preventing WG, which is a well-known risk factor for long-term graft failure as well as for cardiovascular complications and mortality.",
"keywords": [
"body weight gain",
"intensive counselling",
"kidney transplantation"
],
"content": "Introduction\n\nThe management of nutritional status is an important component of the complex care of chronic kidney disease (CKD) patients. Nutritional status, especially a state of malnutrition or obesity in end stage CKD patients, is linked to increased morbidity and mortality. On the other hand, after successful kidney transplantation (KTx), renal function recovers and appetite in kidney recipients tends to increase rapidly, often leading to weight gain (WG), which, together with increased body mass index (BMI), has a negative impact on health, cardiovascular and surgical outcomes. Body WG is an important risk factor for the development of chronic transplant nephropathy. Historically, studies by many researchers have found overall trends in WG over the first couple of years after KTx, with most WG occurring in the first year after surgery1. Therefore, appropriate early nutritional management plays an important role in the prevention or reversal of this trend.\n\nDespite the abundance of cross-sectional data on the relationship between body composition and nutrient intake, there are few long-term studies of body composition changes in KTx patients or studies that combine dietary intake and body composition2. However, because of a lack of evidence for effective interventions that prevent weight gain after kidney transplantation, Australian researchers have recently designed a randomized controlled study3 called the INTENT trial (INTEnsive Nutrition interventions on weight gain after kidney Transplantation). The INTENT trial is a single-blind, randomized controlled trial for assessing the effects of intensive nutrition interventions, including exercise advice, on WG and metabolic parameters in the first year after transplantation4. 
Thus, the results may soon provide important data on the effects of intensive nutrition interventions on WG after transplantation and the associated metabolic consequences.\n\nIn addition to the usual dietary guidance routinely provided to patients after KTx at our transplant centre, more than 10 years ago we initiated a program that incorporates nutritional counselling and guidance for the whole CKD patient population. Besides implementing international4 and local5,6 guidelines into clinical practice, we also conducted a longitudinal prospective study in KTx recipients with long-term graft survival at Tartu University Hospital and examined the long-term impact of intensive nutritional counselling on body composition. We hypothesised that patients who received intensive counselling might have better weight control than patients who did not receive such counselling. The purpose of the study was to assess the impact of intensive nutritional counselling on the prevention of post-transplant WG in KTx patients compared with standard dietary guidance. The secondary aim was to determine changes in macronutrient intake and body composition in patients after KTx. We further aimed to assess the long-term impact of intensive nutritional counselling on WG 10 years after KTx by comparing the intensively counselled and standard care groups.\n\n\nMethods\n\nConsecutive adult non-diabetic patients with stable kidney function who agreed to participate and gave written informed consent were recruited for the first measurement (FM) at 18 months and for a follow-up (FU) at 36 months after kidney transplantation, performed in our centre between 2003 and 2006.\n\nIn total, 75 patients were studied; of these, 28 KTx patients (12 males, mean age 42.8 ± 16.1 years, and 16 females, mean age 47.0 ± 14.9 years) received intensive nutritional counselling. 
The remaining 47 KTx patients, who continued on standard dietary guidance, formed the control group, which consisted of 27 males (mean age 45.6 ± 9.4 years) and 20 females (mean age 48.7 ± 12.8 years). Anthropometrical measurements (body weight, kg; body height, cm; body mass index, kg/m2) were assessed at FM, at FU and then, finally, 10 years after the KTx in both groups of patients. WG was estimated by taking a patient's weight measurements at FM and comparing them with the measurements after 10 years. Laboratory measurements and macronutrient intake assessments were also performed on counselled KTx patients twice: at FM and at FU.\n\nAll patients at our institution are provided with appropriate usual nutritional guidance before discharge after kidney transplantation. In our study, additional intensive nutritional counselling and dietary guidance at FM (18 months after the KTx) were carried out by a dietician who took into account each patient's anthropometrical, laboratory and nutritional investigations, including the food frequency questionnaire (see Supplementary material 1) and the analysis of 3-day dietary records7. Individual nutritional counselling and dietary guidance were performed repeatedly. Nutritional counselling was performed by a dietician who took into consideration the patient's age, gender, stage of chronic kidney damage, proteinuria, body build, 24-hour need for food energy and nutrients, and eating habits. The energy and main nutrient content of the 3-day menus of kidney transplant patients was calculated and analysed with the Finnish food composition database program Micro-Nutrica Nutritional Analysis (https://fineli.fi/). 
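The 3-day dietary record analysis described here (averaging daily energy and macronutrient intake, then checking it against reference recommendations) can be sketched as follows. All values, names and reference ranges below are hypothetical illustrations; the study itself used the Micro-Nutrica program and the Estonian recommendations.

```python
# Illustrative sketch of a 3-day dietary record analysis: average daily
# energy and macronutrient intake over three recorded days, then compare
# against reference intervals. All numbers are invented, not study data.

RECORD = [  # one dict per recorded day: energy (kcal) and macronutrients (g)
    {"energy": 2100, "protein": 85, "fat": 70, "carbohydrate": 260},
    {"energy": 1950, "protein": 78, "fat": 65, "carbohydrate": 240},
    {"energy": 2250, "protein": 92, "fat": 80, "carbohydrate": 275},
]

# Hypothetical reference intervals standing in for national recommendations.
REFERENCE = {"energy": (1800, 2400), "protein": (60, 100),
             "fat": (60, 90), "carbohydrate": (220, 330)}

def mean_daily_intake(record):
    """Mean intake per nutrient across the recorded days."""
    days = len(record)
    return {k: sum(day[k] for day in record) / days for k in record[0]}

def assess(record, reference):
    """Label each mean intake as below, adequate, or above its reference range."""
    means = mean_daily_intake(record)
    out = {}
    for nutrient, (lo, hi) in reference.items():
        m = means[nutrient]
        out[nutrient] = "below" if m < lo else "above" if m > hi else "adequate"
    return out

print(mean_daily_intake(RECORD))
print(assess(RECORD, REFERENCE))  # here every mean falls within its range
```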
All the data were compared with the Estonian Nutrition Recommendations8.\n\nEthics approval: The study was approved by the Ethics Committee on Human Research of the University of Tartu, Estonia (protocol no 141/30; 2005).\n\n\nResults\n\nOur previous detailed anthropological measurements at 18 and 36 months after KTx showed that females followed dietitians' advice more accurately than male patients9. We also showed previously that mean body weight increased significantly among the studied male KTx patients and among control KTx patients, both males and females, after the follow-up, whereas in female KTx patients the body weight increase was non-significant9.\n\nIn the current study, 10 years after the KTx, we further aimed to clarify the longitudinal effect of intensive nutritional counselling and guidance on WG. We observed WG in all KTx patient groups, both in males and females: in the standard care group, the males' mean WG was 6.4 kg and the females' 7.0 kg, whereas among intensively counselled patients males gained 4.6 kg and females 1.1 kg. However, as shown in Table 1, the weight change registered 10 years after the KTx compared with the FM mean weight was statistically non-significant among counselled living KTx patients with long-term graft survival, both in males (N=9, p = 0.0680) and in females (N=11, p = 0.3166). Among standard care control KTx patients, weight change was statistically significant in both males and females (males: N=21, p = 0.0016; females: N=15, p = 0.0040) 10 years after the KTx.\n\n*P < 0.05; SD, standard deviation; KTx, kidney transplantation.\n\nWe also observed and assessed macronutrient intake, which was determined at FM and at FU (Table 2). 
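The within-group weight-change comparison reported here (FM weight versus weight 10 years after KTx in the same patients) amounts to a paired test on per-patient differences. A minimal Python sketch, using invented weights rather than the study's measurements:

```python
# Sketch of a within-group weight-change test: a paired t statistic computed
# from per-patient differences between FM weight and weight 10 years after
# KTx. All weights below are invented illustrative values, not study data.
import math

def paired_t(before, after):
    """Return (mean difference, t statistic, degrees of freedom)."""
    assert len(before) == len(after) and len(before) > 1
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    se = math.sqrt(var / n)                              # SE of the mean difference
    return mean, mean / se, n - 1

fm_weight = [75.0, 82.5, 68.0, 90.1, 77.3, 84.6]        # kg at FM (hypothetical)
ten_year_weight = [81.2, 89.0, 73.5, 97.4, 83.1, 91.0]  # kg at 10 years (hypothetical)
mean_gain, t_stat, df = paired_t(fm_weight, ten_year_weight)
# Referring t_stat to a t distribution with df degrees of freedom
# (e.g. scipy.stats.t.sf) would give a p-value like those reported in Table 1.
print(round(mean_gain, 2), round(t_stat, 2), df)
```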
The food frequency questionnaire revealed that patients consumed various types of food (milk and dairy products; meat and fish products; cereal and flour products; vegetables; fruits; sweets) at different frequencies, but there was a tendency towards excessive consumption of food rich in proteins and carbohydrates in both males and females at FM. The following changes in nutrient intake were noticed: at FU, mean daily energy and protein intakes were significantly higher than at FM in female patients, and in males, mean daily energy, protein, fat and carbohydrate intakes had a tendency to decrease at FU when compared with FM data. However, mean daily intakes of energy, fat and protein estimated at FM and at FU were adequate in both males and females when compared with the Estonian Dietary Reference Intakes8 (Table 2). Thus, there was a positive tendency in the change of the content of different macronutrients among all counselled subjects, although the directions in which macronutrient intake changed were not similar in males and females.\n\n*P < 0.05; KTx, kidney transplantation; FM, first measurements; FU, follow-up measurements.\n\n\nDiscussion\n\nThe impact of obesity and overweight on clinical outcomes after serious disease has long been the object of research, with contradictory results. In our study, WG was not significant 10 years after KTx in patients who received additional intensive nutritional counselling compared with the standard care control group.\n\nWe conclude that nutritional counselling and guidance are important and should be offered early and regularly after kidney transplantation to maintain body weight and an appropriate nutritional state longitudinally. Post-transplant outcomes will be optimized by a team approach for the comprehensive management of the kidney transplantation recipient combined with vigilant surveillance to detect WG in a timely fashion. 
However, the long-term WG data clearly showed that, in our KTx patient population, the patients who received intensive individual dietary counselling showed more informed dietary behaviour in the long term, thereby preventing WG, which is a well-known risk factor for long-term graft failure as well as for cardiovascular complications and mortality. We are awaiting the final results of the Australian researchers3 with great interest.\n\n\nData availability\n\nF1000Research: Dataset 1. Raw data supporting the patients’ long-term body weight gain findings in different study groups, 10.5256/f1000research.10035.d14156910",
"appendix": "Author contributions\n\n\n\nLK conceived of the study, carried out the data collection, participated in the design of the study, was involved in drafting the manuscript and revising it critically for important intellectual content.\n\nMO-R participated in the design of the study, performed the statistical analysis, carried out the data collection and drafted the manuscript.\n\nBoth authors read and approved the final manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe study was partly supported by the governmental scientific grants SF0182558s03 (PI Prof. A. Peetsalu, subproject leader Prof. M. Ots-Rosenberg), SF0180081s07 (PI Prof. M. Lember), IUT2-8 (PI Prof. M. Lember).\n\n\nAcknowledgements\n\nThe authors thank doctors Mart Lintsi, Ingrid Kull, Siiri Mesikepp, Elviira Seppet for their kind help. The authors thank Viive Saar for proofreading the manuscript.\n\n\nSupplementary material\n\nFood-frequency questionnaire.\n\nThe study of nutritional habits was carried out on the basis of a food-frequency questionnaire (FFQ). A special food questionnaire, \"The card of nutritional habits\", was developed for kidney transplant patients by the Centre of Physical Anthropology at the University of Tartu and was used for the nutritional research. The FFQ consisted of six parts and 180 variables. An example FFQ with results is available in Kiisk et al.7.\n\n\nReferences\n\nAhmadi SF, Zahmatkesh G, Streja E, et al.: Body mass index and mortality in kidney transplant recipients: a systematic review and meta-analysis. Am J Nephrol. 2014; 40(4): 315–24. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nHeng AE, Montaurier C, Cano N, et al.: Energy expenditure, spontaneous physical activity and with weight gain in kidney transplant recipients. Clin Nutr. 2015; 34(3): 457–64. PubMed Abstract | Publisher Full Text\n\nRyan KJ, Casas JM, Mash LE, et al.: The effect of intensive nutrition interventions on weight gain after kidney transplantation: protocol of a randomised controlled trial. BMC Nephrol. 2014; 15: 148. PubMed Abstract | Publisher Full Text | Free Full Text\n\nClinical practice guidelines for nutrition in chronic renal failure. K/DOQI, National Kidney Foundation. Am J Kidney Dis. 2000; 35(6 Suppl 2): S1–140. PubMed Abstract\n\nKiisk L, Kaarma H, Ots M: Treatment diets in Estonian health care institutions. Medicina (Kaunas). 2008; 44(8): 577–84. PubMed Abstract\n\nKiisk L: Treatment diets. Monography. 2002; (Tartu): Tartumaa Publisher.\n\nKiisk L, Ots-Rosenberg M, Kaarma H: Pecularities of nutrition of kidney transplant patients. Papers on Anthropology. 2010; XIX: 180–202. Reference Source\n\nEstonian Nutrition Recommendation and Food Based Dietary Guidelines. 2006: 1–80.\n\nKiisk L, Kaarma H, Ots-Rosenberg M: Impact of anthropometric measurements in clinical practice. Coll Antropol. 2012; 36(4): 1325–33. PubMed Abstract\n\nKiisk L, Ots-Rosenberg M: Dataset 1 in: Intensive nutritional counselling obviates a weight gain in kidney transplant recipients with long-term graft survival in Estonia. F1000Research. 2016. Data Source"
}
|
[
{
"id": "17515",
"date": "28 Nov 2016",
"name": "Edita Ziginskiene",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe article “Intensive nutritional counselling obviates a weight gain in kidney transplant recipients with long-term graft survival in Estonia” is an interesting manuscript evaluating the impact of intensive nutritional counselling on patients after kidney transplantation in the prevention of post-transplant weight gain. Diet is one of the important factors in the management of this patient group.\nMy specific comments:\nTitle and abstract. The aims of the study must be in the abstract. The compared groups of patients must be described more clearly. Please write the results accurately, identifying the periods of the study.\nArticle content. There is an error in the title of the second column of Table 1 (“Body weight (kg) one half years...”). It should be “... one and half years...”. The first study group in Table 1 should be identified accurately (kidney transplant patients with intensive nutritional counselling), as both groups of patients are “kidney transplant patients”.\nExplanation of the results of Table 2: “Thus, there was a positive tendency in the change of the content of different macronutrients among all counselled subjects, although the directions in which macronutrients intake changed were not similar in males and females”. In my opinion, the differences between males and females should be clarified in the text.\nThe discussion is quite short. 
The results must be compared with data from the literature, or at least the relevant data found in the literature on the matter should be provided. The discussion should be based on both parts of the results.\nThe conclusions should accurately reflect the aims and results of the study, so they must include the changes in macronutrient intake and body composition in patients after kidney transplantation and the long-term impact of intensive nutritional counselling on weight gain 10 years after kidney transplantation.\nIn my opinion, the article could be indexed after the corrections.",
"responses": []
},
{
"id": "20338",
"date": "07 Mar 2017",
"name": "Ieva Ziedina",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe authors of the article “Intensive nutritional counselling obviates a weight gain in kidney transplant recipients with long-term graft survival in Estonia” focused their attention on an important and difficult-to-manage complication after kidney transplantation - weight gain.\nThe authors themselves admit that they publish a part of the study in this open access resource; therefore, it is possible that the aim and methods of the study are not completely clear: when exactly did the dietician analyse the data and give recommendations to the patients? Did this happen just twice, at 18 and 36 months post-transplant? And do the authors believe that it affected patients’ weight 8.5 and 7 years later? How long did the follow-up last: 10 years, with a half-way assessment 3 years post-transplant? Then why do they have the results of energy and nutrient content just 1.5 and 3 years post-transplant but not 10 years post-transplant (Table 2), while body weight data are reviewed 1.5 and 10 years post-transplant but not 3 years post-transplant?\nWhat about the study cohort? The methods state that 75 patients were included, but the abstract reports only 56 patients. What happened to the other 19 patients? Did they gain or lose weight?\nThe authors assessed absolute weight gain (measured in kg), but there is no information about body mass index (except Table 2), body composition, lean body mass and/or fat mass. 
It could be possible that some of the patients had malnutrition after dialysis and then acquired normal body weight. Maybe all patients had normal body weight (normal nutritional status) and all of them became obese (the range of weight at the first assessment was from 57.1 kg to 134.1 kg)? By merging both tables it is possible to see that men in the counselled group had mean weights of 75.1, 83.3 and 75.77 kg at the anthropometrical measurement time points, while women had mean weights of 74.4, 76.6 and 78.59 kg at the same time points. The authors could discuss what triggered the weight loss in the male cohort. There is no information about the results of energy and nutrient content in the standard dietary guidance group; therefore, in my opinion, it is not fair to compare the groups.\nConclusions cannot be made about graft failure, cardiovascular complications and mortality because this information was not stated in the article.\n\nThe article could be approved with reservations.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2658
|
https://f1000research.com/articles/5-2654/v1
|
09 Nov 16
|
{
"type": "Correspondence",
"title": "Comment on Falade et al. (2016) DNA-barcoding of Clarias gariepinus, Coptedon zillii and Sarotherodon melanotheron from Southwestern Nigeria",
"authors": [
"Jeremy A. Miller",
"Isolde van Riemsdijk",
"Mohd Zacaery Khalik",
"David J. Scager",
"Menno Schilthuizen",
"Jeremy A. Miller",
"Isolde van Riemsdijk",
"Mohd Zacaery Khalik",
"David J. Scager"
],
"abstract": "A publication by Falade et al. was selected for discussion by a Naturalis Biodiversity Center-Leiden University Journal Club. The study focused on the identification of fish from Southwestern Nigeria using a DNA barcoding approach. Questions raised during the discussion led to a reanalysis and reinterpretation of the data presented. The authors characterize the process of deriving a taxonomic identification from their sequence data as straightforward, but we were concerned that their approach made it nearly impossible to fail to obtain a taxonomic name for each sequence. The value of sophisticated DNA taxonomy, as well as the pitfalls of its naïve application, are discussed. We suggest that journal discussion groups may be an untapped resource for expanding rigorous peer review, particularly for journals that have adopted an open review model.",
"keywords": [
"Barcode",
"DNA taxonomy",
"FISH-BOL",
"peer review",
"scientific publishing"
],
"content": "\n\nDNA sequence data has become widely accepted as a useful tool for taxonomic determination and discovery1–3. But researchers have long been forewarned of the potential pitfalls of DNA taxonomy in operation4–10.\n\nThe DNA barcode itself is simply a standard region selected to facilitate comparison11. A library built of many such sequences and based on a gene evolving at a rate that minimizes variation within and maximizes variation between species becomes a powerful taxonomic resource5. But the journey from DNA barcode sequence to species determination still requires critical application, particularly when applied to taxa or regions that are not currently well represented in sequence databases.\n\nFalade et al.12 obtained DNA sequences for sixteen individual fish from Southwest Nigeria, a region with relatively sparse coverage in sequence databases. Such data are valuable because broad geographic and taxonomic representation provides insight into genetic diversity within taxonomic groups and helps us to refine hypotheses of species circumscription and phylogenetic relationships.\n\nFalade et al.12 sequenced each specimen for the standard animal DNA barcode region cytochrome oxidase I (COI) and a region of the mitochondrial 16S ribosomal RNA gene. The authors queried their sequences against both the BOLD Systems (RRID: SCR_004278; boldsystems.org/index.php/IDS_OpenIdEngine) and NCBI GenBank (RRID: SCR_004860; BLASTN, RRID: SCR_001598; blast.ncbi.nlm.nih.gov/Blast.cgi) databases (because BOLD does not include 16S, these sequences were only compared to GenBank). Although the authors claim that “this resulted in straightforward identification”, we take a more nuanced view of their results.\n\nThe BOLD identification engine and BLASTN comparison with GenBank work differently and were created for different purposes13–15; only BOLD is specifically intended to be used as a taxonomic identification tool, while BLASTN assesses sequence similarity. 
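The percent-identity scores that similarity searches report are simply the fraction of matching positions in a pairwise alignment. A toy sketch of that calculation follows; note that real BLASTN reports identities over the full alignment length (gaps included), whereas this hedged helper skips gap columns, and the sequences are invented examples, not the Falade et al. data.

```python
# Sketch of a percent-identity calculation for a pairwise alignment:
# matching bases over the compared (gap-free) columns. Toy sequences only.

def percent_identity(aln_query, aln_subject):
    """Percent identity over aligned, gap-free columns."""
    assert len(aln_query) == len(aln_subject)
    matches = compared = 0
    for q, s in zip(aln_query.upper(), aln_subject.upper()):
        if q == "-" or s == "-":
            continue  # skip columns with a gap in either sequence
        compared += 1
        matches += q == s
    return 100.0 * matches / compared

# Two near-identical toy fragments: one substitution, one gap.
query = "ATGGCA-TTCTAGGC"
subject = "ATGGCACTTCTAGGT"
print(round(percent_identity(query, subject), 1))  # -> 92.9
```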
BLASTN will always return the most similar sequences in GenBank. BOLD is more discriminating, since it is limited to a handful of specific loci and uses similarity thresholds to assess whether or not a query sequence can be matched to identified sequences in the database with high confidence. BOLD will alert the user when it determines that no confident identification could be made. DNA-based identification is complicated by the fact that both BOLD and GenBank include misidentified sequences16.\n\nBOLD failed to identify with confidence any of the sixteen COI sequences. Eight were classified as probably belonging to one of a handful of possible species, while the rest received no hit. From this, we infer that Falade et al. made their taxonomic determinations based almost entirely on BLASTN results. As reported (Table 1), all but one of these were scored as 98–99% identical to their top GenBank hit with the remaining sequence (KX231778; Coptodon_zilli_odooba_1) scoring 86% identical.\n\nTop BOLD hit and BOLD identification note summarize results from BOLD. Top Blast hit and Sequence name specify the best match in GenBank (excluding the Falade et al. sequences) according to BLASTN, with the Blast metrics Query cover and Ident. See also Table 2 in Falade et al. Note that BOLD contains no 16S data, so these sequences are listed as NA (not applicable).\n\nTo view the results in context, we downloaded from BOLD all COI sequences identified as one of the three species specified by Falade et al. [search ‘Taxonomy’ for Clarias gariepinus, Sarotherodon melanotheron, and Coptodon zillii (the latter also under the synonym Tilapia zillii)]. These sequences were combined with the Falade et al. data and initially aligned using MAFFT version 7.18717 with manual adjustments made using Mesquite version 3.1018 (mesquiteproject.wikispaces.com/). A phylogenetic analysis was performed using RAxML version 8.2.819. 
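Part of this preprocessing, reorienting sequences so they all align in the same direction, can be sketched with a reverse-complement helper. This is a minimal stdlib illustration, not part of the MAFFT/Mesquite workflow itself, and IUPAC ambiguity codes are omitted for brevity.

```python
# Hypothetical minimal reverse-complement helper for reorienting DNA
# sequences prior to alignment (IUPAC ambiguity codes not handled).

COMPLEMENT = str.maketrans("ACGTacgt", "TGCAtgca")

def reverse_complement(seq):
    """Return the reverse complement of a DNA sequence."""
    return seq.translate(COMPLEMENT)[::-1]

print(reverse_complement("ATGGCATTC"))  # -> GAATGCCAT
```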
Initial alignment and phylogenetic analysis were performed through the CIPRES Science Gateway version 3.320 (RRID: SCR_008439; phylo.org/). Alignment required reversing or reverse-complementing some of the sequences from Falade et al. The problematic sequence KX231778 could not be satisfactorily aligned with the others and had to be excluded from the tree. The remaining COI sequences did cluster with other GenBank sequences in such a way as to suggest the remaining taxonomic determinations reported by Falade et al. are credible.\n\nAnother anomalous sequence is KX243287 (Clarias_gariepinus_asejire_12), a 16S sequence approximately twice the length of the others. We have no explanation for this.\n\nThe evidence presented by Falade et al. is not sufficient to determine at least the COI sequence KX231778. The method applied by Falade et al. made it nearly impossible to fail to obtain a taxonomic name for each sequence. This is a scientific flaw, and an example of the uncritical application of DNA taxonomy.\n\nThis paper was discussed as part of a regular journal discussion group offered by the Endless Forms research group at Naturalis Biodiversity Center, which involves students in the Evolution, Biodiversity, and Conservation program at Leiden University. Similar journal-article-based discussion groups can be found at many universities and Natural History Museums. We support the rationale behind open review journals (blog.f1000research.com/2014/05/21/what-is-open-peer-review/) and therefore decided to share the sense of our discussion with the broader community. We would like to encourage other journal discussion groups to include open review articles in their literature discussions, and consider sharing summaries of their discussions as article comments. Healthy science literature depends on a robust pool of potential reviewers21. 
We see journal discussion groups as an untapped resource for providing feedback on scientific literature, and also as incubators for developing student-scientists into constructive and rigorous peer reviewers.\n\n\nData availability\n\nF1000Research: Dataset 1. Aligned COI sequence data, 10.5256/f1000research.9829.d14138322\n\nF1000Research: Dataset 2. Phylogenetic tree, 10.5256/f1000research.9829.d14138423",
"appendix": "Author contributions\n\n\n\nMS, JM, IvR, MZK, and DS conceived the study and outlined major points. IvR and JM analyzed the data and wrote initial drafts of the manuscript. All authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe Endless Forms Research Group (Naturalis) budget footed the bill for the journal club drinks at Meneer Jansen in Leiden. IvR is supported by the 'Nederlandse organisatie voor Wetenschappelijk Onderzoek' (NWO Open Programme 824.14.014).\n\n\nReferences\n\nSchindel DE, Miller SE: DNA barcoding a useful tool for taxonomists. Nature. 2005; 435(7038): 17. PubMed Abstract | Publisher Full Text\n\nMiller SE: DNA barcoding and the renaissance of taxonomy. Proc Natl Acad Sci U S A. 2007; 104(12): 4775–4776. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBecker S, Hanner R, Steinke D: Five years of FISH-BOL: brief status report. Mitochondrial DNA. 2011; 22(Suppl 1): 3–9. PubMed Abstract | Publisher Full Text\n\nMoritz C, Cicero C: DNA barcoding: promise and pitfalls. PLoS Biol. 2004; 2(10): e354. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMeier R: DNA sequences in taxonomy opportunities and challenges. In: Wheeler QD, editor. The New Taxonomy. CRC Press. 2008; 95–127. Publisher Full Text\n\nEbach MC, de Carvalho MR: Anti-intellectualism in the DNA barcoding enterprise. Zoologica (Curitiba). 2010; 27(2): 165–178. Publisher Full Text\n\nLipscomb D, Platnick N, Wheeler Q: The intellectual content of taxonomy: A comment on DNA taxonomy. Trends Ecol Evol. 2003; 18(2): 65–66. Publisher Full Text\n\nRubinoff D, Holland BS: Between two extremes: mitochondrial DNA is neither the panacea nor the nemesis of phylogenetic and taxonomic inference. Syst Biol. 2005; 54(6): 952–961. 
PubMed Abstract | Publisher Full Text\n\nRubinoff D, Cameron S, Will K: A genomic perspective on the shortcomings of mitochondrial DNA for \"barcoding\" identification. J Hered. 2006; 97(6): 581–594. PubMed Abstract | Publisher Full Text\n\nWill KW, Rubinoff D: Myth of the molecule: DNA barcodes for species cannot replace morphology for identification and classification. Cladistics. 2004; 20(1): 47–55. Publisher Full Text\n\nHebert PD, Cywinska A, Ball SL, et al.: Biological identifications through DNA barcodes. Proc Biol Sci. 2003; 270(1512): 313–321. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFalade MO, Opeme AJ, Benson O: DNA barcoding of Clarias gariepinus, Coptodon zillii and Sarotherodon melanotheron from Southwestern Nigeria [version 1; referees: 1 approved]. F1000 Res. 2016; 5: 1268. Publisher Full Text\n\nRatnasingham S, Hebert PD: BOLD: The Barcode of Life Data System (http://www.barcodinglife.org). Mol Ecol Notes. 2007; 7(3): 355–364. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPertsemlidis A, Fondon JW 3rd: Having a BLAST with bioinformatics (and avoiding BLASTphemy). Genome Biol. 2001; 2(10): REVIEWS2002. PubMed Abstract | Free Full Text\n\nRatnasingham S, Hebert PD: A DNA-based registry for all animal species: the barcode index number (BIN) system. PLoS One. 2013; 8(7): e66213. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMutanen M, Kivelä SM, Vos RA, et al.: Species-Level Para- and Polyphyly in DNA Barcode Gene Trees: Strong Operational Bias in European Lepidoptera. Syst Biol. 2016; 65(6): 1024–1040. pii syw044. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKatoh K, Misawa K, Kuma K, et al.: MAFFT: a novel method for rapid multiple sequence alignment based on fast Fourier transform. Nucleic Acids Res. 2002; 30(14): 3059–3066. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMaddison WP, Maddison DR: Mesquite: a modular system for evolutionary analysis. Version 3.10. 2016. 
Reference Source\n\nStamatakis A: RAxML version 8: a tool for phylogenetic analysis and post-analysis of large phylogenies. Bioinformatics. 2014; 30(9): 1312–1313. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMiller MA, Pfeiffer W, Schwartz T: Creating the CIPRES Science Gateway for inference of large phylogenetic trees. In: Proceedings of the Gateway Computing Environments Workshop (GCE). New Orleans, LA. 2010; 1–8. Publisher Full Text\n\nHochberg ME, Chase JM, Gotelli NJ, et al.: The tragedy of the reviewer commons. Ecol Lett. 2009; 12(1): 2–4. PubMed Abstract | Publisher Full Text\n\nMiller JA, van Riemsdijk I, Khalik MZ, et al.: Dataset 1 in: Comment on Falade et al. (2016) DNA-barcoding of Clarias gariepinus, Coptedon zillii and Sarotherodon melanotheron from Southwestern Nigeria. F1000Research. 2016. Data Source\n\nMiller JA, van Riemsdijk I, Khalik MZ, et al.: Dataset 2 in: Comment on Falade et al. (2016) DNA-barcoding of Clarias gariepinus, Coptedon zillii and Sarotherodon melanotheron from Southwestern Nigeria. F1000Research. 2016. Data Source"
}
|
[
{
"id": "19279",
"date": "27 Jan 2017",
"name": "Shadi Shokralla",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe authors discussed many aspects of the Falade et al. (2016) article, but their explanation did not convince me of their findings. For example:\nThe authors focused on COI data and almost ignored the 16S data.\n\nI expected to see more figures to prove their points.\n\n\"The evidence presented by Falade et al. is not sufficient to determine at least the COI sequence KX231778. The method applied by Falade et al. made it nearly impossible to fail ....\" What is the right way in the authors' eyes?\n\n\"The remaining COI sequences did cluster with other GenBank sequences in such a way as to suggest the remaining .....\" At which level were the clustering parameters set? This is a vague expression.",
"responses": []
},
{
"id": "19277",
"date": "30 Jan 2017",
"name": "Dora Davies",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe title is appropriate for the content of the article. However, there is a spelling mistake, since the correct genus is Coptodon, instead of Coptedon as the authors wrote it. The abstract concisely summarizes the ideas presented in the article. The authors explain their points of view about the paper under study with clarity. Conclusions are justified on the basis of the analysis performed. The information given is adequate, and discussed with clarity.\n\nMultiple factors converge on decision making; therefore, studies that bring together morphology, life cycles, ecology, genetics, and bioinformatics are desirable to improve our comprehension of species, in particular those that come from understudied localities. Falade et al. (2016) identified fish specimens at the species level by morphology, later analyzing the COI and 16S rRNA genes with the aim of correlating morphologic and genetic data. Miller et al. (2017) objected to the bioinformatic methodology employed by Falade et al., stating that it was “nearly impossible to fail to obtain a taxonomic name for each sequence”. Miller et al. particularly objected to one sequence that produced no hits on the BOLD database, a problem also addressed by Falade et al. in their original paper. The absence of genetic sequences in public databases from specimens of remote or understudied areas is a problem that researchers from those areas face quite frequently. 
Even though Miller et al (2017) are correct in addressing the methodology shortness in Falade’s work, it is important to remark that Falade et al. made an important contribution in submitting genetic sequences from 3 fish species of the underrepresented country Nigeria to public databases such as GeneBank and BOLD. Hopefully, there will be more interdisciplinary studies on Nigerian fish fauna.\nIn a more philosophical note, none of the branches of biology can alone answer all the questions, or explain or predict the totality of biological phenomena. In particular, definition of the concept “species” is under discussion even today. Molecular biology and bioinformatics are two of the many tools that are available to elucidate the boundaries between 2 species. For example, to what extent a similarity percent of nearly 100%, based on the study of certain genes in a certain biological group, can be taken as an indicator that two species are different? That percent seems to be different for different taxa, and also varies depending on the genes under study. Bioinformatic tools to analyze genetic data are improving at a fast pace, but it is still important not to underestimate information about morphology, ecology, life cycles, etc. to complete the picture of each taxa. It is also worth noting that the improvement of bioinformatics tools relies on pre-existing information, and when that information is missing there might be a bit of a problem.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2654
|
https://f1000research.com/articles/4-925/v1
|
30 Sep 15
|
{
"type": "Software Tool Article",
"title": "EvolQG - An R package for evolutionary quantitative genetics",
"authors": [
"Diogo Melo",
"Guilherme Garcia",
"Alex Hubbe",
"Ana Paula Assis",
"Gabriel Marroig",
"Guilherme Garcia",
"Alex Hubbe",
"Ana Paula Assis",
"Gabriel Marroig"
],
"abstract": "We present an open source package for performing evolutionary quantitative genetics analyses in the R environment for statistical computing. Evolutionary theory shows that evolution depends critically on the available variation in a given population. When dealing with many quantitative traits this variation is expressed in the form of a covariance matrix, particularly the additive genetic covariance matrix or sometimes the phenotypic matrix, when the genetic matrix is unavailable. Given this mathematical representation of available variation, the EvolQG package provides functions for calculation of relevant evolutionary statistics, estimation of sampling error, corrections for this error, matrix comparison via correlations and distances, and functions for testing evolutionary hypotheses on taxa diversification.",
"keywords": [
"P-matrix",
"G-matrix",
"multivariate evolution",
"drift",
"morphological evolution",
"directional selection",
"matrix comparison",
"covariance matrix"
],
"content": "Introduction\n\nQuantitative genetics deals with the evolution and inheritance of continuous traits, like body size, bone lengths, gene expressions or any other inheritable characteristic that can be measured on a continuous scale, or which can be transformed to a continuous scale. This framework has been used in selective breeding and in describing the different sources of variation in natural populations, as well as understanding the interaction of evolutionary processes with this variation25. Quantitative genetics has been successful in describing short term evolution, and is also useful in understanding diversification at a macroevolutionary level. The core development of modern evolutionary quantitative genetics started with the generalization of the univariate breeders equation to the multivariate response to selection equation, derived by Lande and also referred to as the Lande equation22,46.\n\nThe Lande equation relates the evolutionary change in trait means of a given population (Δz) to the interaction between the additive genetic variation (G-matrix) of this population and the directional selection (β) acting on this population. The additive genetic variation of a population is represented by a symmetric square matrix called the G-matrix, which contains the additive genetic variance of each trait on the diagonal and the additive genetic covariance between traits on the off-diagonal elements. From the Lande equation, Δz = Gβ, we can see that different populations may present markedly different responses (Δz) to the same directional selection (β) simply because these populations have distinct G-matrices. Other evolutionary forces affecting populations are also influenced by available variation, e.g., based on the G-matrix it is possible to test if morphological differentiation of extant taxa is compatible with genetic drift or stabilizing selection (e.g., 2,28). 
Thus, describing and understanding changes in standing variation among populations (5,24,27) as well as understanding constraints imposed by populations' standing variation (e.g., 15,26,42,49) are major elements in evolutionary quantitative genetics.\n\nIn this article we describe the EvolQG package, developed to deal with the evolutionary quantitative genetics questions addressed above in the R environment for statistical computing44. Our goal was to provide a suite of tools in a single consistent source, and to standardize and facilitate the adoption of these tools.\n\n\nMeasurement error estimation\n\nBefore estimating a G-matrix, it is important to evaluate the influence of measurement error in data collection, since measurement error can drastically bias further analyses9. Measurement error can be estimated by measuring each individual at least twice and using an analysis of variance to estimate the amount of variation within individuals, which is the measurement error, in relation to the total variation (i.e., the sum of the within- and between-individual variation). The proportion of variance associated with differences between individuals, rather than with measurement error, is called the repeatability23. A repeatability of 1 means that no variation is associated with measurement error. The function CalcRepeatability() performs the calculation described in 23 for a set of multivariate traits measured at least twice for each individual.\n\n\nMatrix estimation\n\nIn the rest of this article we assume that the covariance matrix of interest has already been estimated by some procedure. This can be a simple covariance of all the observed traits, or an estimated parameter from a more complicated linear model. The simplest case of a linear model approach would be using a multivariate analysis of covariance (MANCOVA) to control for differences in trait means that are not of immediate interest in the analyses (e.g., sexual dimorphism, geographic variation, etc.). 
The residual pooled within-group covariance matrix can be used in subsequent analysis27. The EvolQG package function CalculateMatrix() uses R’s lm() model object to calculate variance-covariance matrices adjusting for the proper degrees of freedom in a simple fixed-effects MANCOVA. Of course, more complicated methods may be used to obtain G-matrices, such as an animal model or a mixed model25,48.\n\nAccurate G-matrix estimation can be hard to achieve, requiring large sample sizes, many families and known genealogies50. One alternative that is usually more feasible is to use the phenotypic covariance matrix (the P-matrix) as a proxy of the population’s G-matrix6,47. The conditions under which this approximation is reasonable depend on the structure of developmental and environmental effects, and testing for similarity is an empirical question that should be undertaken before using the P-matrix as a proxy for the G-matrix, ideally by direct comparison (e.g., 11). As a general rule, high similarity between populations’ P-matrices is a good indicator of high similarity between P and G, and of a stable shared G-matrix pattern, since the similarity between populations must come from either a common genetic structure, or the unlikely scenario of a different genetic structure buffered by an exactly compensating environmental structure in each population that nevertheless produces highly similar phenotypic covariation.\n\nSome of the methods described below are not applicable to covariance matrices, only to correlation matrices. Correlations are standardized measures of association that are bounded between [-1, 1], and, unlike covariances, can be directly compared for pairs of traits with different scales. 
In most instances, correlation matrices can be obtained directly from covariance matrices by using the R function cov2cor().\n\n\nMatrix error and repeatabilities\n\nA G-matrix will always be estimated with error16,30,33, and it is important to take this error into account in further analyses. In some circumstances we want to compare two or more G-matrices, calculating the matrices correlations (see section Matrix Comparison). However, due to error in estimating these matrices, their correlations will never be one, even if the actual population parameter values are identical6. Thus, matrix repeatabilities are used to correct matrix correlations by taking sampling error into account. The basic premise of all the methods is that taking repeated samples from the same population and comparing the resulting matrices would still give correlations that are lower than 1. We estimate the maximum correlation between matrices taken from the same population and correct the observed correlation by this maximum value. The corrected correlation between two observed matrices will be given by the original correlation divided by the geometric mean of their repeatabilities. If the repeatability of both matrices is one, the observed correlation does not change under the correction, and lower repeatabilities yield larger corrections. A number of methods for repeatability estimation are provided, and their results can be passed on to the functions that calculate matrix correlations (section Matrix Comparison):\n\nAlphaRep(): Cheverud6 describes an analytical expression for the repeatability of a correlation matrix. 
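The correction for matrix repeatability described above is simple arithmetic; a minimal sketch (the numeric values are made up for illustration):

```python
import math

def corrected_correlation(observed, rep_a, rep_b):
    """Correct an observed matrix correlation by the geometric mean of the
    two matrices' repeatabilities (each in (0, 1])."""
    return observed / math.sqrt(rep_a * rep_b)

# With perfect repeatabilities the observed correlation is unchanged...
corrected_correlation(0.8, 1.0, 1.0)   # 0.8
# ...while lower repeatabilities yield larger upward corrections.
corrected_correlation(0.8, 0.9, 0.8)   # ~0.943
```

Note that the corrected value can exceed 1 if the repeatabilities are underestimated, which is one reason conservative (upwardly biased) repeatability estimates are tolerable here.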
This expression is asymptotically convergent, so it should be used only when sample sizes are large, at least larger than the number of traits.\n\nBootstrapRep(): We may estimate the repeatability of the covariance (or correlation) structure of a given data set using a bootstrap procedure, sampling individuals with replacement from the data set and calculating a covariance (or correlation) matrix from each sample. The mean value of the correlation between the random sample matrix and the original estimated matrix is an estimate of the repeatability. This method has the advantage of not assuming any distribution on the data, but does provide inflated estimates of repeatabilities for small data sets. Even so, upwardly biased matrix repeatabilities are not so problematic, since they lead to conservative corrections of matrix correlations. However, users should be aware of this bias and should not interpret a high repeatability obtained from a small data set as indicating that the parameter is well estimated.\n\nMonteCarloRep(): We can use the observed covariance (or correlation) matrix as the Σ parameter in a multivariate normal distribution, and produce samples from this distribution, using a fixed sample size. The covariance (or correlation) matrix for each sample is compared to the observed matrix, and the mean of these comparisons is an estimate of the repeatability27. This method has the advantage of being easy to apply to matrices coming from linear models with many controlled effects, and not requiring the original data.\n\nSometimes the question we are trying to answer does not involve matrix comparisons, so other methods of assessing and correcting for error are needed.\n\nRarefaction(): Rarefaction consists of taking progressively smaller samples with replacement from the original data set, calculating some statistic on each data set and comparing this with the full data set. 
This gives a general idea of how the inferences would change if we had smaller sample sizes, and how robust our data set is with respect to sampling. The default operation is to calculate the covariance or correlation matrices and compare them using any of the matrix comparison methods (see section Matrix Comparison).\n\nExtendMatrix(): Marroig et al.30 showed that sampling error on covariance matrix estimation can have a dramatic effect on the reconstruction of net selection gradients using the multivariate response to selection equation22. One way to improve estimates is the simple procedure of \"extending\" the eigenvalues of the covariance matrix, where all the eigenvalues lower than a certain threshold are substituted by the smallest eigenvalue above the threshold. This causes minimal changes in the distribution of phenotypes, but improves dramatically the estimates of net selection gradients. See 30 for a thorough examination of the performance and consequences of the extension method on simulated and real data sets.\n\n\nEvolutionary statistics\n\nHansen and Houle15 provide a suite of statistics that have fairly good biological interpretations for a given G- or P-matrix. Marroig et al.31 is a comprehensive example of how these statistics may be used for interpreting morphological data.\n\nThe function MeanMatrixStatistics() calculates most of these statistics and their distributions, as shown below.\n\nIn the following, E[·]β represents the expected value over many random β vectors with unit norm, < ·, · > represents the dot product between two vectors, cos(·, ·) is the cosine between two vectors, G is an arbitrary covariance matrix, G-1 is the inverse G, tr(G) is the trace of G, and || · || the Euclidean norm of a vector. 
MeanMatrixStatistics() calculates:\n\nMean squared correlation (r2): Given a correlation matrix, the elements below the diagonal are squared and averaged, resulting in a measure of integration, that is, overall association between traits (also see the section Modularity and Integration and 40).\n\nCoefficient of variation of eigenvalues (ICV): A measure of integration that is suitable for covariance matrices, as it takes the amount of variation into account. Notice that at least for mammals, mean squared correlations and ICV generally have very high correlation, but can lead to different conclusions if the traits included in the analysis have very different variances (due to scale, for example). If σλ is the standard deviation of the eigenvalues of a covariance matrix, and λ¯ is the mean of the eigenvalues, the ICV is:\n\nICV = σλ/λ¯\n\nPercent of variation in first principal component: If λ1G is the leading eigenvalue of G, we calculate this percentage as:\n\nPC1% = λ1G/tr(G)\n\nEvolvability (Figure 1): The mean projection of the response to random selection gradients with unit norm onto the selection gradient. This is a measure of a population’s available variation in the direction of a particular selection gradient, averaged across all directions15.\n\ne¯ = E[<Gβ,β>]β\n\nFlexibility (Figure 1): The mean cosine of the angle between random selection gradients and the corresponding responses. Flexibility measures on average how well the response to selection aligns with the selection gradient31.\n\nf¯ = E[cos(Gβ,β)]β\n\nRespondability (Figure 1): Mean norm of the response to random selection gradients with unit norm. 
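The eigenvalue-based integration measures (ICV, PC1%) and the random-gradient statistics (evolvability, flexibility) defined above can be sketched in a few lines of numpy, using Monte Carlo over random unit-norm β vectors (a toy covariance matrix, not the EvolQG implementation):

```python
import numpy as np

G = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # toy covariance matrix (illustration only)
lam = np.linalg.eigvalsh(G)           # eigenvalues: [1., 3.]

icv = lam.std() / lam.mean()          # coefficient of variation of eigenvalues: 0.5
pc1_percent = lam.max() / lam.sum()   # fraction of variance in the first PC: 0.75

# Monte Carlo versions of evolvability and flexibility over random
# unit-norm selection gradients beta.
rng = np.random.default_rng(0)
betas = rng.normal(size=(20_000, 2))
betas /= np.linalg.norm(betas, axis=1, keepdims=True)
resp = betas @ G                      # rows are the responses G beta (G symmetric)
proj = np.sum(resp * betas, axis=1)   # <G beta, beta> for each beta
evolvability = proj.mean()                               # E[<G beta, beta>]
flexibility = (proj / np.linalg.norm(resp, axis=1)).mean()  # E[cos(G beta, beta)]
```

For random unit vectors, E[⟨Gβ, β⟩] equals the mean eigenvalue (here 2), which is a useful sanity check on the simulation.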
It also estimates how fast the population mean will change under directional selection15,31.\n\nr¯=E[‖Gβ‖]β\n\nConditional Evolvability: Measures the mean response to selection in the direction of a given β when other directions are under stabilizing selection15.\n\nc¯=E[(<G−1β,β>)−1]β\n\nAutonomy: Measures the proportion of variance in the direction of a given β that is independent from variation in other directions. Therefore, mean Autonomy can also be calculated as the mean ratio between Conditional Evolvability (c¯) and Evolvability (e¯)15.\n\na¯=E[(<G−1β,β>)−1(<Gβ,β>)−1]β\n\nConstraints: The mean correlation between the response vector to random selection gradients and the matrix’s first principal component31. If Λ1G is the first principal component of G, constraints are measured as:\n\nE[cos(Gβ,Λ1G)]β\n\n\nMatrix comparison\n\nA G-matrix describes how the variation in particular populations is structured, but frequently the relevant question is how similar or dissimilar two populations are with respect to this standing variation. Since no two populations are identical, different patterns of variation are the norm. Depending on the evolutionary question at hand, different methods of comparing variation may be required. One possible application of matrix comparisons is when we wish to apply the Lande equation to micro and macroevolution, since this requires some additional assumptions, such as a relative stability of the G-matrix over generations. Comparing extant covariance matrices is a test of this required stability (e.g. 27). For a thoughtful discussion on the biological relevance of statistical significance in matrix comparisons, see the discussion in 13.\n\nOne approach to estimate the similarity or dissimilarity between matrices is to calculate the correlation between these matrices. 
EvolQG provides several functions for pairwise matrix correlation.\n\nRandomSkewers(): The Random Skewers (RS) method makes use of the Lande equation22, Δz = Gβ, where Δz represents the vector of response to selection, G the G-matrix and β the directional selection vector, or selection gradient. In the RS method, the two matrices being compared are multiplied by a large number of normalized random selection vectors, and the resulting response vectors to the same selection vector are compared via a vector correlation (the cosine between the two vectors). The mean value of the correlation between the responses to the same selective pressure is used as a simple statistic of how often two populations respond similarly (in the same direction) to the same selective pressure:\n\nRS(A,B) = E[cos(Aβ,Bβ)]β\n\nWhere E[·]β is the expected value over random selection vectors β. Significance in the random skewers comparison can be determined using a null expectation of correlation between random vectors. If the observed correlation between two matrices is above the 95th percentile of the distribution of correlations between random vectors, we consider the correlation significant and infer that there is evidence the two populations behave similarly under directional selection. Other implementations of the RS method sometimes resort to other forms of calculating significance, such as generating random matrices and creating a random distribution of correlations between matrices. This is difficult to do because generating random matrices with the properties of biological covariance structures is hard; see the RandomMatrix() function for a quick discussion on this. The RS values range between -1 (the matrices have the opposite structure) and 1 (the matrices share the same structure), and zero means the matrices have distinct structures.\n\nMantelCor(): Correlation matrices can be compared using a simple Pearson correlation between the corresponding elements. 
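The Random Skewers comparison described above reduces to a short Monte Carlo loop; a minimal numpy sketch (illustrative, without the significance test, and not the EvolQG RandomSkewers() implementation):

```python
import numpy as np

def random_skewers(A, B, n_skewers=10_000, seed=0):
    """Mean vector correlation (cosine) between the responses of two
    covariance matrices to the same random unit-norm selection gradients."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    betas = rng.normal(size=(n_skewers, n))
    betas /= np.linalg.norm(betas, axis=1, keepdims=True)
    ra, rb = betas @ A, betas @ B          # responses dz = G beta (symmetric G)
    cos = np.sum(ra * rb, axis=1) / (
        np.linalg.norm(ra, axis=1) * np.linalg.norm(rb, axis=1))
    return cos.mean()

G = np.array([[1.0, 0.5], [0.5, 1.0]])
random_skewers(G, G)  # 1.0: a matrix always responds identically to itself
```

Comparing a matrix with itself returns exactly 1, which is a convenient check that the skewers are being applied consistently to both matrices.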
Significance of this comparison must take the structure into account, so it is calculated by a permutation scheme, in which a null distribution is generated by permutation of rows and columns in one of the matrices and repeating the element-by-element correlation. The observed correlation is significant when it is larger than the 95% quantile of the permuted distribution. This method cannot be used on covariance matrices because the variances might be very different, leading to large differences in the scale of the covariances. This scale difference can lead to a massive inflation in the correlation between matrices. The correlation between matrices ranges between -1 (the matrices have the opposite structure) and 1 (the matrices share the same structure), and zero means the matrices have distinct structures.\n\nKrzCor(): The Krzanowski shared space, or Krzanowski correlation, measures the degree to which the first principal components (eigenvectors) span the same subspace3,20, and is suitable for covariance or correlation matrices. If two n × n matrices are being compared, the first k = n/2 − 1 principal components from one matrix are compared to the first k principal components of the other matrix using the square of the vector correlations, and the sum of the correlations is a measure of how congruent the spanned subspaces are. We can write the Krzanowski correlation in terms of the matrices’ principal components (ΛiA being the ith principal component of matrix A):\n\nKrzCor(A,B) = (1/k) Σi=1…k Σj=1…k <ΛiA,ΛjB>2\n\nThe Krzanowski correlation values range between 0 (two subspaces are dissimilar) and 1 (two subspaces are identical).
In order to take the variation into account, we can add the eigenvalue associated with each principal component into the calculation, effectively weighting each correlation by the variance in the associated directions. If λiA is the ith eigenvalue of matrix A, we have:\n\nPCAsimilarity(A,B) = Σi Σj λiA λjB <ΛiA,ΛjB>2 / Σi λiA λiB\n\nNote the sum spans all the principal components, not just the first k as in the Krzanowski correlation method. This method gives correlations that are very similar to the RS method, but is much faster. The PCA similarity values range between 0 (the shared subspaces have no common variation) and 1 (the shared subspaces have identical variation).\n\nSRD(): The RS method can be extended to give information on which traits contribute to differences in terms of the pattern of correlated selection due to covariation between traits in two populations29. The Selection Response Decomposition does this by treating the terms of correlated response in the Lande equation as separate entities. Writing out the terms in the multivariate response to selection equation:\n\nΔzi = Σj=1…n Aijβj\n\nSeparating the terms in the sums of the right hand side:\n\nΔzi = Aiiβi + Σj≠i Aijβj\n\nEach of the row vectors riA(β) = (Aijβj)j=1…n contains the components of the response to the selection gradient β on trait i. The term Aiiβi represents the response to direct selection on trait i, and the terms (Aijβj)i≠j represent the response to indirect selection due to correlation with the other traits. Given two matrices, A and B, we can measure how similar they are in their pattern of correlated selection on each trait by calculating the correlation between the vectors ri for each trait for random selection vectors of unit norm. The mean SRD score for trait i is then:\n\nμSRD(i) = E[corr(riA(β),riB(β))]β\n\nAnd the standard deviation of the correlations gives the variation in SRD scores:\n\nσSRD(i) = SD[corr(riA(β),riB(β))]β\n\nWhen the same trait in different matrices shares a correlated response pattern, μSRD is high and σSRD is low; if the correlated response pattern is different, μSRD is low and σSRD is high. 
See 29 for details and examples.\n\nAnother approach to estimate the similarity or dissimilarity between matrices is to calculate the distance between a pair of matrices. Matrix distances are different from correlations in that correlations are limited to [-1,+1], while distances must only be positive. Also, smaller values of distances mean more similarity. Two distances are in use in the current evolutionary literature, and are implemented in the function MatrixDistance().\n\nOverlap distance: Ovaskainen et al.39 proposed a distance based on probability distributions, where two covariance matrices would have a distance proportional to how distinguishable they are. This distance is natural if we think of covariance matrices as describing the probability distribution of phenotypes or additive values in the population. The higher the probability of a random draw coming from the distribution defined by one of the matrices being misclassified as coming from the distribution defined by the other, the lower the distance. For two probability distributions f and g, the probability of misclassifying a draw from f as coming from g is:\n\nq(f,g) = ∫ℝn f(x)g(x)/(f(x)+g(x)) dx\n\nwhere n is the dimensionality of the space in which the distributions are defined. If the distributions are indistinguishable, q(f, g) = 1/2, if they are completely distinguishable q(f, g) = 0. We can then define the distance as:\n\nd(f,g) = √(1 − 2q(f,g))\n\nSince q(f, g) is symmetrical, d(f, g) is also symmetrical, and the square root guarantees that d(f, g) satisfies the triangle inequality39. Calculation is straightforward and can be done with a simple sampling Monte Carlo scheme; see 39 for details.\n\nRiemann distance: Mitteroecker and Bookstein34 use a Riemannian metric in the space of positive definite matrices (either covariance or correlation matrices), based on exponential mapping35 to quantify transition in the ontogenetic trajectory of phenotypic covariance matrices. 
This metric is based on the eigenvalues of the product of one matrix and the inverse of the other. If λi are the eigenvalues of A-1B (or AB-1), we have:\n\nd(A,B) = √(Σi (ln λi)2)\n\nThis distance has the advantage of being invariant under changes of the basis used to represent the matrices. See 34 for a discussion on the biological relevance of this distance.\n\n\nPhylogenetic comparisons\n\nAncestralStates(): Ancestral state reconstruction of continuous traits using maximum likelihood can be performed in R (for example, using ape or phytools), but most packages don’t provide a simple interface for multivariate data. AncestralStates() is a wrapper for fastAnc() in phytools45 that works on multivariate data, reconstructing each character independently.\n\nPhyloW(): Given a set of covariance matrices for the terminal taxa in a phylogeny, we can estimate the covariance matrix for internal nodes by taking means over sister taxa, weighted by sample size. The mean matrix at the root node is the within-group covariance matrix in a MANCOVA with the terminal clades as the fixed effects. PhyloW() does this by taking a tree and a set of measurements (covariance matrices) and returns means for internal nodes. The implementation is generic, so this function can also be used to calculate weighted means for any numerical measurement with an addition operation implemented in R.\n\nWhile using the within-group covariance matrix is a reasonable alternative as the estimator of an ancestral covariance matrix, this ignores branch lengths, and so should be used carefully when matrix differences are correlated to phylogenetic distance. 
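The Riemann distance defined above is a one-liner once the eigenvalues of A⁻¹B are available; a minimal numpy sketch with made-up positive-definite matrices:

```python
import numpy as np

def riemann_dist(A, B):
    """Riemannian distance between positive-definite matrices:
    square root of the summed squared log-eigenvalues of A^-1 B."""
    lam = np.linalg.eigvals(np.linalg.solve(A, B))
    # eigenvalues of A^-1 B are real and positive for a positive-definite pair
    return np.sqrt(np.sum(np.log(lam.real) ** 2))

A = np.array([[2.0, 0.3], [0.3, 1.0]])
B = np.array([[1.5, 0.1], [0.1, 1.2]])

riemann_dist(A, A)  # 0.0 -- identical matrices
# d(A, B) == d(B, A) up to floating point, since the eigenvalues of
# A^-1 B and B^-1 A are reciprocals and log^2 is unchanged.
```

The zero self-distance and the symmetry noted in the comments are direct consequences of the eigenvalue formulation.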
An alternative when matrix evolution depends on branch lengths is to reconstruct every position of the covariance matrix independently via maximum likelihood, but this method can result in non-positive-definite estimates.\n\nPhyloCompare(): Sometimes it is not practical to compare every pair of populations in a study, since for a large number of populations these results can be difficult to interpret. In these cases, comparing populations in a phylogenetically structured way can be helpful in detecting major transitions or differences between clades. PhyloCompare() takes estimates for all the nodes in a tree and compares sister groups by any comparison method, providing comparison values for every inner node.\n\n\nHypothesis testing\n\nModularity is a general concept in biology, and refers to a pattern of organization that is widespread in many biological systems. In modular systems, we find that some components of a given structure are more related or interact more among themselves than with other components. These highly related groups are termed modules. The nature of this interaction will depend on the components being considered, but may be of any kind, like physical contact between proteins, joint participation of enzymes in given biochemical pathways, or high correlation between quantitative traits in a population. This last kind of modularity is called variational modularity, and is characterized by high correlations between traits belonging to the same module and low correlation between traits in different modules51. 
In the context of morphological traits, variational modularity is associated with the concept of integration38, that is, the tendency of morphological systems to exhibit correlations due to common developmental factors and functional demands7,14.\n\nBoth modularity and integration may have important evolutionary consequences, since sets of integrated traits will tend to respond to directional selection in an orchestrated fashion due to genetic correlations between them; if these sets are organized in a modular fashion, they will also respond to selection independently of one another31. At the same time, selection can alter existing patterns of integration and modularity, leading to traits becoming more or less correlated18,32. The pattern of correlation between traits in a G-matrix then carries important information on the expected response to selection and on the history of evolutionary change of a given population.\n\nTestModularity(): Variational modularity can be assessed by comparing a modularity hypothesis (derived from development and functional considerations) with the observed correlation matrix. If two traits are in the same variational module, we expect the correlation between them to be higher than between traits belonging to different modules. We test this by creating a modularity hypothesis matrix and comparing it via Mantel correlation with the observed correlation matrix. The modularity hypothesis matrix consists of a binary matrix where each row and column corresponds to a trait. If the trait in row i is in the same module of the trait in column j, position (i, j) in the modularity hypothesis matrix is set to one, if these traits are not in the same module, position (i, j) is set to zero. Significant correlation between the hypothetical matrix representing a modularity hypothesis and the observed correlation matrix represents evidence of the existence of this variational module in the population. 
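The binary modularity hypothesis matrix described above is straightforward to construct; a sketch with a hypothetical module assignment and correlation matrix, including the element-wise (Mantel-style) correlation with the observed matrix, but omitting the permutation scheme used for significance:

```python
import numpy as np

def hypothesis_matrix(modules):
    """Binary modularity hypothesis matrix: entry (i, j) is 1 when
    traits i and j are assigned to the same module."""
    g = np.asarray(modules)
    return (g[:, None] == g[None, :]).astype(float)

# Four traits: the first two in module "a", the last two in module "b"
# (a hypothetical assignment, for illustration).
H = hypothesis_matrix(["a", "a", "b", "b"])

# A toy observed correlation matrix with two clear modules.
R = np.array([[1.0, 0.7, 0.1, 0.2],
              [0.7, 1.0, 0.2, 0.1],
              [0.1, 0.2, 1.0, 0.8],
              [0.2, 0.1, 0.8, 1.0]])

# Element-wise correlation using only the off-diagonal entries.
mask = ~np.eye(4, dtype=bool)
mantel_r = np.corrcoef(H[mask], R[mask])[0, 1]  # close to 1 for this R
```

Because the within-module correlations (0.7, 0.8) clearly exceed the between-module ones (0.1, 0.2), the hypothesis matrix correlates strongly with the observed matrix.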
We also measure the ratio between correlations within a module (AVG+) and outside the module (AVG-). This ratio (AVG+/AVG-) is called the AVG Ratio, and measures the strength of the within-module association compared to the overall association for traits outside the module. The higher the AVG Ratio, the bigger the correlations within a module in relation to all other trait associations in the matrix (e.g., 41). TestModularity() also provides the Modularity Hypothesis Index, which is the difference between AVG+ and AVG- divided by the coefficient of variation of eigenvalues. Although the AVG Ratio is easier to interpret (how many times greater the within-module correlation is compared to the between-module correlation) than the Modularity Hypothesis Index, the AVG Ratio cannot be used when the observed correlation matrix presents correlations that differ in sign, and this is usually the case for residual matrices after size removal (for example with RemoveSize(), but see 19 for other alternatives). In these cases, the Modularity Hypothesis Index is useful and allows comparing results between raw and residual matrices42.\n\nLModularity(): If no empirical or theoretical information is available for creating a modularity hypothesis, such as functional or developmental data, we can try to infer the modular partition of a given population by looking only at the correlation matrix and searching for the trait partition that maximizes some indicator of modularity. Borrowing from network theory, we can treat a correlation matrix as a fully connected weighted graph, and define a Newman-like modularity index36. If A is a correlation matrix we define L modularity as:\n\nL = (1/2m) Σij (Aij − kikj/2m) δ(gi,gj)\n\nThe terms gi and gj represent the partition of traits, that is, which modules traits i and j belong to. The function δ(·, ·) is the Kronecker delta, where:\n\nδ(gi,gj) = 1 if gi = gj, and δ(gi,gj) = 0 otherwise\n\nThis means only traits in the same module contribute to the value of L. 
The term ki represents the total amount of correlation attributed to trait i, or the sum of the correlations involving trait i:\n\nki = Σj Aij\n\nAnd m is the sum of all k (m = Σi ki). The term kikj/2m plays the role of a null expectation for the correlation between the traits i and j. This choice for the null expectation is natural when we impose that it must depend on the values of ki and kj and must be symmetrical36. So, traits in the same module with correlations higher than the null expectation will contribute to increase the value of L, while traits in the same module with correlation less than the null expectation will contribute to decrease L. With this definition of L, we use a Markov Chain Monte Carlo annealing method to find the partition of traits (values of gi) that maximizes L. This partition corresponds to the modularity hypothesis inferred from the correlation matrix, and the value of L is a measure of modularity comparable to the AVG Ratio. The igraph package8 provides a number of similar community detection algorithms that can also be used on correlation matrices.\n\nRemoveSize(): If the first principal component of a covariance or correlation matrix corresponds to a very large portion of its variation, and all (or most) of the entries of the first principal component are of the same sign (a size principal component, see 26), it is useful to look at the structure of modularity after removing this dominant integrating factor. This is done using the method described in 4. Porto et al.42 show that modularity is frequently more easily detected in matrices where the first principal component variation was removed and provide biological interpretations for these results.\n\nSelection is frequently invoked to explain morphological diversification, but the null hypothesis of drift being sufficient to explain current observed patterns must always be entertained. 
We can test the plausibility of drift for explaining multivariate diversification by using the regression method described in 1, or the correlation of principal component scores28. Since both these tests use drift as a null hypothesis, failure to reject the null hypothesis is not evidence that selection was not involved in the observed pattern of diversification, only that the observed pattern is compatible with drift.\n\nDriftTest(): Under drift, we expect that the current between-group variance for many populations will be proportional to the ancestral population’s covariance structure, which is approximated by the pooled within-group covariance matrix. Conditions for the validity of these assumptions are reviewed in 43. Under these conditions, if B is the between-group covariance matrix, W is the within-group covariance matrix, t is the time in number of generations and Ne is the effective population size, we have:\n\nB = (t/Ne) W\n\nIf we express all these matrices in terms of the eigenvectors of W, so that W is diagonal, we can write B as the variance of the scores of the means on these eigenvectors. The relationship between B and W can then be expressed as a log regression, where Bi is the between-group variance of the projected means and λiW are the eigenvalues of W:\n\nln(Bi) = ln(t/Ne) + β ln(λiW)\n\nwhere β is the regression coefficient. Under drift we expect β to be one. If β is significantly different from one, we have evidence that drift is not sufficient to explain currently observed diversification.\n\nMultivDriftTest(): This drift test verifies the plausibility of drift in a multivariate context when only two populations are available, one ancestral (or reference) and one derived. Let z0 represent a vector of means from m traits in an ancestral population. 
After t generations, the expected trait means for n populations under drift would correspond to z0, with variance given by B = (t/Ne)W, where B represents the expected between-group covariance matrix, W is the genetic covariance matrix from the ancestral (or reference) population, and Ne is the effective population size17,21,22. So, given the ancestral population mean and G-matrix, we can use this model to estimate the B-matrix expected under drift. We can then use this B-matrix as the Σ parameter in a multivariate normal distribution and sample n populations from this distribution. Using this sample of random populations, we can assess the amount of divergence expected by drift, estimated as the norm of the difference vectors between the ancestral (or reference) and simulated population means. Then, we can compare the observed amount of divergence between the ancestral and derived populations, calculated as the norm of the difference vector between them, taking into account the standard error of the trait means. An observed divergence higher than expected under drift indicates that genetic drift is not sufficient to explain currently observed divergence, suggesting a selective scenario.\n\nPCScoreCorrelation(): This test of drift relies on the correlation between principal component scores of different populations. Under drift, we expect the mean scores of different populations on the principal components of the within-group covariance matrix to be uncorrelated28. Significant correlations between the scores of the means on any two principal components are an indication of correlated directional selection10.\n\nRandomMatrix(): Generating realistic random covariance matrices for null hypothesis testing is a challenging task, since random matrices must adequately sample the space of biologically plausible evolutionary parameters, like integration and flexibility. 
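The regression form of the drift test described above can be sketched in Python. This is a simplified illustration, not the package's DriftTest() implementation; the function name and the normal-theory confidence interval on the slope are our own simplifications:

```python
import numpy as np

def drift_regression_test(means, W):
    """Sketch of the log-regression drift test.

    means: (n_groups, p) matrix of population trait means.
    W: (p, p) pooled within-group covariance matrix.
    Returns the slope beta of ln(Bi) on ln(lambda_i) and a ~95% CI;
    under pure drift the expected slope is 1.
    """
    evals, evecs = np.linalg.eigh(W)
    scores = means @ evecs                 # project means on eigenvectors of W
    B = scores.var(axis=0, ddof=1)         # Bi: between-group variance per eigenvector
    x, y = np.log(evals), np.log(B)
    beta, intercept = np.polyfit(x, y, 1)  # log-log regression
    resid = y - (beta * x + intercept)
    sxx = ((x - x.mean()) ** 2).sum()
    se = np.sqrt(resid @ resid / (len(x) - 2) / sxx)  # slope standard error
    return beta, beta - 1.96 * se, beta + 1.96 * se
```

If the interval excludes 1, proportionality between B and W, and hence drift alone, is a poor explanation for the observed diversification; as cautioned in the text, this kind of test needs reasonable numbers of traits and populations to have any power.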
Most common covariance and correlation matrix sampling schemes fail at this, producing matrices with unrealistically low levels of integration, unless the level of integration is supplied a priori (as in 12). The method described in 37 provides correlation matrices with a reasonable range of evolutionary characteristics. However, the adequacy of the generated matrices in hypothesis testing has not been well established, and we recommend these random matrices be used only for informal tests requiring an arbitrary covariance or correlation matrix.\n\n\nSummary\n\nWe have described a suite of functions dedicated to analyzing multivariate data sets within an evolutionary quantitative genetics framework. These functions focus on the central role that covariance and correlation matrices play in this framework; therefore, we provide functions that perform both descriptive statistics and hypothesis testing related to such matrices within an evolutionary context.\n\nWe have intentionally neglected to include techniques like phylogenetic regression or more extensive linear model functionality. We also lack Bayesian implementations that would be possible for some functions (i.e. 3). Our reasons for this are twofold: the difficulty in transposing these methods efficiently to multiple traits, especially with respect to Bayesian implementations of existing functions, and the many different robust packages for performing some of these analyses, such as phytools, phylolm, pgls, nlme, MCMCglmm and others.\n\nSome of the material implemented here is available in other sources or through custom implementations. We have attempted to create a single consistent source for these techniques. This is by no means an exhaustive effort, and we hope to expand it given demand from the community and further developments in the field. 
We hope to contribute to standardization and wide adoption of these tools, and, since we opted for an open source implementation under R, this also allows the involvement of the R community in using, debugging and complementing these tools, in an effort to contribute to an open scientific environment in which, for example, truly reproducible results are the norm rather than the exception.\n\n\nSoftware availability\n\nThe most recent version of the EvolQG package can be installed from GitHub using the package devtools:\n\ndevtools::install_github(\"lem-usp/EvolQG\")\n\nA less up-to-date version is also available from CRAN:\n\ninstall.packages(\"evolqg\")\n\n1. Software available from: https://cran.r-project.org/web/packages/evolqg/\n\n2. Latest source code: https://github.com/lem-usp/EvolQG\n\n3. Archived source code as at time of publication: http://dx.doi.org/10.5281/zenodo.3091053\n\n4. License: The MIT License (https://opensource.org/licenses/MIT)",
"appendix": "Author contributions\n\n\n\nD.M. compiled existing implementations, re-factored existing functions and contributed new code, documentation and unit tests. G.G. provided the initial set of functions and first implementations. A.H. tested and revised code and documentation. A.P.A. contributed code and documentation, G.M. developed methods and devised necessary elements for package. All authors wrote the paper.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by funding from FAPESP. D.M. was funded by grants 2012/20180-0 and 2014/01694-9; G.G. was funded by grants 2008/56886-6 and 2011/52469-4; A.H. was funded by grant 2012/24937-9; A.P.A was funded by grant 2010/52369-0; G.M. was funded by grant 2011/14295-7.\n\nI confirm that the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe would like to thank Barbara Costa, Daniela Rossoni, Edgar Zanella and Fabio Machado for contributing code and revising documentation and results.\n\n\nReferences\n\nAckermann RR, Cheverud JM: Discerning evolutionary processes in patterns of tamarin (genus Saguinus) craniofacial variation. Am J Phys Anthropol. 2002; 117(3): 260–271. PubMed Abstract | Publisher Full Text\n\nAckermann RR, Cheverud JM: Detecting genetic drift versus selection in human evolution. Proc Natl Acad Sci U S A. 2004; 101(52): 17946–17951. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAguirre JD, Hine E, McGuigan K, et al.: Comparing G: multivariate analysis of genetic variation in multiple populations. Heredity (Edinb). 2014; 112(1): 21–29. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBookstein FL, Chernoff B, Elder RL, et al.: Morphometrics in evolutionary biology: the geometry of size and shape change, with examples from fishes. 1985. 
Reference Source\n\nCheverud JM: Phenotypic, Genetic, and Environmental Morphological Integration in the Cranium. Evolution. 1982; 36(3): 499–516. Publisher Full Text\n\nCheverud JM: A Comparison of Genetic and Phenotypic Correlations. Evolution. 1988; 42(5): 958–968. Publisher Full Text\n\nCheverud JM: Developmental Integration and the Evolution of Pleiotropy. Integr Comp Biol. 1996; 36(1): 44–50. Publisher Full Text\n\nCsardi G, Nepusz T: The igraph software package for complex network research. Inter Journal. Complex Systems: 1695, 2006. Reference Source\n\nFalconer DS, Mackay TF: Introduction to Quantitative Genetics. Benjamin Cummings, London, 4th edition, 1996. Reference Source\n\nFelsenstein J: Phylogenies And Quantitative Characters. Annu Rev Ecol Syst. 1988; 19(1): 445–471. Publisher Full Text\n\nGarcia G, Hingst-Zaher E, Cerqueira R, et al.: Quantitative Genetics and Modularity in Cranial and Mandibular Morphology of Calomys expulsus. Evol Biol. 2014; 41(4): 619–636. Publisher Full Text\n\nHaber A: A Comparative Analysis of Integration Indices. Evol Biol. 2011; 38(4): 476–488. Publisher Full Text\n\nHaber A: The Evolution of Morphological Integration in the Ruminant Skull. Evol Biol. 2015; 42(1): 99–114. Publisher Full Text\n\nHallgrímsson B, Jamniczky H, Young NM, et al.: Deciphering the Palimpsest: Studying the Relationship Between Morphological Integration and Phenotypic Covariation. Evol Biol. 2009; 36(4): 355–376. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHansen TF, Houle D: Measuring and comparing evolvability and constraint in multivariate characters. J Evol Biol. 2008; 21(5): 1201–1219. PubMed Abstract | Publisher Full Text\n\nHill WG, Thompson R: Probabilities of Non-Positive Definite between-Group or Genetic Covariance Matrices. Biometrics. 1978; 34(3): 429–439. Publisher Full Text\n\nHohenlohe PA, Arnold SJ: MIPoD: a hypothesis-testing framework for microevolutionary inference from patterns of divergence. Am Nat. 
2008; 171(3): 366–385. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJones AG, Arnold SJ, Bürger R: Evolution and stability of the G-matrix on a landscape with a moving optimum. Evolution. 2004; 58(8): 1639–1654. PubMed Abstract | Publisher Full Text\n\nJungers WL, Falsetti AB, Wall CE: Shape, relative size, and size-adjustments in morphometrics. Am J Phys Anthropol. 1995; 38(S2): 137–161. Publisher Full Text\n\nKrzanowski WJ: Between-Groups Comparison of Principal Components. J Am Stat Assoc. 1979; 74(367): 703–707. Publisher Full Text\n\nLande R: Natural Selection and Random Genetic Drift in Phenotypic Evolution. Evolution. 1976; 30(2): 314–334. Publisher Full Text\n\nLande R: Quantitative Genetic Analysis of Multivariate Evolution, Applied to Brain: Body Size Allometry. Evolution. 1979; 33(1): 402–416. Publisher Full Text\n\nLessells CM, Boag PT: Unrepeatable repeatabilities: a common mistake. Auk. 1987; 104(1): 116–121. Publisher Full Text\n\nLofsvold D: Quantitative Genetics of Morphological Differentiation in Peromyscus. I. Tests of the Homogeneity of Genetic Covariance Structure Among Species and Subspecies. Evolution. 1986; 40(3): 559–573. Publisher Full Text\n\nLynch M, Walsh B: Genetics and analysis of quantitative traits. Sinauer, Sunderland, MA., 1998. Reference Source\n\nMarroig G, Cheverud J: Size as a line of least resistance II: Direct selection on size or correlated response due to constraints? Evolution. 2010; 64(5): 1470–1488. PubMed Abstract | Publisher Full Text\n\nMarroig G, Cheverud JM: A comparison of phenotypic variation and covariation patterns and the role of phylogeny, ecology, and ontogeny during cranial evolution of new world monkeys. Evolution. 2001; 55(12): 2576–2600. PubMed Abstract | Publisher Full Text\n\nMarroig G, Cheverud JM: Did natural selection or genetic drift produce the cranial diversification of neotropical monkeys? Am Nat. 2004; 163(3): 417–428. 
PubMed Abstract | Publisher Full Text\n\nMarroig G, Melo D, Porto A, et al.: Selection Response Decomposition (SRD): A New Tool for Dissecting Differences and Similarities Between Matrices. Evol Biol. 2011; 38(2): 225–241. Publisher Full Text\n\nMarroig G, Melo DA, Garcia G: Modularity, noise, and natural selection. Evolution. 2012; 66(5): 1506–1524. PubMed Abstract | Publisher Full Text\n\nMarroig G, Shirai LT, Porto A, et al.: The Evolution of Modularity in the Mammalian Skull II: Evolutionary Consequences. Evol Biol. 2009; 36(1): 136–148. Publisher Full Text\n\nMelo D, Marroig G: Directional selection can drive the evolution of modularity in complex traits. Proc Natl Acad Sci U S A. 2015; 112(2): 470–475. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMeyer K, Kirkpatrick M: Perils of parsimony: properties of reduced-rank estimates of genetic covariance matrices. Genetics. 2008; 180(2): 1153–66. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMitteroecker P, Bookstein F: The ontogenetic trajectory of the phenotypic covariance matrix, with examples from craniofacial shape in rats and humans. Evolution. 2009; 63(3): 727–737. PubMed Abstract | Publisher Full Text\n\nMoakher M: On the Averaging of Symmetric Positive-Definite Tensors. J Elast. 2006; 82(3): 273–296. Publisher Full Text\n\nNewman ME: Modularity and community structure in networks. Proc Natl Acad Sci U S A. 2006; 103(23): 8577–8582. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNumpacharoen K, Atsawarungruangkit A: Generating correlation matrices based on the boundaries of their coefficients. PLoS One. 2012; 7(11): e48902. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOlson R, Miller E: Morphological integration. University of Chicago Press, Chicago, 1958. Reference Source\n\nOvaskainen O, Cano JM, Merilä J: A Bayesian framework for comparative quantitative genetics. Proc Biol Sci. 2008; 275(1635): 669–678. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nPavlicev M, Cheverud JM, Wagner GP: Measuring Morphological Integration Using Eigenvalue Variance. Evol Biol. 2009; 36(1): 157–170. Publisher Full Text\n\nPorto A, de Oliveira FB, Shirai LT, et al.: The Evolution of Modularity in the Mammalian Skull I: Morphological Integration Patterns and Magnitudes. Evol Biol. 2009; 36(1): 118–135. Publisher Full Text\n\nPorto A, Shirai LT, de Oliveira FB, et al.: Size variation, growth strategies, and the evolution of modularity in the mammalian skull. Evolution. 2013; 67(11): 3305–3322. PubMed Abstract | Publisher Full Text\n\nPrôa M, O’Higgins P, Monteiro LR: Type I error rates for testing genetic drift with phenotypic covariance matrices: a simulation study. Evolution. 2013; 67(1): 185–195. PubMed Abstract | Publisher Full Text\n\nR Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria, 2014. Reference Source\n\nRevell LJ: phytools: An R package for phylogenetic comparative biology (and other things). Methods Ecol Evol. 2012; 3(2): 217–223. Publisher Full Text\n\nRoff D: Evolutionary quantitative genetics: Are we in danger of throwing out the baby with the bathwater? Ann Zool Fennici. 2003; 40(4): 315–320. Reference Source\n\nRoff DA: The estimation of genetic correlations from phenotypic correlations: a test of Cheverud’s conjecture. Heredity. 1995; 74(5): 481–490. Publisher Full Text\n\nRuncie DE, Mukherjee S: Dissecting high-dimensional phenotypes with Bayesian sparse factor analysis of genetic covariance matrices. Genetics. 2013; 194(3): 753–767. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchluter D: Adaptive Radiation Along Genetic Lines of Least Resistance. Evolution. 1996; 50(5): 1766–1774. Publisher Full Text\n\nSteppan SJ, Phillips PC, Houle D: Comparative quantitative genetics: evolution of the G matrix. Trends Ecol Evol. 2002; 17(7): 320–327. 
Publisher Full Text\n\nWagner GP, Pavlicev M, Cheverud JM: The road to modularity. Nat Rev Genet. 2007; 8(12): 921–931. PubMed Abstract | Publisher Full Text\n\nYang K, Shahabi C: A PCA-based similarity measure for multivariate time series. In Proceedings of the 2nd ACM international workshop on Multimedia databases. ACM, 2004; 65–74. Publisher Full Text\n\nMelo D, Alvarenga EZ, Hubbe A, et al.: EvolQG: F1000Research. Zenodo. 2015. Data Source"
}
|
[
{
"id": "10607",
"date": "16 Oct 2015",
"name": "Mark Grabowsky",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis Software Tool is an open source R package for performing a wide variety of evolutionary genetic analyses. It comes at a particularly useful time, when the large data sets required to use these analytical techniques are more and more available. I predict this package will be used to a greater and greater extent over time. The team, with lead author Diogo Melo, has made a number of major innovations in the field of evolutionary quantitative genetics, and this software tool is well within their purview. The group describes the potential application of the new software well, cites the sources for the development of the different methods included, and briefly explains each function with an adequate level of discussion. I found the section on Phylogenetic Comparisons most interesting, and would have liked more discussion of these points.\n\nIssues: It would be nice to include data on which the tools could be tried out, and examples of the output included in the text. I also worry about the power of some of these tests, and would appreciate if the authors were to think about adding in caveats where available. There are a few papers out currently that address these issues (e.g. Haber, 2011). For example, I have both heard of and performed some analyses using another version of the DriftTest() function, and it appears about 20 traits are required to ever reject the null hypothesis. 
As this package is sure to be used by researchers who are less acquainted with these issues, they may be unaware of issues with sample sizes, etc.",
"responses": [
{
"c_id": "2023",
"date": "27 Jun 2016",
"name": "Diogo Melo",
"role": "Author Response",
"response": "Thank you for your kind comments. We have included some additional caveats in relation to the power and level of replication required for some of the tests. With regards to examples, we feel a package vignette tutorial is much more suited to this, and we are working on more documentation to be distributed with the package."
}
]
},
{
"id": "10610",
"date": "16 Nov 2015",
"name": "David Houle",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThis paper describes the functions implemented in the R package “evolqg.” This package implements a large set of methods for characterizing and comparing variance-covariance matrices. A compendium of such methods is useful, but the user should be aware of shortcomings in this package. First, the authors justify grouping these methods together under the umbrella of “evolutionary quantitative genetics.” Under this label, the very diversity of methods is a bit misleading - the majority of implemented methods have no explicit relationship to evolution, and this is not clear in the paper. To a large degree this confusion reflects the state of the field. A great many ad hoc methods are proposed and widely applied, such as the popular Mantel test. For example, this package implements methods for quantifying repeatability of the phenotypic covariance and correlation matrices, which is a fine thing to do, but has little relevance to the genetic matrices that underlie evolution. For those matrices, any procedure for estimating them will partition out the non-genetic factors as part of the hierarchical model fitting. Repeatability is usually the least of our estimation problems. Other methods in this not-really-evolutionary category include matrix correlations and the ‘modularity’ analyses. A related issue is that this package does not include state-of-the-art techniques. 
In particular, the advent of MCMC methods (plus other methods for getting sampling variation of matrices: Houle and Meyer 2015) provides proper measures of uncertainty for the evolutionarily relevant G matrices. The previously available R package “evolvability” uses the posterior distributions for G matrices as well as selection gradients, when available, to place confidence intervals on the Hansen and Houle measures of evolvabilities. The “evolqg” package under review here primarily implements methods developed and used by the Marroig group, lacking some of the most promising approaches and metrics developed by others (e.g. Hine et al. 2009, Houle and Fierst 2013; Aguirre et al. 2014). Programming time is of course a limitation, and no one is under obligation to implement everything. However, it is deficient scholarship that this paper makes no mention of these approaches, or of the availability of software that does these analyses.\n\nBased on their descriptions here, the phylogenetic comparison methods implemented in this package seem to be quite deficient. AncestralStates is contradictorily described as dealing with multivariate data, but reconstructs each character independently. It appears to implement an interface to a univariate method, and is not actually a multivariate approach. PhyloW and PhyloCompare compute “weighted” estimates of matrices at internal nodes, but the proper weight to apply to a G matrix is not clear at all. This appears to only deal with sampling variance at the individual level, neglecting the more important sources of matrix variation. This disconnect appears to rest on the authors' assumption that “As a general rule, high similarity between populations’ P-matrices is a good indicator of high similarity between P and G”. This is certainly not a general rule, as P and G matrices for traits with low heritability can be very different from each other. 
The classic examples are life history traits.\n\nFor drift models, this implementation shares a major deficiency with previous work, in assuming that the expectation is that matrices will remain proportional. This is indeed the large-sample expectation, but any actual population will deviate from that expectation in ways that depend on the unknown parameters of the underlying system – the number of loci, their relative mutability, the underlying M matrix, and of course the effective population size of each part of the genome. For example, Griswold et al. 2007 show that even when the underlying M matrix is spherical, realizations based on this will have substantial deviations from sphericity. There is no general treatment of this problem, so all results from such analyses need to be treated with caution. A failure to reject a departure from proportionality is meaningful, but rejections of proportionality do not necessarily indicate that drift is not responsible.\n\nFinally, like much of the software being made freely available, there is no description of what the authors did to validate their implementation of these techniques, and no comparisons with previous analyses to indirectly validate them. One or both of these should really be standard with new software. I know that it is not standard, and do not want to single these authors out on that account. The user, however, should be aware of all unvalidated software, and should perform their own checks. Unfortunately, this is only easy for simple procedures for which a package is not really necessary.\n\nSome very specific issues:\n\nErrors: “The proportion of variance not associated with the individuals is called the repeatability.” The description of Mantel tests is misleading, as a high correlation does not mean that matrices are the “same,” nor does a negative correlation mean that matrices are “opposite.” The description of the PCA similarity algorithm is opaque. What is “pondering”?",
"responses": [
{
"c_id": "2022",
"date": "27 Jun 2016",
"name": "Diogo Melo",
"role": "Author Response",
"response": "- This paper describes the functions implemented in the R package “evolqg.” This package implements a large set of methods for characterizing and comparing variance-covariance matrices. A compendium of such methods is useful, but the user should be aware of shortcomings in this package. Thank you for your insightful comments. One of the reasons we chose F1000Research for this article is our perception that something is amiss in the current system of peer review, where good reviewers are burdened and swamped by manuscripts, and authors often tend to ignore their criticisms and take the short road to the next journal in line instead of dealing with the criticisms and polishing their manuscripts. The fully open system implemented by F1000 seems like a good way out of this trap, and we express our gratitude to Drs. Houle and Grabowsky for taking their time to criticize this package. We respond to comments individually after each of your considerations. - First, the authors justify grouping these methods together under the umbrella of “evolutionary quantitative genetics.” Under this label, the very diversity of methods is a bit misleading - the majority of implemented methods have no explicit relationship to evolution, and this is not clear in the paper. To a large degree this confusion reflects the state of the field. A great many ad hoc methods are proposed and widely applied, such as the popular Mantel test. For example, this package implements methods for quantifying repeatability of the phenotypic covariance and correlation matrices, which is a fine thing to do, but has little relevance to the genetic matrices that underlie evolution. For those matrices, any procedure for estimating them will partition out the non-genetic factors as part of the hierarchical model fitting. Repeatability is usually the least of our estimation problems. Other methods in this not-really-evolutionary category include matrix correlations and the ‘modularity’ analyses. 
This is a fair comment, and indeed an argument could be made to separate the different aspects of evolutionary research that are presented here into different packages. We chose to keep these different methods in a single source since we feel a consistent workflow is very beneficial to research on evolution and covariation. For example, many of the latter hypothesis-testing methods presented in our paper should only be used if some level of matrix similarity is detected, and the matrix similarities, in turn, should not be interpreted without the matrices' repeatabilities. Furthermore, while the development of multivariate evolutionary theory and the theory of integration and modularity were separate, Lande's, Cheverud's and Wagner's work since the 80s has linked these fields very intimately, to a point that it is hard for us to think of these fields as separate. From our point of view, the influence of covariation in evolution, and the genetic and developmental origin of these variational associations, make integration and modularity central to modern evolutionary theory, even though admittedly there are a number of researchers who consider morphological integration and comparative quantitative genetics as two separate fields. We feel this integrative approach links the developmental and intra-populational causes of genetic covariation to their evolutionary consequences, leading to a more complete and robust understanding of micro- and macro-evolution. Finally, the inclusion of methods that are mostly \"statistical\" and not \"evolutionary\" should make it easier for researchers to check the quality of their data and its appropriateness for further analyses. - A related issue is that this package does not include state-of-the-art techniques. In particular, the advent of MCMC methods (plus other methods for getting sampling variation of matrices: Houle and Meyer 2015) provides proper measures of uncertainty for the evolutionarily relevant G matrices. 
The previously available R package “evolvability” uses the posterior distributions for G matrices as well as selection gradients, when available, to place confidence intervals on the Hansen and Houle measures of evolvabilities. The “evolqg” package under review here primarily implements methods developed and used by the Marroig group, lacking some of the most promising approaches and metrics developed by others (e.g. Hine et al. 2009, Houle and Fierst 2013; Aguirre et al. 2014). Programming time is of course a limitation, and no one is under obligation to implement everything. However, it is deficient scholarship that this paper makes no mention of these approaches, or of the availability of software that does these analyses. While we challenge the reviewer's assertion that we fail to mention these methods (see the second paragraph of our Summary section; and the method described in Houle and Meyer is implemented by the function MonteCarloStat()), the point is well taken. Another reason for choosing F1000Research was the ease of updating the manuscript as we add new functionality to the EvolQG package. In this spirit of continuous development, we added in the revised version of the package three of the methods described in Aguirre et al. 2014, including the eigentensor decomposition described by Hine et al. 2009, using fast and flexible implementations in R. - Based on their descriptions here, the phylogenetic comparison methods implemented in this package seem to be quite deficient. We would argue that the available methods in the literature are deficient in dealing with multivariate correlated traits, and we provide the rather simple available methods. - AncestralStates is contradictorily described as dealing with multivariate data, but reconstructs each character independently. It appears to implement an interface to a univariate method, and is not actually a multivariate approach. We clearly stated that AncestralStates is not a multivariate approach. 
Indeed AncestralStates is just a wrapper to facilitate reconstructing multiple traits independently. If the reviewer has a suggestion on how to implement this taking the multivariate covariance structure into account for many traits we would be very interested, but this remains an active research topic, and we have not found a satisfactory solution for this problem. Since this seems to be more misleading than helpful, we have removed this function from the package and manuscript. - PhyloW and PhyloCompare compute “weighted” estimates of matrices at internal nodes, but the proper weight to apply to a G matrix is not clear at all. This appears to only deal with sampling variance at the individual level, neglecting the more important sources of matrix variation. This is another unambitious method, and was intended only to calculate within-group phenotypic covariance matrices in a phylogenetically structured way. The use with G-matrices coming from more complex linear models would indeed be non-trivial, and we now make this explicit in the description. - This disconnect appears to rest on the author's assumption that “As a general rule, high similarity between populations’ P-matrices is a good indicator of high similarity between P and G”. This is certainly not a general rule, as P and G matrices for traits with low heritability can be very different from each other. The classic examples are life history traits. We would argue that in the case of low heritability the P-matrices between populations would also be dissimilar. The point here is not that P and G are always similar, but that similar Ps between populations are a fair indication of similar Gs and Ps, at least in the groups we have worked with. In mammals, this conclusion is supported empirically by the comparisons of 5 different G-matrices with P-matrices of several groups, which indicate similar responses to random selection on average (Porto, 2009). 
In any event, we add a caveat on the function description that structurally similar matrices are a key component of the methods implemented here. - For drift models, this implementation shares a major deficiency with previous work, in assuming that the expectation is that matrices will remain proportional. This is indeed the large-sample expectation, but any actual population will deviate from that expectation in ways that depend on the unknown parameters of the underlying system – the number of loci, their relative mutability, the underlying M matrix, and of course the effective population size of each part of the genome. For example, Griswold et al. 2007 show that even when the underlying M matrix is spherical, realizations based on this will have substantial deviations from sphericity. There is no general treatment of this problem, so all results from such analyses need to be treated with caution. A failure to reject a departure from proportionality is meaningful, but rejections of proportionality do not necessarily indicate that drift is not responsible. This is indeed a problem, and we try to remedy this by ensuring the matrices share some minimum level of similarity before using these drift models. Proa et al. 2013 analysed the type I error rate in the DriftTest method, and found that if matrices are similar the test is well behaved. This similarity must be tested on a case by case basis. Error rate analysis of the other tests is an open problem. Another option is to repeat the analysis using different matrices from the terminal taxa that represent extremes of variability, and check if the results are robust to this. We make these caveats and problems clear in the revised manuscript. This paragraph now reads: \"Since both these tests use drift as a null hypothesis, failure to reject the null hypothesis is not evidence that selection was not involved in the observed pattern of diversification, only that the observed pattern is compatible with drift. 
Also, these methods assume that the matrices involved share some degree of similarity, and should ideally be proportional to each other. We would be very wary of using these methods if the matrices are too dissimilar, or if the results change radically when different matrices are used as the ancestral matrix. Also, these tests rely on two levels of replication, taxa and traits. As a general guideline, at least 20 traits and at least 8 taxa should be sampled to use these methods with any confidence, and results should be analyzed in conjunction with other lines of evidence.\" With regards to Griswold et al. 2007, we believe that verifying that the extant matrices are similar somewhat sidesteps these problems, and in any event their simulations do not include stabilizing selection on covariance patterns, which is very likely to exist if matrices are stable over evolutionary timescales. It’s important to realize that the methods we describe are for identifying drift on species means, not covariances. Evolution of covariance patterns is a different matter altogether. - Finally, like much of the software being made freely available, there is no description of what the authors did to validate their implementation of these techniques, and no comparisons with previous analyses to indirectly validate them. One or both of these should really be standard with new software. I know that it is not standard, and do not want to single these authors out on that account. The user, however, should be aware of all unvalidated software, and should perform their own checks. Unfortunately, this is only easy for simple procedures for which a package is not really necessary. We agree entirely, and feel that the bar for scientific software should be high, and so we took additional steps in this direction. 
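For readers unfamiliar with the regression-style drift test discussed above, the null expectation under drift is that between-population variance along each principal component of the within-population matrix is proportional to the corresponding eigenvalue, so the log-log regression slope should not differ from 1. A minimal sketch in Python, in the spirit of the Ackermann–Cheverud approach; the function name and toy data are ours, not EvolQG's code:

```python
import numpy as np

def drift_regression_test(pop_means, W):
    """Regression-based test of drift: project population means on the
    principal components of the within-population matrix W, and regress
    log between-population variances on log eigenvalues of W. Under
    drift the slope should be compatible with 1."""
    evals, evecs = np.linalg.eigh(W)
    order = np.argsort(evals)[::-1]          # sort eigenvalues descending
    evals, evecs = evals[order], evecs[:, order]
    scores = pop_means @ evecs               # PC scores of population means
    b_var = scores.var(axis=0, ddof=1)       # between-population variances
    x, y = np.log(evals), np.log(b_var)
    slope, intercept = np.polyfit(x, y, 1)
    # rough standard error of the slope, for an informal confidence interval
    resid = y - (slope * x + intercept)
    se = np.sqrt(resid.var(ddof=2) / ((x - x.mean()) ** 2).sum())
    return slope, se
```

Simulating population means with covariance proportional to W recovers a slope near 1, whereas a slope clearly different from 1 would flag a departure from the proportionality expectation (with the caveats about that expectation noted above).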
While no implementation is bug free, we compared all results from our initial set of functions between different implementations done by members of our lab and available implementations in the literature. Also, all development for the package was done in a test-driven development framework, and all functions have unit tests for most or all of their functionality, which are run every time the package is built. This ensures modifications do not alter previous results. The implementations in the package follow a modular design for most functionality, minimizing code duplication and reducing the chance of bugs. We also have a fast and constantly maintained issue and bug tracker on GitHub, where users can ask questions, request new functionality, and report bugs. - Some very specific issues: - Errors: “The proportion of variance not associated with the individuals is called the repeatability.” Changed to something clearer. Now reads: \"The proportion of variance associated with among-individual variation, and not within-individual variation, is called the repeatability\" - The description of Mantel tests is misleading, as a high correlation does not mean that matrices are the “same,” nor does a negative correlation mean that matrices are “opposite.” We chose a more cautious wording of these general guidelines. Now reads: \"The correlation between matrices ranges between -1 and 1, and higher correlations indicate matrices have more similar structures, while null correlations indicate the matrices have very distinct correlation structures. Correlations near zero can also occur if the elements of the matrices have nonlinear relationships between them, as in all Pearson correlations. Negative correlations indicate the pattern of association between traits is reversed in the two matrices.\" - The description of the PCA similarity algorithm is opaque. What is “pondering”? 
Sorry, this was a rather hard false cognate with Portuguese for us to catch, and the passage now reads: \"In order to take the variation into account, we can add the eigenvalue associated with each principal component into the calculation, effectively weighting each correlation by the variance in the associated directions\" Literature cited Cheverud, J. M. (1996). Quantitative genetic analysis of cranial morphology in the cotton‐top (Saguinus oedipus) and saddle‐back (S. fuscicollis) tamarins. Journal of Evolutionary Biology, 9(1), 5-42. Marroig, G., & Cheverud, J. M. (2001). A comparison of phenotypic variation and covariation patterns and the role of phylogeny, ecology, and ontogeny during cranial evolution of New World monkeys. Evolution, 55(12), 2576-2600. Prôa, M., O'Higgins, P., & Monteiro, L. R. (2013). Type I error rates for testing genetic drift with phenotypic covariance matrices: a simulation study. Evolution, 67(1), 185-195. Porto, A., de Oliveira, F. B., Shirai, L. T., De Conto, V., & Marroig, G. (2009). The Evolution of Modularity in the Mammalian Skull I: Morphological Integration Patterns and Magnitudes. Evolutionary Biology, 36(1), 118–135. doi:10.1007/s11692-008-9038-3
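The eigenvalue-weighted PCA similarity described in this response can be sketched as follows. This is a Python illustration in the spirit of the Singhal–Seborg PCA similarity factor: squared cosines between principal components, weighted by the products of the associated eigenvalues; the exact normalization is our assumption, not necessarily EvolQG's formula:

```python
import numpy as np

def pca_similarity(cov1, cov2, k=None):
    """Eigenvalue-weighted PCA similarity between two covariance matrices:
    squared cosines between principal components, weighted by eigenvalue
    products; 1.0 for identical matrices, smaller for divergent ones."""
    e1, v1 = np.linalg.eigh(cov1)
    e2, v2 = np.linalg.eigh(cov2)
    e1, v1 = e1[::-1], v1[:, ::-1]            # sort descending
    e2, v2 = e2[::-1], v2[:, ::-1]
    if k is None:
        k = cov1.shape[0]                     # use all components by default
    cos2 = (v1[:, :k].T @ v2[:, :k]) ** 2     # squared cosines between PCs
    w = np.outer(e1[:k], e2[:k])              # eigenvalue weights
    return float((w * cos2).sum() / (e1[:k] * e2[:k]).sum())
```

Because the cosines are squared, the measure is insensitive to the arbitrary signs of eigenvectors, and the eigenvalue weighting makes mismatches in low-variance directions matter less, which is exactly the behavior the response above describes.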
}
]
}
] | 1
|
https://f1000research.com/articles/4-925
|
https://f1000research.com/articles/5-2631/v1
|
03 Nov 16
|
{
"type": "Research Article",
"title": "Complete genome of Pieris rapae, a resilient alien, a cabbage pest, and a source of anti-cancer proteins",
"authors": [
"Jinhui Shen",
"Qian Cong",
"Lisa N. Kinch",
"Dominika Borek",
"Zbyszek Otwinowski",
"Nick V. Grishin",
"Jinhui Shen",
"Qian Cong",
"Lisa N. Kinch",
"Dominika Borek",
"Zbyszek Otwinowski"
],
"abstract": "The Small Cabbage White (Pieris rapae) is originally a Eurasian butterfly. Accidentally introduced into North America, Australia, and New Zealand a century or more ago, it spread throughout the continents and rapidly became established as one of the most abundant butterfly species. Although it is a serious pest of cabbage and other mustard family plants, with its caterpillars reducing crops to stems, it is also a source of pierisin, a protein unique to the Whites that shows cytotoxicity to cancer cells. To better understand the unusual biology of this omnipresent, agriculturally and medically important butterfly, we sequenced and annotated the complete genome from USA specimens. At 246 Mbp, it is among the smallest Lepidoptera genomes reported to date. While 1.5% of positions in the genome are heterozygous, they are distributed highly non-randomly along the scaffolds, and SNP-free segments longer than 1000 base pairs make up nearly 20% of the genome (median length: 38,000 bp). Computational simulations of population evolutionary history suggest that American populations started from a very small number of introduced individuals, possibly a single fertilized female, which is in agreement with historical literature. Comparison to other Lepidoptera genomes reveals several unique families of proteins that may contribute to the unusual resilience of Pieris. The nitrile-specifier proteins divert the plant defense chemicals to non-toxic products. The apoptosis-inducing pierisins could offer a defense mechanism against parasitic wasps. While only two pierisins from Pieris rapae were characterized before, the genome sequence revealed eight, offering additional candidates as anti-cancer drugs. The reference genome we obtained lays the foundation for future studies of the Cabbage White and other Pieridae species.",
"keywords": [
"Butterfly genomics",
"Invasive species",
"Crop pest",
"Pieridae",
"Population history"
],
"content": "Introduction\n\nThe Small Cabbage White (Pieris rapae, Figure 1), also known as the European Cabbage Butterfly, or Imported Cabbageworm, is one of the most common and widely spread butterflies in North America, ranging from Southern Canada to Mexico1. While mostly present in disturbed open habitats, it also invades valley bottoms, mountain tops, and forested areas2. In many northeastern USA states, it frequently outnumbers all other butterflies combined3. North American populations of the Cabbage White, currently numbering in the billions, are likely the progeny of a single female accidentally introduced to Quebec, Canada during the second half of the 19th century4,5. By the beginning of the 20th century it had reached the California coast6. Around the same time, it was introduced into Hawaii, New Zealand and Australia6,7. Originally from Eurasia and Northern Africa1, the Cabbage White has become one of the most ubiquitous butterfly species. The reasons for its population expansion across variable habitats, as well as the population history of the American invasion, are poorly understood.\n\nDorsal (left) and ventral (right) views are shown. Voucher NVG-4113, male, USA: Texas: Dallas Co., Dallas, GPS 32.90516, -96.81546, 17-Jul-2015.\n\nWhile only very few butterflies are agricultural pests, the Small White is notorious for reducing cabbage plants to stems. Going through its life-cycle quickly and having up to 6 generations per year8, it is a serious pest of the mustard family crops5,9. In addition to damaging plants, caterpillars contaminate and stain produce with feces.\n\nThese butterflies are also a source of a protein with anti-cancer properties10. Aptly termed pierisin, this enzyme of probable bacterial origin is unique to Pieris and its close relatives among Lepidoptera species10,11. 
Pierisin contains an N-terminal ADP-ribosylation catalytic domain followed by four ricin domains, and it can induce apoptosis and thus contribute to metamorphosis and resistance to parasitoids11,12. Due to its cytotoxic effects on many cancer cell lines, pierisin is an unexpected protein of medical importance10. The agricultural and medical significance of the Cabbage White has attracted broad attention from researchers and the general public. However, the lack of a complete genome sequence hinders these studies.\n\nTo aid genetic, evolutionary, and biochemical studies of the Cabbage White, we sequenced and annotated its complete genome from North American specimens. At 246 Mbp, it is one of the smallest Lepidoptera genomes assembled to this day, and the first representative of the Pierinae subfamily. Overall, this diploid genome contains 1.5% heterozygous positions, which is consistent with the high heterozygosity expected for butterflies. However, the Pieris genome contains a large number of SNP-free segments that are at least 1000 bp long (with a median length of 38,000 bp), which together constitute 18.3% of the assembled genome. This number is below 4% in other species. The high fraction of homozygous segments indicates low genetic diversity of the population, which supports the hypothesis that the Cabbage White expansion in America started from a very small number of individuals, possibly as few as 1 or 2 fertilized females.\n\nComparison to other Lepidoptera genomes reveals several unique families of proteins that may contribute to the unusual resilience and adaptability of Pieris. For instance, the nitrile-specifier proteins, which convert plant defense chemicals to non-toxic molecules13, are unique to these species. The apoptosis-inducing pierisins could offer a defense mechanism against parasitic wasps. 
While only two pierisins from Pieris rapae were characterized before14,15, the genome sequencing revealed eight genes coding for pierisins, offering additional candidates for anti-cancer drug development. The reference genome we obtained lays the foundation for future studies of the Cabbage White and other species of Pieridae.\n\n\nResults and discussion\n\nWe assembled a 246 Mb reference genome of Pieris rapae (Pra), which is one of the smallest among currently sequenced Lepidoptera genomes (Supplementary Table S1A)16–26. The scaffold N50 of the Pra genome assembly is 617 kb, better than that of many other published Lepidoptera genomes (Table 1). The genome assembly is also better than many other Lepidoptera genomes in terms of completeness, measured by the presence of Core Eukaryotic Genes Mapping Approach (CEGMA) genes (Supplementary Table S1B)27, cytoplasmic ribosomal proteins and independently assembled transcripts (Table 1). The genome sequence has been deposited at DDBJ/EMBL/GenBank under the accession LWME00000000. The version described in this paper is version LWME01000000. In addition, the main results from genome assembly, annotation and analysis can be downloaded at http://prodata.swmed.edu/LepDB/.\n\nn.a. Data not available\n\nPra: Pieris rapae; Lac: Lerema accius; Cce: Calycopis cecrops; Pgl: Pterourus glaucus; Dpl: Danaus plexippus; Hme: Heliconius melpomene; Mci: Melitaea cinxia; Bmo: Bombyx mori; Pxy: Plutella xylostella; Mse: Manduca sexta; Ppo: Papilio polytes; Pse: Phoebis sennae; Pxu: Papilio xuthus.\n\nHeterozygosity: Calculated as the percent of heterozygous positions detected by the Genome Analysis Toolkit (GATK) for Pgl, Lac, Cce, Pra and Pse; taken from information in the literature for Dpl20; or estimated based on the histogram of k-mer frequencies for Pxy18,41.\n\nWe assembled the transcriptome of Pra using another specimen (NVG-3537) from the same locality. 
Based on the transcriptome, homologs from other Lepidoptera and Drosophila melanogaster, de novo gene predictions, and repeat identification (Supplementary Table S2B), we predicted 13,188 protein-coding genes in the Pra genome (Supplementary Table S2C). 74.4% of these genes are likely expressed in the adult, as they fully or partially overlap with the transcripts. We annotated the putative functions of 10,747 protein-coding genes (Supplementary Table S2D). Comparison of the protein sets from Lepidoptera species revealed the presence of some proteins unique to the Cabbage White and not present in other species. Among these are pierisins and nitrile-specifier proteins that play important roles in resistance against parasites and toxins from plants and contribute to the successful spread of Pieris rapae across continents.\n\nWe identified orthologous proteins encoded by 13 Lepidoptera genomes (Plutella xylostella, Bombyx mori, Manduca sexta, Lerema accius, Papilio glaucus, Papilio polytes, Papilio xuthus, Phoebis sennae, Melitaea cinxia, Heliconius melpomene, Danaus plexippus, Calycopis cecrops and Pieris rapae) and detected 4906 universal orthologous groups, of which 1845 consist of a single-copy gene in each of the species. A phylogenetic tree built from the concatenated alignment of the single-copy orthologs using RAxML places Pieris as the sister to Phoebis (Figure 2), the only other member of the Pieridae family with a sequenced genome. Our analysis places Papilionidae as a sister to all other butterflies, including skippers (Hesperiidae). Such a placement contradicts the morphology-based phylogeny, but is reproduced in all maximum-likelihood and Bayesian trees published recently26,28.\n\nMajority-rule consensus tree of the maximum-likelihood trees constructed by RAxML on the concatenated alignment of universal single-copy orthologous proteins. Numbers by the nodes refer to bootstrap percentages. 
The numbers above are obtained from the complete alignment; the numbers below are obtained from 1% of the dataset.\n\nAll nodes received 100% bootstrap support when the alignment of all single-copy orthologs was used. However, since bootstrap only measures the internal consistency of the phylogenetic signal in the alignment, very large datasets will almost always result in 100% support, even if the tree is incorrect and biased by various factors such as nucleotide composition and long branch attraction. To find the weakest nodes, we reduced the amount of data by randomly splitting the concatenated alignment of all single-copy orthologs into 100 alignments (about 3088 positions in each alignment). The consensus tree based on these alignments revealed that the node referring to the relative position of skippers and swallowtails shows the lowest support (68%) compared to other nodes, and their evolutionary history remains to be further investigated when better taxon sampling by complete genomes is achieved.\n\nWe identified 8 copies of the pierisin gene (Supplementary Table S3A), while only 2 copies were previously reported from Pieris rapae (GenBank)14,15. At least 7 pierisin copies are likely expressed, as their partial sequences are present in the RNA-seq data from the adult. The pierisin protein resembles a classic bacterial AB-toxin, with an enzymatically active A-domain toxin that is delivered across the eukaryotic membrane by the B domain, which interacts with receptors on the cell surface. Pierisin is closely related to the bacterial mosquitocidal toxin MTX NAD(+)-dependent ADP-ribosyltransferase, for which the crystal structure is known29, with the closest pierisin sequence Pra57.2 having 32.56% identity to the structure sequence represented by the MTX holotoxin (PDB 2vse). 
The pierisin toxin transfers an ADP-ribosyl moiety to 2'-deoxyguanosine residues in DNA30, while the ricin domains mediate interactions with neutral glycosphingolipid receptors, globotriaosylceramide (Gb3), and globotetraosylceramide (Gb4)31. The toxin is thought to serve as a defense factor against parasitization by wasps12, but also induces apoptosis in cancer cell lines10,11,32.\n\nSeven copies of pierisin encoded by the Pieris rapae genome include an N-terminal ADP-ribosylation toxin followed by an inhibitory linker and four ricin domains. Mapping the Pieris rapae pierisin sequence conservation (in rainbow from conserved red to variable blue) to the MTX holotoxin structure revealed a strict conservation of the active site and residues surrounding the NAD-binding site (Figure 3A, NAD in ball and stick), as well as conservation of the inhibitory linker in the region that replaces NAD (Figure 3B, linker in tube). The receptor-interacting ricin domains include QxW motifs that contribute to cytotoxicity (Figure 3B, spheres), and display relatively lower overall conservation than the catalytic domain. Thus, the receptor-interacting function might be diverging across the different copies of the gene, potentially allowing broader receptor specificity.\n\nAn alignment of the MTX holotoxin (PDB 2vse) sequence with the Pieris rapae pierisins was used to map sequence conservation calculated for the pierisin sequences. Conservation is colored in rainbow from blue (variable) to red (conserved). (A) The N-terminal ADP-ribosylation toxin domain (shown in surface representation) of the MTX holotoxin structure was superimposed with the cholera ADP-ribosylation toxin bound to its NAD+ substrate (shown in ball and stick) to highlight the NAD+ binding pocket. An alignment of residues that contribute to the binding pocket is depicted below the structure, highlighted according to conservation, with the catalytic E marked by an asterisk. 
(B) The N-terminal ADP-ribosylation toxin domain (shown in surface representation) of the MTX holotoxin is inhibited by a conserved inhibitory linker region (shown in tube) that blocks the substrate binding pocket. The C-terminal ricin-like domains of the holotoxin are depicted in cartoon, with the corresponding sidechains of QxW motifs depicted in sphere representation. (C) Phylogenetic tree of ricin domains in the 8 pierisins from Pieris rapae.\n\nOne copy of pierisin (Pra57.3) lacks the N-terminal ADP-ribosylation domain, and is composed of four ricin domains following an N-terminal signal peptide, as validated by both the assembled genome and de novo assembled transcripts. In addition, the phylogenetic tree of the ricin domains in the eight copies of pierisin places this protein on the longest branch, suggesting that it has undergone rapid divergence from other pierisins and could have adopted a different function. Lacking the toxin domain, Pra57.3 may aid other toxins in entering the cells. Alternatively, it may be able to bind to the neutral glycosphingolipid receptors in Pieris, and protect its own cells against other pierisins with the toxic ADP-ribosylation domains.\n\nDuring feeding, Cabbage White larvae are able to counteract toxic secondary metabolites produced by the food plant's major chemical defense system, the glucosinolate–myrosinase system. The hydrolysis reaction of plant myrosinase, which normally produces toxic isothiocyanates, is redirected to the production of nitriles in the presence of the larval gut nitrile-specifier protein (NSP)13. The exact role of NSP in nitrile production is debatable: the protein could serve either as an enzyme catalyzing the formation of nitriles from the aglycone intermediate or as an allosteric cofactor for myrosinase13,33. The detoxifying NSP protein belongs to an insect-specific gene family consisting of variable tandem repeating units termed insect allergen-related repeats. 
While other Lepidoptera genomes appear to have no NSP genes, the Pieris genome encodes two copies of the NSPs (Supplementary Table S3B), each containing three copies of the insect allergen-related repeat domain34.\n\nRecently, a crystal structure of an insect allergen-related repeat domain from cockroach revealed a novel fold of twelve alpha-helices (two 6-helix repeating units) encapsulating a large hydrophobic cavity. While the sequence identity between the allergen structure and each of the three Pieris NSP domains is relatively low (~20% to each), their sequences can be confidently mapped to the known structure for functional inference. The cockroach allergen repeat cavity binds phospholipids such as phosphatidylethanolamine and phosphatidylglycerol when expressed in bacteria; and phosphatidylinositol (PI), phosphatidylserine and phosphatidylcholine when expressed in yeast. Alternatively, the allergen purified from cockroach bound nonphosphorylated fatty acids such as palmitate, stearate, and oleate35, revealing a promiscuous binding capacity of the hydrophobic pocket. Such promiscuous allergen binding activity might translate to the sequence-related NSP pockets, allowing binding of the various aglycone intermediates of the glucosinolate–myrosinase system.\n\nMapping the NSP-related protein sequence conservation to the allergen structure highlights invariant residues that line the hydrophobic cavity of each domain, connect the repeating units, and connect adjacent α-helices of the repeat (Figure 4, conserved residues colored red). The hydrophobic nature of the binding cavity is preserved in the NSP sequences, including numerous invariant hydrophobic residues that likely contribute to function. Conserved NSP residues also reside near the PO4 group of the phospholipid binding site (Figure 4D), including a YxxxW motif found in each repeat that should restrict the site to accommodate smaller ligands. 
In fact, the aglycone intermediate SO4 group and adjacent backbone atoms could mimic the PO4 in the phospholipid (Figure 4E).\n\nResidue conservation is colored from red (invariant) to blue (variable). The NSP N-terminal (A), middle (B), and C-terminal (C) domain repeats are represented in ribbon (upper panels) and surface (lower panels). Lipids from the insect allergen structure (4jrb) are in magenta sticks. (D) Zoom into the phospholipid binding site (N-terminal domain), with the head group colored by atom: P (orange), O (red), and C (black). The larger side group of the phospholipid ligand (white) is not compatible with the NSP YxxxW187 motif (shown in stick). (E) Comparison of the phospholipid ligand (stick representation) with the aglycone, with similar atom backbone orientations boxed. Sequence conservation was calculated using Al2CO72 from an alignment of the following: Pieridae NSP1 and NSP2, together with AAR84202.1, ABY88944.1, ABX39547.1, ABX39554.1, ABY88945.1, ABX39555.1, ABX39546.1, ABX39549.1, ABX39537.1, ABX39552.1, ABX39553.1 from the NCBI Non-redundant protein database.\n\nAlternatively, the positions of invariant polar residues are limited to those that contribute to α-helical interactions, to the linker regions that do not line the hydrophobic cavity, or to insertions not present in the template allergen-repeat structure. While an active site could potentially form between repeating domains of the NSP structure, no obvious clusters of catalytic residues could be mapped to the individual cavities of any of the domain repeats present in NSP. Potentially, the NSP cavities could accommodate binding the various aglycone intermediates produced by myrosinase, allowing time for spontaneous conversion to simple nitriles in the low pH of the gut. 
Thus, the NSP binding cavity could act in a pseudo-enzymatic capacity, without traditional catalytic residues mediating chemistry.\n\nWhile the Pieris rapae genome is very heterozygous at 1.5%, the distribution of these SNPs in the genome is highly non-random. The histogram of SNP fraction in 1000 bp genomic windows for both Pieris rapae and Papilio glaucus (Pgl) is shown in Figure 5A. Since the reads from the highly heterozygous regions in the genome may not map well to the reference genome, such regions usually show lower-than-expected coverage and may hinder the detection of heterozygous positions. Therefore, in the analysis of both the Pgl and Pra genomes, we focused on the genomic regions with the coverage expected for a diploid genome. Compared to Pgl, the Pra genome contains a much higher fraction of homozygous (SNP-free) regions (Figure 5B). This difference cannot be simply explained by the relatively low heterozygosity of Pra (1.5% for Pra and 2.3% for Pgl), because the probability of observing SNP-free segments longer than 500 bp is below 1% for a genome of this size with 1.5% heterozygosity (Figure 5C).\n\nThe Pra genome assembly contains a large portion (18.3% of the total length) of SNP-free segments that are at least 1,000 bp long. The average coverage of the SNP-free segments by the reads is 87-fold, which is higher than the average coverage of all the segments under study (84-fold). Therefore, the lack of heterozygous positions does not arise from a failure to map reads from one haplotype to a reference genome that represents the other haplotype in a highly heterozygous region. The Pgl genome contains only 1.55% long (>= 1000 bp) SNP-free segments, which further supports that the large portion of SNP-free segments in the Pra genome is not an artifact.\n\nThe median length of these segments is 38,000 bp, and the longest SNP-free region in the P. rapae draft genome is 339,000 bp. 
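The probability quoted above can be checked with a one-line binomial calculation: if SNPs fall independently with per-site heterozygosity h, a window of L sites is SNP-free with probability (1 − h)^L. A quick Python sketch (the function name is ours), using the values from the text:

```python
def p_snp_free(h, length):
    """Probability that a window of `length` sites contains no SNP,
    assuming SNPs fall independently with per-site rate h."""
    return (1.0 - h) ** length

# With Pra's 1.5% heterozygosity, even a 500 bp SNP-free window is rare
print(p_snp_free(0.015, 500))    # roughly 5e-4, i.e. well below 1%
print(p_snp_free(0.015, 1000))   # roughly 3e-7
```

This confirms why 18.3% of the genome sitting in SNP-free segments of 1,000 bp or more cannot be reconciled with a random placement of SNPs at a 1.5% rate.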
The presence of such a high proportion of SNP-free segments indicates that this Pra specimen inherited a large proportion of identical alleles from its parents. Two scenarios could explain this: (1) this specimen is the result of recent inbreeding between brothers and sisters or between cousins; or (2) the population started from a very small number of individuals or has been through very severe bottlenecks, and therefore the genetic diversity within the population is low. In order to distinguish between these two scenarios, we simulated them.\n\nInbreeding between brother and sister would result in the presence of ~25% long homozygous segments, and this ratio goes down to 6.3% when the parents are cousins (Figure 6A). Inbreeding between half-siblings who share a father (or mother) and whose mothers are sisters would result in 18.6% homozygous segments. However, inbreeding between very close relatives would result in very high median lengths of the SNP-free segments (Figure 6B), even if we assumed a very high recombination rate, 10 cM/Mb36. The median length of SNP-free segments in this scenario is still above 200,000 bp, which is much higher than the observed value, 38,000 bp. Therefore, inbreeding between close relatives cannot explain the observed SNP pattern.\n\n(A) Histogram of SNP rates in 1000 bp windows from the Pra (orange curve) and Pgl (green curve) genomes. (B) The fraction of SNP-free long genomic windows in the Pra (orange bars) and Pgl (green bars) genomes. The Pra genome has a much larger fraction of SNP-free windows than Pgl, especially when the window size goes beyond 1,000 bp. (C) The fraction of SNP-free genomic windows in Pra (light orange bars) and Pgl (light green bars) if the SNPs were distributed randomly. The fraction of such windows goes down to 0 when the window size is equal to or bigger than 1000 bp.\n\n(A) The fraction and (B) median length of SNP-free segments in the offspring of inbreeding between very close relatives. 
Inbreed 1: inbreeding between brother and sister; Inbreed 2: inbreeding between half-siblings with a common father (or mother) whose mothers (or fathers) are also sisters (or brothers) of each other. Inbreed 3: inbreeding between half-siblings with a common father (or mother) whose mothers (or fathers) are not related. Inbreed 4: inbreeding between cousins. (C) The fraction and (D) median length of SNP-free segments in an individual from an in silico simulated population with low genetic diversity. Low diversity 1, 2, and 3: populations start from 2, 3, and 4 individuals, respectively. 500 generations with an effective population size of 50,000 were simulated. The recombination and mutation rates are 5 cM/Mb and 2.5e-3/Mb per generation.\n\nThe observed pattern of SNP-free segments agrees very well with the second scenario, i.e., the genetic diversity in the population is low, because the population started from a very small number of individuals or has undergone very severe bottlenecks. The observed fraction and median lengths of the long SNP-free segments agree very well with the simulated data assuming that the population started with 3 individuals (possibly one female carrying the spermatophores of two males) and has been developing for about 500 generations (Figure 6C, D). This supports the hypothesis that Pieris rapae came to America in the 19th century and the population started from very few individuals introduced by human activity. It cannot be excluded that the population started with a larger number of introduced individuals, but the genetic diversity was reduced due to severe bottlenecks, possibly early on, so only the progeny of one or two females gave rise to American populations of Pieris rapae. 
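The expected homozygous fractions quoted above (~25% for full-sib matings, ~6.3% for first cousins) correspond to classical inbreeding coefficients: the offspring's inbreeding coefficient F equals the kinship coefficient of its parents. A minimal recursive kinship calculation on toy pedigrees reproduces these figures; the pedigree encoding is ours, and it assumes individuals are numbered so that parents always have smaller ids than their offspring:

```python
def kinship(a, b, ped):
    """Kinship coefficient phi(a, b). `ped` maps id -> (parent1, parent2),
    with (None, None) for founders; parents must have smaller ids than
    offspring so the recursion always moves up the pedigree."""
    if a is None or b is None:
        return 0.0                       # unknown parent: assumed unrelated
    if a == b:
        p, q = ped[a]
        return 0.5 * (1.0 + kinship(p, q, ped))
    if a < b:                            # recurse through the later-born one
        a, b = b, a
    p, q = ped[a]
    if p is None and q is None:
        return 0.0                       # two distinct founders
    return 0.5 * (kinship(p, b, ped) + kinship(q, b, ped))

# Full sibs (3 and 4 share founder parents 1 and 2): F of their offspring
sibs = {1: (None, None), 2: (None, None), 3: (1, 2), 4: (1, 2)}
print(kinship(3, 4, sibs))               # 0.25, i.e. the ~25% in the text
```

Extending the toy pedigree to first cousins gives F = 1/16 = 6.25%, matching the ~6.3% simulated value, and half-sibs whose mothers are sisters give 3/16 = 18.75%, close to the simulated 18.6%; the simulations differ slightly because recombination makes the realized autozygous fraction a random variable around these expectations.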
However, for a butterfly species spread widely over many different habitats and somewhat resistant to parasites and plant toxins, bottlenecks at later stages of the population history are not very likely.\n\n\nMaterials and methods\n\nWe removed and preserved the wings and genitalia of three freshly caught Pieris rapae specimens (NVG-3537 female, NVG-3842 and NVG-4113 males from USA: Texas: Dallas Co., Dallas, GPS 32.90516, -96.81546, collected on 5-Jun-2015, 30-Jun-2015, 17-Jul-2015, respectively), while the rest of the bodies were stored in RNAlater solution (Life Technologies Corporation, Grand Island, NY USA). Wings and genitalia of these specimens will be deposited in the National Museum of Natural History, Smithsonian Institution, Washington, DC, USA (USNM).\n\nWe used specimens NVG-3842 and NVG-4113 for sequencing and assembling the reference genome. We extracted genomic DNA from the tissue with the ChargeSwitch gDNA mini tissue kit (Invitrogen, Waltham, MA USA). 250 bp and 500 bp paired-end libraries were prepared using genomic DNA from specimen NVG-3842 with enzymes from NEBNext Modules (New England Biolabs Inc., Ipswich, MA USA), following the Illumina TruSeq DNA sample preparation guide http://prodata.swmed.edu/LepDB/Protocol/illumina_Paired-End_Sample_Preparation_Guide.pdf. 2 kb, 6 kb and 15 kb mate pair libraries were prepared using genomic DNA from both NVG-3842 and NVG-4113 with a protocol similar to a previously published Cre-Lox-based method37. For the 250 bp, 500 bp, 2 kbp, 6 kbp and 15 kbp libraries, approximately 250 ng, 250 ng, 0.96 μg, 1.92 μg and 2.87 μg of isolated DNA were used, respectively. We quantified the amount of DNA from all the libraries with the KAPA Library Quantification Kit (Kapa Biosystems, Inc., Wilmington, MA USA), and mixed the 250 bp, 500 bp, 2 kbp, 6 kbp, 15 kbp libraries at relative molar concentrations of 40:20:8:4:3. 
The mixed library was sequenced with a PE-150 bp run using 64% of a single Illumina lane on a HiSeq 2500 at the UT Southwestern Medical Center Genomics and Microarray Core Facility.\n\nPart of specimen NVG-3537 was used to extract RNA using the QIAGEN RNeasy Mini Kit (QIAGEN Inc., Valencia, CA USA). We further isolated mRNA using the NEBNext Poly(A) mRNA Magnetic Isolation Module (New England Biolabs Inc., Ipswich, MA USA). RNA-seq libraries were prepared with the NEBNext Ultra Directional RNA Library Prep Kit (New England Biolabs Inc., Ipswich, MA USA) for Illumina following the manufacturer’s protocol. The RNA-seq library was sequenced with a PE-150 bp run using 9% of an Illumina lane. The sequencing reads of all these libraries were deposited in the NCBI SRA database under accession SRP073457.\n\nWe removed sequence reads that did not pass the purity filter and classified the remaining reads according to their TruSeq adapter indices to obtain the individual sequencing libraries. Mate pair libraries were processed by the Delox script37 to remove the loxP sequences and to separate true mate pair reads from paired-end reads. All reads were processed by mirabait38 v4.0.2 to remove contamination from the TruSeq adapters, an in-house script to remove low-quality portions (quality score < 20) at the ends of both reads, JELLYFISH39 v2.2.3 to obtain k-mer frequencies in all the libraries, and QUAKE40 v0.3.5 to correct sequencing errors. The data processing resulted in seven libraries that were supplied to Platanus41 v1.2.4 for genome assembly: 250 bp and 500 bp paired-end libraries, 2 kbp, 6 kbp and 15 kbp true mate pair libraries, a library containing all the paired-end reads from the mate pair libraries, and a single-end library containing all reads whose pairs were removed in the process.\n\nWe mapped these reads to the initial assembly with Bowtie242 v2.2.3 and calculated the coverage of each scaffold with the help of SAMtools43 v1.0. 
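These per-scaffold coverages feed the duplicate-scaffold filter described next. Its decision rule (roughly half the expected coverage, >90% of the scaffold aligned, <500 bp unaligned, >95% identity) can be sketched as a predicate; this is an illustrative re-implementation, not the project's actual script, and the 0.75x coverage cutoff is an assumed midpoint for "about half of the expected value":

```python
def is_redundant(aligned_bp, scaffold_len, identity, scaffold_cov, expected_cov):
    """Flag a short scaffold as a heterozygous duplicate of a longer one.

    Mirrors the thresholds in the text: >90% of the scaffold aligned,
    <500 bp left unaligned, >95% identity, and read coverage near half
    of the expected value (0.75x is an assumed midpoint cutoff).
    """
    half_covered = scaffold_cov < 0.75 * expected_cov
    fully_aligned = (aligned_bp / scaffold_len > 0.90
                     and scaffold_len - aligned_bp < 500)
    return half_covered and fully_aligned and identity > 0.95

# A 10 kb scaffold at ~half the expected 60x coverage, aligned over 9.8 kb
# of its length at 97% identity, would be removed:
drop = is_redundant(9800, 10000, 0.97, scaffold_cov=31.0, expected_cov=60.0)
```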
Many short scaffolds in the assembly showed coverage that was about half of the expected value; they likely came from highly heterozygous regions that were not merged to the equivalent segments in the homologous chromosomes. We removed them if they could be fully aligned to another significantly less covered region (coverage > 90% and uncovered region < 500 bp) in a longer scaffold with high sequence identity (>95%). Similar problems occurred in the Heliconius melpomene, Pterourus glaucus and Lerema accius genome projects, and similar strategies were used to improve the assemblies19,24,26.\n\nThe RNA-seq reads were processed using a similar procedure as the genomic DNA reads to remove contamination from TruSeq adapters and the low quality portion of the reads. Afterwards, we applied three methods to assemble the transcriptomes: (1) de novo assembly by Trinity44 v2.0.6, (2) reference-based assembly by TopHat45 v2.0.10 and Cufflinks46 v2.2.1, and (3) reference-guided assembly by Trinity v2.0.6. The results from all three methods were then integrated by Program to Assemble Spliced Alignment (PASA)47 v2.0.2.\n\nTwo approaches were used to identify repeats in the genome: the RepeatModeler48 v1.0.7 pipeline and in-house scripts that extracted regions with coverage 3 times higher than expected. These repeats were submitted to the CENSOR49 server to assign them to the repeat classification hierarchy. The species-specific repeat library and all repeats classified in RepBase50 v18.12 were used to mask repeats in the genome by RepeatMasker51 v4.0.3.\n\nWe obtained two sets of transcript-based annotations from two pipelines: TopHat followed by Cufflinks and Trinity followed by PASA. 
In addition, we obtained eight sets of homology-based annotations by aligning protein sets from Drosophila melanogaster52 and seven published Lepidoptera genomes (Bombyx mori, Lerema accius, Papilio polytes, Papilio glaucus, Papilio xuthus, Heliconius melpomene, and Danaus plexippus) to the Pra genome with exonerate53 v2.2.0. Proteins from insects in the entire UniRef9054 database were used to generate another set of gene predictions by genblastG55 v1.38. We manually curated and selected 1256 confident gene models by integrating the evidence from transcripts and homologs to train de novo gene predictors: AUGUSTUS56 v3.1, SNAP57 and GlimmerHMM58 v3.0.3. These trained predictors, the self-trained Genemark59 v2.3e and a consensus-based pipeline Maker60 v2.31.8, were used to generate another five sets of gene models. Transcript-based and homology-based annotations were supplied to AUGUSTUS, SNAP and Maker to boost their performance. In total, we generated 16 sets of gene predictions and integrated them with EvidenceModeller47 v1.1.1 to generate the final gene models.\n\nWe predicted the function of Pra proteins by transferring annotations and GO-terms from the closest BLAST61 v2.2.30 hits (E-value < 1e-5) in both the Swissprot62 database and Flybase63. Finally, we performed InterproScan64 v5.17-56.0 to identify conserved protein domains and functional motifs, to predict coiled coils, transmembrane helices and signal peptides, to detect homologous 3D structures, to assign proteins to protein families and to map them to metabolic pathways.\n\nWe identified the orthologous groups from 13 Lepidoptera genomes using OrthoMCL65 v2.0.9. The orthologous groups that contain only Pieris proteins were further investigated. Starting from these Pieris sequences, we attempted to identify their orthologs in other Lepidoptera genomes using reciprocal BLAST. Potential orthologs encoded by the genome but missed in the protein sets were predicted with the help of genblastG. 
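The reciprocal BLAST step amounts to a mutual-best-hit check between two proteomes; a minimal sketch (the protein identifiers below are hypothetical placeholders):

```python
def reciprocal_best_hits(best_ab, best_ba):
    """Mutual best hits between proteomes A and B.

    best_ab: best BLAST hit in B for each A protein; best_ba: the reverse.
    A pair (a, b) is kept only if a's best hit is b and b's best hit is a.
    """
    return sorted((a, b) for a, b in best_ab.items()
                  if best_ba.get(b) == a)

# Hypothetical best-hit maps: only a1/b1 are mutual best hits.
pairs = reciprocal_best_hits({"a1": "b1", "a2": "b2", "a3": "b1"},
                             {"b1": "a1", "b2": "a9"})
```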
Two groups of proteins, i.e. the pierisins and the nitrile-specifier proteins discussed above, turned out to be unique to Pieris. We manually curated the sequences for proteins in these two groups and submitted them to MESSA66 to perform secondary structure and disordered region prediction, domain identification and 3D structure prediction. We aligned the pierisin sequences using MAFFT v7.237, built their evolutionary tree with RAxML67 v8.2.3 and visualized the tree in FigTree v1.4.2.\n\nA total of 1,845 orthologous groups consisted of single-copy genes from every species, and they were used for phylogenetic analysis. An alignment was built for each universal single-copy orthologous group using both the global sequence aligner MAFFT68 and the local sequence aligner BLASTP. Positions that were consistently aligned by both aligners were extracted from each individual alignment and concatenated to obtain an alignment containing 308,750 positions. The concatenated alignment was used to obtain a phylogenetic tree using RAxML. Bootstrap resampling of the aligned positions was performed to assign the confidence level of each node in the tree. In addition, in order to detect the weakest nodes in the tree, we reduced the amount of data by randomly splitting the concatenated alignment into 100 alignments (about 3,088 positions in each alignment) and applied RAxML to each alignment. We obtained a 50% majority rule consensus tree and assigned a confidence level to each node based on the percentage of individual trees supporting this node.\n\nNSP family sequences were collected using BLAST (PMID: 9254694) of the nr database with NSP1 as a query (default settings), keeping subject sequences with over 90% coverage. Conservation scores were calculated using AL2CO (PMID: 11524371) from a MAFFT (PMID: 24170399) alignment of the following: Pieridae NSP1 and NSP2, together with AAR84202.1, ABY88944.1, ABX39547.1, ABX39554.1, ABY88945.1, ABX39555.1, ABX39546.1, ABX39549.1, ABX39537.1, ABX39552.1, ABX39553.1. 
The NSP family includes three copies of an Insect allergen related repeat domain, which has a structure representative of the cockroach allergen Bla g 1 (PDB: 4jrp). The 4jrp sequence was aligned with each of the three repeat domains in the NSP family using PSI-BLAST (PMID: 9254694) and HHPRED (PMID: 9626712) alignments as guides. Positional conservations for each domain were mapped to the B-factor column of the 4jrp structure coordinates with AL2CO (PMID: 11524371), and displayed with a rainbow color scale (from blue, variable, to red, conserved) using the PyMOL Molecular Graphics System. Eight copies of pierisin from the sequenced genome were aligned as above with the related MTX holotoxin sequence (PDB: 2vse) using HHPRED, calculating and displaying positional conservations as above.\n\nWe analyzed the SNPs in the Pra and Papilio glaucus (Pgl) genomes using the same protocol, in which we mapped each read to the genomes and detected SNPs using the Genome Analysis Toolkit69 v3.5. The distribution of genome coverage by the reads in 100 bp windows was plotted. For both the Pra and Pgl genomes, this distribution shows two peaks. In addition to the main peak centered at the expected coverage for a diploid genome, there is an additional peak to the left that corresponds to highly divergent regions between the two homologous chromosomes. Owing to this sequence divergence, only the reads corresponding to the sequence of one of the homologous chromosomes can be mapped, which results in the lower-than-expected coverage. To analyze the distribution of SNPs, we used the regions whose coverage by the reads falls within the diploid peak.\n\nWe calculated the total number of positions with SNPs in such regions and simulated a random distribution of these SNPs. The simulated distributions were used as controls. 
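The random control amounts to placing the same number of SNPs at uniformly random distinct positions; a minimal sketch (the region length and SNP count below are illustrative, chosen to match the ~0.7% density mentioned later, not the actual genome-wide values):

```python
import random

def random_snp_control(region_len, n_snps, seed=0):
    """Uniformly random distinct SNP positions, used as a control
    distribution against the observed SNPs."""
    rng = random.Random(seed)
    # sample() draws without replacement, so positions are distinct.
    return sorted(rng.sample(range(region_len), n_snps))

# Illustrative: 700 SNPs over 100 kb gives ~0.7% density.
control = random_snp_control(region_len=100000, n_snps=700)
```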
For the random control, experimental data, and the simulated genomes discussed below, we divided the scaffolds into 100, 200, 300, 400, 500, 1000, 2000, 5000, and 10000 bp windows (segments shorter than the window length at the ends of scaffolds were discarded) and calculated the fraction of SNP-free windows. We concatenated neighboring SNP-free regions to obtain the longest SNP-free segments, and calculated the median length of these SNP-free segments.\n\nWe simulated Pieris rapae haplotypes by randomly introducing SNPs to the Pra reference genome, and the frequency of SNPs was set to half of the frequency of heterozygous positions in the sequenced Pra individual (i.e., 0.7%). Two simulated haplotypes were randomly paired to represent another simulated Pieris rapae individual, so that the rate of heterozygous positions in the simulated individuals would be comparable to that observed in the sequenced specimen. To simulate the mating between two individuals, we assumed the two haplotypes of each individual could recombine at a certain rate (the recombination rate) and generate a new haplotype that is passed on to the offspring.\n\nThe recombination rates of insects are rather variable, and the recombination rates for Bombyx mori, Heliconius melpomene and Heliconius erato are estimated to be 2.6, 5.5 and 6.1 cM/Mb, respectively36. Therefore, we estimated the recombination rate for Pieris rapae to range between 1 cM/Mb and 10 cM/Mb per generation. To simulate recent inbreeding, we randomly selected a recombination rate within this range. Mutations were not introduced in this process because the per-generation mutation rate for butterflies is expected to be on the order of 1e-9 mutations per base pair70, much lower than the existing variation between haplotypes. 
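The window scan and segment statistic described above can be sketched for a single sequence as follows (a simplified illustration; for an even number of segments the median here takes the upper middle element):

```python
def snp_free_stats(snp_positions, seq_len, window):
    """Fraction of SNP-free windows, and median length of the maximal
    SNP-free segments formed by concatenating adjacent SNP-free windows.
    A trailing segment shorter than `window` is discarded, as in the text."""
    snps = set(snp_positions)
    n_windows = seq_len // window
    free = [all((w * window + i) not in snps for i in range(window))
            for w in range(n_windows)]
    fraction = sum(free) / n_windows if n_windows else 0.0

    # Concatenate runs of adjacent SNP-free windows into segments.
    segments, run = [], 0
    for f in free + [False]:  # sentinel closes the final run
        if f:
            run += 1
        elif run:
            segments.append(run * window)
            run = 0
    segments.sort()
    median = segments[len(segments) // 2] if segments else 0
    return fraction, median

# SNPs at positions 150 and 450 in a 1,000 bp region, 100 bp windows:
frac, med = snp_free_stats([150, 450], seq_len=1000, window=100)
```

Here 8 of 10 windows are SNP-free, forming segments of 100, 200 and 500 bp, so the fraction is 0.8 and the median segment length is 200 bp.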
We simulated three scenarios of inbreeding: (1) between brother and sister, (2) between cousins, and (3) between half-blooded brother and sister.\n\nTo simulate the evolution of the Pieris rapae population, we assumed the population started from a certain number of individuals (2, 3 and 4). Several parameters would affect the population evolution, i.e., the number of generations since the species invaded America, the recombination rate, the mutation rate, and the effective population size. Pieris rapae was suggested to have invaded America in the second half of the 19th century, and it has 3–6 generations per year. Therefore, we assumed the number of generations to be 500. Based on the known values for other Lepidoptera species, we assumed the recombination rate to be 5 cM/Mb and the mutation rate to be 2.5e-9. In the initial generations, the effective population size is mainly limited by the population size, and the population may undergo exponential growth. We assumed an exponential growth of the effective population size at a rate of 10-fold per generation (each pair produces 20 offspring). Later on, the population may reach its stationary phase, and the effective population size will be limited by the population structure and will not keep increasing. The effective population size of insects usually ranges between 1e5 and 1e671, and we assumed the effective population size to be 5e5 after the initial exponential growth phase.\n\n\nData availability\n\nSequencing reads were deposited in the NCBI SRA database under accession number SRP073457. The genome sequence was deposited at DDBJ/EMBL/GenBank under accession number LWME00000000.\n\nMajor in-house scripts and intermediate results are available at http://prodata.swmed.edu/LepDB/.\n\nArchived scripts at the time of publication: 10.5256/f1000research.9765.d14048673\n\nPlease see README.txt for a description of the files.",
"appendix": "Author contributions\n\n\n\nJ.S. and Q.C. designed and carried out the experiments, performed the computational analyses and drafted the manuscript; L.N.K. analyzed the proteins unique to Pieris; D.B. and Z.O. designed and supervised experimental studies; N.V.G. directed the project and drafted the manuscript. All authors wrote the manuscript.\n\n\nCompeting interests\n\n\n\nThe authors declare that they have no competing interests\n\n\nGrant information\n\nThis work was supported in part by the National Institutes of Health (GM094575 to N.V.G) and the Welch Foundation (I-1505 to N.V.G).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgement\n\nWe acknowledge Texas Parks and Wildlife Department (Natural Resources Program Director David H. Riskind) for the permit #08-02Rev that makes research based on material collected in Texas State Parks possible. We thank R. Dustin Schaeffer and Raquel Bromberg for critical suggestions and proofreading of the manuscript; Qian Cong is a Howard Hughes Medical Institute International Student Research fellow.\n\n\nSupplementary material\n\nSupplementary Table S1. Quality and composition of Lepidoptera genomes, related to Table 1.\n\nClick here to access the data.\n\nSupplementary Table S2. Statistics for sequencing data and data processing related to experimental procedures and genome annotation.\n\nClick here to access the data.\n\nSupplementary Table S3. Protein sequences of pierisins and nitrile-specifier.\n\nClick here to access the data.\n\n\nReferences\n\nScudder SH: The introduction and spread of Pieris rapae in North America, 1860-1885. Boston, 1887; 53–69. Publisher Full Text\n\nKlots AB: Field Guide to the Butterflies of North America, East of the Great Plains. Houghton Mifflin, New York; 1978. Reference Source\n\n2015 NABA Butterfly Count Report. (ed. Wander S.) North American Butterfly Association; 2015. 
Reference Source\n\nBauer DL, Howe WH: The Butterflies of North America. 97 leaves of plates (Doubleday, Garden City, N.Y.). 1975; xiii: 633. Reference Source\n\nHolland WJ: The Butterfly Book. Doubleday, New York. 1931. Reference Source\n\nScott JA: The Butterflies of North America: A Natural History and Field Guide. Stanford University Press: Stanford, Calif; 1986. Reference Source\n\nGibbs GW: New Zealand Butterflies: Identification and Natural History. Collins, Auckland, New Zealand; 1980. Reference Source\n\nSaunders DS: Insect Clocks. Pergamon Press Inc., New York; 1976. Reference Source\n\nHeitzman RJ, Heitzman JE: Butterflies and Moths of Missouri. Missouri Department of Conservation, Jefferson City, MO; 1996. Reference Source\n\nKono T, Watanabe M, Koyama K, et al.: Cytotoxic activity of pierisin, from the cabbage butterfly, Pieris rapae, in various human cancer cell lines. Cancer Lett. 1999; 137(1): 75–81. PubMed Abstract | Publisher Full Text\n\nMatsushima-Hibiya Y, Watanabe M, Kono T, et al.: Purification and cloning of pierisin-2, an apoptosis-inducing protein from the cabbage butterfly, Pieris brassicae. Eur J Biochem. 2000; 267(18): 5742–50. PubMed Abstract | Publisher Full Text\n\nTakahashi-Nakaguchi A, Matsumoto Y, Yamamoto M, et al.: Demonstration of cytotoxicity against wasps by pierisin-1: a possible defense factor in the cabbage white butterfly. PLoS One. 2013; 8(4): e60539. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWittstock U, Agerbirk N, Stauber EJ, et al.: Successful herbivore attack due to metabolic diversion of a plant chemical defense. Proc Natl Acad Sci U S A. 2004; 101(14): 4859–64. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWatanabe M, Kono T, Matsushima-Hibiya Y, et al.: Molecular cloning of an apoptosis-inducing protein, pierisin, from cabbage butterfly: possible involvement of ADP-ribosylation in its activity. Proc Natl Acad Sci U S A. 1999; 96(19): 10608–13. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nOrth JH, Schorch B, Boundy S, et al.: Cell-free synthesis and characterization of a novel cytotoxic pierisin-like protein from the cabbage butterfly Pieris rapae. Toxicon. 2011; 57(2): 199–207. PubMed Abstract | Publisher Full Text\n\nCong Q, Shen J, Warren AD, et al.: Speciation in Cloudless Sulphurs gleaned from complete genomes. Genome Biol Evol. 2016; 8(3): 915–31. PubMed Abstract | Publisher Full Text | Free Full Text\n\nInternational Silkworm Genome Consortium: The genome of a lepidopteran model insect, the silkworm Bombyx mori. Insect Biochem Mol Biol. 2008; 38(12): 1036–45. PubMed Abstract | Publisher Full Text\n\nYou M, Yue Z, He W, et al.: A heterozygous moth genome provides insights into herbivory and detoxification. Nat Genet. 2013; 45(2): 220–5. PubMed Abstract | Publisher Full Text\n\nHeliconius Genome Consortium: Butterfly genome reveals promiscuous exchange of mimicry adaptations among species. Nature. 2012; 487(7405): 94–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZhan S, Merlin C, Boore JL, et al.: The monarch butterfly genome yields insights into long-distance migration. Cell. 2011; 147(5): 1171–85. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTang W, Yu L, He W, et al.: DBM-DB: the diamondback moth genome database. Database (Oxford). 2014; 2014: bat087. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZhan S, Reppert SM: MonarchBase: the monarch butterfly genome database. Nucleic Acids Res. 2013; 41(Database issue): D758–63. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDuan J, Li R, Cheng D, et al.: SilkDB v2.0: a platform for silkworm (Bombyx mori) genome biology. Nucleic Acids Res. 2010; 38(Database issue): D453–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCong Q, Borek D, Otwinowski Z, et al.: Tiger Swallowtail Genome Reveals Mechanisms for Speciation and Caterpillar Chemical Defense. Cell Rep. 
2015; pii: S2211-1247(15)00051-0. PubMed Abstract | Publisher Full Text\n\nAhola V, Lehtonen R, Somervuo P, et al.: The Glanville fritillary genome retains an ancient karyotype and reveals selective chromosomal fusions in Lepidoptera. Nat Commun. 2014; 5: 4737. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCong Q, Borek D, Otwinowski Z, et al.: Skipper genome sheds light on unique phenotypic traits and phylogeny. BMC Genomics. 2015; 16(1): 639. PubMed Abstract | Publisher Full Text | Free Full Text\n\nParra G, Bradnam K, Korf I: CEGMA: a pipeline to accurately annotate core genes in eukaryotic genomes. Bioinformatics. 2007; 23(9): 1061–7. PubMed Abstract | Publisher Full Text\n\nKawahara AY, Breinholt JW: Phylogenomics provides strong evidence for relationships of butterflies and moths. Proc Biol Sci. 2014; 281(1788): 20140970. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTreiber N, Reinert DJ, Carpusca I, et al.: Structure and mode of action of a mosquitocidal holotoxin. J Mol Biol. 2008; 381(1): 150–9. PubMed Abstract | Publisher Full Text\n\nTakamura-Enya T, Watanabe M, Totsuka Y, et al.: Mono(ADP-ribosyl)ation of 2'-deoxyguanosine residue in DNA by an apoptosis-inducing protein, pierisin-1, from cabbage butterfly. Proc Natl Acad Sci U S A. 2001; 98(22): 12414–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMatsushima-Hibiya Y, Watanabe M, Hidari KI, et al.: Identification of glycosphingolipid receptors for pierisin-1, a guanine-specific ADP-ribosylating toxin from the cabbage butterfly. J Biol Chem. 2003; 278(11): 9972–8. PubMed Abstract | Publisher Full Text\n\nSubbarayan S, Marimuthu SK, Nachimuthu SK, et al.: Characterization and cytotoxic activity of apoptosis-inducing pierisin-5 protein from white cabbage butterfly. Int J Biol Macromol. 2016; 87: 16–27. 
PubMed Abstract | Publisher Full Text\n\nBurow M, Markert J, Gershenzon J, et al.: Comparative biochemical characterization of nitrile-forming proteins from plants and insects that alter myrosinase-catalysed hydrolysis of glucosinolates. FEBS J. 2006; 273(11): 2432–46. PubMed Abstract | Publisher Full Text\n\nFischer HM, Wheat CW, Heckel DG, et al.: Evolutionary origins of a novel host plant detoxification gene in butterflies. Mol Biol Evol. 2008; 25(5): 809–20. PubMed Abstract | Publisher Full Text\n\nMueller GA, Pedersen LC, Lih FB, et al.: The novel structure of the cockroach allergen Bla g 1 has implications for allergenicity and exposure assessment. J Allergy Clin Immunol. 2013; 132(6): 1420–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWilfert L, Gadau J, Schmid-Hempel P: Variation in genomic recombination rates among animal taxa and the case of social insects. Heredity (Edinb). 2007; 98(4): 189–97. PubMed Abstract | Publisher Full Text\n\nVan Nieuwerburgh F, Thompson RC, Ledesma J, et al.: Illumina mate-paired DNA sequencing-library preparation using Cre-Lox recombination. Nucleic Acids Res. 2012; 40(3): e24. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChevreux B, Wetter T, Suhai S: Genome Sequence Assembly Using Trace Signals and Additional Sequence Information. Computer Science and Biology: Proceedings of the German Conference on Bioinformatics. 1999; 99: 45–56. Reference Source\n\nMarçais G, Kingsford C: A fast, lock-free approach for efficient parallel counting of occurrences of k-mers. Bioinformatics. 2011; 27(6): 764–70. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKelley DR, Schatz MC, Salzberg SL: Quake: quality-aware detection and correction of sequencing errors. Genome Biol. 2010; 11(11): R116. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKajitani R, Toshimoto K, Noguchi H, et al.: Efficient de novo assembly of highly heterozygous genomes from whole-genome shotgun short reads. Genome Res. 
2014; 24(8): 1384–95. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLangmead B, Salzberg SL: Fast gapped-read alignment with Bowtie 2. Nat Methods. 2012; 9(4): 357–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLi H, Handsaker B, Wysoker A, et al.: The Sequence Alignment/Map format and SAMtools. Bioinformatics. 2009; 25(16): 2078–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHaas BJ, Papanicolaou A, Yassour M, et al.: De novo transcript sequence reconstruction from RNA-seq using the Trinity platform for reference generation and analysis. Nat Protoc. 2013; 8(8): 1494–512. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKim D, Pertea G, Trapnell C, et al.: TopHat2: accurate alignment of transcriptomes in the presence of insertions, deletions and gene fusions. Genome Biol. 2013; 14(4): R36. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRoberts A, Pimentel H, Trapnell C, et al.: Identification of novel transcripts in annotated genomes using RNA-Seq. Bioinformatics. 2011; 27(17): 2325–9. PubMed Abstract | Publisher Full Text\n\nHaas BJ, Salzberg SL, Zhu W, et al.: Automated eukaryotic gene structure annotation using EVidenceModeler and the Program to Assemble Spliced Alignments. Genome Biol. 2008; 9(1): R7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSmit AFA, Hubley R: RepeatModeler Open-1.0. 2008-2010. Reference Source\n\nJurka J, Klonowski P, Dagman V, et al.: CENSOR--a program for identification and elimination of repetitive elements from DNA sequences. Comput Chem. 1996; 20(1): 119–21. PubMed Abstract | Publisher Full Text\n\nJurka J, Kapitonov VV, Pavlicek A, et al.: Repbase Update, a database of eukaryotic repetitive elements. Cytogenet Genome Res. 2005; 110(1–4): 462–7. PubMed Abstract | Publisher Full Text\n\nSmit AFA, Hubley R, Green P: RepeatMasker Open-3.0. 1996–2010. 
Reference Source\n\nMisra S, Crosby MA, Mungall CJ, et al.: Annotation of the Drosophila melanogaster euchromatic genome: a systematic review. Genome Biol. 2002; 3(12): RESEARCH0083. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSlater GS, Birney E: Automated generation of heuristics for biological sequence comparison. BMC Bioinformatics. 2005; 6: 31. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSuzek BE, Huang H, McGarvey P, et al.: UniRef: comprehensive and non-redundant UniProt reference clusters. Bioinformatics. 2007; 23(10): 1282–8. PubMed Abstract | Publisher Full Text\n\nShe R, Chu JS, Uyar B, et al.: genBlastG: using BLAST searches to build homologous gene models. Bioinformatics. 2011; 27(15): 2141–3. PubMed Abstract | Publisher Full Text\n\nStanke M, Schöffmann O, Morgenstern B, et al.: Gene prediction in eukaryotes with a generalized hidden Markov model that uses hints from external sources. BMC Bioinformatics. 2006; 7: 62. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKorf I: Gene finding in novel genomes. BMC Bioinformatics. 2004; 5: 59. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMajoros WH, Pertea M, Salzberg SL: TigrScan and GlimmerHMM: two open source ab initio eukaryotic gene-finders. Bioinformatics. 2004; 20(16): 2878–9. PubMed Abstract | Publisher Full Text\n\nBesemer J, Borodovsky M: GeneMark: web software for gene finding in prokaryotes, eukaryotes and viruses. Nucleic Acids Res. 2005; 33(Web Server issue): W451–4. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCantarel BL, Korf I, Robb SM, et al.: MAKER: an easy-to-use annotation pipeline designed for emerging model organism genomes. Genome Res. 2008; 18(1): 188–96. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAltschul SF, Gish W, Miller W, et al.: Basic local alignment search tool. J Mol Biol. 1990; 215(3): 403–10. 
PubMed Abstract | Publisher Full Text\n\nUniProt Consortium: Activities at the Universal Protein Resource (UniProt). Nucleic Acids Res. 2014; 42(Database issue): D191–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSt Pierre SE, Ponting L, Stefancsik R, et al.: FlyBase 102--advanced approaches to interrogating FlyBase. Nucleic Acids Res. 2014; 42(Database issue): D780–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJones P, Binns D, Chang HY, et al.: InterProScan 5: genome-scale protein function classification. Bioinformatics. 2014; 30(9): 1236–40. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLi L, Stoeckert CJ Jr, Roos DS: OrthoMCL: identification of ortholog groups for eukaryotic genomes. Genome Res. 2003; 13(9): 2178–89. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCong Q, Grishin NV: MESSA: MEta-Server for protein Sequence Analysis. BMC Biol. 2012; 10: 82. PubMed Abstract | Publisher Full Text | Free Full Text\n\nStamatakis A: RAxML version 8: a tool for phylogenetic analysis and post-analysis of large phylogenies. Bioinformatics. 2014; 30(9): 1312–3. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKatoh K, Standley DM: MAFFT multiple sequence alignment software version 7: improvements in performance and usability. Mol Biol Evol. 2013; 30(4): 772–80. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcKenna A, Hanna M, Banks E, et al.: The Genome Analysis Toolkit: a MapReduce framework for analyzing next-generation DNA sequencing data. Genome Res. 2010; 20(9): 1297–303. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKeightley PD, Pinharanda A, Ness RW, et al.: Estimation of the spontaneous mutation rate in Heliconius melpomene. Mol Biol Evol. 2015; 32(1): 239–43. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLynch M, Conery JS: The origins of genome complexity. Science. 2003; 302(5649): 1401–4. 
PubMed Abstract | Publisher Full Text\n\nPei J, Grishin NV: AL2CO: calculation of positional conservation in a protein sequence alignment. Bioinformatics. 2001; 17(8): 700–12. PubMed Abstract | Publisher Full Text\n\nShen J, Cong Q, Kinch L, et al.: Dataset 1 in: Complete genome of Pieris rapae, a resilient alien, a cabbage pest, and a source of anti-cancer proteins. F1000Research. 2016. Data Source"
}
|
[
{
"id": "19231",
"date": "12 Jan 2017",
"name": "James Mallet",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nTitle and abstract: Appropriate, and suitable summary\nArticle content: This is another of many butterfly genome assemblies pioneered by the Nick Grishin group based on new short-read sequencing technology. It will form a useful contribution, and the fact that the butterfly is a major crop pest is a good justification for sequencing this particular species. Also the authors mention some novel biochemical pathways the insects use, hitherto not found in any other species. One set of proteins (pierisins) may be useful in inducing apoptosis and has already been suggested as an anti-cancer treatment – the discovery of more of these proteins in the current genome and transcriptomics work may be important in medicine. In addition, the \"nitrile specifier proteins\" are involved in inactivation of plant defences, and are another novel protein found in this species only. I'm not an expert on structural biology, but the authors apparently are, and so I'd defer to their knowledge of this area.\n\nPieris rapae is introduced from the old world, and so it might be expected to have undergone a population bottleneck or founder event. There was an intriguing pattern of homozygous patches in the genome that were suggested to be consistent with the hypothesis of such a bottleneck from a single female or maybe a couple of individuals having been introduced to North America in the last century. 
However, the simulation approach based on only a couple of genomes made this aspect of the study rather weak, especially since a major part of the article was devoted to these results. If the authors were really interested in this topic, they might have sampled resequence data more broadly, especially in the putative native range (presumably Europe?), as well as across North America. There are such studies from way back, for instance I quickly found this old study by googling, and the enzyme (protein) heterozygosities therein do not look particularly low by butterfly standards: Vawter, A.T., & Brussard, P.F. 19841. There's also an intriguing difference between Southern and Western populations compared to Northeastern populations which may suggest greater heterozygosity in areas with climate more similar to Northern Europe. In other words, the long discussion in the paper about the heterozygosity and putative bottleneck in this study is misplaced, seems to ignore prior work, and seeks to re-invent population genetic analyses rather than employing a more direct, standard approach to studying the putative bottleneck. I'd suggest greatly shortening this section, or doing the extra work to ensure that there are more comparisons with resequenced individuals, and more attention paid to prior work in this area.\n\nHow confident are the authors of their identification of repeats? I was a little unclear about the methods used here, but they appear to be using the assembly itself to identify repeats. This has very well known weaknesses in next-gen genome sequence assemblies. I recently examined the major Lepidoptera genomes for the presence of ribosomal DNA repeats, and found that none of these next-gen genome assemblies could assemble the rDNA genes in any semblance of the way they ought to be. I think it's an assembly problem.\n\nConclusions: see above\n\nData: provision is adequate and standard for genome sequences.",
"responses": []
},
{
"id": "19401",
"date": "17 Jan 2017",
"name": "Andrei Sourakov",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis paper is an important contribution to genomics. The choice of the cabbage white butterfly for this study is especially fitting, as chemical interactions between plants and herbivores have been recently studied on molecular level using this species and its relatives, with some very interesting insights into evolution of detoxification abilities in insects. The paper also addresses several additional questions from phylogenetics of Lepidoptera to the origins of the invasive exotic pest in North America. The authors provide creative solution for distinguishing between modern inbreeding and historical population bottleneck and interesting observations concerning recreation of evolutionary history using full-genome information. The potentially applied aspects of the paper - presence in the model organism of protein with anti-cancer properties and mapping of this protein – make this study a valuable foundation for future medical research.\nI am also attaching the pdf of the article with minor suggestions concerning grammar/wording.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2631
|
https://f1000research.com/articles/5-2629/v1
|
03 Nov 16
|
{
"type": "Software Tool Article",
"title": "The TrialsTracker: Automated ongoing monitoring of failure to share clinical trial results by all major companies and research institutions",
"authors": [
"Anna Powell-Smith",
"Ben Goldacre",
"Anna Powell-Smith"
],
"abstract": "Background: Failure to publish trial results is a prevalent ethical breach with a negative impact on patient care. Audit is an important tool for quality improvement. We set out to produce an online resource that automatically identifies the sponsors with the best and worst record for failing to share trial results. Methods: A tool was produced that identifies all completed trials from clinicaltrials.gov, searches for results in the clinicaltrials.gov registry and on PubMed, and presents summary statistics for each sponsor online. Results: The TrialsTracker tool is now available. Results are consistent with previous publication bias cohort studies using manual searches. The prevalence of missing studies is presented for various classes of sponsor. All code and data is shared. Discussion: We have designed, built, and launched an easily accessible online service, the TrialsTracker, that identifies sponsors who have failed in their duty to make results of clinical trials available, and which can be maintained at low cost. Sponsors who wish to improve their performance metrics in this tool can do so by publishing the results of their trials.",
"keywords": [
"Publication bias",
"transparency",
"audit"
],
"content": "Introduction\n\nThe results of clinical trials are used to make informed choices with patients about medical treatments. However, there is extensive and longstanding evidence that the results of clinical trials are routinely withheld from doctors, researchers, and patients. A current systematic review of all cohort studies following up registered trials, or trials with ethical approval, shows that approximately half fail to publish their results1. Evidence from an earlier review shows that studies with “negative” or non-significant results are twice as likely to be left unpublished2. Legislation, such as FDA Amendment Act 2007 (http://www.fda.gov/RegulatoryInformation/Legislation/SignificantAmendmentstotheFDCAct/FoodandDrugAdministrationAmendmentsActof2007/default.htm), which requires trials to post summary results on clinicaltrials.gov within 12 months of completion, have been widely ignored, with a compliance rate of one in five3,4. The FDA is entitled to impose fines of $10,000 a day on those breaching this law, but has never yet done so5,6. This public health problem has also been the subject of extensive campaigning. For example, the AllTrials campaign is currently supported by 89,000 individuals and 700 organisations, including major funders, professional bodies, patient organisations and government bodies (http://www.alltrials.net/).\n\nPrevious work suggests that some sponsors, companies, funders, and research sites may perform better than others5,7. In any sector, audit of the best and worst performers can be used to improve performance, allowing those with a poor performance to learn from those doing better. To be effective, however, audit should be repeated, and ideally ongoing8.\n\nAll work on publication bias to date relies on a single sweep of labour-intensive manual searches9,10, or a single attempt to automatically match registry entries to published papers using registry identification number11. 
Manual matching comes at high cost and does not give ongoing feedback. We therefore set out to: develop an online tool that automatically identifies trials with unreported results; present and rank the prevalence of publication failure, broken down by sponsor; and maintain the service, updating the data automatically, so that companies and research institutes are motivated to improve their performance.\n\n\nMethods\n\nThe methods used by the online tool are as follows. Raw structured data on all studies in clinicaltrials.gov are downloaded in XML format. Studies are kept if they: have a study type “interventional” (excluding observational studies); have a “status” of “completed”; have a completion date more than 24 months ago, and after Jan 1 2006; are phase 2, 3, 4, or “n/a” (generally a device or behavioural intervention); have had no application filed to delay results posting (ascertained from the firstreceived_results_disposition_date tag); and are conducted by a sponsor that has sponsored more than 30 trials (to exclude trials conducted by minor sponsors and make the ranking in the tool more informative).\n\nResults are then sought for all included studies, using two methods. First, the tool checks for structured results posted directly in clinicaltrials.gov, ascertained by the presence of the firstreceived_results_date tag. Secondly, the tool searches for the nct_id (registry ID number) of the trial in PubMed’s Secondary Source ID field. Since 2005, all trials with a registry ID in the body of the journal article text should have that ID replicated in this field (https://www.nlm.nih.gov/bsd/policy/clin_trials.html). However, since in our experience approximately 1.5% of PubMed records include a valid nct_id in the abstract, but not in the Secondary Source ID field, our tool additionally searches for this ID in the title or abstract text. 
We exclude results published before the completion date of the trial, or results that have the words “study protocol” in the title.\n\nA final filter is then applied, with the aim of excluding publications reporting protocols or additional analysis and commentary, rather than trial results; after experimenting with the standard validated PubMed “therapy” filters (both broad and narrow) and a rudimentary search for “study protocol”, the former was used. A comparison of the three methods is reported in the accompanying iPython notebook [https://github.com/ebmdatalab/trialstracker]12.\n\nAccepting that an automated tool cannot produce results with the accuracy of a manual search, we also performed some rudimentary checks of the output of the automated search against existing manual search cohorts. The overall prevalence of unreported studies found by the tool was compared against three previous studies on publication bias. In addition, disparities on individual studies found to be unreported by the tool were compared against the underlying data from a recent publication bias cohort study conducted using clinicaltrials.gov data.\n\nThe output data is then shared through an interactive website at https://trialstracker.ebmdatalab.net allowing users to rank sponsors by number of trials missing, number of trials conducted, and proportion of trials missing. Users can click on a sponsor name to examine the number and proportion of trials completed and reported from each year for that sponsor. The site URL changes as users focus on each organisation’s performance, so that users can easily share insights into the performance of an individual company or institution. By default sponsors are sorted by the highest number of unreported trials, rather than the highest proportion, in order to initially focus on larger and more well-known organisations. 
The site is designed responsively to be usable on mobile, tablet or desktop devices.\n\nFor transparency and replication, all code for the tool, with comments and all data sources, is available as an iPython notebook12. All software is shared as open source, under the MIT license. A full CSV is shared containing all data, including all studies before our filters are applied, allowing others to conduct additional analyses or sensitivity analyses with different filtering methods.\n\n\nResults\n\nThe TrialsTracker tool was successfully built and is now running online at https://trialstracker.ebmdatalab.net. Sample screenshots are presented in Figure 1 and Figure 2.\n\nhttps://trialstracker.ebmdatalab.net/.\n\nhttps://trialstracker.ebmdatalab.net/#mayo-clinic.\n\nSince Jan 2006, trial sponsors included in our dataset have completed 25,927 eligible trials, of which 11,714 (45.2%) have failed to make results available. Table 1 to Table 4 report the sponsors with the top five highest number of unreported trials, the highest number of eligible trials, the highest proportion of unreported trials, and the lowest proportion of unreported trials. In total, 2390/8799 (27.2%) trials with sponsors classed as “industry” were identified as unreported; 122/470 (26.0%) trials with sponsors classed as “US Fed” were identified as unreported; 361/996 (36.2%) trials with sponsors classed as “NIH” were identified as unreported; 8841/15662 (56.4%) trials with sponsors classed as “other” were identified as unreported. We find that 8.7 million patients were enrolled in trials that are identified as unreported.\n\nTrialsTracker, 20/10/2016.\n\nTrialsTracker, 20/10/2016.\n\nTrialsTracker, 20/10/2016.\n\nTrialsTracker, 20/10/2016.\n\nA previous paper automatically matching registry entries to PubMed records and clinicaltrials.gov results found 55% had no evidence of results11, consistent with our overall findings. 
A previous manual audit (of which BG is co-author) found 56% of trials conducted at the University of Oxford reported results; our method also found 56% for the same institution9. A previous manual audit examined 4347 trials across 51 academic medical centres7. We compared their individual study data against ours and found that 2562 trials (62.6%) in their cohort were also in ours, but note that their study only represented 2% of our total cohort. For studies in both cohorts we found 60% reported results, while they found 66%. Of studies in both cohorts: 1149 were found “reported” by both; 534 studies were found “unreported” by both; 497 were found “reported” by their method and “unreported” by ours; 382 were found “unreported” by theirs and “reported” by ours.\n\n\nDiscussion\n\nThe tool was successfully built, and is now fully functional online. We found non-publication rates consistent with those from previous work using manual searches, and reasonable consistency with individual study matches from a previous manual cohort. A wide range of publication failure rates was apparent in the data.\n\nOur tool is the first to provide live ongoing interactive monitoring of failure to publish the results of clinical trials. The method of automatic matching has strengths and weaknesses. It can be run automatically, at a lower unit cost than a manual search, and therefore allows coverage of more trials than any traditional cohort study. It also permits repeated re-analysis at minimal additional marginal cost compared to a manual search.\n\nAs a corollary, the efficiency of automatic matching also brings challenges around specificity and sensitivity. Firstly, there may be false adjudications of non-publication, e.g. if a trial’s results paper does not include its registry identifier. 
However, since 2005 all major medical journals (through the International Committee of Medical Journal Editors; http://icmje.org/recommendations/browse/publishing-and-editorial-issues/clinical-trial-registration.html) have required trials to be registered, and all trials should include their registry ID in the text. Therefore, in our view, the responsibility for results being undiscoverable, when the registry ID is not included by the trialists, lies solely with the trialists; research that is hard to discover is not transparently reported. We hope that in the future better methods for probabilistic record linkage will also be available for wider use13. Secondly, there may be false positives, where a study identified through ID matching and then filtered, is in fact not reporting results. We have used standard filters to account for this, and we are keen to improve our method in the light of concrete constructive feedback. Our checks for consistency against overall prevalence findings and individual study data from previous research to a large extent exclude gross errors in prevalence figures.\n\nNotably there are specific additional methods for linking clinicaltrials.gov records to PubMed records that we tried and rejected. Some trials have a link to a PubMed record directly in the clinicaltrials.gov results_reference tag, which ClinicalTrials documentation (https://prsinfo.clinicaltrials.gov/definitions.html) suggests indicates results from a publication. We found 2263 eligible trials had such tags, but no summary results on ClinicalTrials.gov. However, on manual examination, we found these are often erroneous, and commonly report results of unrelated studies from several years previously. 
In discussions, clinicaltrials.gov staff confirmed that this field is neither policed nor subject to substantial editorial control (personal communication with Annice Bergeris).\n\nOur findings are consistent with previous work on publication bias1, finding that approximately half of trials fail to report results. Previous studies have used 2007 as their start date for expecting results to be made available, reflecting the FDA Amendments Act 2007. We did not use this date, as this legislation has been widely ignored5,6, and because we regard sharing results as an ethical obligation, not a legal one. Our methods accept results posting at any time after study completion, and any sponsor posting results for any trial since 2006 will find their results improve in our live data.\n\nWe have previously argued that live ongoing monitoring of trials transparency will help to drive up standards, especially if this information is used by clinicians, policymakers, ethics committees, regulators, patients, patient groups, healthcare payers, and research funders, to impose negative consequences on those who engage in the unethical practice of withholding trial results from doctors, researchers, and patients14. In recent comments, US Vice President Joe Biden threatened to withhold financial support from publicly-funded researchers who fail to report clinical trial results, suggesting some consequences may arise6. We would be happy to collaborate or work with organisations seeking to get a better understanding of their own failure to publish, and wishing to act on this data.\n\nWe have also previously argued that medicine has an “information architecture” problem; all publicly accessible documents and data on all clinical trials should be aggregated and indexed for comparison and gap identification, and good knowledge management and better use of trial identifiers will facilitate this15. At present, medicine faces serious shortcomings in this area. 
With 75 trials and 11 systematic reviews being published every day on average16, better knowledge management must be a priority.\n\nWe have shared all our underlying data so that others can explore in detail non-publication for specific studies, interventions, companies, funders, sponsors, or institutions that interest them. We believe that research work on research methods and reporting should go beyond identifying the overall prevalence of problems, and identify individual people and organisations who are performing poorly, in order to both support and incentivise them to improve. That is only possible with ongoing monitoring and feedback on individual studies, an approach we have taken on other projects such as COMPare17,18. We hope that others will also pursue this model of audit and feedback, and assess its impact on performance.\n\n\nConclusions\n\nWe have designed, built, and launched an easily accessible online service that identifies sponsors who have failed in their duty to make results of clinical trials available.\n\n\nSoftware availability\n\nWebsite available at: https://trialstracker.ebmdatalab.net\n\nLatest source code: https://github.com/ebmdatalab/trialstracker\n\nArchived source code as at the time of publication: DOI: 10.5281/zenodo.16352212\n\nLicense: MIT license",
"appendix": "Author contributions\n\n\n\nBG conceived the project; both authors developed the analyses, trial matching and filtering methods; APS wrote the data-analysis script and built the interactive website; BG drafted the manuscript; both authors revised the manuscript; both authors are guarantor.\n\n\nCompeting interests\n\n\n\nBG has received research funding from LJAF, the Wellcome Trust, the NHS National Institute for Health Research, the Health Foundation, and the WHO. BG is co-founder of the AllTrials campaign on trials transparency. BG receives personal income from speaking and writing for lay audiences on the misuse of science. APS receives income as a freelance software developer.\n\n\nGrant information\n\nBG is funded by the Laura and John Arnold Foundation (LJAF) to conduct work on research integrity; APS is employed on this grant.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements:\n\nWe are grateful for constructive discussions on design and impact with Jess Fleminger, Carl Heneghan and Sile Lane.\n\n\nReferences\n\nSchmucker C, Schell LK, Portalupi S, et al.: Extent of Non-Publication in Cohorts of Studies Approved by Research Ethics Committees or Included in Trial Registries. PLoS One. 2014; 9(12): e114023. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSong F, Parekh S, Hooper L, et al.: Dissemination and publication of research findings: an updated review of related biases. Health Technol Assess. 2010; 14(8): iii,ix–xi, 1–193. PubMed Abstract | Publisher Full Text\n\nAnderson ML, Chiswell K, Peterson ED, et al.: Compliance with results reporting at ClinicalTrials.gov. N Engl J Med. 2015; 372(24): 1031–1039. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPrayle AP, Hurley MN, Smyth AR: Compliance with mandatory reporting of clinical trial results on ClinicalTrials.gov: cross sectional study. BMJ. 2012; 344: d7373. 
PubMed Abstract | Publisher Full Text\n\nPatients endangered as law is ignored. STAT. 2015.\n\nJoe Biden: Agencies don’t report clinical trials should lose funds. STAT. 2016. Reference Source\n\nChen R, Desai NR, Ross JS, et al.: Publication and reporting of clinical trial results: cross sectional analysis across academic medical centers. BMJ. 2016; 352: i637. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBenjamin A: Audit: how to do it in practice. BMJ. 2008; 336(7655): 1241–1245. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTompson AC, Petit-Zeman S, Goldacre B, et al.: Getting our house in order: an audit of the registration and publication of clinical trials supported by the National Institute for Health Research Oxford Biomedical Research Centre and the Musculoskeletal Biomedical Research Unit. BMJ Open. 2016; 6(6): e009285. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMiller JE, Korn D, Ross JS: Clinical trial registration, reporting, publication and FDAAA compliance: a cross-sectional analysis and ranking of new drugs approved by the FDA in 2012. BMJ Open. 2015; 5(5): e009758. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHuser V, Cimino JJ: Linking ClinicalTrials.gov and PubMed to track results of interventional human clinical trials. PLoS One. 2013; 8(7): e68409. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPowell-Smith A: ebmdatalab/trialstracker: First release [Data set]. Zenodo. 2016.Data Source\n\nBashir R, Dunn AG: Systematic review protocol assessing the processes for linking clinical trial registries and their published results. BMJ Open. 2016; 6(10): e013048. PubMed Abstract | Publisher Full Text\n\nGoldacre B: How to Get All Trials Reported: Audit, Better Data, and Individual Accountability. PLoS Med. 2015; 12(4): e1001821. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nGoldacre B, Gray J: OpenTrials: towards a collaborative open database of all available information on all clinical trials. Trials. 2016; 17: 164. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBastian H, Glasziou P, Chalmers I: Seventy-five trials and eleven systematic reviews a day: how will we ever keep up? PLoS Med. 2010; 7(9): e1000326. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGoldacre B: Make journals report clinical trials properly. Nature. 2016; 530(7588): 7. PubMed Abstract | Publisher Full Text\n\nGoldacre B, Drysdale H, Slade E, et al.: The COMPare Trials Project. COMPare. Reference Source"
}
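The eligibility filters and results-matching rules described in the Methods above can be sketched in a few lines of Python. This is an illustrative sketch only: the flat record fields are hypothetical, named after the clinicaltrials.gov XML tags mentioned in the text, and it is not the authors' pipeline (which is published at github.com/ebmdatalab/trialstracker). The sponsor-size filter (sponsors with more than 30 trials) is omitted for brevity.

```python
from datetime import date

# Hedged sketch of the TrialsTracker inclusion and matching rules.
# Field names mirror the clinicaltrials.gov XML tags cited in the Methods,
# but the flat-dict records here are hypothetical, for illustration only.

def eligible(trial, today=date(2016, 10, 20)):
    """Inclusion filters: interventional, completed, finished after
    Jan 1 2006 and more than 24 months ago, phase 2/3/4/n-a, and no
    application filed to delay results posting."""
    return (
        trial["study_type"] == "Interventional"
        and trial["status"] == "Completed"
        and trial["completion_date"] >= date(2006, 1, 1)
        and (today - trial["completion_date"]).days > 2 * 365
        and trial["phase"] in {"2", "3", "4", "n/a"}
        and not trial.get("firstreceived_results_disposition_date")
    )

def has_results(trial, pubmed_nct_ids):
    """A trial counts as reported if summary results were posted on the
    registry, or its NCT ID was found in PubMed (Secondary Source ID,
    title, or abstract -- represented here as a pre-built set of IDs)."""
    return bool(trial.get("firstreceived_results_date")) or (
        trial["nct_id"] in pubmed_nct_ids
    )

trials = [
    {"nct_id": "NCT00000001", "study_type": "Interventional",
     "status": "Completed", "completion_date": date(2010, 6, 1),
     "phase": "3"},
    {"nct_id": "NCT00000002", "study_type": "Observational",
     "status": "Completed", "completion_date": date(2010, 6, 1),
     "phase": "n/a"},
]
kept = [t for t in trials if eligible(t)]
unreported = [t for t in kept if not has_results(t, {"NCT99999999"})]
```

Ranking sponsors then reduces to grouping `kept` and `unreported` by sponsor and computing proportions, as the live site does.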
|
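The two-step PubMed lookup described in the Methods (Secondary Source ID field first, then title/abstract) can be expressed as a single NCBI E-utilities esearch query; `[si]` and `[tiab]` are standard PubMed field tags. The helper below only builds the query URL and is a hedged sketch, not the authors' code.

```python
from urllib.parse import urlencode

# Base URL for NCBI's E-utilities esearch endpoint.
EUTILS_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def nct_search_url(nct_id):
    """Build an esearch URL that looks for a trial registry ID in PubMed's
    Secondary Source ID field ([si]) or in the title/abstract ([tiab])."""
    term = f"{nct_id}[si] OR {nct_id}[tiab]"
    return EUTILS_ESEARCH + "?" + urlencode(
        {"db": "pubmed", "term": term, "retmode": "json"}
    )

url = nct_search_url("NCT00000001")
```

Fetching the URL and reading the JSON `esearchresult.idlist` field would yield the matching PubMed IDs, if any.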
[
{
"id": "17811",
"date": "21 Nov 2016",
"name": "Andrew P. Prayle",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nTitle and abstract The title is appropriate and discusses the content of the paper in one sentence. The abstract starts generally, drills down into the methods concisely and discusses the contribution to the literature which this manuscript and software project appropriately.\n\nArticle content Powell-Smyth and Goldacre report on a piece of work which will make a substantial contribution to the clinical trials enterprise.\n\nThey have developed an open source web application which automatically takes data from the US based ClinicalTrials.gov registry and searches for results (either summary results on ClinicalTrials.gov or an abstract on PubMed). The software then ranks study sponsors by the proportion of trials which have reported results.\n\nThis approach is novel in its approach to on-line availability of data. This means that the dataset is easily searchable through a web based application. Automated systems have been explored in the past (e.g. 
Huser et al 2013), as have manual searches (Tompson 2016), and the results of the automated system presented appear consistent with these.\n\nI have reviewed the online web based software, this is simple to use and demonstrates the ability of the approach to hold institutions which sponsor research to account, by summarising their contribution of results to the clinical trial literature.\n\nThe central contribution is an automated system for determining if a trial registered on ClinicalTrials.gov has published summary records on clinicaltrials.gov, or has an abstract indexed on PubMed. The work hinges on whether their automated system can in fact do this. The authors make a persuasive case that they are able to find summary results and abstracts where these have been published. They provide what they have said they can do in the on-line Jupyter notebook. Additionally, the open source code in the Github repository is straightforward to read, and supports their case. Finally, I downloaded the full dataset and explored it, and in the cases which I looked at their spreadsheet had correctly identified completed trials and the accompanying Pubmed abstract.\n\nTherefore, although there may be a few trials which have been misclassified, I think that the methods used appear very robust. Additionally, if trials have been misclassified, the authors give suggestions of how to adjust this through changes to the journal entries on Pubmed, or through summary results on Pubmed.\n\nIn the discussion the strengths and limitations of their automated approach are carefully elaborated upon. The key strength is that a large proportion of the clinical trials landscape is included in their study. 
The limitation is of course that automated analysis may incorrectly label some trials as unreported when in fact they are reported, but my assessment of their raw data is that this must be infrequent, as I have not been able (in an admittedly unscientific sample obtained by scrolling through the raw data, and looking at trials which I am familiar with if I see them) to identify such a case.\n\nConclusions The authors state that they present this work to aim to improve the clinical trials landscape in terms of the ‘information architecture’ of missing results. I believe that we should take this work at face value as a genuine, innovative approach which is trying to address the problem of non-reporting, by giving transparency of reporting at the study sponsor level. It is reported carefully. The data presented back up the case for a clear need for improved trial reporting.\n\nData This study is an exemplar of how to publish reproducible research. The data and code and extensive documentation are available and free to download and explore. My only suggestion is to have a second repository in case GitHub disappears.",
"responses": []
},
{
"id": "17404",
"date": "19 Dec 2016",
"name": "James Hetherington",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors published an online ranking system which illustrates how the major sponsors share their clinical trial information, in particular through reporting on completed trials. This research offers a new way to automatically identify and match trials registered on ClinicalTrials.gov with their published results in both the ClinicalTrials.gov trial registry and abstracts or metadata of publications (indexed in Pubmed). This automated process can result in a much more frequent update and provide more precise information to the public, in part by encouraging more accessible reporting.\nIn this review, we would like to focus our comments on the author’s data processing and software. The authors have provided a code repository containing their website along with some Python code related to the data analysis process. The latter comprises a clear and straightforward IPython notebook detailing all the data analysis steps, including raw data processing, missing trial identification, and validation against other studies. We found it is an intuitive way to present works of this scale, although as discussed later we would like to suggest more modularization. In general, the code is understandable and easy to read. Both unit tests and behavioural tests are included to give more confidence in its reliability. 
We were able to re-run the entire IPython notebook with only some minor modifications.\nWe do have some minor comments and suggestions regarding the coding quality and reproducibility aspects of this project.\nWe have noticed that the XML parsing and PubMed data extraction parts break easily due to variations in the source files or network problems. It would therefore be beneficial to make these two parts into functions with associated unit tests to ensure the correctness and robustness of the code.\n\nCompounding the problem, these parts also take a very long time to compute. We left the program running for several days trying to update the trial-abstract database, only to have it fail part-way through. Further incremental updating mechanisms would help greatly here, for instance adding an extra column to the database to register the last search date so that recently searched entries will not be queried again.\n\nOne hopes that the ‘live’ website is updated from time to time with more recent results. It would be nice to have details on how frequently this happens - is it an automated process?\n\nThe current data on GitHub have some small differences compared to the results presented in the paper. We can fully understand that the data in the repository should be updated, and the development is an ongoing process. However, it would have been good from an auditability point of view to make the data which were used for the paper available. For instance, the specific git commit id used for the paper could be given in the paper itself and the repository’s README.\n\nA requirements.txt is provided in the source code to facilitate installing the project’s dependencies; however, not all of the dependencies are on the list. Changes in recent versions of some of these cause the code to break. Please specify all the dependencies (even indirect ones) including the versions used in the requirements.txt file. 
We have submitted a pull request with the list we found worked.\n\nOverall, the new tool offered by the authors enables more frequent and larger-scale identification of whether trials have been reported. Their code is clear and reflects the methodology faithfully. This tool will help in the push for improving clinical trial transparency.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2629
|
https://f1000research.com/articles/5-2628/v1
|
03 Nov 16
|
{
"type": "Case Report",
"title": "Case Report: Congenital left ventricular diverticulum: a pediatric case report",
"authors": [
"Myriam Jrad",
"Meriem Affes",
"Salma Behi",
"Amine Laabidi",
"Habiba Mizouni",
"Meriem Affes",
"Salma Behi",
"Amine Laabidi",
"Habiba Mizouni"
],
"abstract": "Congenital left ventricular diverticulum is a rare cardiac abnormality consisting of a localized outpouching from the free wall of the cardiac chamber. It is usually asymptomatic. However, complications such as infective endocarditis, arrhythmias, and embolism may occur. Medical ultrasound, echocardiography, CT angiography, MRI and invasive ventriculography are diagnostic tools used to identify congenital left ventricular diverticulum. Surgical resection is the treatment of choice in symptomatic patients, whereas the management of asymptomatic patients often represents a therapeutic dilemma. We report the case of a symptomatic left ventricular diverticulum on a 3-day-old female with Cantrell’s syndrome treated surgically. Postoperative course was uneventful. The patient was discharged with a good clinical condition. The prognosis of this malformation is poor if not diagnosed in the perinatal period. A diagnosis can be suspected with echocardiography; however, a CT scan allows a complete study of the problem. The treatment is always surgical with a good postoperative prognosis.",
"keywords": [
"CT angiography",
"MRI",
"left ventriculum",
"echocardiography"
],
"content": "Introduction\n\nCongenital left ventricular diverticulum is a rare cardiac abnormality, consisting of a localized outpouching from the free wall of the cardiac chamber. Commonly, this arises from the left ventricular apex; however, non-apical diverticula may also occur1. There are two types of ventricular diverticulum: muscular or fibrotic2.\n\nVentricular diverticulum is usually associated with a thoracoabdominal wall defect, as seen in the spectrum of Cantrell’s pentalogy1,2. Cantrell’s syndrome is a very rare congenital disease, described by Cantrell, Haller, and Ravitch in 1958, associating a lower sternal defect, a supraumbilical abdominal wall defect, a deficiency of the anterior portion of the diaphragm, a deficiency in the diaphragmatic portion of the pericardium, and cardiac malformations3.\n\nThis study reports a rare case of a left ventricular diverticulum in a newborn infant with Cantrell’s syndrome.\n\n\nCase report\n\nA 3-day-old African female newborn, born after an irregularly monitored pregnancy, was referred to the Department of Pediatric Surgery of La Rabta Hospital, Tunisia, for the investigation of an umbilical mass measuring 3 cm in diameter. The baby was the first child of parents with no history of familial disease, and there was no significant antenatal history. Clinical examination showed a well-appearing infant presenting with a pulsatile mass with a palpable thrill synchronous with the cardiac contractions. An electrocardiogram showed a normal sinus rhythm (150 bpm) with right deviation of the QRS axis and signs of right ventricular hypertrophy.\n\nEchocardiography showed a normal left ventricle with preserved contractility. Marked dilatation of the right ventricle and a ventricular septal defect were seen. Both the aorta and pulmonary artery arose from the right ventricle, and the pulmonary artery was posterior to the aorta (Figure 1). 
Abnormal flow was seen at the cardiac apex.\n\nEchocardiography showing abnormal flow at the left ventricular apex.\n\nA 64-channel multidetector CT (GE LightSpeed VCT) was performed for further characterization. Sedation of the infant was not necessary. Helical thoracoabdominal acquisition was performed in a craniocaudal direction after injection of a non-ionic contrast agent. The contrast-enhanced multi-slice CT showed a thin-walled channel extending from the left ventricular apex to the anterior abdominal wall (Figure 2). This diverticulum was 6 cm long, following the abdominal midline through a defect of the anterior diaphragm and extending up to the umbilical region (Figure 3).\n\nVolume-rendered 3D CT image showing a diverticulum originating from the left ventricular free wall.\n\nMid-sagittal maximum intensity projection (MIP) thin CT image showing the diverticulum extending up to the umbilical region.\n\nThe myocardial thickness of the outpouching was 3 mm (Figure 4A). No herniating bowel was seen (Figure 4B). The examination did not show any other abnormality of the intra-abdominal organs.\n\n(A) Axial maximum intensity projection (MIP) CT scan showing the diverticulum wall consisting of 3 mm thick myocardium. (B) Axial enhanced multidetector CT shows the defect of the anterior abdominal wall without herniating bowel.\n\nThe multidetector CT scan also confirmed the dextro-transposition of the aorta and pulmonary artery (Figure 5) and the ventricular septal defect.\n\nSurgical treatment was decided upon. The patient was connected to cardiopulmonary bypass, and the diverticulum was opened and resected. The entry site was obliterated with a polytetrafluoroethylene patch. Further inspection revealed a normal-sized left ventricle and normal-sized coronary arteries, with no coronary aneurysms. 
Overlapping reconstruction of the anterior abdominal wall and diaphragm defects was also performed without using any prosthetic material.\n\nVolume-rendered 3D CT image showing the dextro-transposition of the great vessels.\n\nThe postoperative period was uneventful, and the child was discharged from the hospital on the sixth postoperative day. We proposed medical management for our patient, comprising aspirin at a dosage of 5 mg/kg/day to prevent any thromboembolic event.\n\nAt the 6-month follow-up examination, the infant was in good clinical condition, with normal cardiac function on echocardiography.\n\n\nDiscussion\n\nCongenital left ventricular diverticulum is a rare cardiac malformation. Its incidence has been reported to be approximately 0.04% in the general population and approximately 0.02% in a consecutive pediatric autopsy series2,4. Although a ventricular diverticulum may exist alone, it can also be associated with cardiac, vascular, or thoracoabdominal abnormalities4–6. In fact, cardiovascular disease is a component of Cantrell’s pentalogy in some patients3. Cantrell’s pentalogy consists of a defect in the lower sternum, a supraumbilical abdominal wall defect, a deficiency of the anterior portion of the diaphragm, a deficiency in the diaphragmatic portion of the pericardium, and a congenital heart defect2.\n\nPatients with an isolated cardiac diverticulum are usually asymptomatic; however, there are reports of associated arrhythmias, embolic events, and even death due to diverticulum rupture2. Spontaneous rupture can occur, and can be explained by an increase in pressure inside the diverticulum as a result of a difference in the phase of contraction between the left ventricle and the diverticulum7.\n\nPatients with a diverticulum sometimes present with an abnormal electrocardiogram8,9. In the case of our patient, right deviation of the QRS axis and signs of right ventricular hypertrophy were noted. 
Accurate diagnosis can be made with ultrasonography or echocardiography2–10, and prenatal diagnosis has been reported in the literature5. CT angiography, MRI and invasive ventriculography give a clearer picture of the problem1,11. In the case of our patient, the diagnosis was suspected on echocardiography, and CT angiography allowed a complete study of the pathology and confirmed the association with other cardiac, diaphragmatic and abdominal abnormalities. Surgical treatment is usually recommended when left ventricular diverticulum is associated with other cardiac or abdominal abnormalities. Perioperative management requires an experienced multidisciplinary team, owing to the complexity of the cardiac and thoracoabdominal abnormalities associated with Cantrell’s syndrome7. Recently, the field of percutaneous correction of congenital left ventricular diverticulum has witnessed considerable development, and a percutaneous transcatheter device treatment has been reported12.\n\nThe strength of our study is the completeness of the observation, with a 6-month follow-up. However, the limitation of our case is the absence of full perioperative findings.\n\nIn conclusion, congenital left ventricular diverticulum is a rare cardiac malformation. The prognosis of this malformation is poor if not diagnosed in the perinatal period. Complications such as embolism, infective endocarditis, arrhythmia and, rarely, rupture can occur. Although it may exist alone, it can also be associated with cardiac, vascular, or thoracoabdominal abnormalities (e.g., Cantrell’s syndrome). The diagnosis can be suspected with echocardiography; CT angiography allows a complete study of the problem. The treatment is always surgical, with a good postoperative prognosis.\n\n\nConsent\n\nWritten informed consent for publication of their clinical details and/or clinical images was obtained from the parent of the patient.",
"appendix": "Author contributions\n\n\n\nAll authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nShah D, Kumar CP, Shah MS, et al.: Case series: Congenital left ventricular diverticulum. Indian J Radiol Imaging. 2010; 20(3): 211–4. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRuiz-Esparza E, Roldan FJ, Vazquez-Antona C, et al.: 2D and 3D echocardiography of a left ventricular diverticulum. Echocardiography. 2009; 26(9): 1087–8. PubMed Abstract | Publisher Full Text\n\nCantrell JR, Haller JA, Ravitch MM: A syndrome of congenital defects involving the abdominal wall, sternum, diaphragm, pericardium and heart. Surg Gynecol Obstet. 1958; 107(5): 602–14. PubMed Abstract\n\nBinnetoğlu FK, Altun G, Kaya A, et al.: Congenital left ventricular diverticulum associated with congenital ileo-jejunal atresia. Pediatr Cardiol. 2012; 33(7): 1224–6. PubMed Abstract | Publisher Full Text\n\nErek E, Odemis E, Tanidir IC: Right-ventricular diverticulum and associated cyst. Pediatr Cardiol. 2013; 34(8): 2093–5. PubMed Abstract | Publisher Full Text\n\nQuandt D, Dave H, Valsangiacomo Buechel E: Heart with a trunk: form fruste of Cantrell's Syndrome. Eur Heart J. 2011; 32(1): 123. PubMed Abstract | Publisher Full Text\n\nSuehiro K, Okutani R, Ogawa S, et al.: Perioperative management of a neonate with Cantrell syndrome. J Anesth. 2009; 23(4): 572–5. PubMed Abstract | Publisher Full Text\n\nWang W, Zhu W, Wang Y, et al.: Congenital left ventricular diverticulum manifested as T-wave inversion in a child. Pediatr Cardiol. 2010; 31(6): 881–3. PubMed Abstract | Publisher Full Text\n\nGorgels AP: No value of the ECG in congenital left ventricular aneurysms and diverticula? Europace. 2009; 11(12): 1577–8. 
PubMed Abstract | Publisher Full Text\n\nPark SD, Shin SH, Kim DH, et al.: Three-dimensional contrast echocardiography in assessing left ventricular diverticulum. Echocardiography. 2012; 29(9): E230–2. PubMed Abstract | Publisher Full Text\n\nDwivedi AN, Thangiah AG, Rai M, et al.: Computed tomographic features of congenital left ventricular diverticulum. J Clin Imaging Sci. 2012; 2: 48. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJain S, Mahajan R, Rohit MK: Percutaneous transcatheter device closure of an isolated congenital LV diverticulum: first case report. Pediatr Cardiol. 2011; 32(8): 1219–22. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "17737",
"date": "18 Nov 2016",
"name": "Mazeni Alwi",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThis is an interesting case report of congenital LV diverticulum in the setting of Cantrell’s syndrome. The authors describe well the clinical findings and the imaging techniques of echocardiography, MDCT and contrast angiography which confirmed the diagnosis. Advances in non-invasive imaging have played an important role in confirming rare congenital cardiac anomalies such as this.\n\nThe patient underwent surgical resection of the diverticulum; however, there was no mention of treatment of the co-existing congenital heart disease, namely double outlet right ventricle, ventricular septal defect and transposition of the great arteries (DORV-TGA-VSD). Was this done at the same time? Given that the patient was well at the 6-month review, does this mean that the major cardiac abnormalities were also repaired? If so, how did resection of the diverticulum impact repair of the other cardiac anomalies?\n\nSecondly, in the “discussion” section (para. 3, lines 1–3), the authors state that patients with a diverticulum sometimes present with an abnormal ECG, as did their patient who had right axis deviation and right ventricular hypertrophy. This abnormality is likely attributable to the DORV-TGA-VSD rather than the LV diverticulum.",
"responses": [
{
"c_id": "2297",
"date": "21 Nov 2016",
"name": "Myriam Jrad",
"role": "Author Response",
"response": "Firstly, thank you for your interesting comments. Indeed, given the complexity of the operation and the limited anesthetic resources in our department for a patient of this age, the great-vessel abnormality was treated at the age of 1 year. Secondly, it has been described in the literature that the ECG abnormalities seen in this pathology are not specific, and this can be explained by the associated cardiac disease, as in the case of our patient."
}
]
},
{
"id": "18844",
"date": "29 Dec 2016",
"name": "Ilya Soynov",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThis case is very interesting because the combination of Cantrell’s pentalogy and transposition of the great arteries is extremely rare.\nHowever, there are several questions for the authors.\nI have doubts about the presence of transposition of the great arteries on the CT scans in the article. On the scans it seems that the pulmonary artery arises in front of the aorta (anterior). I think you need to provide another scan that clearly shows the location of the great vessels.\n\nThere are no data on LV volume before and after the surgery. Where was the VSD located, and what was its size?\n\nSurgery is preferable in patients with symptoms of heart failure or in those with complications caused by left ventricular diverticula, whereas resection in asymptomatic patients remains controversial. What was the indication for resection of the diverticulum?\n\nThere is little information about the surgery. You wrote that “Surgical treatment is usually recommended when left ventricular diverticulum is associated with other cardiac or abdominal abnormalities”, but there is no information about the surgery for the TGA. Did you perform the correction of the TGA and VSD simultaneously or not? What were the aortic cross-clamp and CPB times? Currently, there is no single generally accepted method for the resection of left ventricular diverticula or for left ventricular reconstruction. What did you do in this case?",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2628
|
https://f1000research.com/articles/5-2624/v1
|
02 Nov 16
|
{
"type": "Research Article",
"title": "Engagement of introductory biostatistics students in a novel hybrid course format",
"authors": [
"Marcel Ramos",
"Misato Gbedemah",
"Heidi Jones",
"Levi Waldron"
],
"abstract": "Background While online-only courses can be delivered without constraints of geography or synchronization of lecture times, traditional in-person lectures continue to be central to teaching and to student experiences at most universities. Hybrid learning courses fill the gap between the two, providing convenience and schedule flexibility while maintaining the traditional face-to-face experience. The CUNY Graduate School of Public Health and Health Policy and Hunter College are urban campuses that draw working students from a wide commuting area, leading us to experiment with hybrid course formats that offer a maximum degree of flexibility to students. Methods Lectures and lab sessions were held on campus in a traditional lecture style, but were streamed live over YouTube, with remote attendees able to type questions and have them answered by the instructor in real time; recordings were later made available for viewing. Results Through a post-course survey we found students to be divided in their preferences for in-person course formats (46%) versus synchronous and asynchronous online formats (35% and 19%, respectively). In the hypothetical scenario of retaking the course, the option of attending each lecture either in person or online was preferred (69% for this hybrid format, 11% for in-person only and another 11% for online only). Although long commuting times and work and family responsibilities were common, interactivity and learning preference were the most important considerations for students regardless of their decision for online or in-person attendance. Analysis of YouTube viewing data showed a majority of online participation occurring during live lectures, with an additional peak of viewing of all lectures occurring immediately before exams. Conclusions We found this hybrid format to be a viable and popular way to accommodate diverse personal circumstances and learning preferences within the framework of a traditional lecture format.",
"keywords": [
"online learning",
"format preference",
"graduate-level",
"hybrid-course format"
],
"content": "Introduction\n\nHybrid learning courses are defined as courses that deliver 30% to 79% of material via an online medium (Allen & Seaman, 2008). Hybrid courses provide convenience and flexibility through online components while also maintaining the traditional face-to-face experience (Cavanagh, 2011). When designed well, they may also accommodate a wider variety of student learning styles. A meta-analysis conducted by the U.S. Department of Education concluded that students taking hybrid or blended instruction courses had improved outcomes compared to either pure face-to-face learning or pure online learning (Means et al., 2009).\n\nThere is little research on hybrid course engagement among adult learners at the graduate level. Adult learners are thought to show greater motivation and learning success based on course metrics like final grades, assignment completion and knowledge retention over time (Coogan, 2009). Further, some researchers posit that modern technology may present challenges to adult learners that differ from those of younger generations (Coogan, 2009). Thus, the age of students could play a role in hybrid course engagement and outcomes. Moreover, hybrid course learning may increase accountability, as students are expected to practice self-learning and self-discipline (Coogan, 2009; Hu & Hui, 2012).\n\nMuch of the literature on the determinants of course format uptake tests for associations with student learning preferences, self-efficacy and engagement (Hu & Hui, 2012; Shukor et al., 2014). The literature on graduate-level hybrid courses rarely considers the influence of life circumstances, in addition to individual characteristics and preferences, on course format choices. Among adult learners, external influences could potentially be strong indicators of course format preferences. 
These influences may include travel time and home and work responsibilities.\n\nThe present study aims to investigate course format preferences among graduate-level students and the factors that may influence such preferences. We offered an Introductory Biostatistics course resembling a traditional in-person lecture and lab format, but with added options for synchronous online attendance (while class was occurring) and asynchronous online viewing (at a later time). Students were allowed to choose how to attend or view any class, and we studied their preferences through a post-course survey of habits, life situation, and learning style, and through analysis of YouTube viewing data.\n\n\nMaterials and methods\n\nThe master's-level introductory biostatistics course was offered in a 12-week semester with classes, two exams, and a final project poster presentation, and is a required course for students pursuing a master of public health (MPH). Each class involved one hour of lecture and two hours of laboratory instruction. In-person attendance was mandatory for the first class, exams, and the final project poster presentation; for all other sessions students were given the choice of attending in person, online during class (synchronous), or online after class ended (asynchronous). The synchronous offering was provided using Google’s “Hangouts on Air,” with a live screen share and audio broadcast viewable through YouTube®. The “Q&A Module” allowed viewers to type questions that appeared on the instructor’s screen during lecture. These questions were answered verbally, as were questions raised by in-person attendees, and they became a clickable index of the lecture. Recordings became available immediately afterwards on the course’s YouTube channel (https://www.youtube.com/user/ph750spring2014).\n\nThis format required that the computer used to present material in class be connected to the internet and have a working microphone. 
In the absence of a dual-screen setup, the “Q&A” module was shown side by side with a Microsoft PowerPoint® window used to show lecture slides. A lavalier microphone was used to improve sound quality, but an ordinary webcam microphone would suffice.\n\nA total of 47 introductory biostatistics students were given the opportunity to participate in a survey at the end of the semester. An additional 58 introductory epidemiology students, whose course followed a more typical hybrid format of alternating in-person and online sessions, were also surveyed for comparison. This study was reviewed by the Hunter College (City University of New York) Institutional Review Board (590445-1) and determined exempt according to federal regulations, under 45 CFR 46.101(b). Informed electronic consent was obtained as part of an anonymous online questionnaire. Study participants were given the option and incentive of entering a draw for a $50 Amazon gift certificate upon completion of the questionnaire.\n\nThe end-of-course survey included demographic characteristics such as age, gender, race/ethnicity and travel time to campus. Participants were asked to choose which format was their preferred way to attend lectures. They were subsequently asked an open-ended question, “Why did you prefer this format?” Responses that pertained directly to the question asked were analyzed and assigned primary and secondary topics based on recurring and prevalent themes within the total set of responses. The keyword topics used to categorize responses were as follows: 1) interactivity, 2) convenience, 3) avoiding commute and 4) learning preference. 
The complete survey and summarized responses are provided in the Supplementary material and Data availability sections, respectively.\n\nData were collected via the Google Forms® encrypted website and are noted under the Data availability section.\n\nViewership, among other metrics, is automatically documented by the YouTube® website and can be accessed using the analytics dashboard. The dashboard can be found under “analytics” in the “creator studio” section of the Hunter College PH750 Spring 2014 YouTube® account. Data within the time frame of February 1, 2014 to March 4, 2014 were extracted from this dashboard and saved as a comma-separated values file. See the Data availability section.\n\nStatistical power was estimated using the pwr.2p2n.test function from the R pwr package. For a sample size of N=26 and a two-proportion test, we estimated 90% power to detect an effect size of 1.3 for balanced groups and 1.6 for groups with a 4:1 prevalence ratio.\n\nThe survey data were downloaded from the Google Forms® encrypted website and are noted under the Data availability section. Bivariable relationships were only investigated among biostatistics students due to a low response rate from the introductory epidemiology course (19%, or 11/58, of these students responded to the survey). Two respondents who were registered for both courses were considered biostatistics students.\n\nAssociations between continuous variables, like age and travel time to class, and course format preference were analyzed by non-parametric Kruskal-Wallis analysis of variance. Associations between categorical variables and course format preference were assessed by Fisher’s Exact Test. YouTube longitudinal viewing data were visualized as stacked line plots.\n\nData cleaning and all analyses and plotting were performed using R version 3.1.1 (R Core Team, 2014). 
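The reported detectable effect sizes can be checked with a normal-approximation sketch of the two-proportion power calculation (a Python illustration of the idea behind R's pwr.2p2n.test, not the study's own code; h is Cohen's effect size for two proportions):

```python
from math import sqrt
from statistics import NormalDist

def detectable_h(n1, n2, power=0.90, alpha=0.05):
    """Smallest Cohen's h detectable at the given power in a
    two-sided two-proportion test (normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_b = NormalDist().inv_cdf(power)          # quantile for target power
    return (z_a + z_b) / sqrt(n1 * n2 / (n1 + n2))

h_balanced = detectable_h(13, 13)        # ~1.3 for balanced groups (13 + 13 = 26)
h_unbalanced = detectable_h(20.8, 5.2)   # ~1.6 for a 4:1 prevalence ratio
```

With the total N fixed at 26, the unbalanced 4:1 split raises the smallest detectable effect, matching the 1.3 to 1.6 range reported above.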
Code and data to reproduce all results in this manuscript are provided at https://github.com/LiNk-NY/H-HybridCE.\n\n\nResults\n\nA total of 26 of the 47 introductory biostatistics students (55%) participated in the anonymous survey. Demographic characteristics of the biostatistics students are shown in Table 1. The majority of these respondents were female (72%), with an average age of 33 years (SD = 8). The median age of the sample was 30 years, with a range of 22 to 55 years. Of those who answered the survey, 46% of students identified as Non-Hispanic white, 27% as Non-Hispanic black and 15% as Hispanic. The average travel time to the Hunter College Silberman School of Social Work building was 97 minutes (SD = 63 minutes). The median commute time was 90 minutes, with a range from 2 to 240 minutes.\n\na Multiple preference reasons were possible\n\nTable 1 also shows self-reported course format preferences and preference reasons. Forty-six percent of respondents preferred the in-person format; thirty-five percent preferred the synchronous course format, and nineteen percent preferred the asynchronous online format. Students were asked, “If you were to take this class again next semester, which format would you choose?” The majority of students (69%) would prefer having the option, for each class, of attending either in person or online, as was done in this class. “Learning preference” was the most endorsed reason, with 10 mentions (38% of respondents), followed by “interactivity” with 9 mentions (35%), “convenience” with 7 (27%), and “avoiding commute” with 6 (23%).\n\nA qualitative analysis of the long-response items in the survey extracted primary and secondary reasons for course preferences. Primary and secondary reasons were determined according to order of appearance. Secondary reasons were only documented for those with more than one reason in the same long-response item. 
These preferences were categorized into four salient topics: 1) interactivity, 2) convenience, 3) avoiding commute, and 4) learning preference. Interactivity was defined as the advantage of being able to ask questions and interact with either the professor or classmates during in-person or online lectures. For example, responses similar to “I prefer class in-person because it allows for interaction with classmates and for me to ask questions in real time” were coded as “interactivity”. The “convenience” topic was assigned to any comments in which students indicated learning with less external effort or difficulty. Comments similar to “I liked to be able to pause the lectures and rewind/repeat statements to better understand difficult topics” were categorized as “convenience” reasons. The “avoiding commute” topic was frequent enough to warrant its own category. Responses that indicated saving time, for example, “I did not have to commute to Harlem,” were coded as “avoiding commute”. “Learning preference” was indicated when students’ comments described cognitive advantages of their preferred format.\n\nTable 2 shows relationships between reported course format preference and a number of demographic characteristics, along with primary preference reasons. Only the given qualitative reasons for preference were found to be significantly related to course format preference (p <.001, Fisher’s Exact Test). Choice of the traditional in-person course format was strongly associated with a preference for “interactivity” (58% of those who preferred the in-person format) and with ingrained “learning preference” (50% of those who preferred the in-person format). Conversely, among those who favored the synchronous online course format, “avoiding the commute” was the most frequent reason for their choice. Those who preferred the asynchronous online course format most commonly reported “convenience” as the reason for their choice, followed by “learning preference”.\n\nNote. 
STEM = Science, Technology, Engineering, and Mathematics. Fisher's exact test p-values are shown for all tests unless otherwise indicated. a. Kruskal-Wallis test p-values. b. Median Fisher's exact test p-values are shown. c. Percentages shown by preference group, where multiple preference reasons were possible.\n\nFigure 1 shows both primary and secondary reasons provided in the qualitative survey responses, by course format preference. Those who preferred the in-person course format endorsed “interactivity” and “learning preference” more often than those who preferred either the synchronous or the asynchronous formats. Those who preferred the asynchronous format tended to mention “convenience” and “learning preference” as reasons for their choice. Among those who preferred the synchronous format, avoiding the commute was the dominant reason for format preference.\n\nCombined primary and secondary self-reported reasons for course format preference were extracted from a qualitative topic analysis of an open-ended survey item. Among students who preferred the in-person course format, interactivity was mentioned 7 times in response to the open-ended question, “Why did you prefer this format?” Those who preferred the asynchronous online format mentioned its convenience 5 times in response to the open-ended question. Students who preferred the synchronous online format were likely to mention avoiding the commute as a reason for their preference.\n\nAssociations between life circumstances and course format preference were nearly significant, with students responsible for the care of children and students working full-time more likely than others to prefer asynchronous viewing (p=.09 and p=.08 respectively, Fisher’s Exact Test). 
Age, travel time to campus, gender, race-ethnicity, having a STEM undergraduate degree (Science, Technology, Engineering, Mathematics), having previously taken a hybrid-format course, and self-reported confidence level with statistics were not associated with course format preference.\n\nYouTube view frequency patterns of lecture videos were recorded and visualized as stacked line plots. Figure 2 shows the number of views per online video lecture up until the first exam, where the reference line indicates the date of the first exam. The viewing patterns show a high number of views on the date the lecture video is uploaded and an increased number of views for all lecture videos right before the occurrence of the exam. The small increase in viewing between two holidays, Lincoln’s birthday (February 12) and President’s day (February 17) relative to other non-class days shows that some students used those days off to watch lecture videos. This viewing pattern was consistent during the rest of the course and before the second exam.\n\nShaded peaks reflect increased live-online lecture viewership. Stacked areas show cumulative views per lecture video. An increase in online views for all lectures is observed just before the first exam on March 3rd (indicated by a dashed reference line). Smaller stacked shaded areas reflect asynchronous online viewership of lecture videos with a marked increase in views just before the exam. This pattern of most online attendance occurring during the lecture, followed by low but steady rates of asynchronous viewing and a viewing peak immediately before exam was observed among the remaining video lectures and before the second exam.\n\n\nDiscussion and conclusions\n\nWe propose a flexible hybrid course format that allows students to choose between traditional on-campus format, interactive online attendance, or viewing lectures at a later time. 
This course was offered to mature graduate students (average age over 30), mostly working, spread across a large area within and around New York City with an average commuting time to and from campus of over 90 minutes. This format enabled some students to save long commutes, to attend class while traveling, and to re-watch lectures to solidify difficult concepts, while recognizing that a substantial proportion still prefer to attend class in person. Students in this study were evenly split between preference for in-person or online course format, an observation often qualitatively noted by our faculty, and corresponding closely to the actual proportion of students who attended each class on campus. Among students who preferred to attend online, most preferred to attend while class was occurring, largely for reasons of interactivity and ingrained learning preferences. Having to care for children, employment status, and travel time to campus may also contribute to format preference, but these associations were weaker and not statistically significant likely due to a small sample size and thus, insufficient statistical power (see Statistical Power section of Methods).\n\nOur survey indicated that learning preferences were a stronger indicator of course format preference than commute time, job or family status. As an example, one student travelled 3.5 hours round trip to every class because it was “easier to pay attention and take notes without distractions such as email, Facebook etc”. In an environment of increasing commitment by colleges to online education and to expanding reach and enrollment, it is important to recognize the importance of the on-campus, in-person learning experience to many students. This flexible hybrid course format recognizes the diversity of personal circumstances and learning preferences and allows effective remote learning without sacrificing the on-campus experience. 
It requires only modest adaptation of a traditional in-person lecture format, inexpensive technology, and software that is free of cost. It is an example of how learning technology can be used to expand available learning options and make higher education possible for a greater number of students, without restricting the choice of students who still prefer to sit face-to-face with the teacher and other students.\n\n\nData and software availability\n\nData and code to reproduce the results of this manuscript: https://github.com/LiNk-NY/H-HybridCE.\n\nAvailable data files are:\n\n• Demographics: YouTubeDemographics.csv\n\n• YouTube Data: Figure 2 YouTubeData.csv\n\n• Survey Data: PH750-2SurveyData.xlsx\n\nArchived data and code as at the time of publication: http://doi.org/10.5281/zenodo.159623 (Ramos, 2016)\n\n\nConsent\n\nWritten informed consent for publication of the participant’s responses was obtained from the participant.",
"appendix": "Author contributions\n\n\n\nLW, HJ, MR, and MG conceived the study. LW and HJ designed the survey. LW and HJ carried out the data collection. MR and LW prepared the first draft of the manuscript. HJ and MG contributed to the study design. MR prepared the manuscript and analyses. All authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgments\n\nThank you to Professor Mary Schooling for the original idea of the study.\n\n\nSupplementary material\n\nHybrid course preferences survey.\n\nClick here to access the data.\n\n\nReferences\n\nAllen IE, Seaman J: Staying the Course: Online Education in the United States, 2008. ERIC, 2008. Reference Source\n\nCavanagh T: The Blended Learning Toolkit: Improving Student Performance and Retention. (EDUCAUSE Quarterly) | EDUCAUSE.edu. EduCause Review Online. 2011. Reference Source\n\nCoogan TA: Exploring the hybrid course design for adult learners at the graduate level. J Online Learn Teach. 2009; 5(2): 316–324. Reference Source\n\nHu PJH, Hui W: Examining the role of learning engagement in technology-mediated learning and its effects on learning effectiveness and satisfaction. Decis Support Syst. 2012; 53(4): 782–792. Publisher Full Text\n\nMeans B, Toyama Y, Murphy R, et al.: Evaluation of Evidence-Based Practices in Online Learning: A Meta-Analysis and Review of Online Learning Studies. NIH Guide for Grants and Contracts / U.S. Department of Health, Education, and Welfare. 2009. Reference Source\n\nR Core Team: R: A Language and Environment for Statistical Computing. Vienna, Austria, 2014.\n\nRamos M: LiNk-NY/H-HybridCE: Publication. Zenodo. 2016. 
Data Source\n\nShukor NA, Tasir Z, Van der Meijden H, et al.: A Predictive Model to Evaluate Students’ Cognitive Engagement in Online Learning. Procedia Soc Behav Sci. 2014; 116: 4844–4853. Publisher Full Text\n\nWaldron L: Hunter College PH750 Spring 2014 YouTube channel. YouTube. YouTube, LLC. San Bruno, CA; 2014. Reference Source"
}
|
[
{
"id": "19362",
"date": "05 Apr 2017",
"name": "Sara Brownell",
"expertise": [],
"suggestion": "Not Approved",
"report": "Not Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nAs more courses become hybrid courses with an in-person and out-of-class component, this becomes an important area to explore. While the data presented in this paper may be very useful to instructors at this particular institution, it is unclear how the aims of the research paper fill a gap in the science education literature. Further, the analysis of two closed ended response questions and one open ended question does not provide enough data to fully answer the present study aims. I would strongly encourage the researchers develop more complex research questions that are novel to the education community and use more robust qualitative methodologies with a larger sample size. I have included specific comments below.\n\nAbstract\nPlease see the line under the Results section of the abstract that begins “in the hypothetical scenario of retaking the course.” The percentages in parenthesis do not support the claim in this sentence. Please clarify.\n\nIntroduction\nThe authors mention that hybrid courses may accommodate a wider variety of learning styles. I encourage the authors to reconsider their language and further consider the theory behind this statement. There is very little evidence to support that students have learning styles and the theory of learning styles has been deemed as a myth by many in the higher education community. Please provide the grade levels included in the Means et al., 2009 meta-analysis. 
Please define “adult learners” and reconsider the claim that technology may present challenges to adult learners as this may be specific to a particular age range of students. Someone who is 18 years old in 2017 was likely exposed to technology quite regularly during their education, compared with someone who is 50 years old. It is unclear from the introduction why these research aims are novel and broadly relevant outside of this particular institution.\n\nMaterials and methods Course format\nPlease explain the difference between lecture and laboratory instruction. Please explain what a clickable index of lecture means.\n\nCourse survey\nThe aims of this study are exploratory and do not require a research design which includes a comparison group of students. Thus, it seems irrelevant to mention the epidemiology students. However, if the researchers had a reason for including these data from the epidemiology students, then the readers need much more information about these students in order to interpret the data. Please provide more information about how the four categories were developed. Were qualitative research methodologies used? The references below may be useful:\nKrippendorff, K. (2004)1 Strauss, A., & Corbin, J. M. (1997)2\n\nWhat percentage of students answered each closed ended question? What percentage of students clearly answered the open-ended question? Of these responses, how many were able to be coded into at least one of the four categories?\n\nData analysis\nIt is unclear where, if at all, the data from the epidemiology students appear in the paper. Further, if the epidemiology class was used as a comparison group, students who are enrolled in both the biostatistics course and the introductory epidemiology course should be removed from the study. 
However, it appears that this group of students is not actually being used as a comparison group.\n\nResults Demographics\nPlease explain the difference between the terms travel time and commute time.\n\nCourse format preferences\nTo properly analyze what format students preferred, the readers need information about what type of format students experienced. For example, a student who experienced in-person, synchronous, and asynchronous classes has the ability to evaluate the three formats against each other. However, a student who has attended all in-person classes can report what they experienced, but not what they prefer because they have not experienced other modes. Describing what mode students engaged with needs to be separated from measuring what mode a student prefers. The “course format preference” question measures whether students prefer in-person, synchronous, or asynchronous courses. However, the “re-take course format preference” question is measuring whether students want a mandatory in-person class, a mandatory online class or whether they prefer to choose. These questions are measuring very different ideas and this needs to be better explained to the reader. Further, it is unclear which of these questions the open-ended response question is asking students about. Regardless of which question the open-ended question is referring to, open-ended student responses need to be organized by how students responded to the closed ended question. For example, if the open ended question is asking about student course format preference, all of the open-ended responses from students who identified that they prefer to take a course in person need to be analyzed together. Analyzing all student responses without taking into account their response to the closed ended question could cause the researchers to miss themes that were specific to a particular preference. 
For example, I would imagine that students who cite “interactivity” as a reason for attending in person classes may be referring to a different phenomenon than students who cite “interactivity” as a reason for attending synchronous courses. These nuances are important and need to be further explored and explained. The sample size is not large enough for this type of analysis. More data needs to be collected. Are the percentages of preferred reasons endorsed that are listed in Table 1 primary, secondary, or both? How many students reported secondary reasons? Did some students list three reasons? The authors should mention that because this was an open ended question, the percentages are not representative of all students who would possibly agree with a particular reason. I would suggest creating a table with a row per preference category (e.g. interactivity, convenience) and a column which includes a description of the category and a column for a representative student quote that was coded as the particular category.\n\nSee Table 2 for an example3.\n\nAre students really explaining “cognitive advantages” of in-person vs online delivery of material? Please further explain or consider rewording and provide student example quotes.\n\nSynchronous/asynchronous viewing patterns of the lecture\nIt is unclear from the stated purpose of this paper why the authors report out the viewing pattern of lectures. These data seem unconnected to the rest of the paper.\n\nDiscussion and conclusions\nIt is unclear how the results of this study fill a gap in the education literature.",
"responses": []
},
{
"id": "23231",
"date": "02 Oct 2017",
"name": "Nurbiha A. Shukor",
"expertise": [
"Reviewer Expertise Educational technology"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors described the study in a proper manner but a few questions such as how did they select the variables to represent ‘learning preference’ can be improved. For example, there should be proper reference on the definition of learning preference and how learning preference is different than learning style?\nThere should also be literature support on how did the authors select the variables to represent demographic information (such as, why ‘age’ matters?). The reliability and validity of the administered survey should be included in the report.\nOverall, considering the authors might not came from educational background, this article can be potentially improved.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nPartly\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
},
{
"id": "23230",
"date": "09 Oct 2017",
"name": "John McGready",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis article is an interesting descriptive study that details student characteristics and learning preferences in three self-selected versions of the same introductory biostatistics class: live, in class sections, synchronous online viewings of the live-streaming live class, or asynchronous viewings of recorded live class lectures. This reviewer appreciates the efforts the instructors made to accommodate anticipated differences in students scheduling needs to potentially offer the class to a wider pool of enrollees.\n\nWith regard to the study itself, however, it is purely descriptive in nature. It was not designed to test any formal hypotheses regarding differences between the three sections, despite the power computations, and the reference to a comparison group of students in a hybrid-style epidemiology course. (Curiously, this comparison group was never mentioned again after the \"Course Survey\" section.) The authors should emphasize the descriptive nature of this work up-front (The first sentence of the final paragraph of the introductions section does this well : perhaps just removing the bit about the epidemiology course comparison group will help emphasize this)\nFrom a scientific standpoint, the response rate of 55% is problematic, although the norm for survey studies. 
If there is other, non-survey result information on both the responders and non-responders, it would be interesting to see even a basic comparison of the two groups.\n\nThe biggest area of confusion for this reviewer is the distinction between \"Course Format Preference\" and \"Retake Course Format Preference\" as these were assessed at the same time points. Clearly the response distributions of both items differ, but it is not clear what the implications are. Is it because, for example, some persons prefer the in person option ideally, but for practical reasons (commuting, work schedules, etc.) it would be more convenient to take it in a hybrid format if offered again? The distribution of reasons for preference given is ostensibly for the \"Course Format Preference\" and not the \"Retake Course Format Preference\", and as such gives little insight as to why preferences would change. As such, though the samples are small, it would be interesting to see the conditional distributions for both items, separately for students enrolled in each of the three course versions. Additionally, from a qualitative standpoint, knowing why people would choose a different format from their preference when (hypothetically) retaking the class would be informative. As it stands, it is not possible to conclude that students prefer the hybrid approach despite the majority percentage that would retake the course in this format. If the authors can provide some clarity about the distinction between \"Course Format Preference\" and \"Retake Course Format Preference\", and which is driving their understanding of student preferences, this would be helpful.\nIn closing, the study provides interesting descriptive data, and some insights on how to (relatively) easily allow for a hybridization of a course that is originally an on-campus-only offering. This is not to say that it's not a lot of work to pull this off, but the resources (YouTube, etc.) are relatively accessible. 
The study, however, does not add any scientific insights regarding hybrid courses as compared to traditional on-campus offerings or completely online offerings.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nPartly\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2624
|
https://f1000research.com/articles/5-1492/v1
|
24 Jun 16
|
{
"type": "Research Article",
"title": "Bioconductor workflow for microbiome data analysis: from raw reads to community analyses",
"authors": [
"Ben J. Callahan",
"Kris Sankaran",
"Julia A. Fukuyama",
"Paul J. McMurdie",
"Susan P. Holmes",
"Ben J. Callahan",
"Kris Sankaran",
"Julia A. Fukuyama",
"Paul J. McMurdie"
],
"abstract": "High-throughput sequencing of PCR-amplified taxonomic markers (like the 16S rRNA gene) has enabled a new level of analysis of complex bacterial communities known as microbiomes. Many tools exist to quantify and compare abundance levels or microbial composition of communities in different conditions. The sequencing reads have to be denoised and assigned to the closest taxa from a reference database. Common approaches use a notion of 97% similarity and normalize the data by subsampling to equalize library sizes. In this paper, we show that statistical models allow more accurate abundance estimates. By providing a complete workflow in R, we enable the user to do sophisticated downstream statistical analyses, including both parameteric and nonparametric methods. We provide examples of using the R packages dada2, phyloseq, DESeq2, ggplot2 and vegan to filter, visualize and test microbiome data. We also provide examples of supervised analyses using random forests, partial least squares and linear models as well as nonparametric testing using community networks and the ggnetwork package.",
"keywords": [
"microbiome",
"taxonomy",
"community analysis"
],
"content": "Introduction\n\nThe microbiome is formed from the ecological communities of microorganisms that dominate the living world. Bacteria can now be identified through the use of next generation sequencing applied at several levels. Shotgun sequencing of all bacteria in a sample delivers knowledge of all the genes present. Here we will only be interested in the identification and quantification of individual taxa (or species) through a ‘fingerprint gene’ called 16s rRNA which is present in all bacteria. This gene presents several variable regions which can be used to identify the different taxa.\n\nPrevious standard workflows depended on clustering all 16s rRNA sequences (generated by next generation amplicon sequencing) that occur within a 97% radius of similarity and then assigning these to ‘Operational Taxonomic Units’ (OTUs) from reference trees1,2. These approaches do not make use of all the data, in particular sequence quality scores and statistical information available on the reads were not incorporated into the assignments. In contrast, the de novo read counts used here will be constructed through the incorporation of both the quality scores and sequence frequencies in a probabilistic noise model for nucleotide transitions. For more details on the algorithmic implementation of this step see 3.\n\nAfter filtering the sequences and removing the chimerae, the data are compared to a standard database of bacteria and labeled. In this workflow, we have used the labeled sequences to build a de novo phylogenetic with the phangorn package.\n\nThe key step in the sequence analysis is the manner in which reads are denoised and assembled into groups we have chosen to call RSVs (Ribosomal Sequence Variants) instead of the traditional OTUs.\n\nThis article describes a computational workflow for performing denoising, filtering, data transformations, visualization, supervised learning analyses, community network tests, hierarchical testing and linear models. 
We provide all the code and give several examples of different types of analyses and use cases. There are often many different objectives in experiments involving microbiome data and we will only give a flavor for what could be possible once the data has been imported into R. In addition, the code can be easily adapted to accommodate batch effects, covariates and multiple experimental factors.\n\nThe workflow is based on software packages from the open-source Bioconductor project4. We describe a complete project pipeline, from the denoising and identification of reads input as raw fastq sequence files to the comparative analysis of samples based on microbial abundances.\n\n\nMethods\n\nThis section demonstrates the “full stack” of amplicon bioinformatics: construction of the sample-by-sequence feature table from the raw reads, assignment of taxonomy and creation of the phylogenetic tree relating the sample sequences.\n\nFirst we load the necessary packages.\n\n\n\n\n\n\n\n\n\nThe data we will process here are highly-overlapping Illumina Miseq 2×250 amplicon sequences from the V4 region of the 16S gene5. These 360 fecal samples were collected from 12 mice longitudinally over the first year of life, to investigate the development and stabilization of the murine microbiome6. These data can be downloaded from the following location: http://www.mothur.org/MiSeqDevelopmentData/StabilityNoMetaG.tar.\n\n\n\nWe begin by filtering out low-quality sequencing reads and trimming the reads to a consistent length. While generally recommended filtering and trimming parameters serve as a starting point, no two datasets are identical and therefore it is always worth inspecting the quality of the data before proceeding.\n\n\n\nMost Illumina sequencing data show a trend of decreasing average quality towards the end of sequencing reads. 
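The package-loading and quality-inspection steps just described can be sketched as follows; the directory name and file-name patterns are illustrative placeholders, not the article's exact code.

```r
library("dada2")
library("ggplot2")

# Illustrative layout: demultiplexed paired-end fastq files in one directory
miseq_path <- "StabilityNoMetaG"  # unpacked from the mothur tarball above
fnFs <- sort(list.files(miseq_path, pattern = "_R1.fastq", full.names = TRUE))
fnRs <- sort(list.files(miseq_path, pattern = "_R2.fastq", full.names = TRUE))

# Inspect per-cycle quality for a couple of forward and reverse files (cf. Figure 1)
plotQualityProfile(fnFs[1:2])
plotQualityProfile(fnRs[1:2])
```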
Figure 1 demonstrates that the forward reads maintain high quality throughout, while the quality of the reverse reads drops significantly at about position 160. Therefore, we choose to truncate the forward reads at position 245, and the reverse reads at position 160. We also choose to trim the first 10 nucleotides of each read based on empirical observations across many Illumina datasets that these base positions are particularly likely to contain pathological errors.\n\nWe combine these trimming parameters with standard filtering parameters, the most important being the enforcement of a maximum of two expected errors per read7. Trimming and filtering are performed on paired reads jointly – both reads must pass the filter for the pair to pass.\n\n\n\nAfter filtering, typical amplicon bioinformatics workflows cluster sequencing reads into OTUs: groups of sequencing reads that differ by less than a fixed dissimilarity threshold. Here we instead use the high-resolution DADA2 method to infer sequence variants without any fixed threshold, thereby resolving variants that differ by as little as one nucleotide3.\n\nThe sequence data are imported into R from demultiplexed fastq files (i.e. one fastq for each sample) and simultaneously dereplicated to remove redundancy. We name the resulting ‘derep-class’ objects by their sample.\n\n\n\nThe DADA2 method relies on a parameterized model of substitution errors in order to distinguish sequencing errors from real biological variation. 
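A sketch of the trimming, filtering and dereplication steps just described, using dada2's filterAndTrim and derepFastq; the function names reflect recent dada2 releases, and objects such as fnFs/fnRs are illustrative placeholders for the raw fastq paths:

```r
# fnFs/fnRs are the raw paired fastq paths (illustrative names)
filtFs <- file.path(miseq_path, "filtered", basename(fnFs))
filtRs <- file.path(miseq_path, "filtered", basename(fnRs))

# Truncate forward reads at 245 and reverse reads at 160, trim the first 10
# bases, and enforce at most two expected errors per read; both members of a
# pair must pass for the pair to be kept.
out <- filterAndTrim(fnFs, filtFs, fnRs, filtRs,
                     truncLen = c(245, 160), trimLeft = 10, maxEE = 2,
                     truncQ = 2, rm.phix = TRUE,
                     compress = TRUE, multithread = TRUE)

# Dereplicate and name the resulting 'derep-class' objects by sample
derepFs <- derepFastq(filtFs)
derepRs <- derepFastq(filtRs)
sample_names <- sapply(strsplit(basename(filtFs), "_"), `[`, 1)
names(derepFs) <- sample_names
names(derepRs) <- sample_names
```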
Because error rates can – and often do – vary substantially between sequencing runs and PCR protocols, the model parameters can be discovered from the data itself using a form of unsupervised learning in which sample inference is alternated with parameter estimation until both are jointly consistent.\n\nParameter learning is computationally intensive, as it requires multiple iterations of the sequence inference algorithm, and therefore it is often useful to estimate the error rates from a (sufficiently large) subset of the data.\n\n\n\nIn order to verify that the error rates have been reasonably well-estimated, we inspect the fit between the observed error rates (black points) and the fitted error rates (black lines).\n\n\n\n\n\nThe DADA2 sequence inference method can run in two different modes: independent inference by sample (pool=FALSE), and pooled inference from the set of sequencing reads combined from all samples (pool=TRUE). Independent inference has two major advantages: computation time is linear in the number of samples, and memory requirements are flat with the number of samples. This allows scaling out to datasets of almost unlimited size. Pooled inference is more computationally taxing, and can become intractable for datasets of tens of millions of reads. However, pooling improves the detection of rare variants that were seen just once or twice in an individual sample but many times across all samples. As this dataset is not particularly large, we perform pooled inference.\n\n\n\nSequence inference removed nearly all substitution and indel errors from the data. We now merge together the inferred forward and reverse sequences, while removing paired sequences that do not perfectly overlap as a final control against residual errors.\n\n\n\nThe DADA2 method produces a sequence table that is a higher-resolution analogue to the common OTU table. 
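The error-rate learning, pooled inference and read-merging steps described here might be written as below; errF, derepFs and related names are illustrative placeholders for objects produced in the preceding steps.

```r
# Learn the error model from the filtered reads; plotErrors overlays the
# observed error rates (points) on the fitted rates (lines)
errF <- learnErrors(filtFs, multithread = TRUE)
errR <- learnErrors(filtRs, multithread = TRUE)
plotErrors(errF, nominalQ = TRUE)

# Pooled sample inference, then merge the forward and reverse reads,
# discarding pairs that do not overlap perfectly
dadaFs <- dada(derepFs, err = errF, pool = TRUE, multithread = TRUE)
dadaRs <- dada(derepRs, err = errR, pool = TRUE, multithread = TRUE)
mergers <- mergePairs(dadaFs, derepFs, dadaRs, derepRs)
```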
This is a sample by sequence feature table whose entries are the number of times each sequence was observed in each sample.\n\n\n\nNotably, chimeras have not yet been removed. The error model in the sequence inference algorithm does not include a chimera component, and therefore we expect this sequence table to include many chimeric sequences. We now remove chimeric sequences by comparing each inferred sequence to the other sequences in the table, and removing those that can be reproduced by stitching together two more abundant sequences.\n\n\n\nTypically a substantial fraction of inferred sequence variants, but only a small fraction of all reads, are found to be chimeric. That is what is observed here: 1502 of 1892 sequence variants were chimeric, but these only represented 10% of all reads.\n\nOne of the benefits of using well-classified marker loci like the 16S rRNA gene is the ability to taxonomically classify the sequenced variants. The dada2 package implements the naive Bayesian classifier method for this purpose8. This classifier compares sequence variants to a training set of classified sequences. Here we use the RDP v14 training set9.\n\n\n\nTraining set fasta files formatted for use with the dada2 package are available for download at https://purl.stanford.edu/wh250nn9648.\n\nPhylogenetic relatedness is commonly used to inform downstream analyses, especially the calculation of phylogeny-aware distances between microbial communities. The DADA2 sequence inference method is reference-free, so we must construct the phylogenetic tree relating the inferred sequence variants de novo. We begin by performing a multiple-alignment of the inferred sequences.\n\n\n\nThe phangorn package is then used to construct a phylogenetic tree. 
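Before turning to the tree, the tabling, chimera-removal and classification steps above can be sketched as follows; the training-set filename is a placeholder for the RDP v14 file noted in the text.

```r
# Sample-by-sequence feature table from the merged reads
seqtab_all <- makeSequenceTable(mergers)

# Remove sequences that can be reconstructed by stitching together two
# more abundant sequences
seqtab <- removeBimeraDenovo(seqtab_all)

# Naive Bayesian classification; the filename is a placeholder for the
# RDP v14 training set downloadable from the Stanford PURL given above
taxtab <- assignTaxonomy(seqtab, refFasta = "rdp_train_set_14.fa.gz")
```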
Here we first construct a neighbor-joining tree, and then fit a GTR+G+I maximum likelihood tree using the neighbor-joining tree as a starting point.\n\n\n\nThis completes the amplicon bioinformatics portion of the workflow.\n\nThe phyloseq package organizes and synthesizes the different data types from a typical amplicon sequencing experiment into a single data object that can be easily manipulated. The last bit of information needed is the sample data contained in a .csv file.\n\n\n\nThe full suite of data for this study – the sample-by-sequence feature table, the sample metadata, sequence taxonomies, and the phylogenetic tree – can now be combined into a single object.\n\n\n\nphyloseq10 is an R package that allows users to import, store, analyze, and graphically display complex phylogenetic sequencing data that has already been clustered into Operational Taxonomic Units (OTUs) or appropriately denoised and collected under Ribosomal Sequence Variants (RSVs). The package is most useful when there is also associated multitype sample data, a phylogeny, and/or taxonomic assignment of each taxa. phyloseq leverages and builds upon many of the tools available in R for ecology and phylogenetic analysis (vegan11, ade412, ape13), while also using advanced and flexible layered graphic systems (ggplot214) to easily produce publication-quality graphics of complex phylogenetic data. The phyloseq package uses a specialized system of S4 data classes to store all related phylogenetic sequencing data as a single, self-consistent, self-describing experiment-level object, making it easier to share data and reproduce analyses. In general, phyloseq seeks to facilitate the use of R for efficient interactive and reproducible analysis of amplicon count data jointly with important sample covariates.\n\nThis article shows a useful workflow, but many more analyses are available in phyloseq, and R in general, than can fit in a single example. 
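The alignment, tree construction and phyloseq assembly described in this section can be sketched as follows, assuming the DECIPHER package for the multiple alignment; the sample-metadata csv path is a placeholder.

```r
library("DECIPHER")
library("phangorn")

# Multiple alignment of the inferred variants, then a neighbor-joining tree
# used as the starting point for a GTR+G+I maximum-likelihood fit
seqs <- getSequences(seqtab)
names(seqs) <- seqs
alignment <- AlignSeqs(DNAStringSet(seqs), anchor = NA)

phang_align <- phyDat(as(alignment, "matrix"), type = "DNA")
treeNJ <- NJ(dist.ml(phang_align))
fit <- pml(treeNJ, data = phang_align)
fitGTR <- optim.pml(update(fit, k = 4, inv = 0.2), model = "GTR",
                    optInv = TRUE, optGamma = TRUE,
                    rearrangement = "stochastic",
                    control = pml.control(trace = 0))

# Combine the feature table, sample data, taxonomy and tree in one object
samdf <- read.csv("sample_metadata.csv", row.names = 1)  # placeholder path
ps <- phyloseq(otu_table(seqtab, taxa_are_rows = FALSE),
               sample_data(samdf),
               tax_table(taxtab),
               phy_tree(fitGTR$tree))
```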
The phyloseq home page is a good place to begin browsing additional phyloseq documentation, as are the three vignettes included within the package, and linked directly at the phyloseq release page on Bioconductor.\n\nMany use cases require importing and combining different data into a phyloseq class object; this can be done using the import_biom function to read recent QIIME format files, while older files can still be imported with import_qiime. More complete details can be found on the phyloseq FAQ page.\n\nIn the previous section the results of dada2 sequence processing were organized into a phyloseq object. This object was also saved in R-native serialized RDS format. We will reload this here for completeness.\n\n\n\nIt can be beneficial to start the data exploration process interactively; this often saves time in detecting outliers and specific features of the data. Shiny-phyloseq15 is an interactive web application that provides a graphical user interface to the phyloseq package. The object just loaded into the R session in this workflow is suitable for this graphical interaction with Shiny-phyloseq.\n\nphyloseq provides useful tools for filtering, subsetting, and agglomerating taxa – a task that is often appropriate or even necessary for effective analysis of microbiome count data. In this subsection, we graphically explore the prevalence of taxa in the example dataset, and demonstrate how this can be used as a filtering criterion. One of the reasons to filter in this way is to avoid spending much time analyzing taxa that were only rarely seen. This also turns out to be a useful filter of noise (taxa that are actually just artifacts of the data collection process), a step that should probably be considered essential for datasets constructed via heuristic OTU-clustering methods, which are notoriously prone to generating spurious taxa.\n\n\n\n\n\nEach point is a different taxon. 
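The prevalence computation behind such a plot, and one way to apply a prevalence filter, can be sketched as below; the 5% threshold and the object names are illustrative, not prescriptive.

```r
# Per-taxon prevalence: the number of samples in which each variant appears
prev <- apply(otu_table(ps), ifelse(taxa_are_rows(ps), 1, 2),
              function(x) sum(x > 0))
prevdf <- data.frame(Prevalence = prev, TotalAbundance = taxa_sums(ps))

# Keep taxa seen in at least 5% of samples (illustrative cutoff)
keep_taxa <- rownames(prevdf)[prevdf$Prevalence >= 0.05 * nsamples(ps)]
ps_filt <- prune_taxa(keep_taxa, ps)
```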
Exploration of the data in this way is often useful for selecting filtering parameters, like the minimum prevalence criterion we will use to filter the data above.\n\nFor some experimental questions, it is useful to agglomerate closely-related taxa. In this subsection we explore two separate ways in which closely-related taxa can be grouped together as a single feature in the phyloseq data object. Note that this is only helpful if a biological phenomenon of interest actually occurs at the chosen level of agglomeration, in which case the grouping-together of these features can increase statistical power.\n\n\n\nUse phyloseq’s plot_tree() to plot the tree of the original filtered data, the tree after taxonomic agglomeration, and the tree after phylogenetic agglomeration. Save these as separate plot objects, and then render them together on one plot using gridExtra::grid.arrange.\n\n\n\n\n\nIt is usually necessary to transform microbiome count data to account for differences in library size, variance, scale, etc. The phyloseq package provides a flexible interface for defining new functions to accomplish these transformations of the abundance values via the transform_sample_counts function. The first argument to this function is the phyloseq object you want to transform, and the second argument is an R function that defines the transformation. The R function is applied sample-wise, expecting that the first unnamed argument is a vector of taxa counts in the same order as the phyloseq object. Additional arguments are passed on to the function specified in the second argument, providing an explicit means to include precomputed values, previously defined parameters/thresholds, or any other object that might be appropriate for computing the transformed values of interest.\n\nThis example begins by defining a custom plot function, plot_abundance, that uses phyloseq’s psmelt function to define a relative abundance graphic as shown in Figure 5 and Figure 6. 
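One possible definition of such a helper is sketched below; the restriction to Firmicutes and the default aesthetics are illustrative choices, not the only reasonable ones.

```r
library(phyloseq)
library(ggplot2)

plot_abundance <- function(physeq, title = "",
                           Facet = "Order", Color = "Phylum") {
  # Restrict to one phylum so the panels stay readable (illustrative subset)
  p1f <- subset_taxa(physeq, Phylum %in% c("Firmicutes"))
  mphyseq <- psmelt(p1f)
  mphyseq <- subset(mphyseq, Abundance > 0)
  ggplot(data = mphyseq,
         mapping = aes_string(x = "sex", y = "Abundance",
                              color = Color, fill = Color)) +
    geom_violin(fill = NA) +
    geom_point(size = 1, alpha = 0.3,
               position = position_jitter(width = 0.3)) +
    facet_wrap(facets = Facet) +
    scale_y_log10() +
    theme(legend.position = "none")
}
```

Because `Facet` and `Color` are passed as strings, the same function can later be reused at finer taxonomic ranks.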
We will use this to more easily compare differences in scale and distribution of the abundance values in our phyloseq object before and after transformation.\n\n\n\nThe transformation in this case converts the counts from each sample into their frequencies, often referred to as proportions or relative abundances. This function is so simple that it is easiest to define it within the function call to transform_sample_counts.\n\n\n\nNow we plot the abundance values before and after transformation. The results are in Figure 5.\n\n\n\nNotice on the previous plot that Lactobacillales appears to be a taxonomic Order with bimodal abundance profile in the data. We can check for a taxonomic explanation of this pattern by plotting just that taxonomic subset of the data. For this, we subset with the subset_taxa() function, and then specify a more precise taxonomic rank to the Facet argument of the plot_abundance function that we defined above.\n\n\n\nHere it is clear that the apparent bimodal distribution of Lactobacillales on the previous plot was the result of a mixture of two different genera, with the typical Lactobacillus relative abundance much larger than Streptococcus.\n\nAt this stage in the workflow, after converting raw reads to interpretable species abundances, and after filtering and transforming these abundances to focus attention on scientifically meaningful quantities, we are in a position to consider more careful statistical analysis. R is an ideal environment for performing these analyses, as it has an active community of package developers building simple interfaces to sophisticated techniques. As a variety of methods are available, there is no need to commit to any rigid analysis strategy a priori. Further, the ability to easily call packages without reimplementing methods frees researchers to iterate rapidly through alternative analysis ideas. 
The advantage of performing this full workflow in R is that this transition from bioinformatics to statistics is effortless.\n\nWe back these claims by illustrating several analyses on the mouse data prepared above. We experiment with several flavors of exploratory ordination before shifting to more formal testing and modeling, explaining the settings in which the different points of view are most appropriate. Finally, we provide example analysis of multitable data, using a study in which both metabolomic and microbial abundance measurements were collected on the same samples, to demonstrate that the general workflow presented here can be adapted to the multitable setting.\n\nBefore doing the multivariate projections, we will add a few columns to our sample data, which can then be used to annotate plots. From Figure 7, we see that the ages of the mice come in a couple of groups, and so we make a categorical variable corresponding to young, middle-aged, and old mice. We also record the total number of counts seen in each sample and log-transform the data as an approximate variance stabilizing transformation.\n\n\n\nThe histogram on the left motivates the creation of a new categorical variable, binning age into one of the three peaks. The histogram on the right suggests that a log (1 + x) transformation is sufficient for normalizing the abundance data.\n\nFor a first pass, we look at principal coordinates analysis (PCoA) with either the Bray-Curtis dissimilarity or the weighted Unifrac distance. We see immediately that there are six outliers. These turn out to be the samples from females 5 and 6 on day 165 and the samples from males 3, 4, 5, and 6 on day 175. 
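The column additions and the first ordination described above might look like the following; this is a sketch assuming `ps` is the phyloseq object from earlier, with the break points and column names as illustrative choices.

```r
library(phyloseq)
library(ggplot2)

# Bin ages into the three groups visible in the histogram, record library
# sizes, and log-transform the counts (names and breaks are illustrative)
sample_data(ps)$age_binned <- cut(sample_data(ps)$age,
                                  breaks = c(0, 100, 200, 400))
sample_data(ps)$total_reads <- sample_sums(ps)
pslog <- transform_sample_counts(ps, function(x) log(1 + x))

# PCoA with the Bray-Curtis dissimilarity; the outliers stand out here
out.bc <- ordinate(pslog, method = "MDS", distance = "bray")
plot_ordination(pslog, out.bc, color = "age_binned") +
  labs(color = "Binned Age")
```

Swapping `distance = "bray"` for `distance = "wunifrac"` gives the weighted Unifrac version of the same plot.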
We will take them out, since we are mainly interested in the relationships between the non-outlier points.\n\n\n\n\n\nNotice that the variability explained by the second axis is five times less than that of the horizontal axis.\n\nBefore we continue, we should check the two female outliers – they have been taken over by the same RSV, which has a relative abundance of over 90% in each of them. This is the only time in the entire data set that this RSV has such a high relative abundance – the rest of the time it is below 20%. In particular, its diversity is by far the lowest of all the samples.\n\n\n\nAs we have seen, an important first step in analyzing microbiome data is to do unsupervised, exploratory analysis. This is simple to do in phyloseq, which provides many distances and ordination methods.\n\nAfter documenting the outliers, we are going to compute ordinations with these outliers removed and more carefully study the output. We see that there is a fairly substantial age effect that is consistent between all the mice, male and female, and from different litters. We’ll first perform a PCoA using Bray-Curtis dissimilarity.\n\nThe first plot shows the ordination of the samples, and we see that the second axis corresponds to an age effect, with the samples from the younger and older mice separating fairly well. The first axis correlates fairly well with library size (this is not shown). The first axis explains about twice the variability of the second; this translates into the elongated form of the ordination plot.\n\n\n\n\n\nNext we look at double principal coordinates analysis (DPCoA)16,17, which is a phylogenetic ordination method and which gives a biplot representation of both samples and taxonomic categories. We see again that the second axis corresponds to young vs. 
old mice, and the biplot suggests an interpretation of the second axis: samples that have larger scores on the second axis have more taxa from Bacteroidetes and one subset of Firmicutes.\n\n\n\n\n\n\n\nFinally, we can look at the results of PCoA with weighted Unifrac. As before, we find that the second axis is associated with an age effect, which is fairly similar to DPCoA. This is not surprising, because both are phylogenetic ordination methods taking abundance into account. However, when we compare biplots, we see that the DPCoA gave a much cleaner interpretation of the second axis, compared to weighted Unifrac.\n\n\n\nMicrobial abundance data are often heavy-tailed, and sometimes it can be hard to identify a transformation that brings the data to normality. In these cases, it can be safer to ignore the raw abundances altogether, and work instead with ranks. We demonstrate this idea using a rank-transformed version of the data to perform PCA. First, we create a new matrix, representing the abundances by their ranks, where the microbe with the smallest abundance in a sample gets mapped to rank 1, second smallest rank 2, etc.\n\n\n\nNaively using these ranks would make differences between pairs of low and high abundance microbes comparable. In the case where many bacteria are absent or present at trace amounts, an artificially large difference in rank could occur18 for minimally abundant taxa. To avoid this, all those microbes with rank below some threshold are set to be tied at 1. The ranks for the other microbes are shifted down, so there is no large gap between ranks. This transformation is illustrated in Figure 15.\n\n\n\n\n\nCompared to the representation in Figure 12, this display is harder to interpret.\n\n\n\nThe numbers of the y-axis are those supplied to PCA.\n\n\n\nWe can now perform PCA and study the resulting biplot, given in Figure 16. 
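The truncated-rank idea can be illustrated on a toy count matrix in self-contained base R; the threshold of 2 is purely illustrative, and the real data would use a much larger cutoff.

```r
# Two samples (rows) by four taxa (columns)
abund <- matrix(c(0, 0, 5, 20,
                  1, 0, 3, 50),
                nrow = 2, byrow = TRUE)

# Rank abundances within each sample (ties receive averaged ranks)
abund_ranks <- t(apply(abund, 1, rank))

# Tie every rank at or below the threshold to 1, shifting the rest down
threshold <- 2
abund_ranks <- abund_ranks - threshold
abund_ranks[abund_ranks < 1] <- 1

abund_ranks
```

On the real data, the resulting matrix can be passed to `prcomp` for the PCA whose biplot is discussed here.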
To produce annotation for this figure, we used the following block.\n\n\n\nThe results are similar to the PCoA analyses computed without applying a truncated-ranking transformation, reinforcing our confidence in the analysis on the original data.\n\n\n\nCanonical Correspondence Analysis (CCpnA) is an approach to ordination of a species by sample table that incorporates supplemental information about the samples. As before, the purpose of creating biplots is to determine which types of bacterial communities are most prominent in different mouse sample types. It can be easier to interpret these biplots when the ordering between samples reflects sample characteristics – variations in age or litter status in the mouse data, for example – and this is central to the design of CCpnA.\n\nThe function allows us to create biplots where the positions of samples are determined by similarity in both species signatures and environmental characteristics; in contrast, principal components analysis or correspondence analysis only look at species signatures. More formally, it ensures that the resulting CCpnA directions lie in the span of the environmental variables; thorough treatments are available in 19,20.\n\nLike PCoA and DPCoA, this method can be run using ordinate in phyloseq. In order to use supplemental sample data, it is necessary to provide an extra argument, specifying which of the features to consider – otherwise, phyloseq defaults to using all sample_data measurements when producing the ordination.\n\n\n\nTo access the positions for the biplot, we can use the scores function in the vegan package. Further, to facilitate figure annotation, we also join the site scores with the environmental data in the sample_data slot. Of the 23 total taxonomic orders, we only explicitly annotate the four most abundant – this makes the biplot easier to read.\n\n\n\nFigure 17 and Figure 18 plot these annotated scores, splitting sites by their age bin and litter membership, respectively. 
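The constrained ordination call and the score extraction described above might look like the following sketch; `pslog` and its sample variables `age_binned` and `family_relationship` are assumed from earlier steps.

```r
library(phyloseq)
library(vegan)

# Constrain the ordination on the two sample variables of interest;
# the formula restricts CCpnA directions to the span of these variables
ps_ccpna <- ordinate(pslog, "CCA",
                     formula = pslog ~ age_binned + family_relationship)

# Site (sample) and species (taxa) positions for the biplot
ccpna_scores <- scores(ps_ccpna, display = c("sites", "species"))
```

The site scores can then be joined with the `sample_data` slot for annotation, as described in the text.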
We have labeled individual microbes that are outliers along the second CCpnA direction.\n\nEvidently, the first CCpnA direction distinguishes between mice in the two main age bins. Circles on the left and right of the biplot represent microbes that are characteristic of younger and older mice, respectively. The second CCpnA direction splits off the few mice in the oldest age group; it also partially distinguishes between the two litters. The samples that are low in the second CCpnA direction have more of the outlier microbes than the others.\n\nThis CCpnA analysis supports our conclusions from the earlier ordinations – the main difference between the microbiome communities of the different mice lies along the age axis. However, in situations where the influence of environmental variables is not so strong, CCpnA can have more power in detecting such associations. In general, it can be applied whenever it is desirable to incorporate supplemental data, but in a way that (1) is less aggressive than supervised methods, and (2) can use several environmental variables at once.\n\n\n\nThe sites and species are triangles and circles, respectively. The separate panels indicate different age groups.\n\n\n\nHere we illustrate some supervised learning methods that can be easily run in R. The caret package wraps many prediction algorithms available in R and performs parameter tuning automatically. Since we saw that microbiome signatures change with age, we’ll apply supervised techniques to try to predict age from microbiome composition.\n\nWe’ll first look at Partial Least Squares (PLS)21. The first step is to divide the data into training and test sets, with assignments done by mouse, rather than by sample, to ensure that the test set realistically simulates the collection of new data. Once we split the data, we can use the train function to fit the PLS model.\n\n\n\nNext we can predict class labels on the test set using the predict function and compare to the truth. 
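A sketch of this split-and-train sequence follows; `pslog` is assumed from above, and the training-set size and preprocessing options are illustrative.

```r
library(caret)
library(phyloseq)

# Response: binned age; predictors: log-transformed taxa abundances
dataMatrix <- data.frame(age = sample_data(pslog)$age_binned,
                         otu_table(pslog))

# Assign whole mice (not individual samples) to the training set, so the
# test set simulates genuinely new mice
set.seed(1)
trainingMice <- sample(unique(sample_data(pslog)$host_subject_id), size = 8)
inTrain <- sample_data(pslog)$host_subject_id %in% trainingMice
training <- dataMatrix[inTrain, ]
testing <- dataMatrix[!inTrain, ]

# Fit PLS and compare predicted to observed age bins on held-out mice
plsFit <- train(age ~ ., data = training,
                method = "pls", preProc = "center")
plsClasses <- predict(plsFit, newdata = testing)
table(plsClasses, testing$age)
```

The cross-tabulation at the end is the confusion matrix summarizing prediction quality.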
We see that the method does an excellent job of predicting age.\n\n\n\nAs another example, we can try out random forests. This is run in exactly the same way as PLS, by switching the method argument from pls to rf. Random forests also perform well at the prediction task on this test set, though there are more old mice misclassified than young.\n\n\n\nTo interpret these PLS and random forest results, it is standard to produce biplots and proximity plots, respectively. The code below extracts coordinates and supplies annotation for points to include on the PLS biplot.\n\n\n\nThe resulting biplot is displayed in Figure 19; it can be interpreted similarly to earlier ordination diagrams, with the exception that the projection is chosen with an explicit reference to the binned age variable. Specifically, PLS identifies a subspace to maximize discrimination between classes, and the biplot displays sample projections and RSV coefficients with respect to this subspace.\n\n\n\nA random forest proximity plot is displayed in Figure 20. To generate this representation, a distance is calculated between samples based on how frequently samples occur in the same tree partition in the random forest’s bootstrapping procedure. If a pair of samples frequently occur in the same partition, the pair is assigned a low distance. The resulting distances are then input to PCoA, giving a glimpse into the random forests’ otherwise complex classification mechanism. The separation between classes is clear, and manually inspecting points would reveal what types of samples are easier or harder to classify.\n\n\n\nTo further understand the fitted random forest model, we identify the microbe with the most influence in the random forest prediction. This turns out to be a microbe in family Lachnospiraceae and genus Roseburia. 
Figure 21 plots its abundance across samples; we see that it is uniformly very low from age 0 to 100 days and much higher from age 100 to 400 days.\n\n\n\nPhyloseq has functionality for creating graphs based on thresholding a distance matrix, and the resulting networks can be plotted using ggnetwork. This package overloads the ggplot syntax, so you can use the function ggplot on an igraph object and add geom_edges and geom_nodes geoms to plot the network. To be able to color the nodes or edges a certain way, we need to add these attributes to the igraph object. Below we create a network by thresholding the Jaccard dissimilarity (the default distance for the function make_network) at .35, and then we add an attribute to the vertices indicating which mouse the sample came from and which litter the mouse was in. Then we can plot the network with the coloring by mouse and shape by litter. We see the resulting network in Figure 22, and we can see that there is grouping of the samples by both mouse and litter.\n\nThe colors represent which mouse the sample came from and the shape represents which litter the mouse was in.\n\n\n\n\n\n\n\nGraph-based two-sample tests were introduced by Friedman and Rafsky22 as a generalization of the Wald-Wolfowitz runs test. They proposed the use of a minimum spanning tree (MST) based on the distances between the samples, and then counting the number of edges on the tree that were between samples in different groups. It is not necessary to use an MST; graphs made by linking nearest neighbors23 or distance thresholding can also be input. No matter what graph we build between the samples, we can approximate a null distribution by permuting the labels of the nodes of the graph.\n\nWe first perform a test using an MST with Jaccard dissimilarity. We want to know whether the two litters (family_relationship) come from the same distribution. 
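One available implementation of this test is `graph_perm_test` in the phyloseqGraphTest package; the following is a sketch, with `ps` the phyloseq object assumed from earlier.

```r
library(phyloseqGraphTest)

# MST on Jaccard distances; permute litter labels within each mouse
gt <- graph_perm_test(ps, sampletype = "family_relationship",
                      grouping = "host_subject_id",
                      distance = "jaccard", type = "mst")
gt$pval
```

The permutation p-value in `gt$pval` is the quantity discussed in the text, and the fitted object can also be plotted to show the spanning tree.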
Since there is a grouping in the data by individual (host_subject_id), we can’t simply permute all the labels; we need to maintain this nested structure – this is what the grouping argument does. Here we permute the family_relationship labels but keep the host_subject_id structure intact.\n\nThis test has a small p-value, and we reject the null hypothesis that the two samples come from the same distribution. From the plot of the minimum spanning tree in Figure 23, we see by eye that the samples group by litter more than we would expect by chance.\n\n\n\n\n\n\n\nThe k-nearest neighbors graph is obtained by putting an edge between two samples whenever one of them is in the set of k-nearest neighbors of the other. We see from Figure 24 that if a pair of samples has an edge between them in the nearest neighbor graph, they are overwhelmingly likely to be in the same litter.\n\n\n\n\n\n\n\nWe can compute the analogous test with two-nearest neighbors and the Bray-Curtis dissimilarity. The results are in Figure 25.\n\n\n\n\n\n\n\nAnother way of making a graph between samples is to threshold the distance matrix; this is called a geometric graph24. The testing function lets the user supply an absolute distance threshold; alternatively, it can find a distance threshold such that there are a prespecified number of edges in the graph. Below we use a distance threshold so that there are 720 edges in the graph, or twice as many edges as there are samples. Heuristically, the graph we obtain isn’t as good, because there are many singletons. This reduces power, and so if the thresholded graph has this many singletons it is better to either modify the threshold or consider an MST or k-nearest neighbors graph.\n\n\n\n\n\n\n\nThen we can try a similar procedure with an increased number of edges to see what happens.\n\n\n\n\n\n\n\nIt is often of interest to evaluate the degree to which microbial community diversity reflects characteristics of the environment from which it was sampled. 
Unlike ordination, the purpose of this analysis is not to develop a representation of many microbes with respect to sample characteristics; rather, it is to describe how a single measure of overall community structure is associated with sample characteristics. (In particular, the measure need not be limited to diversity – defining univariate measures of community stability is also common, for example.) This is a somewhat simpler statistical goal, and can be addressed through linear modeling, for which there are a range of approaches in R. As an example, we will use a mixed-effects model to study the relationship between mouse microbial community diversity and the age and litter variables that have been our focus so far. This choice was motivated by the observation that younger mice have noticeably lower Shannon diversities, but that different mice have different baseline diversities. The mixed-effects model is a starting point for formalizing this observation.\n\nWe first compute the Shannon diversity associated with each sample and join it with sample annotation.\n\n\n\nWe use the nlme package to estimate coefficients for this mixed-effects model.\n\n\n\nTo interpret the results, we compute the prediction intervals for each mouse by age bin combination. These are displayed in Figure 28. The intervals reflect the slight shift in average diversity across ages, but the wide intervals emphasize that more samples would be needed before this observation can be confirmed.\n\nThe timepoints have been split into three bins, according to the mice’s age. The prediction intervals obtained from mixed-effects modeling are overlaid.\n\n\n\n\n\nHypothesis testing can be used to identify individual microbes whose abundance relates to sample variables of interest. A standard approach is to compute a test statistic for each microbe individually, measuring its association with sample characteristics, and then jointly adjust p-values to ensure a False Discovery Rate upper bound. 
This can be accomplished through the Benjamini-Hochberg procedure, for example25. However, this procedure does not exploit any structure among the tested hypotheses – for example, it is likely that if one Ruminococcus species is strongly associated with age, then others are as well. To integrate this information, a hierarchical testing procedure has been proposed26,27, where taxonomic groups are only tested if higher levels are found to be associated. In the case where many related species have a slight signal, this pooling of information can increase power.\n\nWe apply this method to test the association between microbial abundance and age. This provides a complementary view of the earlier analyses, identifying individual microbes that are responsible for the differences between young and old mice.\n\nWe digress briefly from hierarchical testing to describe an alternative form of count normalization. Rather than working with the logged data as in our earlier analysis, we consider a variance stabilizing transformation introduced by28 for RNA-seq data and by29 for 16S rRNA generated count data and available in the DESeq2 package. The two transformations yield similar sets of significant microbes. One difference is that, after accounting for size factors, the histogram of row sums for DESeq is more spread out in the lower values; refer to Figure 29. This is the motivation for using such a transformation: although for high abundance counts it is equivalent to the log, for lower and mid range abundances it does not crush the data and yields more powerful results. The code below illustrates the mechanics of computing DESeq2’s variance stabilizing transformation on a phyloseq object.\n\nThe bottom histogram is the same as that in Figure 7, and is included to facilitate comparison.\n\n\n\n\n\n\n\nWe use structSSI to perform the hierarchical testing30. 
For more convenient printing, we first shorten the names of each microbe.\n\n\n\nUnlike standard multiple hypothesis testing, the hierarchical testing procedure needs univariate tests for each higher-level taxonomic group, not just every microbe. A helper function, treePValues, is available for this; it expects an edgelist encoding parent-child relationships, with the first row specifying the root node.\n\n\n\nWe can now correct p-values using the hierarchical testing procedure. The procedure is guaranteed to control several variants of FDR, but at different levels; we defer details to 26,27,30.\n\n\n\n\n\n\n\n\n\n\n\nThe plot opens in a new browser – a static screenshot of a subtree is displayed in Figure 30. Nodes are shaded according to p-values, from blue to orange, representing the strongest to weakest associations. Grey nodes were never tested, to focus power on more promising subtrees. Scanning the full tree, it becomes clear that the association between age group and microbe abundance is present in only a few isolated taxonomic groups, but that it is quite strong in those groups. To give context to these results, we can retrieve the taxonomic identity of the rejected hypotheses.\n\nCurrently the user is hovering over the node associated with microbe GCGAG.33; this causes the adjusted p-value (0.0295) to appear.\n\n\n\nIt seems that the most strongly associated microbes all belong to family Lachnospiraceae, which is consistent with the random forest results above.\n\nMany microbiome studies attempt to quantify variation in the microbial, genomic, and metabolic measurements across different experimental conditions. As a result, it is common to perform multiple assays on the same biological samples and ask what features – microbes, genes, or metabolites, for example – are associated with different sample conditions. 
There are many ways to approach these questions; which to apply depends on the study's focus.\n\nHere, we will focus on one specific workflow that uses sparse Canonical Correlation Analysis (sparse CCA), a method well-suited to both exploratory comparisons between samples and the identification of features with interesting variation. We will use an implementation from package PMA31.\n\nSince the mouse data used above included only a single table, we use a new data set, collected by the study32. There are two tables here, one for microbes and another with metabolites. Twelve samples were obtained, each with measurements at 637 m/z values and 20,609 OTUs; however, about 96% of the entries of the microbial abundance table are exactly zero. The code below retrieves this data.\n\n\n\nOur preprocessing mirrors that done for the mouse data. We first filter down to microbes and metabolites of interest, removing those that are zero across many samples. Then, we transform them to weaken the heavy tails.\n\n\n\nWe can now apply sparse CCA. This method compares sets of features across high-dimensional data tables, where there may be more measured features than samples. In the process, it chooses a subset of available features that capture the most covariance – these are the features that reflect signals present across multiple tables. We then apply PCA to this selected subset of features. In this sense, we use sparse CCA as a screening procedure, rather than as an ordination method.\n\nOur implementation is below. The parameters penaltyx and penaltyz are sparsity penalties. Larger values of penaltyx will result in fewer selected microbes; similarly, penaltyz modulates the number of selected metabolites. 
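The sparse CCA call itself might look like the following sketch; `X` and `metab` stand for the filtered, transformed microbe and metabolite matrices with features in rows (these names, and the penalty values, are assumptions for illustration).

```r
library(PMA)

# X: microbe counts, metab: metabolite intensities (features in rows,
# samples in columns); transpose so samples are rows, as CCA expects
cca_res <- CCA(t(X), t(metab),
               penaltyx = 0.15, penaltyz = 0.15,
               K = 1, niter = 20, trace = FALSE)

# Nonzero loadings mark the selected microbes and metabolites
sum(cca_res$u != 0)
sum(cca_res$v != 0)
```

Raising either penalty toward 1 relaxes the sparsity constraint and selects more features.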
We tune them manually to facilitate subsequent interpretation – we generally prefer more sparsity than the default parameters would provide.\n\n\n\nWith these parameters, five microbes and 15 metabolites have been selected, based on their ability to explain covariation between tables. Further, these 20 features result in a correlation of 0.974 between the two tables. We interpret this to mean that the microbial and metabolomic data reflect similar underlying signals, and that these signals can be approximated well by the 20 selected features. Be wary of the correlation value, however, since the scores are far from the usual bivariate normal cloud. Further, note that it is possible that other subsets of features could explain the data just as well – sparse CCA has minimized redundancy across features, but makes no guarantee that these are the \"true\" features in any sense.\n\nNonetheless, we can still use these 20 features to compress information from the two tables without much loss. To relate the recovered metabolites and OTUs to characteristics of the samples on which they were measured, we use them as input to an ordinary PCA.\n\n\n\n\n\nFigure 31 displays a PCA triplot, where we show different types of samples and the multidomain features (Metabolites and OTUs). This allows comparison across the measured samples – triangles for Knockout and circles for wild type – and characterizes the influence of the different features – diamonds with text labels. For example, we see that the main variation in the data is across PD and ST samples, which correspond to the different diets. Further, large values of 15 of the features are associated with ST status, while small values for 5 of them indicate PD status. 
The advantage of the sparse CCA screening is now clear – we can display most of the variation across samples using a relatively simple plot, and can avoid plotting the hundreds of additional points that would be needed to display all of the features.\n\n\n\nThe programs and source for this article can be run using version 3.3 of R and version 3.3 of Bioconductor.\n\n\nConclusions\n\nWe have shown how a complete workflow in R is now available to denoise, identify and normalize next generation amplicon sequencing reads using probabilistic models with parameters fit using the data at hand.\n\nWe have provided a brief overview of all the analyses that become possible once the data has been imported into the R environment. Multivariate projections using the phylogenetic tree as the relevant distance between OTUs/RSVs can be done using weighted Unifrac or double principal coordinate analyses using the phyloseq package. Biplots provide the user with an interpretation key. These biplots have been extended to triplots in the case of multidomain data incorporating genetic, metabolic and taxa abundances. We illustrate the use of network based analyses, whether the community graph is provided from other sources or from a taxa co-occurrence computation using a Jaccard distance.\n\nWe have briefly covered a small example of using two supervised learning functions (random forests and partial least squares) to predict a response variable.\n\nThe main challenges in tackling microbiome data come from the many different levels of heterogeneity both at the input and output levels. These are easily accommodated through R's capacity to combine data into S4 classes. We are able to include layers of information, trees, sample data description matrices and contingency tables in the phyloseq data structures. 
The plotting facilities of ggplot2 and ggnetwork allow for the layering of information in the output into plots that combine graphs, multivariate information and maps of the relationships between covariates and taxa abundances. The layering concept allows the user to provide reproducible publication level figures with multiple heterogeneous sources of information. Our main goal in providing these tools has been to enhance the statistical power of the analyses by enabling the user to combine frequencies, quality scores and covariate information into complete and testable projections.\n\n\nSummary\n\nThis illustration of possible workflows for microbiome data combining trees, networks, normalized read counts and sample information showcases the capabilities and reproducibility of an R based system for analyzing bacterial communities. We have implemented key components in C wrapped within the Bioconductor package dada2 to enable the different steps to be undertaken on a laptop.\n\nOnce the sequences have been filtered and tagged they can be assembled into a phylogenetic tree directly in R using the maximum likelihood tree estimation available in phangorn. The sequences are then assembled into a phyloseq object containing all the sample covariates, the phylogenetic tree and the sample-taxa contingency table.\n\nThese data can then be visualized interactively with Shiny-phyloseq, plotted with one line wrappers in phyloseq and filtered or transformed very easily.\n\nThe third component of the paper shows more complex analyses that require direct use of ggplot2 and advanced statistical analyses. This will be of interest to power users with a good working knowledge of R, ggplot2 and statistical learning techniques. We use ggnetwork to plot community networks and perform a permutation test on a categorical response. We show that partial least squares and random forests give very similar quality predictions on this data and show how to plot the resulting proximities. 
Multivariate ordination methods allow useful lower dimensional projections in the presence of phylogenetic information or multi-domain data as shown in an example combining metabolites and OTU abundances.\n\nSupervised learning methods provide lists of the most relevant taxa in discriminating between groups. To improve the power of the testing techniques designed to identify taxa that are the most changed between two groups of subjects, we provide an optimized variance stabilizing transformation and multiple hypothesis correction using the DESeq2 package. We have also incorporated a more original way of controlling for multiple hypothesis testing at the different levels of the phylogenetic tree through the use of structSSI, a package that implements FDR control for hierarchical structures26,27,30. This package is interactive so we have supplied a snapshot of the output tree.\n\nThe last example in the paper shows how to combine data from multiple domains32: metabolites, taxa counts, genetic data and diet. We illustrate the combination of sparse canonical correlation analysis with PCA to provide a useful triplot projection of the data.\n\n\nData availability\n\nIntermediary data for the analyses are made available at the Stanford digital repository permanent url for this paper: http://purl.stanford.edu/wh250nn9648. All other data have been previously published and the links are included in the paper.\n\n\nSoftware availability\n\nBioconductor packages at https://www.bioconductor.org/. CRAN packages at https://cran.r-project.org/.\n\nPermanent repository for the data and program source of this paper: https://purl.stanford.edu/wh250nn9648\n\nLatest source code as at the time of publication: https://github.com/spholmes/F1000_workflow\n\nArchived source as at the time of publication: Zenodo: F1000_workflow: MicrobiomeWorkflowv0.9, doi: 10.5281/zenodo.5454433
"appendix": "Author contributions\n\n\n\nBJC, KS, JAF, PJM and SPH developed the software tools. BJC, KS, JAF, PJM and SPH developed statistical methods and tested the workflow on the data sets. BJC, KS, JAF, PJM and SPH wrote the article.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was partially supported by the NSF (DMS-1162538 to SPH) and the NIH (R01AI112401 to SPH); JAF received support from a Stanford Interdisciplinary Graduate Fellowship, and KS was supported by an NIH T32 training grant.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThe authors would like to thank the members of the Relman Lab for their valuable insights on microbiology and sequencing and the users of phyloseq who have provided bug reports and feature requests.\n\n\nReferences\n\nCaporaso JG, Kuczynski J, Stombaugh J, et al.: QIIME allows analysis of high-throughput community sequencing data. Nat Methods. 2010; 7(5): 335–336. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchloss PD, Westcott SL, Ryabin T, et al.: Introducing mothur: open-source, platform-independent, community-supported software for describing and comparing microbial communities. Appl Environ Microbiol. 2009; 75(23): 7537–7541. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCallahan BJ, McMurdie PJ, Rosen MJ, et al.: DADA2: High-resolution sample inference from Illumina amplicon data. Nat Methods. 2016; 1–4. PubMed Abstract | Publisher Full Text\n\nHuber W, Carey VJ, Gentleman R, et al.: Orchestrating high-throughput genomic analysis with Bioconductor. Nat Methods. 2015; 12(2): 115–121. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKozich JJ, Westcott SL, Baxter NT, et al.: Development of a dual-index sequencing strategy and curation pipeline for analyzing amplicon sequence data on the MiSeq Illumina sequencing platform. 
Appl Environ Microbiol. 2013; 79(17): 5112–5120. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchloss PD, Schubert AM, Zackular JP, et al.: Stabilization of the murine gut microbiome following weaning. Gut Microbes. 2012; 3(4): 383–393. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEdgar RC, Flyvbjerg H: Error filtering, pair assembly and error correction for next-generation sequencing reads. Bioinformatics. 2015; 31(21): 3476–3482. PubMed Abstract | Publisher Full Text\n\nWang Q, Garrity GM, Tiedje JM, et al.: Naive Bayesian classifier for rapid assignment of rRNA sequences into the new bacterial taxonomy. Appl Environ Microbiol. 2007; 73(16): 5261–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCole JR, Wang Q, Cardenas E, et al.: The Ribosomal Database Project: improved alignments and new tools for rRNA analysis. Nucleic Acids Res. 2009; 37(Database issue): D141–D145. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcMurdie PJ, Holmes S: phyloseq: an R package for reproducible interactive analysis and graphics of microbiome census data. PLoS One. 2013; 8(4): e61217. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOksanen J, Blanchet FG, Kindt R, et al.: vegan: Community Ecology Package. 2016. R package version 2.3-5. Reference Source\n\nChessel D, Dufour AB, Thioulouse J: The ade4 package - I: One-table methods. R News. 2004; 4(1): 5–10. Reference Source\n\nParadis E, Claude J, Strimmer K: APE: Analyses of Phylogenetics and Evolution in R language. Bioinformatics. 2004; 20(2): 289–290. PubMed Abstract | Publisher Full Text\n\nWickham H: ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag New York, 2009; 216. Publisher Full Text\n\nMcMurdie PJ, Holmes S: Shiny-phyloseq: Web application for interactive microbiome analysis with provenance tracking. Bioinformatics. 2015; 31(2): 282–283. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nPavoine S, Dufour AB, Chessel D: From dissimilarities among species to dissimilarities among communities: a double principal coordinate analysis. J Theor Biol. 2004; 228(4): 523–537. PubMed Abstract | Publisher Full Text\n\nFukuyama J, McMurdie PJ, Dethlefsen L, et al.: Comparisons of distance methods for combining covariates and abundances in microbiome studies. Pac Symp Biocomput. World Scientific, 2012; 213–24. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHolmes S, Alekseyenko A, Timme A, et al.: Visualization and statistical comparisons of microbial communities using R packages on Phylochip data. Pac Symp Biocomput. 2011; 142–53. PubMed Abstract | Publisher Full Text | Free Full Text\n\nter Braak C: Correspondence analysis of incidence and abundance data: Properties in terms of a unimodal response model. Biometrics. 1985; 41(4): 859–873. Publisher Full Text\n\nGreenacre M: Correspondence analysis in practice. CRC press, 2007. Reference Source\n\nWold S, Ruhe A, Wold H, et al.: The collinearity problem in linear regression. The partial least squares (pls) approach to generalized inverses. SIAM J Sci Stat Comput. 1984; 5(3): 735–743. Publisher Full Text\n\nFriedman JH, Rafsky LC: Multivariate generalizations of the wald-wolfowitz and smirnov two-sample tests. Ann Statist. 1979; 7(4): 697–717. Publisher Full Text\n\nSchilling MF: Multivariate two-sample tests based on nearest neighbors. J Am Stat Assoc. 1986; 81(395): 799–806. Publisher Full Text\n\nPenrose M: Random geometric graphs. Oxford University Press, Oxford, 2003; 5. Publisher Full Text\n\nBenjamini Y, Hochberg Y: Controlling the false discovery rate: a practical and powerful approach to multiple testing. J Roy Stat Soc B. 1995; 57(1): 289–300. Reference Source\n\nBenjamini Y, Yekutieli D: Hierarchical fdr testing of trees of hypotheses. Technical report, Department of Statistics and Operations Research. 
Tel Aviv University, 2003.\n\nBenjamini Y, Bogomolov M: Selective inference on multiple families of hypotheses. J R Stat Soc Series B Stat Methodol. 2014; 76(1): 297–318. Publisher Full Text\n\nLove MI, Huber W, Anders S: Moderated estimation of fold change and dispersion for RNA-seq data with DESeq2. Genome Biol. 2014; 15(12): 550. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcMurdie PJ, Holmes S: Waste not, want not: why rarefying microbiome data is inadmissible. PLoS Comput Biol. 2014; 10(4): e1003531. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSankaran K, Holmes S: structSSI: Simultaneous and Selective Inference for Grouped or Hierarchically Structured Data. J Stat Softw. 2014; 59(13): 1–21. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWitten D, Tibshirani R, Gross S, et al.: Pma: Penalized multivariate analysis. R package version. 2009; 1(5).\n\nKashyap PC, Marcobal A, Ursell LK, et al.: Genetically dictated change in host mucus carbohydrate landscape exerts a diet-dependent effect on the gut microbiota. Proc Natl Acad Sci U S A. 2013; 110(42): 17059–17064. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCallahan BJ, Sankaran K, Fukuyama JA, et al.: F1000_workflow: MicrobiomeWorkflowv0.9. Zenodo. 2016. Data Source"
}
|
[
{
"id": "14572",
"date": "08 Jul 2016",
"name": "Leo Lahti",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThis work reports a standard R/Bioconductor open source workflow for the analysis of microbial community profiling data based on (Illumina MiSeq) 16S rRNA amplicon sequencing. The main contribution of the paper is to present a compact overview of a typical microbiome analysis workflow in R, and to integrate the authors' accumulated knowledge regarding best practices in microbiome bioinformatics based on the R statistical programming environment.\nThe workflow covers key steps from raw sequencing data preprocessing to standard statistical testing, data integration, and visualization. The methodologies are rigorous, and represent a straightforward combination of previously published R tools that are among the state-of-the-art in the field. Reliance on Bioconductor packages provides further guarantees for high quality of the software components. All data and code underlying the paper are openly available, and I was also able to replicate the complete workflow after some initial setups. I examined about half of the examples in more detail, and could reproduce manuscript figures in all cases that I tested.\nNo new methods are introduced, and the main contribution of the work is to showcase good statistical practice based on existing software components, some of which have been previously published by the authors of this manuscript. Appropriate references are provided throughout the text. 
Such overview papers are useful, however, as they can provide benchmarks and recommendations on complete workflows, where the different analysis steps are not independent in any real study and deserve analysis in their own right.\nThe analysis steps are explained in clear language and with sufficient detail. The work is technically sound. The main drawback is that the manuscript is somewhat scattered as it aims to cover a large and versatile set of tools in a single paper. The quality of the analysis is high, the overview is useful, and the paper could be accepted after taking into account my comments below.\nMajor comments\nThe work is somewhat scattered due to the wide coverage. The paper could benefit from having fewer figures and an increased focus on key aspects. For instance the number of biplots and network figures could be reduced. The data integration part (CCA etc.) is useful but very brief and probably difficult to comprehend by readers who are new to those approaches. I would recommend either cutting or expanding this part and also otherwise checking if the manuscript can be made more compact by removing some examples (perhaps by moving some examples into supplementary material or online documentation?).\n\nThe examples with DADA2 and the hierarchical testing procedure are particularly useful; these recently published methods deserve to become more widely used. Sufficient details have been given for this work.\n\nInstructions on how to exactly use the source files provided in Github are missing. The rnw files are missing LaTeX headers so I could not readily generate final readable reports from the rnw files. The code itself was clear, and after some setups I could replicate all analyses after changing some path definitions and running the code interactively on the R command line. But this relied on my prior knowledge of R and automated document generation systems. 
Users who are less experienced with these tools would benefit from improved instructions on how to run the workflow. The README.md file in Github should give more detailed instructions (or link to instructions) on how to exactly reproduce the complete example workflow and generate the final reports.\n\nIn the \"Infer sequence variants\" section it is mentioned that \"Sequence inference removed nearly all substitution and indel errors from the data\". How was this quantified to reach this conclusion?\n\nMinor comments\nThe phyloseq R package has been published earlier and represents an extremely useful class structure for microbiome profiling data that has high potential of becoming a popular standard in R. These tools, and their (online) documentation, form essential background material for this manuscript. Better separation of the data structures and tools in this manuscript, the R packages (in particular phyloseq) and their documentation would be welcome. This would make it easier for the wider R community to build on this work and contribute further tools that take advantage of the phyloseq data structure. This is not required for this manuscript but a suggestion for improvement.\n\nI had to investigate the code a while to see that the file http://www.mothur.org/MiSeqDevelopmentData/StabilityNoMetaG.tar has to be stored in a data/MiSeq_SOP/ directory after download and extraction. Not a big deal but it would be even more handy to have a download script (R or shell) available in the F1000_workflow/data/ directory, and the instructions would then give clear advice on how to automate the complete analysis workflow. 
To streamline the workflow example, consider providing some example data sets as R data packages.\n\nAt the github repo README.md the command knit(\"PartIIphyloseq.Rnw\") should be knit(\"PartIIphyloseq.rnw\")\n\nIn PartIIIanalysis.rnw the script gets stuck at:\noptions(digits = 3, width = 80,prompt = \" \", continue = \" \") (I was waiting 24 hours; then restarted and tried again with same result).\nTherefore I skipped this row in my tests. Please fix.\n\nIs it intentional that figures 8, 10, 11, 12, 13, 14 and some other figures have an unbalanced width/height ratio? The figures might seem more clear if the width/height ratio was more balanced.\n\nThe plot_abundance function could be readily provided in the phyloseq package?\n\nQuality of Figure 1 is relatively poor and could be improved.\n\nFigure 31: in title: fix \"muliple\" into \"multiple\"",
"responses": []
},
{
"id": "14570",
"date": "18 Jul 2016",
"name": "Zachary Charlop-Powers",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThere is a growing push in the computational sciences for adopting software practices that promote replicability and provide methodological transparency. In the field of microbiome research these practices should minimize the standard culprits of error-creep such as file proliferation and incompatible formats; they should provide sound default choices for the core computational steps of sequence clustering and taxonomic assignment; and they should facilitate reproducible statistical analyses of the resulting data. By providing a step-by-step analysis of a microbiome dataset that can be completed entirely from within the R statistical computing environment, this workflow does an admirable job of bringing these best practices to the world of microbiome science.\nThe article takes a reader through the steps of processing raw sequence data and loading the data into R. It then demonstrates how to use basic exploratory data analysis to get a sense of the data and finally introduces the use of various statistical packages to search for and validate patterns. The majority of the article focuses on the application of statistical concepts to microbiome data, and this is where scientists would like to be spending their time. However, this allocation of ink-space is only possible because the recent release of the DADA2 package allows the authors (and subsequent users) to condense the entire read-processing portion of the tutorial into a few short steps. 
DADA2 provides a new and arguably superior method for clustering raw amplicon reads and, by processing the reads and assigning taxonomy, it fills in the computational gap required to work completely within R. The benefits of this workflow are fairly self-evident in the amount of space in their workflow devoted to data processing versus exploration, however, there are other benefits as well, of which I will name two. First, by using packages hosted on CRAN or Bioconductor, the authors can leverage the Bioconductor build system and ensure a fully working environment, a non-trivial prerequisite in a field with myriad tools. Second, by providing an integrated set of tools there are few, if any, intermediate files required to analyze a dataset. In addition to reducing the cognitive burden of a newcomer, this generally reduces the footprint for errors.\nThis article is an excellent introduction on how to process and analyze a 16S amplicon dataset. Because of the relative ease of working entirely within a single environment, and for the sound design principles used by the core R packages in this analysis, I predict this workflow will become a useful resource, if not a direct template, for many microbiome scientists learning to process their data.",
"responses": []
},
{
"id": "14571",
"date": "08 Aug 2016",
"name": "Nandita R. Garud",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThis article is a valuable resource for the metagenomics field. The thorough examples of several statistical analyses of metagenomic data will help both the novice and expert in analyzing their own data. Additionally, this paper sets a standard in the field for documenting analyses.\n\nBoth DADA2 and PhyloSeq have much to offer. DADA2 identifies OTUs, which are termed in this paper ‘Ribosomal Sequence Variants,’ reflecting the extra granularity with which DADA2 is capable of resolving OTUs. The RSVs identified by DADA2 offer the ability to conduct higher-resolution analyses on 16S data. PhyloSeq comprises numerous capabilities to analyze metagenomic data, making it quite easy for a user to load and analyze their data.\nBelow I make a few suggestions for clarification purposes. I enjoyed reading this article and have already benefited greatly from using DADA2 and PhyloSeq in my own work.\nMinor critiques and suggestions:\nA very attractive feature of DADA2 is its ability to resolve RSVs. I wonder if the authors could expand more on the findings they have made with the higher-resolution OTUs found by DADA2. This would highlight why DADA2 is such a powerful tool.\n\nI wonder if the examples that the authors provide could be more biologically motivated. For example, could the authors explain the mouse data set in greater depth in the introduction? What did Kozich et al. 2013 and Schloss et al. 2012 find in these data sets? 
Were DADA2 and PhyloSeq used to analyze the data in these two papers? If not, are the findings different? I enjoyed reading about the different metagenomic properties of mice of different ages. More description along these lines in the introduction would help motivate the various preprocessing steps and give an overview of what is to come.\n\nPage 4 – it could be helpful to illustrate some of the properties of the software with numbers and data. For example, DADA2 has the ability to infer OTUs from pooled or unpooled data. Could the authors illustrate the number of RSVs found in the two scenarios?\n\nFigure 2 -- Could the authors explain on Page 4 what sequencing error rates are being inferred (i.e. transition and transversion errors)? Which parameters are inferred to come up with the solid black line? An explicit reference to Figure 2 in the text could help. Additionally, headers indicating Forward and Reverse reads in Figure 2 could help to distinguish the plots.\n\nPage 6 – Is the multiple sequence alignment feature capable of multiple methods? If so, do you advocate for using ClustalW for metagenomic data? Why?\n\nPage 6 -- Could the authors define what a GTR+G+I model is?\n\nI wonder if the authors could give some more guidance on how to construct the PhyloSeq object from scratch without relying on import functions. For example, I tried making a PhyloSeq object using Metaphlan2 output. Unfortunately I could not figure out how to merge Metaphlan2 biom files for each sample, and so I had to fiddle with Phyloseq for some time to manually create the OTU, sample, and taxa tables for multiple samples.",
"responses": [
{
"c_id": "2270",
"date": "07 Nov 2016",
"name": "Benjamin Callahan",
"role": "Author Response",
"response": "Thanks for your comments and suggestions. We made several improvements to the revised manuscript in response: We added an explicit reference to Figure 2 in the text. The error rates being estimated in each plot are indicated in the text just above each plot. A2C (A to C) is shorthand for an A being converted to a C by errors in the amplicon sequencing process. We changed the multiple-sequence alignment method in the workflow to that implemented by the DECIPHER package, largely because of its improved computational performance. We added a brief text description of GTR+G+I (Generalized time-reversible with Gamma rate variation). We did not expand our evaluation of RSVs vs. OTUs or pooled vs. unpooled inference. Performing such evaluations well is a significant undertaking and would take significant space to explain, and our primary purpose here is to demonstrate the many features of an R/Bioconductor amplicon analysis workflow. For evaluation of DADA2, our manuscript introducing the method examines differences between the output of DADA2 and OTU methods, and we are writing another manuscript that looks at performance on datasets with many samples. On the issue of pooled vs. unpooled results, the short answer is that we find both approaches work well. If just counting the number of output OTU sequences, pooled inference generally finds more because of its higher sensitivity to sequences that are found in many samples but are rare in each. Of note, we generally find these pooled-only sequences to be very highly enriched for contaminants (e.g. kit contaminants), which are expected to be distributed in just this way. We also did not expand much on the biological findings from this dataset in the initial paper (Stabilization of the murine gut microbiome following weaning, Schloss et al. 2012), as they were quite limited, essentially boiling down to the observation that gut samples early in life differed more on average than samples from later in life. 
However, the dataset has been used in a number of studies as an example dataset for testing new methods (as in Kozich et al. 2013) and that is the way in which we are using it here."
}
]
}
] | 1
|
https://f1000research.com/articles/5-1492
|
https://f1000research.com/articles/5-2122/v1
|
31 Aug 16
|
{
"type": "Software Tool Article",
"title": "A step-by-step workflow for low-level analysis of single-cell RNA-seq data",
"authors": [
"Aaron T.L. Lun",
"Davis J. McCarthy",
"John C. Marioni"
],
"abstract": "Single-cell RNA sequencing (scRNA-seq) is widely used to profile the transcriptome of individual cells. This provides biological resolution that cannot be matched by bulk RNA sequencing, at the cost of increased technical noise and data complexity. The differences between scRNA-seq and bulk RNA-seq data mean that the analysis of the former cannot be performed by recycling bioinformatics pipelines for the latter. Rather, dedicated single-cell methods are required at various steps to exploit the cellular resolution while accounting for technical noise. This article describes a computational workflow for low-level analyses of scRNA-seq data, based primarily on software packages from the open-source Bioconductor project. It covers basic steps including quality control, data exploration and normalization, as well as more complex procedures such as cell cycle phase assignment, identification of highly variable and correlated genes, clustering into subpopulations and marker gene detection. Analyses were demonstrated on gene-level count data from several publicly available data sets involving haematopoietic stem cells, brain-derived cells, T-helper cells and mouse embryonic stem cells. This will provide a range of usage scenarios from which readers can construct their own analysis pipelines.",
"keywords": [
"Single cell",
"RNA-seq",
"bioinformatics",
"Bioconductor",
"workflow"
],
"content": "Introduction\n\nSingle-cell RNA sequencing (scRNA-seq) is widely used to measure the genome-wide expression profile of individual cells. From each cell, mRNA is isolated and reverse transcribed to cDNA for high-throughput sequencing (Stegle et al., 2015). This can be done using microfluidics platforms like the Fluidigm C1 (Pollen et al., 2014), protocols based on microtiter plates like Smart-seq2 (Picelli et al., 2014), or droplet-based technologies like inDrop (Klein et al., 2015; Macosko et al., 2015). The number of reads mapped to each gene is then used to quantify its expression in each cell. Alternatively, unique molecular identifiers (UMIs) can be used to directly measure the number of transcript molecules for each gene (Islam et al., 2014). Count data are analyzed to detect highly variable genes (HVGs) that drive heterogeneity across cells in a population, to find correlations between genes and cellular phenotypes, or to identify new subpopulations via dimensionality reduction and clustering. This provides biological insights at a single-cell resolution that cannot be achieved with conventional bulk RNA sequencing of cell populations.\n\nStrategies for scRNA-seq data analysis differ markedly from those for bulk RNA-seq. One technical reason is that scRNA-seq data are much noisier than bulk data (Brennecke et al., 2013; Marinov et al., 2014). Reliable capture (i.e., conversion) of transcripts into cDNA for sequencing is difficult with the low quantity of RNA in a single cell. This increases the frequency of drop-out events where none of the transcripts for a gene are captured. Dedicated steps are required to deal with this noise during analysis, especially during quality control. 
In addition, scRNA-seq data can be used to study cell-to-cell heterogeneity, e.g., to identify new cell subtypes, to characterize differentiation processes, to assign cells into their cell cycle phases, or to identify HVGs driving variability across the population (Fan et al., 2016; Trapnell et al., 2014; Vallejos et al., 2015). This is simply not possible with bulk data, meaning that custom methods are required to perform these analyses.\n\nThis article describes a computational workflow for basic analysis of scRNA-seq data using software packages from the open-source Bioconductor project (Huber et al., 2015). Starting from a count matrix, this workflow contains the steps required for quality control to remove problematic cells; normalization of cell-specific biases, with and without spike-ins; cell cycle phase classification from gene expression data; data exploration to identify putative subpopulations; and finally, HVG and marker gene identification to prioritize interesting genes. The application of different steps in the workflow will be demonstrated on several public scRNA-seq datasets involving haematopoietic stem cells, brain-derived cells, T-helper cells and mouse embryonic stem cells, generated with a range of experimental protocols and platforms (Buettner et al., 2015; Kołodziejczyk et al., 2015; Wilson et al., 2015; Zeisel et al., 2015). The aim is to provide a variety of modular usage examples that can be applied to construct custom analysis pipelines.\n\n\nAnalysis of haematopoietic stem cells\n\nTo introduce most of the concepts of scRNA-seq data analysis, we use a relatively simple dataset from a study of haematopoietic stem cells (HSCs) (Wilson et al., 2015). Single mouse HSCs were isolated into microtiter plates and libraries were prepared for 96 cells using the Smart-seq2 protocol. A constant amount of spike-in RNA from the External RNA Controls Consortium (ERCC) was also added to each cell’s lysate prior to library preparation. 
High-throughput sequencing was performed and the expression of each gene was quantified by counting the total number of reads mapped to its exonic regions. Similarly, the quantity of each spike-in transcript was measured by counting the number of reads mapped to the spike-in reference sequences. Counts for all genes/transcripts in each cell were obtained from the NCBI Gene Expression Omnibus (GEO) as a supplementary file under the accession number GSE61533 (http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE61533).\n\nFor simplicity, we forego a description of the read processing steps required to generate the count matrix, i.e., read alignment and counting into features. These steps have been described in some detail elsewhere (Chen et al., 2016; Love et al., 2015), and are largely the same for bulk and single-cell data. The only additional consideration is that the spike-in information must be included in the pipeline. Typically, spike-in sequences can be included as additional FASTA files during genome index building prior to alignment, while genomic intervals for both spike-in transcripts and endogenous genes can be concatenated into a single GTF file prior to counting. For users favouring an R-based approach to read alignment and counting, we suggest using the methods in the Rsubread package (Liao et al., 2013; Liao et al., 2014). Alternatively, rapid quantification of expression with alignment-free methods such as kallisto (Bray et al., 2016) or Salmon (Patro et al., 2015) can be performed using the functions runKallisto and runSalmon in the scater package.\n\nThe first task is to load the count matrix into memory. In this case, some work is required to retrieve the data from the Gzip-compressed Excel format. Each row of the matrix represents an endogenous gene or a spike-in transcript, and each column represents a single HSC. 
For convenience, the counts for spike-in transcripts and endogenous genes are stored in a SCESet object from the scater package.\n\n\n\n\n\nWe identify the rows corresponding to ERCC spike-ins and mitochondrial genes. For this dataset, this information can be easily extracted from the row names. In general, though, identifying mitochondrial genes from standard identifiers like Ensembl requires extra annotation (this will be discussed later in more detail).\n\n\n\nFor each cell, we calculate quality control metrics such as the total number of counts or the proportion of counts in mitochondrial genes or spike-in transcripts. These are stored in the pData of the SCESet for future reference.\n\n\n\n\n\nWe need to explicitly indicate that the ERCC set is, in fact, a spike-in set. This is necessary as spike-ins require special treatment in some downstream steps such as variance estimation and normalization. We do this by supplying the name of the spike-in set to isSpike.\n\n\n\nLow-quality cells need to be removed to ensure that technical effects do not distort downstream analysis results. Two common measures of cell quality are the library size and the number of expressed features in each library. The library size is defined as the total sum of counts across all features, i.e., genes and spike-in transcripts. Cells with relatively small library sizes are considered to be of low quality as the RNA has not been efficiently captured (i.e., converted into cDNA and amplified) during library preparation. The number of expressed features in each cell is defined as the number of features with non-zero counts for that cell. Any cell with very few expressed genes is likely to be of poor quality as the diverse transcript population has not been successfully captured. The distributions of both of these metrics are shown in Figure 1.\n\n\n\nPicking a threshold for these metrics is not straightforward as their absolute values depend on the protocol and biological system. 
For example, sequencing to greater depth will lead to more reads, regardless of the quality of the cells. To obtain an adaptive threshold, we assume that most of the dataset consists of high-quality cells. We remove cells with log-library sizes that are more than 3 median absolute deviations (MADs) below the median log-library size. (A log-transformation improves resolution at small values, especially when the MAD of the raw values is comparable to or greater than the median.) We also remove cells where the log-transformed number of expressed genes is 3 MADs below the median.\n\n\n\nAnother measure of quality is the proportion of reads mapped to genes in the mitochondrial genome. High proportions are indicative of poor-quality cells (Ilicic et al., 2016; Islam et al., 2014), possibly because of increased apoptosis and/or loss of cytoplasmic RNA from lysed cells. Similar reasoning applies to the proportion of reads mapped to spike-in transcripts. The quantity of spike-in RNA added to each cell should be constant, which means that the proportion should increase upon loss of endogenous RNA in low-quality cells. The distributions of mitochondrial and spike-in proportions across all cells are shown in Figure 2.\n\n\n\nAgain, the ideal threshold for these proportions depends on the cell type and the experimental protocol. Cells with more mitochondria or more mitochondrial activity may naturally have larger mitochondrial proportions. Similarly, cells with more endogenous RNA or that are assayed with protocols using less spike-in RNA will have lower spike-in proportions. If we assume that most cells in the dataset are of high quality, then the threshold can be set to remove any large outliers from the distribution of proportions. We use the MAD-based definition of outliers to remove putative low-quality cells from the dataset.\n\n\n\nSubsetting by column will retain only the high-quality cells that pass each filter described above. 
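The MAD-based filters described above might look like the following, using scater's `isOutlier`; the pData column names follow the QC naming convention of the scater version used here and should be checked against the actual object:

```r
# Cells more than 3 MADs below the median (on the log scale) for library size
# or number of expressed features are flagged as low quality.
libsize.drop <- isOutlier(sce$total_counts, nmads=3, type="lower", log=TRUE)
feature.drop <- isOutlier(sce$total_features, nmads=3, type="lower", log=TRUE)

# Cells with unusually high mitochondrial or spike-in proportions are also flagged.
mito.drop <- isOutlier(sce$pct_counts_feature_controls_Mt, nmads=3, type="higher")
spike.drop <- isOutlier(sce$pct_counts_feature_controls_ERCC, nmads=3, type="higher")

# Retain only the cells passing all filters.
sce <- sce[, !(libsize.drop | feature.drop | mito.drop | spike.drop)]
```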
We examine the number of cells removed by each filter as well as the total number of retained cells. Removal of a substantial proportion of cells (> 10%) may be indicative of an overall issue with data quality. It may also reflect genuine biology in extreme cases (e.g., low numbers of expressed genes in erythrocytes) for which the filters described here are inappropriate.\n\n\n\n\n\nAn alternative approach to quality control is to perform a principal components analysis (PCA) based on the quality metrics for each cell, e.g., the total number of reads, the total number of features and the proportion of mitochondrial or spike-in reads. Outliers on a PCA plot may be indicative of low-quality cells that have aberrant technical properties compared to the (presumed) majority of high-quality cells. In Figure 3, no obvious outliers are present which is consistent with the removal of suspect cells in the preceding quality control steps.\n\n\n\nMethods like PCA-based outlier detection and support vector machines can provide more power to distinguish low-quality cells from high-quality counterparts (Ilicic et al., 2016). This is because they are able to detect subtle patterns across many quality metrics simultaneously. However, this comes at some cost to interpretability, as the reason for removing a given cell may not always be obvious. Thus, for this workflow, we will use the simple approach whereby each quality metric is considered separately. Users interested in the more sophisticated approaches are referred to the scater and cellity packages.\n\nLow-abundance genes are problematic as zero or near-zero counts do not contain enough information for reliable statistical inference (Bourgon et al., 2010). In addition, the discreteness of the counts may interfere with downstream statistical procedures, e.g., by compromising the accuracy of continuous approximations. Here, low-abundance genes are defined as those with an average count below a filter threshold of 1. 
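The PCA-based diagnostic can be produced directly from the per-cell quality metrics; `pca_data_input="pdata"` tells scater's `plotPCA` to use the pData columns rather than expression values (argument name per the scater version assumed here):

```r
# PCA on the per-cell QC metrics; outliers would indicate cells with
# aberrant technical properties (cf. Figure 3).
plotPCA(sce, pca_data_input="pdata")
```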
These genes are likely to be dominated by drop-out events (Brennecke et al., 2013), which limits their usefulness in later analyses. Removal of these genes mitigates discreteness and reduces the amount of computational work without major loss of information.\n\n(Figure 3 caption.) The first and second components are shown on each axis, along with the percentage of total variance explained by each component. Bars represent the coordinates of the cells on each axis.\n\n\n\n\n\nTo check whether the chosen threshold is suitable, we examine the distribution of log-means across all genes (Figure 4). The peak represents the bulk of moderately expressed genes while the rectangular component corresponds to lowly expressed genes. The filter threshold should cut the distribution at some point along the rectangular component to remove the majority of low-abundance genes.\n\n\n\n(Figure 4 caption.) The filter threshold is represented by the blue line.\n\nWe also look at the identities of the most highly expressed genes (Figure 5). This should generally be dominated by constitutively expressed transcripts, such as those for ribosomal or mitochondrial proteins. The presence of other classes of features may be cause for concern if they are not consistent with expected biology. For example, a top set containing many spike-in transcripts suggests that too much spike-in RNA was added during library preparation, while the absence of ribosomal proteins and/or the presence of their pseudogenes are indicative of suboptimal alignment.\n\n\n\n(Figure 5 caption.) For each feature, each bar represents the percentage assigned to that feature for a single cell, while the circle represents the average across all cells. Bars are coloured by the total number of expressed features in each cell, while circles are coloured according to whether the feature is labelled as a control feature.\n\nAn alternative approach to gene filtering is to select genes that have non-zero counts in at least n cells. 
This provides some more protection against genes with outlier expression patterns, i.e., strong expression in only one or two cells. Such outliers are typically uninteresting as they can arise from amplification artifacts that are not replicable across cells. (The exception is for studies involving rare cells where the outliers may be biologically relevant.) An example of this filtering approach is shown below for n set to 10.\n\n\n\n\n\nThe relationship between the number of expressing cells and the mean is shown in Figure 6. The two statistics tend to be well-correlated so filtering on either should give roughly similar results.\n\n\n\nSpike-in transcripts are highlighted in red.\n\nIn general, we prefer the mean-based filter as it tends to be less aggressive. A gene will be retained as long as it has sufficient expression in any subset of cells. Genes expressed in fewer cells require higher levels of expression in those cells to be retained, but this is not undesirable as it avoids selecting uninformative genes (with low expression in few cells) that contribute little to downstream analyses, e.g., HVG detection or clustering. In contrast, the “at least n” filter depends heavily on the choice of n. With n = 10, a gene expressed in a subset of 9 cells would be filtered out, regardless of the level of expression in those cells. This may result in the failure to detect rare subpopulations that are present at frequencies below n. While the mean-based filter will retain more outlier-driven genes, this can be handled by choosing methods that are robust to outliers in the downstream analyses.\n\nThus, we apply the mean-based filter to the data by subsetting the SCESet object as shown below. This removes all rows corresponding to endogenous genes or spike-in transcripts with abundances below the specified threshold.\n\n\n\nUsing the deconvolution method to deal with zero counts. 
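Both filtering strategies are easy to express directly on the count matrix; only the mean-based filter is actually applied here:

```r
# Mean-based filter: keep features with an average count of at least 1.
ave.counts <- rowMeans(counts(sce))
keep <- ave.counts >= 1

# Alternative "at least n" filter: non-zero counts in at least 10 cells.
numcells <- nexprs(sce, byrow=TRUE)
alt.keep <- numcells >= 10

# Apply the mean-based filter to endogenous genes and spike-ins alike.
sce <- sce[keep,]
```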
Read counts are subject to differences in capture efficiency and sequencing depth between cells (Stegle et al., 2015). Normalization is required to eliminate these cell-specific biases prior to downstream quantitative analyses. This is often done by assuming that most genes are not differentially expressed (DE) between cells. Any systematic difference in count size across the non-DE majority of genes between two cells is assumed to represent bias and is removed by scaling. More specifically, “size factors” are calculated that represent the extent to which counts should be scaled in each library.\n\nSize factors can be computed with several different approaches, e.g., using the estimateSizeFactorsForMatrix function in the DESeq2 package (Anders & Huber, 2010; Love et al., 2014), or with the calcNormFactors function (Robinson & Oshlack, 2010) in the edgeR package. However, single-cell data can be problematic for these bulk data-based methods due to the dominance of low and zero counts. To overcome this, we pool counts from many cells to increase the count size for accurate size factor estimation (Lun et al., 2016). Pool-based size factors are then “deconvolved” into cell-based factors for cell-specific normalization.\n\n\n\n\n\nIn this case, the size factors are tightly correlated with the library sizes for all cells (Figure 7). This suggests that the systematic differences between cells are primarily driven by differences in capture efficiency or sequencing depth. Any DE between cells would yield a non-linear trend between the total count and size factor, and/or increased scatter around the trend. This does not occur here as strong DE is unlikely to exist within a homogeneous population of cells.\n\nAxes are shown on a log-scale.\n\n\n\nComputing separate size factors for spike-in transcripts. Size factors computed from the counts for endogenous genes are usually not appropriate for normalizing the counts for spike-in transcripts. 
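The deconvolution step is a single call to scran's `computeSumFactors`; the diagnostic plot against library size (cf. Figure 7) can be drawn with base graphics:

```r
library(scran)
sce <- computeSumFactors(sce)   # pool-and-deconvolve cell-specific size factors
summary(sizeFactors(sce))

# Size factors against library sizes, both on a log scale.
plot(sizeFactors(sce), sce$total_counts/1e6, log="xy",
     xlab="Size factor", ylab="Library size (millions)")
```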
Consider an experiment without library quantification, i.e., the amount of cDNA from each library is not equalized prior to pooling and multiplexed sequencing. Here, cells containing more RNA have greater counts for endogenous genes and thus larger size factors to scale down those counts. However, the same amount of spike-in RNA is added to each cell during library preparation. This means that the counts for spike-in transcripts are not subject to the effects of RNA content. Attempting to normalize the spike-in counts with the gene-based size factors will lead to over-normalization and incorrect quantification of expression. Similar reasoning applies in cases where library quantification is performed. For a constant total amount of cDNA, any increases in endogenous RNA content will suppress the coverage of spike-in transcripts. As a result, the bias in the spike-in counts will be opposite to that captured by the gene-based size factor.\n\nTo ensure normalization is performed correctly, we compute a separate set of size factors for the spike-in set. For each cell, the spike-in-specific size factor is defined as the total count across all transcripts in the spike-in set. This assumes that none of the spike-in transcripts are differentially expressed, which is reasonable given that the same amount and composition of spike-in RNA should have been added to each cell. (See below for a more detailed discussion on spike-in normalization.) These size factors are stored in a separate field of the SCESet object by setting general.use=FALSE in computeSpikeFactors. This ensures that they will only be used with the spike-in transcripts but not the endogenous genes.\n\n\n\nApplying the size factors to normalize gene expression. The count data are used to compute normalized log-expression values for use in downstream analyses. 
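As described above, the spike-in size factors are computed from the total spike-in count of each cell, with `general.use=FALSE` restricting their use to the spike-in transcripts:

```r
# Spike-in-specific size factors; general.use=FALSE keeps them away from
# the endogenous genes during normalization.
sce <- computeSpikeFactors(sce, type="ERCC", general.use=FALSE)
```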
Each value is defined as the log-ratio of each count to the size factor for the corresponding cell, after adding a prior count of 1 to avoid undefined values at zero counts. Division by the size factor ensures that any cell-specific biases are removed. If spike-in-specific size factors are present in sce, they will be automatically applied to normalize the spike-in transcripts separately from the endogenous genes.\n\n\n\nThe log-transformation provides some measure of variance stabilization (Law et al., 2014), so that high-abundance genes with large variances do not dominate downstream analyses. The computed values are stored as an exprs matrix in addition to the other assay elements.\n\nWe check whether there are technical factors that contribute substantially to the heterogeneity of gene expression. If so, the factor may need to be regressed out to ensure that it does not inflate the variances or introduce spurious correlations. For this dataset, the simple experimental design means that there are no plate or batch effects to examine. Instead, we use the (log-transformed) total count for the spike-in transcripts as a proxy for the relative bias in each sample. This bias is purely technical in origin, given that the same amount of spike-in RNA should have been added to each cell. Thus, any association of gene expression with this factor is not biologically interesting and should be removed.\n\nFor each gene, we calculate the percentage of the variance of the expression values that is explained by the spike-in totals (Figure 8). The percentages are generally small (1–3%), indicating that the expression of most genes does not associate with this factor. This result is consistent with successful removal of cell-specific biases by scaling normalization. 
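These two steps can be sketched as below; the variable name passed to `plotExplanatoryVariables` follows the scater QC naming convention assumed here and should be checked against the pData of the actual object:

```r
# Compute normalized log-expression values (stored as the exprs matrix);
# spike-in-specific size factors are applied automatically to the spike-ins.
sce <- normalize(sce)

# Variance explained per gene by the (log) total spike-in count (cf. Figure 8).
plotExplanatoryVariables(sce, variables="log10_counts_feature_controls_ERCC")
```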
Thus, the spike-in total does not need to be explicitly modelled in our downstream analyses.\n\nFor each gene, the percentage of the variance of the normalized log-expression values across cells that is explained by each factor is calculated. Each curve corresponds to one factor and represents the distribution of percentages across all genes.\n\n\n\nNote that the use of the spike-in total as an accurate proxy for the relative technical bias assumes that no library quantification is performed. Otherwise, the coverage of the spike-in transcripts would be dependent on the total amount of endogenous RNA in each cell. This means that the spike-in totals could be confounded with genuine biological effects associated with changes in RNA content.\n\nWe use the prediction method described by Scialdone et al. (2015) to classify cells into cell cycle phases based on the gene expression data. Using a training dataset, the sign of the difference in expression between two genes was computed for each pair of genes. Pairs with changes in the sign across cell cycle phases were chosen as markers. Cells in a test dataset can then be classified into the appropriate phase, based on whether the observed sign for each marker pair is consistent with one phase or another. This approach is implemented in the cyclone function using a pre-trained set of marker pairs for mouse data. The result of phase assignment for each cell in the HSC dataset is shown in Figure 9. (Some additional work is necessary to match the gene symbols in the data to the Ensembl annotation in the pre-trained marker set.)\n\n\n\nCells are classified as being in G1 phase if the G1 score is above 0.5 and greater than the G2/M score; in G2/M phase if the G2/M score is above 0.5 and greater than the G1 score; and in S phase if neither score is above 0.5. Here, the vast majority of cells are classified as being in G1 phase. We will focus on these cells in the downstream analysis. 
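The phase assignment might look like the following sketch; the symbol-to-Ensembl mapping via org.Mm.eg.db is the "additional work" mentioned above, and the G1 cutoffs implement the classification rule just described:

```r
library(scran)
# Pre-trained mouse marker pairs shipped with scran.
mm.pairs <- readRDS(system.file("exdata", "mouse_cycle_markers.rds",
                                package="scran"))

# Map the gene symbols in the data to the Ensembl IDs used by the classifier.
library(org.Mm.eg.db)
anno <- select(org.Mm.eg.db, keys=rownames(sce), keytype="SYMBOL",
               columns="ENSEMBL")
ensembl <- anno$ENSEMBL[match(rownames(sce), anno$SYMBOL)]

assignments <- cyclone(sce, mm.pairs, gene.names=ensembl)

# Keep cells in G1: G1 score above 0.5 and greater than the G2/M score.
is.G1 <- assignments$scores$G1 > 0.5 &
         assignments$scores$G1 > assignments$scores$G2M
sce <- sce[, is.G1]
```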
Cells in other phases are removed to avoid potential confounding effects from cell cycle-induced differences. Alternatively, if a non-negligible number of cells are in other phases, we can use the assigned phase as a blocking factor in downstream analyses. This protects against cell cycle effects without discarding information.\n\n\n\nPre-trained classifiers are available in scran for human and mouse data. While the mouse classifier used here was trained on data from embryonic stem cells, it is still accurate for other cell types (Scialdone et al., 2015). This may be due to the conservation of the transcriptional program associated with the cell cycle (Bertoli et al., 2013; Conboy et al., 2007). The pair-based method is also a non-parametric procedure that is robust to most technical differences between datasets. However, it will be less accurate for data that are substantially different from those used in the training set. In such cases, users can construct a custom classifier from their own training data using the sandbag function. This will be necessary for other model organisms where pre-trained classifiers are not available.\n\nWe identify HVGs to focus on the genes that are driving heterogeneity across the population of cells. This requires estimation of the variance in expression for each gene, followed by decomposition of the variance into biological and technical components. HVGs are then identified as those genes with the highest biological components. This avoids prioritizing genes that are highly variable due to technical factors such as sampling noise during RNA capture and library preparation.\n\nIdeally, the technical component would be estimated by fitting a mean-variance trend to the spike-in transcripts using the trendVar function. Recall that the same set of spike-ins was added in the same quantity to each cell. 
This means that the spike-in transcripts should exhibit no biological variability, i.e., any variance in their counts should be technical in origin. Given the mean abundance of a gene, the fitted value of the trend can be used as an estimate of the technical component for that gene. The biological component of the variance can then be calculated by subtracting the technical component from the total variance of each gene with the decomposeVar function.\n\nIn practice, this strategy is compromised by the small number of spike-in transcripts, the uneven distribution of their abundances and (for low numbers of cells) the imprecision of their variance estimates. This makes it difficult to accurately fit a complex mean-dependent trend to the spike-in variances. An alternative approach is to fit the trend to the variance estimates of the endogenous genes, using the use.spikes=FALSE setting as shown below. This assumes that the majority of genes are not variably expressed, such that the technical component dominates the total variance for those genes. The fitted value of the trend is then used as an estimate of the technical component. Obviously, this is the only approach that can be used if no spike-ins were added in the experiment.\n\n\n\nWe assess the suitability of the trend fitted to the endogenous variances by examining whether it is consistent with the spike-in variances (Figure 10). The trend passes through or close to most of the spike-in variances, indicating that our assumption (that most genes have low levels of biological variability) is valid. This strategy exploits the large number of endogenous genes to obtain a stable trend, with the spike-in transcripts used as diagnostic features rather than in the trend fitting itself. However, if our assumption did not hold, we would instead fit the trend directly to the spike-in variances with the default use.spikes=TRUE. 
This sacrifices stability to reduce systematic errors in the estimate of the biological component for each gene.\n\n\n\nThe blue line represents the mean-dependent trend fitted to the variances of the endogenous genes. Variance estimates for spike-in transcripts are highlighted in red.\n\nHVGs are defined as genes with biological components that are significantly greater than zero at a false discovery rate (FDR) of 5%. These genes are interesting as they drive differences in the expression profiles, and should be prioritized for further investigation. In addition, we only consider a gene to be a HVG if it has a biological component greater than or equal to 0.5. For transformed expression values on the log2 scale, this means that the average difference in true expression between any two cells will be at least 2-fold. (This reasoning assumes that the true log-expression values are Normally distributed with variance of 0.5. The root-mean-square of the difference between two values is treated as the average log2-fold change between cells and is equal to unity.) We rank the results by the biological component to focus on genes with larger biological variability.\n\n\n\n\n\n\n\n\n\nWe recommend checking the distribution of expression values for the top HVGs to ensure that the variance estimate is not being dominated by one or two outlier cells (Figure 11).\n\n\n\nEach point represents the log-expression value in a single cell.\n\nThere are many other strategies for defining HVGs, e.g., by using the coefficient of variation (Brennecke et al., 2013; Kim et al., 2015; Kołodziejczyk et al., 2015), with the dispersion parameter in the negative binomial distribution (McCarthy et al., 2012), or as a proportion of total variability (Vallejos et al., 2015). Some of these methods are available in scran – for example, see DM or technicalCV2 for calculations based on the coefficient of variation. 
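The variance decomposition and HVG selection described above can be sketched with scran's `trendVar` and `decomposeVar` (default trend-fitting parameters are assumed):

```r
# Fit the mean-variance trend to the endogenous genes, then decompose each
# gene's total variance into biological and technical components.
var.fit <- trendVar(sce, use.spikes=FALSE)
var.out <- decomposeVar(sce, var.fit)

# HVGs: significantly non-zero biological component at 5% FDR, with a
# biological component of at least 0.5; ranked by decreasing component.
hvg.out <- var.out[which(var.out$FDR <= 0.05 & var.out$bio >= 0.5),]
hvg.out <- hvg.out[order(hvg.out$bio, decreasing=TRUE),]
head(hvg.out)
```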
Here, we use the variance of the log-expression values because the log-transformation protects against genes with strong expression in only one or two cells. This ensures that the set of top HVGs is not dominated by genes with (mostly uninteresting) outlier expression patterns.\n\nAnother useful procedure is to identify the HVGs that are highly correlated with one another. This distinguishes between HVGs caused by random noise and those involved in driving systematic differences between subpopulations. Gene pairs with significantly large positive or negative values for Spearman’s rho are identified using the correlatePairs function. We only apply this function to the set of HVGs – doing so for all possible gene pairs would require too much computational time and may prioritize uninteresting genes that have strong correlations but low variance, e.g., tightly co-regulated house-keeping genes.\n\n\n\n\n\nThe significance of each correlation is determined using a permutation test. For each pair of genes, the null hypothesis is that the expression profiles of two genes are independent. Shuffling the profiles and recalculating the correlation will yield a null distribution that is used to obtain a p-value for each observed correlation value (Phipson & Smyth, 2010). Correction for multiple testing across many gene pairs is performed by controlling the FDR at 5%. Correlated gene pairs can be directly used for experimental validation with orthogonal techniques (e.g., fluorescence-activated cell sorting, immunohistochemistry or RNA fluorescence in situ hybridization) to verify that these expression patterns are genuinely present across the cell population.\n\n\n\n\n\nLarger sets of correlated genes are assembled by treating genes as nodes in a graph and each pair of genes with significantly large correlations as an edge. In particular, an undirected graph is constructed using methods in the RBGL package. 
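The correlation test on the HVG set might look like this; `hvg.names` is a placeholder for the row names of the chosen HVGs, and the seed is set because the null distribution is generated by permutation:

```r
# hvg.names: character vector of the selected HVGs (placeholder name).
set.seed(100)
var.cor <- correlatePairs(sce, subset.row=hvg.names)
sig.cor <- var.cor$FDR <= 0.05   # pairs significant at 5% FDR
summary(sig.cor)
```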
Highly connected subgraphs are then identified and defined as gene sets. This provides a convenient summary of the pairwise correlations between genes.\n\n\n\n\n\nSignificant correlations provide evidence for substructure in the dataset, i.e., subpopulations of cells with systematic differences in their expression profiles. The number of significantly correlated HVG pairs represents the strength of the substructure. If many pairs were significant, this would indicate that the subpopulations were clearly defined and distinct from one another. For this particular dataset, a relatively low number of HVGs exhibit significant correlations. This suggests that any substructure in the data will be modest, which is expected given that rigorous selection was performed to obtain a homogeneous population of HSCs (Wilson et al., 2015).\n\nWe visualize the expression profiles of the correlated HVGs with a heatmap (Figure 12). All expression values are mean-centred for each gene to highlight the relative differences in expression between cells. If any subpopulations were present, they would manifest as rectangular “blocks” in the heatmap, corresponding to sets of genes that are systematically up- or down-regulated in specific groups of cells. This is not observed in Figure 12, consistent with the lack of strong substructure. There may be a subpopulation of Fos and Jun-negative cells, but it is poorly defined given the small numbers of cells and genes involved.\n\n\n\nDendrograms are formed by hierarchical clustering on the Euclidean distances between genes (row) or cells (column).\n\nWe also apply dimensionality reduction techniques to visualize the relationships between cells. This is done by constructing a PCA plot from the normalized log-expression values of the correlated HVGs (Figure 13). Cells with similar expression profiles should be located close together in the plot, while dissimilar cells should be far apart. 
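A sketch of the graph construction and heatmap, assuming `var.cor`/`sig.cor` hold the output of the pairwise correlation test as above; `ftM2graphNEL` comes from the graph package (loaded with RBGL) and `heatmap.2` from gplots:

```r
# Undirected graph with significantly correlated pairs as edges,
# then extraction of highly connected subgraphs as gene sets.
library(RBGL)
g <- ftM2graphNEL(cbind(var.cor$gene1, var.cor$gene2)[sig.cor,],
                  W=NULL, V=NULL, edgemode="undirected")
cl <- highlyConnSG(g)$clusters

# Heatmap of mean-centred log-expression for the correlated HVGs (cf. Figure 12).
library(gplots)
chosen <- unique(c(var.cor$gene1[sig.cor], var.cor$gene2[sig.cor]))
norm.exprs <- exprs(sce)[chosen,,drop=FALSE]
heat.out <- heatmap.2(norm.exprs - rowMeans(norm.exprs),
                      col=bluered, symbreaks=TRUE, trace="none")
```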
We only use the correlated HVGs in plotPCA because any substructure should be most pronounced in the expression profiles of these genes. Even so, no clear separation of cells into distinct subpopulations is observed.\n\n\n\nFirst and second components are shown, along with the percentage of variance explained. Bars represent the coordinates of the cells on each axis. Each cell is coloured according to its total number of expressed features.\n\nAnother widely used approach is the t-stochastic neighbour embedding (t-SNE) method (Van der Maaten & Hinton, 2008). t-SNE tends to work better than PCA for separating cells in more diverse populations, at the cost of more computational effort and complexity. In particular, t-SNE is a stochastic method, so users should run the algorithm several times to ensure that the results are representative, and then set a seed to ensure that the chosen results are reproducible. It is also advisable to test different settings of the “perplexity” parameter as this will affect the distribution of points in the low-dimensional space. This is demonstrated below in Figure 14, though no consistent substructure is observed in all plots.\n\n\n\nIn each plot, each point represents a cell in the HSC dataset. Bars represent the coordinates of the cells on each axis. Each cell is coloured according to its total number of expressed features.\n\nThere are many other dimensionality reduction techniques that we do not consider here but could also be used, e.g., multidimensional scaling, diffusion maps. These have their own advantages and disadvantages – for example, diffusion maps (see plotDiffusionMap) place cells along a continuous trajectory and are suited for visualizing graduated processes like differentiation (Angerer et al., 2016). For each visualization method, additional cell-specific information can be incorporated into the colour, size or shape of each point. 
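Both visualizations can be sketched as follows, with `chosen` again a placeholder for the names of the correlated HVGs; the seed and `perplexity` value are illustrative choices:

```r
# PCA restricted to the correlated HVGs (cf. Figure 13).
plotPCA(sce, exprs_values="exprs", feature_set=chosen,
        colour_by="total_features")

# t-SNE is stochastic: set a seed and try several perplexity values (cf. Figure 14).
set.seed(100)
plotTSNE(sce, exprs_values="exprs", feature_set=chosen,
         perplexity=10, colour_by="total_features")
```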
Here, cells are coloured by the total number of expressed features to demonstrate that this metric does not drive any systematic differences across the population. The selectorPlot function from scran can also be used to interactively select groups of cells in two-dimensional space. This facilitates data exploration as visually identified subpopulations can be directly selected for further examination.\n\nFinally, putative subpopulations can be computationally defined by cutting the dendrogram in heat.out$colDendrogram with cutree to form clusters. We do not attempt this here as the substructure is too weak for reliable clustering. In fact, users should generally treat clustering results with some caution. If the differences between cells are subtle, the assignment of cells into clusters may not be robust. Moreover, different algorithms can yield substantially different clusters by focusing on different aspects of the data. Experimental validation of the clusters is critical to ensure that the putative subpopulations actually exist.\n\nOnce the basic analysis is completed, it is often useful to save the SCESet object to file with the saveRDS function. The object can then be easily restored into new R sessions using the readRDS function. This allows further work to be conducted without having to repeat all of the processing steps described above.\n\n\n\nA variety of methods are available to perform more complex analyses on the processed expression data. For example, cells can be ordered in pseudotime (e.g., for progress along a differentiation pathway) with monocle (Trapnell et al., 2014) or TSCAN (Ji & Ji, 2016); cell-state hierarchies can be characterized with the sincell package (Julia et al., 2015); and oscillatory behaviour can be identified using Oscope (Leng et al., 2015). 
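Saving and restoring the object is a pair of one-liners (the filename is illustrative):

```r
saveRDS(sce, file="hsc_data.rds")   # save the processed SCESet to disk
sce <- readRDS("hsc_data.rds")      # restore it in a new R session
```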
HVGs can be used in gene set enrichment analyses to identify biological pathways and processes with heterogeneous activity, using packages designed for bulk data like topGO or with dedicated single-cell methods like scde (Fan et al., 2016). Full descriptions of these analyses are outside the scope of this workflow, so interested users are advised to consult the relevant documentation.\n\n\nAnalysis of cell types in the brain\n\nWe proceed to a more heterogeneous dataset from a study of cell types in the mouse brain (Zeisel et al., 2015). This contains approximately 3000 cells of varying types such as oligodendrocytes, microglia and neurons. Individual cells were isolated using the Fluidigm C1 microfluidics system and library preparation was performed on each cell using a UMI-based protocol. After sequencing, expression was quantified by counting the number of UMIs mapped to each gene. Count data for all endogenous genes, mitochondrial genes and spike-in transcripts were obtained from http://linnarssonlab.org/cortex.\n\nThe count data are distributed across several files, so some work is necessary to consolidate them into a single matrix. We define a simple utility function for loading data in from each file. (We stress that this function is only relevant to the current dataset, and should not be used for other datasets. This kind of effort is generally not required if all of the counts are in a single file and separated from the metadata.)\n\n\n\nUsing this function, we read in the counts for the endogenous genes, ERCC spike-ins and mitochondrial genes.\n\n\n\nWe also need to rearrange the columns for the mitochondrial data, as the order is not consistent with the other files.\n\n\n\nThe counts are then combined into a single matrix for constructing a SCESet object. 
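The utility function might be sketched as below. The row offsets (10 metadata rows, counts starting after a header row) and the exact file names are assumptions specific to the layout of the files at http://linnarssonlab.org/cortex and should be checked against the downloaded data:

```r
# Split each file into its metadata header and count matrix (offsets assumed).
readFormat <- function(infile) {
    metadata <- read.delim(infile, stringsAsFactors=FALSE, header=FALSE, nrow=10)
    counts <- read.delim(infile, stringsAsFactors=FALSE, header=FALSE,
                         row.names=1, skip=11)
    list(metadata=metadata, counts=as.matrix(counts))
}

endo.data <- readFormat("expression_mRNA_17-Aug-2014.txt")
spike.data <- readFormat("expression_spikes_17-Aug-2014.txt")
mito.data <- readFormat("expression_mito_17-Aug-2014.txt")
```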
For convenience, metadata for all cells are stored in the same object for later access.\n\n\n\n\n\nWe also add annotation identifying rows that correspond to each class of features.\n\n\n\nThe original authors of the study have already removed low-quality cells prior to data publication. Nonetheless, we compute some quality control metrics to check whether the remaining cells are satisfactory.\n\n\n\nWe examine the distribution of library sizes and numbers of expressed genes across cells (Figure 15).\n\n\n\nWe also examine the distribution of the proportions of UMIs assigned to mitochondrial genes or spike-in transcripts (Figure 16). The spike-in proportions here are more variable than in the HSC dataset. This may reflect a greater variability in the total amount of endogenous RNA per cell when many cell types are present.\n\n\n\nWe remove small outliers in Figure 15 and large outliers in Figure 16, using a MAD-based threshold as previously described.\n\n\n\nRemoval of low-quality cells is then performed by combining the filters for all of the metrics. The vast majority of cells are retained, which suggests that the original quality control procedures were generally adequate.\n\n\n\n\n\nLow-abundance genes are removed by applying a simple mean-based filter. We use a lower threshold for UMI counts compared to that used for read counts. This is because the number of transcript molecules will always be lower than the number of reads generated from such molecules. While some information and power will be lost due to the decrease in the size of the counts, this is mitigated by a concomitant reduction in the variability of the counts. Specifically, the use of UMIs eliminates technical noise due to amplification biases (Islam et al., 2014).\n\n\n\nFigure 17 suggests that our choice of threshold is appropriate. 
The filter removes the bulk of lowly expressed genes while preserving the peak of moderately expressed genes.\n\n\n\nThe filter threshold is represented by the blue line.\n\nThe mean-based filter is applied to the dataset by subsetting sce as previously described. Despite the reduced threshold, the number of retained genes is lower than that in the HSC dataset, simply because the library sizes are much smaller with UMI counts.\n\n\n\n\n\nSome datasets also contain strong heterogeneity in mitochondrial RNA content, possibly due to differences in mitochondrial copy number or activity between cell types. This heterogeneity will cause mitochondrial genes to dominate the top set of results, e.g., for identification of correlated HVGs. However, these genes are largely uninteresting given that most studies focus on nuclear regulation. As such, we filter them out prior to further analysis. Other candidates for removal include pseudogenes or ribosome-associated genes, which might not be relevant for characterising cell types but can still interfere with the interpretation of the results.\n\n\n\nNormalization of cell-specific biases is performed using the deconvolution method in the computeSumFactors function. Here, we cluster similar cells together and normalize the cells in each cluster using the deconvolution method. This improves normalization accuracy by reducing the number of DE genes between cells in the same cluster. Scaling is then performed to ensure that size factors of cells in different clusters are comparable.\n\n\n\nCompared to the HSC analysis, more scatter is observed around the trend between the total count and size factor for each cell (Figure 18). This is consistent with an increased amount of DE between cells of different types, which compromises the accuracy of library size normalization (Robinson & Oshlack, 2010). 
In contrast, the size factors are estimated based on median ratios and are more robust to the presence of DE between cells.\n\n\n\nAxes are shown on a log-scale.\n\nWe also compute size factors specific to the spike-in set, as previously described.\n\n\n\nFinally, normalized log-expression values are computed for each endogenous gene or spike-in transcript using the appropriate size factors.\n\n\n\nLarger experiments contain more technical factors that need to be investigated. In this dataset, factors include the sex of the animal from which the cells were extracted, the age of the animal, the tissue of origin for each cell, and the total spike-in count in each cell. Figure 19 shows that the tissue of origin explains a substantial proportion of the variance for a subset of genes. This is probably because each tissue contains a different composition of cell types, leading to systematic differences in gene expression between tissues. The other factors explain only a small proportion of the variance for most genes and do not need to be incorporated into our downstream analyses.\n\n\n\nNonetheless, we demonstrate how to account for uninteresting technical factors by using sex as an example. We set up a design matrix with the sex of the animal as the explanatory factor for each cell. This ensures that any sex-specific changes in expression will be modelled in our downstream analyses. We do not block on the tissue of origin, despite the fact that it explains more of the variance than sex in Figure 19. This is because the tissue factor is likely to be associated with genuine differences between cell types, so including it in the model might regress out interesting biological effects.\n\n\n\nFor each gene, the percentage of the variance of the normalized log-expression values that is explained by the (log-transformed) total spike-in counts, the sex or age of the mouse, or the tissue of origin is calculated. 
Each curve corresponds to one factor and represents the distribution of percentages across all genes.\n\nOther relevant factors include the chip or plate on which the cells were processed and the batch in which the libraries were sequenced. Blocking on these factors may be necessary to account for batch effects that are often observed in scRNA-seq data (Hicks et al., 2015; Tung et al., 2016).\n\nWe attempt to classify cells into cell cycle phases using the cyclone method. However, examination of Figure 20 indicates that many of the G1 and G2/M scores are ambiguous. This highlights the risks of using a classifier on a dataset that is substantially different to the training set. In particular, the classifier was trained on C1 SMARTer data (Scialdone et al., 2015) and accounts for the biases in that protocol. The brain dataset uses UMI counts, which have an entirely different set of biases, e.g., 3’-end coverage only, no length bias, no amplification noise. These new biases and the absence of expected biases will confound the classifier and interfere with accurate calling of cell cycle phase.\n\n\n\nGiven the lack of definitive classification, we will not perform any processing of the dataset by cell cycle phase. This is unlikely to be problematic for this analysis, as the cell cycle effect will be relatively subtle compared to the obvious differences between cell types in a diverse population. Thus, the former is unlikely to distort the conclusions regarding the latter. Indeed, classification may be irrelevant for many neuronal cell types which are postmitotic and do not belong in any phase of the cell cycle.\n\nWe identify HVGs that may be involved in driving population heterogeneity. This is done by fitting a trend to the technical variances for the spike-in transcripts. 
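As an aside, the trend-and-subtract idea can be sketched concisely. The stdlib-only Python sketch below substitutes an ordinary least-squares line for scran's more flexible trend fit, applies it to invented spike-in log-expression values, and subtracts the fitted technical variance from a toy gene's total variance:

```python
def mean_var(rows):
    """Per-row mean and (unbiased) variance of expression values."""
    out = []
    for r in rows:
        m = sum(r) / len(r)
        out.append((m, sum((x - m) ** 2 for x in r) / (len(r) - 1)))
    return out

def fit_line(points):
    """Ordinary least squares y = a + b*x; a crude stand-in for scran's trend."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    b = (sum((p[0] - mx) * (p[1] - my) for p in points)
         / sum((p[0] - mx) ** 2 for p in points))
    return my - b * mx, b

spike_rows = [[1, 2, 1, 2], [4, 6, 4, 6], [8, 12, 8, 12]]  # toy spike-ins
a, b = fit_line(mean_var(spike_rows))

gene = [0, 10, 0, 10]                       # a highly variable toy gene
m, total = mean_var([gene])[0]
bio = total - (a + b * m)                   # biological variance component
```

Genes with large positive `bio` are the HVG candidates; the real trend fit is far more careful about the mean-variance shape.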
We then compute the biological component of the variance for each endogenous gene by subtracting the fitted value of the trend from the total variance.\n\n\n\nFigure 21 suggests that the trend is fitted accurately to the technical variances. Errors in fitting are negligible due to the precision of the variance estimates in a large dataset containing thousands of cells. The technical variances are also much smaller than those in the HSC dataset. This is due to the use of UMIs which reduces the noise caused by variable PCR amplification. Furthermore, the spike-in trend is consistently lower than the variances of the endogenous genes. This reflects the heterogeneity in gene expression across cells of different types. It also means the previous strategy of fitting a trend to the endogenous variances would not be appropriate here (or necessary, given the quality of the spike-in trend).\n\n\n\nThe red line represents the mean-dependent trend in the technical variance of the spike-in transcripts (also highlighted as red points).\n\nHVGs are identified as genes with large positive biological components. These are saved to file for future reference. Note that some of the p-values are reported as zero due to numerical imprecision.\n\n\n\n\n\n\n\n\n\nAgain, we check the distribution of expression values for the top 10 HVGs to ensure that they are not being driven by outliers (Figure 22). Some tweaking of the plotExpression parameters is necessary to visualize a large number of cells.\n\n\n\nFor each gene, each point represents the log-expression value for an individual cell.\n\nTo identify genes involved in defining subpopulations, the set of HVGs is tested for significant pairwise correlations. Given the size of the set, we only use the top 500 HVGs to reduce computational work. Here, the number of significantly correlated pairs is much higher than in the HSC dataset, indicating that strong substructure is present. 
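The pairwise correlation test can be sketched as Spearman's rho with a permutation null (scran's correlatePairs additionally handles ties and whole banks of genes; the data below are invented). The (hits + 1)/(iterations + 1) estimator follows Phipson & Smyth (2010), so that p-values are never exactly zero:

```python
import random

def rank(v):
    """Ranks starting at 1; assumes no ties, which holds for the toy data."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    for pos, i in enumerate(order):
        r[i] = pos + 1.0
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

def perm_pvalue(x, y, iters=1000, seed=1):
    """Two-sided permutation p-value, never exactly zero (Phipson & Smyth, 2010)."""
    rng = random.Random(seed)
    obs = abs(spearman(x, y))
    hits = sum(abs(spearman(x, rng.sample(y, len(y)))) >= obs
               for _ in range(iters))
    return (hits + 1) / (iters + 1)

x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2, 1, 4, 3, 6, 5, 8, 7]    # strongly, if noisily, correlated with x
p = perm_pvalue(x, y)
```

Significant pairs at a chosen FDR then define the correlated HVG set used downstream.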
These results are also saved to file for use in designing validation experiments.\n\n\n\n\n\n\n\n\n\nWe first remove the sex effect using the removeBatchEffect function from the limma package (Ritchie et al., 2015). This ensures that any sex-specific differences will not dominate the visualization of the expression profiles. (However, if an analysis method can accept a design matrix, blocking on nuisance factors in the design matrix is preferable to manipulating the expression values with removeBatchEffect. This is because the latter does not account for the loss of residual degrees of freedom, nor the uncertainty of estimation of the blocking factor terms.) We store these sex-corrected expression values in the norm_exprs field of the SCESet object for later use.\n\n\n\nWe perform dimensionality reduction on the correlated HVGs to check if there is any substructure. Cells separate into clear clusters in the t-SNE plot (Figure 23), corresponding to distinct subpopulations. This is consistent with the presence of multiple cell types in the diverse brain population.\n\n\n\nEach point represents a cell and is coloured according to its expression of the top HVG (left) or Mog (right).\n\nThe PCA plot is less effective at separating cells into many different clusters (Figure 24). This is because the first two principal components are driven by strong differences between specific subpopulations, which reduces the resolution of more subtle differences between some of the other subpopulations. Nonetheless, some substructure is still visible.\n\n\n\nEach point represents a cell and is coloured according to its expression of the top HVG (left) or Mog (right).\n\nFor both methods, we colour each cell based on the expression of a particular gene. This is a useful strategy for visualizing changes in expression across the lower-dimensional space. It can also be used to characterise each cluster if the selected genes are known markers for particular cell types. 
For example, Mog can be used to identify clusters corresponding to oligodendrocytes.\n\nThe normalized and sex-adjusted log-expression values for correlated HVGs are used to cluster cells into putative subpopulations. Specifically, we perform hierarchical clustering on the Euclidean distances between cells, using Ward’s criterion to minimize the total variance within each cluster. This yields a dendrogram that groups together cells with similar expression patterns across the chosen genes. An alternative approach is to cluster on a matrix of distances derived from correlations (e.g., as in quickCluster). This is more robust to noise and normalization errors, but is also less sensitive to subtle changes in the expression profiles.\n\n\n\nClusters are explicitly defined by applying a dynamic tree cut (Langfelder et al., 2008) to the dendrogram. This exploits the shape of the branches in the dendrogram to refine the cluster definitions, and is more appropriate than cutree for complex dendrograms. Greater control of the empirical clusters can be obtained by manually specifying cutHeight in cutreeDynamic.\n\n\n\nFigure 25 contains a clear block-like pattern, representing systematic differences between clusters of cells with distinct expression profiles. This is consistent with the presence of well-defined subpopulations that were previously observed in the dimensionality reduction plots.\n\n\n\nDendrograms are formed by hierarchical clustering on the Euclidean distances between genes (row) or cells (column). Column colours represent the cluster to which each cell is assigned after a dynamic tree cut.\n\nThis heatmap can be stored at a greater resolution for detailed inspection later.\n\n\n\nOnce putative subpopulations are identified, we can identify marker genes for specific subpopulations of interest. This is done by identifying genes that are consistently DE in one subpopulation compared to the others. 
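As an aside, the clustering strategy above can be sketched without any R machinery. The naive O(n^3) agglomerative procedure below applies Ward's merge criterion directly to invented two-gene expression profiles; it stands in for hclust plus a simple tree cut, not for the dynamic tree cut itself:

```python
def ward_cluster(points, k):
    """Repeatedly merge the pair of clusters whose merge least increases
    total within-cluster variance (Ward's criterion), until k remain."""
    clusters = [[p] for p in points]

    def centroid(c):
        return [sum(x[i] for x in c) / len(c) for i in range(len(c[0]))]

    def ward_cost(a, b):
        ca, cb = centroid(a), centroid(b)
        d2 = sum((u - v) ** 2 for u, v in zip(ca, cb))
        return len(a) * len(b) / (len(a) + len(b)) * d2

    while len(clusters) > k:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: ward_cost(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# Two toy "cell types" in a 2-gene expression space.
low = [(0.0, 0.1), (0.1, 0.0), (0.2, 0.1), (0.0, 0.2)]
high = [(3.0, 3.1), (3.1, 3.0), (2.9, 3.0), (3.0, 2.9)]
groups = ward_cluster(low + high, k=2)
```

With well-separated groups, the two recovered clusters coincide exactly with the two toy cell types.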
DE testing can be performed using a number of packages, but for this workflow, we will use the edgeR package (Robinson et al., 2010). First, we set up a design matrix specifying which cells belong to each cluster. Each cluster* coefficient represents the average log-expression of all cells in the corresponding cluster. We also block on uninteresting factors such as sex.\n\n\n\n\n\nWe set up a DGEList object for entry into the edgeR analysis. This new object contains all relevant information from the original SCESet object, including the counts and (library size-adjusted) size factors.\n\n\n\nedgeR uses negative binomial (NB) distributions to model the read/UMI counts for each sample. We estimate the NB dispersion parameter that quantifies the biological variability in expression across cells in the same cluster. Large dispersion estimates above 0.5 are often observed in scRNA-seq data due to technical noise, in contrast to bulk data where values of 0.05–0.2 are more typical. We then use the design matrix to fit a NB GLM to the counts for each gene (McCarthy et al., 2012).\n\n\n\n\n\nWe assume that one of the clusters corresponds to our subpopulation of interest. Each gene is tested for DE between the chosen cluster and every other cluster in the dataset. We demonstrate this below for cluster 1, though the same process can be applied to any other cluster by changing chosen.clust.\n\n\n\nPotential marker genes are identified by taking the top set of DE genes from each pairwise comparison between clusters. We arrange the results into a single output table that allows a marker set to be easily defined for a user-specified size for the top set. For example, to construct a marker set from the top 10 genes of each comparison, one would filter marker.set to retain rows with Top less than or equal to 10.\n\n\n\n\n\nWe save the list of candidate marker genes for further examination. We also examine their expression profiles to verify that the DE signature is robust. 
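The ranking logic behind the marker table can be sketched independently of edgeR: for each gene, record its best (smallest) rank across all pairwise comparisons involving the chosen cluster, so that keeping all genes with Top at most n guarantees that the top n genes of every comparison are included. The p-values below are invented placeholders, not edgeR output:

```python
# Toy p-value tables, one per pairwise comparison with the chosen cluster.
pairwise = {
    "cluster1_vs_2": {"GeneA": 1e-8, "GeneB": 1e-3, "GeneC": 0.5},
    "cluster1_vs_3": {"GeneB": 1e-9, "GeneC": 1e-4, "GeneA": 0.7},
}

top = {}
for table in pairwise.values():
    ranked = sorted(table, key=table.get)       # best p-value first
    for r, gene in enumerate(ranked, start=1):
        top[gene] = min(top.get(gene, len(ranked)), r)

# Marker set of size guaranteed to cover the top 1 gene of each comparison.
marker_set = sorted(g for g, t in top.items() if t <= 1)
```

Filtering on larger values of Top gives correspondingly larger, more permissive marker sets.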
Figure 26 indicates that most of the top markers have strong and consistent up- or downregulation in cells of cluster 1 compared to some or all of the other clusters. Thus, cells from the subpopulation of interest can be identified as those that express the upregulated markers and do not express the downregulated markers.\n\n\n\nColumn colours represent the cluster to which each cell is assigned, as indicated by the legend.\n\nMany of the markers in Figure 26 are not uniquely up- or downregulated in the chosen cluster. Testing for unique DE tends to be too stringent as it overlooks important genes that are expressed in two or more clusters. For example, in a mixed population of CD4+-only, CD8+-only, double-positive and double-negative T cells, neither Cd4 nor Cd8 would be detected as subpopulation-specific markers because each gene is expressed in two subpopulations. With our approach, both of these genes will be picked up as candidate markers as they will be DE between at least one pair of subpopulations. A combination of markers can then be chosen to characterize a subpopulation, which is more flexible than trying to find uniquely DE genes.\n\nIt must be stressed that the p-values computed here cannot be interpreted as measures of significance. This is because the clusters have been empirically identified from the data. edgeR does not account for the uncertainty and stochasticity in clustering, which means that the p-values are much lower than they should be. As such, these p-values should only be used for ranking candidate markers for follow-up studies. However, this is not a concern in other analyses where the groups are pre-defined. For such analyses, the FDR-adjusted p-value can be directly used to define significant genes for each DE comparison, though some care may be required to deal with plate effects (Hicks et al., 2015; Tung et al., 2016).\n\nHaving completed the basic analysis, we save the SCESet object with its associated data to file. 
This is especially important here as the brain dataset is quite large. If further analyses are to be performed, it would be inconvenient to have to repeat all of the pre-processing steps described above.\n\n\n\n\nAlternative parameter settings and strategies\n\nScaling normalization strategies for scRNA-seq data can be broadly divided into two classes. The first class assumes that there exists a subset of genes that are not DE between samples, as previously described. The second class uses the fact that the same amount of spike-in RNA was added to each cell. Differences in the coverage of the spike-in transcripts can only be due to cell-specific biases, e.g., in capture efficiency or sequencing depth. Scaling normalization is then applied to equalize spike-in coverage across cells.\n\nThe choice between these two normalization strategies depends on the biology of the cells and the features of interest. If the majority of genes are expected to be DE and there is no reliable housekeeping set, spike-in normalization may be the only option for removing cell-specific biases. Spike-in normalization should also be used if differences in the total RNA content of individual cells are of interest. In any particular cell, an increase in the amount of endogenous RNA will not increase spike-in coverage (with or without library quantification). Thus, the former will not be represented as part of the bias in the latter, which means that the effects of total RNA content on expression will not be removed upon scaling. With non-DE normalization, an increase in RNA content will systematically increase the expression of all genes in the non-DE subset, such that it will be treated as bias and removed.\n\nWe demonstrate the use of spike-in normalization on a dataset involving different cell types – namely, mouse embryonic stem cells (mESCs) and mouse embryonic fibroblasts (MEFs) (Islam et al., 2011). 
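Spike-in scaling normalization itself reduces to a few arithmetic steps: sum the spike-in counts in each cell, then rescale the totals into size factors centred at unity. A Python sketch on invented counts (computeSpikeFactors is the real implementation, and it centres the factors in the same spirit):

```python
# Rows are cells; each row holds that cell's counts for two spike-in transcripts.
spike_counts = [[100, 50], [200, 100], [50, 25]]
totals = [sum(c) for c in spike_counts]
mean_total = sum(totals) / len(totals)
size_factors = [t / mean_total for t in totals]   # centred at unity

# Normalized expression divides each cell's count by its size factor,
# equalizing spike-in coverage across cells.
gene_counts = [30, 60, 15]                        # one endogenous gene
normalized = [g / sf for g, sf in zip(gene_counts, size_factors)]
```

Note that a gene whose counts scale exactly with the spike-in totals, as here, normalizes to the same value in every cell, which is the intended behaviour.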
The count table was obtained from NCBI GEO as a supplementary file under the accession GSE29087 (http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE29087). We load the counts into R and specify the rows corresponding to spike-in transcripts. The negative control wells do not contain any cells and are useful for quality control but need to be removed prior to downstream analysis.\n\n\n\nWe then apply the computeSpikeFactors method to estimate size factors for all cells. This method computes the total count over all spike-in transcripts in each cell, and calculates size factors to equalize the total spike-in count across cells. Here, we set general.use=TRUE as we intend to apply the spike-in factors to all counts.\n\n\n\nApplying normalize will use the spike-in-based size factors to compute normalized log-expression values. Unlike in the previous analyses, we do not have to set separate size factors for the spike-in transcripts. This is because the relevant factors are already being used for all genes and spike-in transcripts when general.use=TRUE. (The exception is if the experiment uses multiple spike-in sets that behave differently and need to be normalized separately.)\n\n\n\nFor comparison, we also compute the deconvolution size factors and plot them against the spike-in factors. We observe a negative correlation between the two sets of values (Figure 27). This is because MEFs contain more endogenous RNA, which reduces the relative spike-in coverage in each library (thereby decreasing the spike-in size factors) but increases the coverage of endogenous genes (thus increasing the deconvolution size factors). If the spike-in size factors were applied to the counts, the expression values in MEFs would be scaled up while expression in mESCs would be scaled down. However, the opposite would occur if deconvolution size factors were used.\n\n\n\nAxes are shown on a log-scale, and cells are coloured according to their identity. 
Deconvolution size factors were computed with small pool sizes owing to the low number of cells of each type.\n\nWhether or not total RNA content is relevant – and thus, the choice of normalization strategy – depends on the biological hypothesis. In the HSC and brain analyses, variability in total RNA across the population was treated as noise and removed by non-DE normalization. This may not always be appropriate if total RNA is associated with a biological difference of interest. For example, Islam et al. (2011) observe a 5-fold difference in total RNA between mESCs and MEFs. Similarly, the total RNA in a cell changes across phases of the cell cycle (Buettner et al., 2015). Spike-in normalization will preserve these differences in total RNA content such that the corresponding biological groups can be easily resolved in downstream analyses.\n\nCell cycle phase is usually uninteresting in studies focusing on other aspects of biology. However, the effects of cell cycle on the expression profile can mask other effects and interfere with the interpretation of the results. This cannot be avoided by simply removing cell cycle marker genes, as the cell cycle can affect a substantial number of other transcripts (Buettner et al., 2015). Rather, more sophisticated strategies are required, one of which is demonstrated below using data from a study of T Helper 2 (TH2) cells (Mahata et al., 2014). Buettner et al. (2015) have already applied quality control and normalized the data, so we can use them directly as log-expression values (accessible as Supplementary Data 1 of https://dx.doi.org/10.1038/nbt.3102).\n\n\n\nWe empirically identify the cell cycle phase using the pair-based classifier in cyclone. The majority of cells in Figure 28 seem to lie in G1 phase, with small numbers of cells in the other phases.\n\n\n\nWe can block directly on the phase scores in downstream analyses. 
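Blocking on the phase scores amounts to including them as continuous covariates in the design matrix. The Python sketch below builds such a matrix from invented scores and adds a crude confounding check; in this toy example, condition and G1 score are almost perfectly confounded, which is exactly the situation to avoid:

```python
# Invented cyclone-style scores and an invented two-group condition.
g1 = [0.9, 0.8, 0.1, 0.2]
g2m = [0.05, 0.1, 0.85, 0.7]
condition = [0, 0, 1, 1]          # e.g. control vs stimulated

# Design matrix: intercept, condition, then the phase scores as covariates.
design = [[1.0, c, s1, s2] for c, s1, s2 in zip(condition, g1, g2m)]

def corr(a, b):
    """Pearson correlation, used here as a crude confounding check."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

confounded = abs(corr(condition, g1)) > 0.95
```

When such a check fires, the phase effect cannot be separated from the condition effect, and model fitting will be unstable or impossible.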
This is more graduated than using a strict assignment of each cell to a specific phase, as the magnitude of the score considers the uncertainty of the assignment. The phase covariates in the design matrix will absorb any phase-related effects on expression such that they will not affect estimation of the effects of other experimental factors. Users should also ensure that the phase score is not confounded with other factors of interest. For example, model fitting is not possible if all cells in one experimental condition are in one phase, and all cells in another condition are in a different phase.\n\n\n\nFor analyses that do not use design matrices, we remove the cell cycle effect directly from the expression values using removeBatchEffect. The result of this procedure is visualized with some PCA plots in Figure 29. Before removal, the distribution of cells along the first two principal components is strongly associated with their G1 and G2/M scores. This is no longer the case after removal, which suggests that the cell cycle effect has been mitigated.\n\n\n\nEach cell is represented by a point with colour and size determined by the G1 and G2/M scores, respectively. Only HVGs were used to construct each plot.\n\nAs an aside, this dataset contains cells at various stages of differentiation (Mahata et al., 2014). This is an ideal use case for diffusion maps which perform dimensionality reduction along a continuous process. In Figure 30, cells are arranged along a trajectory in the low-dimensional space. The first diffusion component is likely to correspond to TH2 differentiation, given that a key regulator Gata3 (Zhu et al., 2006) changes in expression from left to right.\n\n\n\nFeature-counting tools typically report genes in terms of standard identifiers from Ensembl or Entrez. These identifiers are used as they are unambiguous and highly stable. However, they are difficult to interpret compared to the gene symbols which are more commonly used in the literature. 
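Such identifier-to-symbol conversion is, at heart, a keyed lookup in which the first of several candidate symbols is kept. A Python sketch with a partly invented annotation table (the real mapping lives in org.Mm.eg.db; the "Cdc45-alt" entry is fabricated purely to show duplicate handling):

```python
# Toy annotation table of (Ensembl identifier, gene symbol) pairs.
anno = [
    ("ENSMUSG00000000001", "Gnai3"),
    ("ENSMUSG00000000028", "Cdc45"),
    ("ENSMUSG00000000028", "Cdc45-alt"),   # duplicate mapping: ignored
]

symbol = {}
for ensembl, sym in anno:
    symbol.setdefault(ensembl, sym)        # first symbol wins, as with match()

ids = ["ENSMUSG00000000028", "ENSMUSG00000000001", "ENSMUSG00000099999"]
labels = [symbol.get(i, i) for i in ids]   # fall back to the identifier itself
```

Falling back to the identifier keeps row names unique even for genes with no annotated symbol.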
We can easily convert from one to the other using annotation packages like org.Mm.eg.db. This is demonstrated below for Ensembl identifiers in a mESC dataset (Kołodziejczyk et al., 2015) obtained from http://www.ebi.ac.uk/teichmann-srv/espresso. The select call extracts the specified data from the annotation object, and the match call ensures that the first gene symbol is used if multiple symbols correspond to a single Ensembl identifier.\n\n\n\n\n\nTo identify which rows correspond to mitochondrial genes, we need to use extra annotation describing the genomic location of each gene. For Ensembl, this involves using the TxDb.Mmusculus.UCSC.mm10.ensGene package.\n\n\n\n\n\nIdentification of rows that correspond to spike-in transcripts is much easier, given that the ERCC spike-ins were used.\n\n\n\n\n\nAll of this information can be consolidated into a SCESet object for further manipulation. Alternatively, annotation from BioMart resources can be directly added to the object using the getBMFeatureAnnos function from scater.\n\n\n\nWe filter out rows that do not correspond to endogenous genes or spike-in transcripts. This will remove rows containing mapping statistics such as the number of unaligned or unassigned reads, which would be misleading if treated as gene expression values. The object is then ready for downstream analyses as previously described.\n\n\n\n\n\n\nConclusions\n\nThis workflow provides a step-by-step guide for performing basic analyses of single-cell RNA-seq data in R. It provides instructions for a number of low-level steps such as quality control, normalization, cell cycle phase assignment, data exploration, HVG and marker gene detection, and clustering. This is done with a number of different datasets to provide a range of usage examples. In addition, the processed data can be easily used for higher-level analyses with other Bioconductor packages. 
We anticipate that this workflow will assist readers in assembling analyses of their own scRNA-seq data.\n\n\nSoftware availability\n\nAll software packages used in this workflow are publicly available from the Comprehensive R Archive Network (https://cran.r-project.org) or the Bioconductor project (http://bioconductor.org). The specific version numbers of the packages used are shown below, along with the version of the R installation. The workflow takes less than an hour to run on a desktop computer with 8 GB of memory.\n\n\n\n",
"appendix": "Author contributions\n\n\n\nA.T.L.L. developed and tested the workflow on all datasets. A.T.L.L. and D.J.M. implemented improvements to the software packages required by the workflow. J.C.M. provided direction to the software and workflow development. All authors wrote and approved the final manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nA.T.L.L. and J.C.M. were supported by core funding from Cancer Research UK (award no. A17197). D.J.M. was supported by a CJ Martin Fellowship from the National Health and Medical Research Council of Australia. D.J.M and J.C.M. were also supported by core funding from EMBL.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe would like to thank Antonio Scialdone for helpful discussions, and Michael Epstein for testing the workflow on other datasets.\n\n\nReferences\n\nAnders S, Huber W: Differential expression analysis for sequence count data. Genome Biol. 2010; 11(10): R106. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAngerer P, Haghverdi L, Büttner M, et al.: destiny: diffusion maps for large-scale single-cell data in R. Bioinformatics. 2016; 32(8): 1241–1243. PubMed Abstract | Publisher Full Text\n\nBertoli C, Skotheim JM, de Bruin RA: Control of cell cycle transcription during G1 and S phases. Nat Rev Mol Cell Biol. 2013; 14(8): 518–528. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBourgon R, Gentleman R, Huber W: Independent filtering increases detection power for high-throughput experiments. Proc Natl Acad Sci U S A. 2010; 107(21): 9546–9551. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBray NL, Pimentel H, Melsted P, et al.: Near-optimal probabilistic RNA-seq quantification. Nat Biotechnol. 2016; 34(5): 525–527. 
PubMed Abstract | Publisher Full Text\n\nBrennecke P, Anders S, Kim JK, et al.: Accounting for technical noise in single-cell RNA-seq experiments. Nat Methods. 2013; 10(11): 1093–1095. PubMed Abstract | Publisher Full Text\n\nBuettner F, Natarajan KN, Casale FP, et al.: Computational analysis of cell-to-cell heterogeneity in single-cell RNA-sequencing data reveals hidden subpopulations of cells. Nat Biotechnol. 2015; 33(2): 155–160. PubMed Abstract | Publisher Full Text\n\nChen Y, Lun AT, Smyth GK: From reads to genes to pathways: differential expression analysis of RNA-Seq experiments using Rsubread and the edgeR quasi-likelihood pipeline [version 2; referees: 5 approved]. F1000Res. 2016; 5: 1438. PubMed Abstract | Publisher Full Text | Free Full Text\n\nConboy CM, Spyrou C, Thorne NP, et al.: Cell cycle genes are the evolutionarily conserved targets of the E2F4 transcription factor. PLoS One. 2007; 2(10): e1061. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFan J, Salathia N, Liu R, et al.: Characterizing transcriptional heterogeneity through pathway and gene set overdispersion analysis. Nat Methods. 2016; 13(3): 241–244. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHicks SC, Teng M, Irizarry RA: On the widespread and critical impact of systematic bias and batch effects in single-cell RNA-Seq data. bioRxiv. 2015. Publisher Full Text\n\nHuber W, Carey VJ, Gentleman R, et al.: Orchestrating high-throughput genomic analysis with Bioconductor. Nat Methods. 2015; 12(2): 115–121. PubMed Abstract | Publisher Full Text | Free Full Text\n\nIlicic T, Kim JK, Kolodziejczyk AA, et al.: Classification of low quality cells from single-cell RNA-seq data. Genome Biol. 2016; 17(1): 29. PubMed Abstract | Publisher Full Text | Free Full Text\n\nIslam S, Kjällquist U, Moliner A, et al.: Characterization of the single-cell transcriptional landscape by highly multiplex RNA-seq. Genome Res. 2011; 21(7): 1160–1167. 
Islam S, Zeisel A, Joost S, et al.: Quantitative single-cell RNA-seq with unique molecular identifiers. Nat Methods. 2014; 11(2): 163–166.\n\nJi Z, Ji H: TSCAN: Pseudo-time reconstruction and evaluation in single-cell RNA-seq analysis. Nucleic Acids Res. 2016; 44(13): e117.\n\nJuliá M, Telenti A, Rausell A: Sincell: an R/Bioconductor package for statistical assessment of cell-state hierarchies from single-cell RNA-seq. Bioinformatics. 2015; 31(20): 3380–3382.\n\nKim JK, Kolodziejczyk AA, Ilicic T, et al.: Characterizing noise structure in single-cell RNA-seq distinguishes genuine from technical stochastic allelic expression. Nat Commun. 2015; 6: 8687.\n\nKlein AM, Mazutis L, Akartuna I, et al.: Droplet barcoding for single-cell transcriptomics applied to embryonic stem cells. Cell. 2015; 161(5): 1187–1201.\n\nKolodziejczyk AA, Kim JK, Tsang JC, et al.: Single Cell RNA-Sequencing of Pluripotent States Unlocks Modular Transcriptional Variation. Cell Stem Cell. 2015; 17(4): 471–485.\n\nLangfelder P, Zhang B, Horvath S: Defining clusters from a hierarchical cluster tree: the Dynamic Tree Cut package for R. Bioinformatics. 2008; 24(5): 719–720.\n\nLaw CW, Chen Y, Shi W, et al.: voom: Precision weights unlock linear model analysis tools for RNA-seq read counts. Genome Biol. 2014; 15(2): R29.\n\nLeng N, Chu LF, Barry C, et al.: Oscope identifies oscillatory genes in unsynchronized single-cell RNA-seq experiments. Nat Methods. 2015; 12(10): 947–950.\n\nLiao Y, Smyth GK, Shi W: The Subread aligner: fast, accurate and scalable read mapping by seed-and-vote. Nucleic Acids Res. 2013; 41(10): e108.\n\nLiao Y, Smyth GK, Shi W: featureCounts: an efficient general purpose program for assigning sequence reads to genomic features. Bioinformatics. 2014; 30(7): 923–930.\n\nLove MI, Anders S, Kim V, et al.: RNA-Seq workflow: gene-level exploratory analysis and differential expression [version 1; referees: 2 approved]. F1000Res. 2015; 4: 1070.\n\nLove MI, Huber W, Anders S: Moderated estimation of fold change and dispersion for RNA-seq data with DESeq2. Genome Biol. 2014; 15(12): 550.\n\nLun AT, Bach K, Marioni JC: Pooling across cells to normalize single-cell RNA sequencing data with many zero counts. Genome Biol. 2016; 17: 75.\n\nMacosko EZ, Basu A, Satija R, et al.: Highly Parallel Genome-wide Expression Profiling of Individual Cells Using Nanoliter Droplets. Cell. 2015; 161(5): 1202–1214.\n\nMahata B, Zhang X, Kolodziejczyk AA, et al.: Single-cell RNA sequencing reveals T helper cells synthesizing steroids de novo to contribute to immune homeostasis. Cell Rep. 2014; 7(4): 1130–1142.\n\nMarinov GK, Williams BA, McCue K, et al.: From single-cell to cell-pool transcriptomes: stochasticity in gene expression and RNA splicing. Genome Res. 2014; 24(3): 496–510.\n\nMcCarthy DJ, Chen Y, Smyth GK: Differential expression analysis of multifactor RNA-Seq experiments with respect to biological variation. Nucleic Acids Res. 2012; 40(10): 4288–4297.\n\nPatro R, Duggal G, Kingsford C: Accurate, fast, and model-aware transcript expression quantification with Salmon. bioRxiv. 2015.\n\nPhipson B, Smyth GK: Permutation P-values should never be zero: calculating exact P-values when permutations are randomly drawn. Stat Appl Genet Mol Biol. 2010; 9: Article39.\n\nPicelli S, Faridani OR, Björklund AK, et al.: Full-length RNA-seq from single cells using Smart-seq2. Nat Protoc. 2014; 9(1): 171–181.\n\nPollen AA, Nowakowski TJ, Shuga J, et al.: Low-coverage single-cell mRNA sequencing reveals cellular heterogeneity and activated signaling pathways in developing cerebral cortex. Nat Biotechnol. 2014; 32(10): 1053–1058.\n\nRitchie ME, Phipson B, Wu D, et al.: limma powers differential expression analyses for RNA-sequencing and microarray studies. Nucleic Acids Res. 2015; 43(7): e47.\n\nRobinson MD, McCarthy DJ, Smyth GK: edgeR: a Bioconductor package for differential expression analysis of digital gene expression data. Bioinformatics. 2010; 26(1): 139–140.\n\nRobinson MD, Oshlack A: A scaling normalization method for differential expression analysis of RNA-seq data. Genome Biol. 2010; 11(3): R25.\n\nScialdone A, Natarajan KN, Saraiva LR, et al.: Computational assignment of cell-cycle stage from single-cell transcriptome data. Methods. 2015; 85: 54–61.\n\nStegle O, Teichmann SA, Marioni JC: Computational and analytical challenges in single-cell transcriptomics. Nat Rev Genet. 2015; 16(3): 133–145.\n\nTrapnell C, Cacchiarelli D, Grimsby J, et al.: The dynamics and regulators of cell fate decisions are revealed by pseudotemporal ordering of single cells. Nat Biotechnol. 2014; 32(4): 381–386.\n\nTung PY, Blischak JD, Hsiao C, et al.: Batch effects and the effective design of single-cell gene expression studies. bioRxiv. 2016.\n\nVallejos CA, Marioni JC, Richardson S: BASiCS: Bayesian analysis of single-cell sequencing data. PLoS Comput Biol. 2015; 11(6): e1004333.\n\nVan der Maaten L, Hinton G: Visualizing data using t-SNE. J Mach Learn Res. 2008; 9: 2579–2605.\n\nWilson NK, Kent DG, Buettner F, et al.: Combined single-cell functional and gene expression analysis resolves heterogeneity within stem cell populations. Cell Stem Cell. 2015; 16(6): 712–724.\n\nZeisel A, Muñoz-Manchado AB, Codeluppi S, et al.: Brain structure. Cell types in the mouse cortex and hippocampus revealed by single-cell RNA-seq. Science. 2015; 347(6226): 1138–1142.\n\nZhu J, Yamane H, Cote-Sierra J, et al.: GATA-3 promotes Th2 responses through three different mechanisms: induction of Th2 cytokine production, selective growth of Th2 cells and inhibition of Th1 cell-specific factors. Cell Res. 2006; 16(1): 3–10."
}
|
[
{
"id": "15990",
"date": "09 Sep 2016",
"name": "Diana H.P. Low",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.\n\nApproved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.\n\nLun and colleagues describe a low-level analysis specific to single-cell RNA-seq experiments, using open-source packages available on Bioconductor. This paper could potentially be a valuable resource for those who want to carry out such analysis in R.\nThe steps are very descriptive, and they even include two different datasets presenting different types and conditions for analysis. The authors have done a very thorough job of explaining the decisions taken at each step of QC, filtering and normalization, and they provide some basic but important visualization examples (clustering, heatmaps) that help in assessing not only the technical quality of the dataset, but also the outcome of the experiment itself.\nUnfortunately I could not run some of the steps in the workflow, which prevented me from assessing the code. 
Some I could figure out and \"fix\" in the attempt to run the code, but others not so much.\nI provide some (not exhaustive) examples below to help in the troubleshooting, and if these (and the subsequent code relying on these outputs) could be solved, I would be happy to continue the review further.\nisSpike(sce) <- \"ERCC\" // worked with isSpike(sce) <- is.spike\n\ncan't find the function nexprs [I had to use numcells <- rowSums(exprs(sce)!=0)]\n\nis.ercc <- isSpike(sce, type=\"ERCC\") // worked with is.ercc <- isSpike(sce)\n\nCould not run code from the section: Identifying HVGs from the normalized log-expression\nvar.fit <- trendVar(sce, trend=\"loess\", use.spikes=FALSE, span=0.2) Error in simpleLoess(y, x, w, span, degree = degree, parametric = parametric, :\n\ninvalid 'x'",
"responses": [
{
"c_id": "2178",
"date": "09 Sep 2016",
"name": "Aaron Lun",
"role": "Author Response",
"response": "Thanks for your comments, Diana. The code actually depends on Bioconductor version 3.4 (i.e., BioC \"devel\"), rather than the current Bioconductor 3.3 (i.e., BioC \"release\"). This allows us to include cutting-edge features from all packages to provide a high level of functionality in the workflow. However, some of these features are not present in the release version, thus leading to execution failure. The devel versions of all packages can be easily installed by calling useDevel() followed by biocLite(), as described on the Bioconductor website. We will also modify the text to explicitly state that Bioconductor 3.4 is required - currently, this can only be implicitly determined from the package versions, which admittedly is not obvious to casual users."
}
]
},
{
"id": "15987",
"date": "19 Sep 2016",
"name": "Hongkai Ji",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nIn this article, the authors introduce a computational workflow to perform low-level analysis of single-cell RNA-seq (scRNA-seq) data based on R and Bioconductor. The workflow takes a read count matrix as input, and it provides R commands for loading data, quality control, gene filtering, data normalization (with or without spike-in controls), classifying cells based on their cell cycle phase, identifying highly variable genes, analyzing genes’ pairwise correlation, and basic data exploration such as clustering and visualization. The workflow is demonstrated using a number of real data examples. Overall, I think that the workflow provides a timely and very useful guide for people who want to analyze scRNA-seq data.\n\nThis study is largely reproducible. I am able to obtain all major results in this article by running the commands provided by the authors. I have several comments and suggestions which I hope the authors can address in order to make their workflow more user-friendly.\n1. It seems that installing the right version of R and Bioconductor is crucial for this pipeline to work. Some commands in the workflow depend on R version 3.3.1 or higher and the development (devel) version of Bioconductor. The first time I tried the workflow, I encountered numerous errors. 
For example,\n> isSpike(sce) <- \"ERCC\" Error in `isSpike<-`(`*tmp*`, value = \"ERCC\") :\n\n'isSpike' must be logical or NULL\n\n> numcells <- nexprs(sce, byrow=TRUE) Error: could not find function \"nexprs\"\n\n> sce <- computeSpikeFactors(sce, type=\"ERCC\", general.use=FALSE) Error in .local(x, ...) :\n\nunused arguments (type = \"ERCC\", general.use = FALSE)\n\nIt turns out that I used an older version of R and Bioconductor. I then updated my R and Bioconductor packages and still had many problems. Finally, I decided to completely remove R and Bioconductor from my computer. I then installed R 3.3.1 and Bioconductor (devel version), and the pipeline worked. Although I eventually fixed the problem, I feel that this trial-and-error process can be frustrating for users. I therefore suggest that the authors make the R/Bioconductor dependencies clear at the beginning of the article. It would be even better if the authors could minimize the pipeline’s dependency on certain versions of R/Bioconductor.\n\n2. This workflow uses a number of R and Bioconductor packages. A user may not have all packages installed on their computer. Installing these packages one by one manually can be a little tedious. It would be nice if the authors could provide an R script that automatically finds missing packages on a user’s computer and installs them. This could improve the pipeline’s user experience.\n\n3. It would also be useful if the authors could provide an R file that contains all commands in the workflow so that users only need to slightly edit their code for future datasets. It might be beyond the scope of this article, but the authors may consider delivering the pipeline using an R shiny graphical user interface in the future to make it accessible to users without R coding experience.",
"responses": [
{
"c_id": "2194",
"date": "19 Sep 2016",
"name": "Aaron Lun",
"role": "Author Response",
"response": "Thanks for your comments, Hongkai. Our responses to each of your points are below:\n\n1. Yes, this was an oversight on our part. The pipeline was developed using packages from BioC-devel, to take advantage of cutting-edge methods in each package. For that reason, the pipeline is strictly dependent on Bioconductor release version 3.4, a fact that we will make explicit in the next revision. We do not think that this is a major inconvenience given that the next release of Bioconductor is less than a month away.\n\n2. This is a good point. In fact, this article would ideally coincide with a parallel release on the Bioconductor workflow page, where the workflow installation machinery will automatically install all dependencies required for the package. Unfortunately, because this article was using packages from BioC-devel, we were unable to coordinate its release with that on the Bioconductor workflow page (which is limited to BioC-release packages). This will be fixed in the next revision, where we will add a reference to Bioconductor-based installation of required packages.\n\n3. While we understand the convenience that an R script can offer, we feel that supplying such a script would invite attempts to blindly use the code without considering the context or caveats of the various methods. We believe that some initial copy-pasting is a small price to pay if the user is consistently reminded of how to properly interpret the output. Note that the Bioconductor workflow site and our Github page provide an Rmarkdown file containing all the necessary code blocks for easy execution of the entire workflow; if necessary, users can change the input files to generate an analysis report similar to the article. Of course, a graphical user interface is even more intuitive, but this is difficult to set up in a manner that is amenable to rigorous and reproducible data analysis."
}
]
},
{
"id": "16243",
"date": "30 Sep 2016",
"name": "Andrew McDavid",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nLun, McCarthy and Marioni share a workflow for analysis of single-cell RNA-seq (scRNA-seq) data using software they have developed. The workflow is illustrated on two data sets of varying size and characteristics. The computational and statistical findings of the workflow are interpreted in their experimental context. Having a well-documented protocol for the analysis of scRNA-seq is an important contribution to the community, since it is still a wilderness in terms of methods and processing, for better or worse. That scRNA-seq is a quickly evolving discipline--and the implications this has for the workflow--forms the bulk of my criticism of this paper.\n1a. The paper describes a currently-unreleased version of software. Other reviewers have indicated the difficulties this poses. I trust the authors will verify the correctness of their code and reproducibility of the analysis when their packages are finalized in Bioconductor 3.4. I also trust that this workflow will be made available as a literate (e.g. knitr) document so that readers won't have to cut and paste from their web-browser. This reviewer was able to reproduce the figures reported in the first data set after loading the development version of `scater` (now version 1.1.14).\n1b. The main software package `scater` defines a `SCESet` inheriting from `ExpressionSet`, which has been superseded by `SummarizedExperiment`. 
SummarizedExperiment is more likely to scale to large data sets (it can store data out of core or in sparse matrix formats). In practice, this is not such a big deal since it's relatively easy to coerce between the two object types.\n\n2. The title of this article stipulates that it is for \"low-level\" analysis of RNA-seq data, but the all-important question of how to process the data as many analysts will get them (short reads as .fasta files) is elided.\n\n(Pseudo-)alignment and quantification is an important, and probably overlooked, step in scRNA-seq analysis. Counting transcripts by counting overlaps with features, a la countOverlaps or htSeq, is inefficient [1], since many reads (30%-80% of those that map anywhere, in this reviewer's experience) do not align uniquely. Hence the need and value of using quantification tools that respect the degeneracy of multi-mapping reads, e.g., RSEM, STAR, Kallisto, Sailfish, et al. A low-level analysis thus may wish to consider remapping with an appropriate tool. Fortunately, it does appear that `scater` has provisions for doing (re-)alignment with Kallisto.\n3. The authors may consider referencing other extant methods that could address areas of their workflow, especially methods that are adapted to deal with the non-normality of scRNA-seq data.\nFor normalization, there is scone, which tests many different normalization procedures and optimizes over the \"best\" one. For identification of highly variable genes, there is BASiCS [2], which applies a hierarchical Bayesian model to test for over-dispersion, as opposed to modeling departures from an overall mean-variance relationship. For single-cell differential expression and gene set enrichment for bimodal distributions found in scRNA-seq, there is MAST [3]. For clustering, there is clusterExperiment. For multi-dimensional scaling on bimodal data, there is ZIFA [4]. All of the above, aside from ZIFA, are R/Bioconductor packages.",
"responses": [
{
"c_id": "2244",
"date": "18 Oct 2016",
"name": "Aaron Lun",
"role": "Author Response",
"response": "Thanks for your comments, Andrew. Our responses are as below.\n\n1a. The paper describes a currently-unreleased version of software. Other reviewers have indicated the difficulties this poses. I trust the authors will verify the correctness of their code and reproducibility of the analysis when their packages are finalized in Bioconductor 3.4. I also trust that this workflow will be made available as a literate (e.g. knitr) document so that readers won't have to cut and paste from their web-browser. This reviewer was able to reproduce the figures reported in the first data set after loading the development version of `scater` (now version 1.1.14).\n\nYes, this was an oversight on our part. The revised version will include a link to the Bioconductor workflow page, where users can simply run a command to automatically download the relevant data files and packages prior to running the workflow.\n\n1b. The main software package `scater` defines a `SCESet` inheriting from `ExpressionSet`, which has been superseded by `SummarizedExperiment`. SummarizedExperiment is more likely to scale to large data sets (it can store data out of core or in sparse matrix formats). In practice, this is not such a big deal since it's relatively easy to coerce between the two object types.\n\nWe considered the practicality of storing data in sparse matrix format. Unfortunately, most existing tools for downstream data analysis require a full-sized matrix as input, so any gains in memory efficiency during storage seem to be countered by the need to (repeatedly) expand the matrix at multiple analysis steps. Moreover, a sparse matrix only improves efficiency for raw count data where unambiguous zeroes are present; upon applying normalization and transformation steps, this may no longer be the case, such that a full-sized matrix will ultimately be required anyway.\n\n2. 
The title of this article stipulates that it is for \"low-level\" analysis of RNA-seq data, but the all-important question of how to process the data as many analysts will get them (short reads as .fasta files) is elided. (Pseudo)-Alignment and quantification is an important, and probably overlooked step in scRNA-seq analysis. Counting transcripts by counting overlaps with features, a la countOverlaps or htSeq is inefficient1, since many reads (30%-80% of those that map anywhere, in this reviewer's experience) do not align uniquely. Hence the need and value to use quantification tools that respect the degeneracy of multimapping reads, ie, RSEM, Star, Kallisto, Sailfish, et al. A low-level analysis thus may wish to consider remapping with an appropriate tool. Fortunately, it does appear that `scater` has provisions for doing (re)-alignment with Kallisto. We find that conventional feature counting works quite well for read-based scRNA-seq data, having used this approach in several recent studies (Achim et al., 2015; Kolodziejczyk et al., 2015; Scialdone et al., 2016). While ignoring multi-mapped reads during quantification is conservative, we feel that it does provide a greater degree of confidence in our downstream inferences. Certainly, there may be gains in power from using tools that extract more information from multi-mapping reads, but we do not consider this advantage to be so pronounced that it should be standard procedure for all scRNA-seq data analyses. For UMI-based data, there does not yet appear to be any clear \"gold standard\" approach for UMI processing into counts, so we have not provided any description of that step. 
In summary, we decided to start the workflow from the raw count data, rather than starting from read sequences, as conventional approaches for quantification described elsewhere seem to work well; to maintain some flexibility with respect to future developments in this field; and because our workflow focuses on the steps of the analysis that are carried out in R/Bioconductor, whereas most existing quantification tools require manual installation and execution from the command-line.\n\nReferences:\nAchim et al. (2015), Nature Biotechnology 33:503–509\nKolodziejczyk et al. (2015), Cell Stem Cell 17(4):471-485\nScialdone et al. (2016), Nature 535:289-293\n\n3. The authors may consider referencing other extant methods that could address areas of their workflow, especially methods that are adapted to deal with the non-normality of scRNA-seq data.\n\nAs you have stated, there are many alternative approaches that could be used in various parts of the workflow. However, we feel that it is beyond the scope of this article to enter into discussions about the relative advantages of different methods. In fact, this may undermine the pedagogical value of the workflow by providing too many options to inexperienced users. The methods we have described work well in a variety of situations, so we have chosen them for use in the various analysis steps. We have added a sentence to the discussion about the existence of alternative methods for low-level processing, and encouraged experienced users to explore them."
}
]
},
{
"id": "15991",
"date": "18 Oct 2016",
"name": "David duVerle",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe pipeline described in this article seems promising. I was able to partly reproduce the results, as well as run similar treatment on a single-cell dataset of my own.\nHowever:\nThe fundamental flaws mentioned by other reviewers over a month ago still haven't been addressed: the pipeline requires Dev versions of R and Bioconductor packages, yet makes no mention of it anywhere in the article.\n\nIn fact, even after installing the Bioconductor Dev versions of all required modules, it would appear the pipeline no longer works with the latest versions (e.g. scran_1.1.10, with R 3.3.1):\n\n> isSpike(sce) <- \"ERCC\" Error in `isSpike<-`(`*tmp*`, value = \"Spike\") :\n\n'isSpike' must be logical or NULL\netc.\nWhile likely easy to fix, this type of incompatibility issue undermines the entire point of the article and perfectly illustrates the dangers of relying on development versions for this type of pipeline.\nAdditionally, the example dataset used by the article is loaded from an Excel spreadsheet, which is generally considered extremely bad practice. It would behoove the authors of a software walkthrough aimed at somewhat-novice bioinformaticians to encourage best practices.\n\nIn the current conditions, and until some major revision work is done, it is impossible to properly review the pipeline and approve this article unreservedly.",
"responses": [
{
"c_id": "2245",
"date": "18 Oct 2016",
"name": "Aaron Lun",
"role": "Author Response",
"response": "Thanks for your comments, David. Regarding the incompatibility in software versions, we have been waiting for the imminent release of the latest version of Bioconductor (3.4) before revising the article. It seemed more prudent to wait for the latest software to become available, rather than making stop-gap modifications to accommodate soon-to-be-obsolete versions. We believe that this update should clear up any problems with execution of the workflow. We agree that Excel spreadsheets are a poor formatting choice for bioinformatics work. Unfortunately, the processed dataset is provided in this format from NCBI GEO (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE61533, see Supplementary files). While having to tackle Excel formatting is not ideal, it is preferable to having to re-process the entire dataset to obtain counts from the raw read sequences. Moreover, at no point do we save into Excel - analysis results are always stored in simple tab-delimited formats, and the R objects themselves are saved in serialized form."
}
]
},
{
"id": "15986",
"date": "20 Oct 2016",
"name": "Antonio Rausell",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nIn the Software tool article “A step-by-step workflow for low-level analysis of single-cell RNA-seq data”, Lun, McCarthy and Marioni thoroughly describe a comprehensive pipeline for the low-level analysis of single-cell RNA-seq data. The article covers important topics such as the quality control of cells and genes, normalization of expression levels, control for technical factors and cell cycle, detection of highly variable genes, and assessment of subpopulations of cells and associated differentially expressed genes. The workflow is illustrated in a number of datasets offering diverse scenarios that nicely guide the reader on the different criteria that may be adopted throughout the analysis. The manuscript is clearly presented, the quality of the code and figures is excellent, and a great effort has been made to introduce complex questions in an easily accessible manner to a broad audience. Importantly, the authors discuss situations where it is difficult to provide a clear-cut recipe, and the need for experimental validation is stressed. 
Overall I think the article is an important contribution to the community and that it should quickly become a reference guide in the field.\nI report here a number of comments, questions and suggestions with the hope that they may contribute to improving an already excellent article:\n1.\n\nIn addition to the approaches proposed by the authors to detect low-quality cells, I would suggest to the readers the possibility of identifying outlier cells by performing a PCA on the normalized gene expression matrix restricted to protein-coding genes (e.g. using biotype annotations from Ensembl biomart). On the one hand, outlier cells will dominate the first principal components, which will show high percentages of variance simply accounting for the separation of the outlier from the compact cloud of “normal” cells. On the other hand, a PCA could suggest keeping a cell whose relative similarity to the rest of the cells in a low-dimensional space seems rather normal, even if it could still have an allegedly \"bad\" quality metric.\n2.\n\nAs an additional quality control check for the cells, I would also suggest assessing whether the sequencing depth was generally deep enough for most of the cells, by inspecting for instance the saturation curve of the number of detected genes (or other features like the known exon-exon junctions) as a function of the fraction of down-sampled reads.\n3.\n\nIn the text it is proposed to filter out low-abundance genes, defined as “those with an average count below a filter threshold of 1”. However, the average count is assessed before the normalization step. Would it be more meaningful to apply this filter on the normalized counts?\n4.\n\nA priori it is difficult to rule out the possibility that the filtering of low-abundance genes could eventually hamper the identification of relevant genes in rare populations of cells. 
I would generally suggest being rather inclusive at this stage, especially when no clustering of single cells has been done yet, so that it would still be possible to check whether e.g. the few cells expressing some genes -even if at low levels- are actually forming a distinctive and biologically relevant cluster.\n5.\n\nThe authors propose, as an alternative approach to gene filtering, to select genes that have non-zero counts in at least n cells. As illustrated in Figure 6, the number of cells expressing a given gene may be modeled by its mean expression level. This was elegantly addressed in Shalek et al. (2014) through a likelihood ratio test comparing a null model -where all cells express a gene in a lognormal fashion- with an alternate model -where a gene is not expressed in a subpopulation of cells α (See section \"Controlling for relationship between expression level and detection efficiency\" in the supplementary material: http://www.nature.com/nature/journal/v510/n7505/extref/nature13437-s1.pdf). Genes for which the null model is rejected may be indicative of a subpopulation of cells not expressing the gene at a higher fraction than the one expected from technical noise (e.g. dropout events). I would suggest exploring such an approach in order to avoid filtering out relevant genes due to a sharp threshold on the number of cells expressing them.\n6.\n\nIn the section “Filtering out low-abundance genes”, the sentence \"This provides some more protection against genes with outlier expression patterns, i.e., strong expression in only one or two cells. Such outliers are typically uninteresting as they can arise from amplification artifacts that are not replicable across cells.[...]\" would be better followed by setting alt.keep <- numcells >= 2 instead of >= 10\n7.\n\nIt would be interesting to complement Figures 7 and 18 with a second panel representing the correlation between size factors from deconvolution versus spike-in-specific size factors, as done in Figure 27. 
In the event that a low correlation between them was found in a non-DE scenario, would it be advisable to neglect spike-ins from the analysis?\n8.\n\nAs pointed out by the authors, spike-in molecules have been extensively used to infer the amount of variability in the expression levels of one gene that can be explained by technical noise (e.g. Brennecke et al., 2013; Grün et al., 2014; Islam et al., 2014). Ding et al. (2015) went further on the applications of spike-in levels, by using them to explicitly remove technical noise and compute de-noised gene expression levels (R software GRM, http://wanglab.ucsd.edu/star/GRM/). I would point the readers to such a possibility, which could largely benefit downstream analyses such as the detection of subpopulations of cells and cell trajectories, as they would mainly rely on biological variation. This would still be compatible with an assessment of HVGs only based on biological variation by fitting the trend to the variance estimates of the endogenous genes (after technical denoising).\n9.\n\nThe authors state that the technical component estimation through the fitting of a mean-variance trend to the spike-in transcripts “is compromised by the small number of spike-in transcripts, the uneven distribution of their abundances and (for low numbers of cells) the imprecision of their variance estimates”. Do the same remarks generally apply to a spike-in-specific normalization? And if so, should spike-in normalization be considered accurate enough when applied to cases with strong DE even if it is conceptually more appropriate than a deconvolution approach?\n10.\n\nIn the section “Identifying HVGs from the normalized log-expression” the authors justify their choice of \"the variance of the log-expression values because the log-transformation protects against genes with strong expression in only one or two cells. This ensures that the set of top HVGs is not dominated by genes with (mostly uninteresting) outlier expression patterns”. 
However, the filtering of genes with such patterns has already been proposed in a previous section, so those cases should no longer be a risk here.\n11.\n\nThe interpretability of the approach of \"Identifying correlated gene pairs with Spearman’s rho\" is to some extent limited without a previous analysis such as PCA, ICA or MDS, transforming the high-dimensional space into a space of independent (uncorrelated) dimensions. I would rather favor the identification of sets of genes with a high weight on each of the retained independent axes (i.e. driving the variance in such axes, and therefore disentangling sets of correlated genes for each of the orthogonal dimensions). Otherwise, the analysis could risk being dominated by the first component, probably neglecting other relevant hidden factors.\n12.\n\nIn any case, I advise not to restrict downstream dimensionality-reduction analyses such as PCA or ICA, which aim at the identification of subpopulations of cells and their gene signatures, to correlated HVGs. Such methods exploit correlation patterns (linear or non-linear) in a well-grounded way and they do not require a feature selection step. The sentence \"We only use the correlated HVGs in plotPCA because any substructure should be most pronounced in the expression profiles of these genes\" may not hold true in some instances: correlated HVGs were assessed without considering those independent components, the relative contribution of each dimension to the total variance, and the relative contribution of each gene to each dimension.\n13.\n\nIn the brain dataset, correlated HVGs were assessed considering the design <- model.matrix(~sce$sex); correlatePairs(sce, design=design). It would be useful to further explain here how this function accounts for the design matrix in the assessment of Spearman’s rho.\n14.\n\nIn the brain dataset, removeBatchEffect from the limma package is used to remove the sex effect. 
Then tSNE and PCA are applied on the sex-corrected expression values restricted to correlated HVGs. Consistently, correlated HVGs were assessed considering the very same factor: design <- model.matrix(~sce$sex); correlatePairs(sce, design=design). I would further warn the reader and stress the necessity of that consistency between both steps.\n15.\n\nFor consistency with the assessment of correlations based on Spearman's rho, in the heatmap I would recommend first assessing the dendrograms for the cells and the genes by also using a Spearman correlation. For instance:\ncells.cor <- cor(expressionmatrix, method=\"spearman\")\ncells.cor.dist <- as.dist(1 - cells.cor)\ncells.tree <- hclust(cells.cor.dist, method='complete')\nThen, in heatmap.2, set Colv=as.dendrogram(cells.tree), and analogously for the genes with Rowv. This should be adapted in the case that a design is used, as in correlatePairs(sce, design=design). Personally, I would also suggest checking how the heatmaps look by setting scale='row' in the heatmap.2 function.\n16.\n\nI would suggest explicitly mentioning in the pipeline which approaches are based on linear or non-linear assumptions. The workflow alternates methods from both categories, which should be taken into account to understand their downstream consequences. For instance:\nThe function plotExplanatoryVariables from the scater package, with the default method=\"density\", produces a density plot of R-squared values for each variable when fitted as the only explanatory variable in a linear model. The function removeBatchEffect from the limma package fits a linear model to the data, including both batches and regular treatments, then removes the component due to the batch effects. Then the analysis is restricted to correlated HVGs, which are assessed on Spearman’s rho, i.e. 
rank-based and hence non-linear; tSNE is non-linear; PCA is linear; clusters are defined through dynamic tree cut applied to the dendrograms assessed by hierarchical clustering on the Euclidean distances between cells (linear, although in a non-orthogonal space).\n17.\n\nThe use of hierarchical clustering for clustering cells into putative subpopulations is based on Euclidean distances (or correlations) assessed in a non-orthogonal space. I would rather favor an analytical clustering directly performed in a low-dimensional orthogonal space such as those produced by PCA, ICA or MDS, in which the most informative dimensions can be selected (e.g. through their eigenvalues in PCA).\n18.\n\nA PCA analysis should be accompanied by a plot representing the % of variance explained by each principal component, so that the number of relevant dimensions to retain can be judged while disregarding the rest as “noise”. It could be the case that more than 2 dimensions are relevant to separate subpopulations in finer detail. The inspection of eigenvalues would help support the statement that the \"PCA plot is less effective at separating cells into many different clusters (Figure 24). This is because the first two principal components are driven by strong differences between specific subpopulations, which reduces the resolution of more subtle differences between some of the other subpopulations.\"\n19.\n\nIn the brain analysis, the three main steps are: 1) the deconvolution method is used to normalize expression levels. Here similar cells are clustered together and cells are normalized in each cluster. The authors state: “This improves normalization accuracy by reducing the number of DE genes between cells in the same cluster”. Clustering is performed here with the quickCluster function from the scran package, where a distance matrix is constructed using Spearman’s correlation on the counts between cells. 2) A hierarchical clustering is then performed and a dynamic tree cut is used to define clusters of cells. 
Then, the batch(sex)-corrected expression values of the (Spearman’s rho) correlated HVGs are used to build a dendrogram assessed through hierarchical clustering on the Euclidean distances between cells, where clusters are defined. And 3) those clusters are used to assess DE with edgeR on the counts, normalized using the library size-adjusted size factors (if I understood correctly) and including all genes (not only correlated HVGs). I personally found this procedure a bit cumbersome, as it relies on different types of expression matrices and metrics in each of the 3 steps (see also next comment). I also wonder to what extent the initial quickCluster results could be biasing the clusters detected downstream, and, if so, whether the normalization step would in turn be biasing the differential expression results. The correspondence between the quickCluster results and the clusters from the dendrograms should at least be inspected and discussed.\n20.\n\nIn line with the previous comment, in the brain analysis I wonder whether the pipeline could somehow be simplified by 1) performing spike-in normalization (which seems possible given the quality of the spike-in trend observed in Figure 21), 2) doing a PCA on the batch(sex)-corrected expression values of all genes (not only correlated HVGs), and performing clustering on the retained principal components, and 3) assessing DE with edgeR on the counts normalized using the spike-in factors.\n\nMinor comments\n21.\n\nSome code at the beginning of the analysis to check and install all the required packages would be welcome.\n22.\n\nEverything ran smoothly in our hands except for the gdata package when trying to read the xls file. The perl command interpreter was running abnormally long and it was using a large amount of RAM. We finally opened the xls file in Excel and converted it into a tab-separated file, then read it using the general read.table command.",
"responses": [
{
"c_id": "2252",
"date": "27 Oct 2016",
"name": "Aaron Lun",
"role": "Author Response",
"response": "Thanks for your comments, Antonio. Our responses are as below: 1. In addition to the approaches proposed by the authors to detect low quality cells, I would suggest the readers the possibility of identifying outlier cells by performing a PCA on the normalized gene expression matrix restricted to protein-coding genes (e.g. using biotype annotations from Ensembl biomart). On the one hand, outlier cells will dominate the first principal components, which will show high percentage of variances simply accounting from the separation of the outlier from the compact cloud of “normal” cells. On the other hand, a PCA analysis could suggest keeping a cell whose relative similarity to the rest of the cells in a low-dimensional space seems rather normal, even if it could still have an allegedly \"bad\" quality metric. This is certainly a valid approach, though we do not mention it here for several reasons. The first reason is that there is an increased risk of being confounded by biological effects when gene expression patterns are directly used, e.g. where uncommon cell types are classified as outliers and removed. The second is that we do not want to confuse readers with a variety of possible options - while our approach is not the only way to do it, it does work, and thus serves its purpose in this workflow. Finally, the use of PCA-based outlier detection has been explored in some detail by Ilicic et al. (2016), which we have already mentioned in the text. 2. As an additional quality control check for the cells, I would also suggest to assess whether the sequencing depth was generally deep enough for most of the cells, by inspecting for instance the saturation curve of the number of detected genes (or other features like the known exon-exon junctions) as a function of the fraction of down-sampled reads. This is an interesting idea, though it seems to be more useful as a diagnostic for future experiments rather than for an already existing dataset. 
Even if saturation is not reached, it would not affect the data analysis provided that the existing counts were large enough. Our diagnostics focus on the quality of the data that we currently have, rather than the potential for improving the experiment by collecting more data. 3. In the text it is proposed to filter out low-abundance genes, defined as “those with an average count below a filter threshold of 1.\" However, the average count is assessed before the normalization step. Would it be more meaningful to apply this filter on the normalized counts? Unfortunately, most normalization methods (e.g. deconvolution, TMM, DESeq) perform poorly with unfiltered data due to the poor precision of low counts. This necessitates some degree of filtering prior to normalization. We do not think that this has a major effect on the mean count for most genes, given that the size factors average out to unity across all cells. 4. A priori it is difficult to rule out the possibility that the filtering of low-abundance genes could eventually hamper the identification of relevant genes in rare populations of cells. I would generally suggest being rather inclusive at this stage, especially when no clustering of single-cells has been done yet, so that it would still be possible to check whether e.g. the few cells expressing some genes -even if at low levels- are actually forming a distinctive and biologically relevant cluster. In the context of this workflow, one of the roles of filtering is to reduce the number of genes that need to be tested as being highly variable. This improves power by reducing the severity of the multiple testing correction, increasing the chance that potentially informative genes are detected as HVGs and used in downstream analyses. Thus, while relaxing the filter may retain more genes, fewer of these genes may actually be used in the downstream analysis. 
(This is more likely than not - low-abundance genes are not generally detected as being highly variable, due to inherent limits on the scope of variability in count data.) Indeed, in the example of few cells expressing few genes at low levels, it is difficult to see how such genes would be detected as being significant in a HVG analysis. 5. Authors propose as an alternative approach to gene filtering, to select genes that have non-zero counts in at least n cells. As illustrated in Figure 6, the number of cells expressing a given gene may be modeled by its mean expression level. This was elegantly addressed in Shalek et al. (2014) through a likelihood ratio test comparing a null model -where all cells express a gene in a lognormal fashion- with an alternate model -where a gene is not expressed in a subpopulation of cells α (See section \"Controlling for relationship between expression level and detection efficiency\" in the supplementary material: http://www.nature.com/nature/journal/v510/n7505/extref/nature13437-s1.pdf). Genes for which the null model is rejected may be indicative of a subpopulation of cells not expressing the gene at a higher fraction than the one expected from technical noise (e.g. dropout events). I would suggest exploring such approach in order to avoid filtering out relevant genes due to a sharp threshold on the number of cells expressing it. There are several arguments against using such an approach, at least during the filtering stage. Firstly, this approach specifically selects for bimodal genes whereas it is entirely possible that interesting genes could vary across a continuum of expression values (or, in fact, are bimodal at two non-zero locations). The second is that the significance threshold effectively serves the same purpose as a threshold on the percentage of expressing genes - only less interpretable, as it depends on the vagaries and assumptions of the model. Indeed, default thresholds for significance (e.g. 
1%, 5%) may not be appropriate for filtering and exploratory analyses. Thus, some tuning of the significance thresholds is likely to be required, further reducing interpretability. Consequently, we feel that the approach we have suggested is more likely to be generally useful to the wider biological community. 6. In the section “Filtering out low-abundance genes”, the sentence \"This provides some more protection against genes with outlier expression patterns, i.e., strong expression in only one or two cells. Such outliers are typically uninteresting as they can arise from amplification artifacts that are not replicable across cells.[...]\" would be better followed by setting alt.keep <- numcells >= 2 instead of >= 10 The \"ideal\" threshold depends largely on the biological context. The HSC dataset contains a highly purified and homogeneous population. We would expect that most expressed genes would be present in a substantial number of these cells, hence the choice of threshold. While relaxing the filter is possible, this runs into the problems discussed above in our response to point 4. Of course, in other situations where rare cell types are present (e.g. olfactory neurons expressing unique receptors), relaxing the filter might be necessary to retain biological information. We have added a comment about this in the revised manuscript. 7. It would be interesting to complement Figure 7 and 18 with a second panel representing the correlation between size factors from deconvolution versus spike-in-specific size factors, as done in Figure 27. In the eventual case that a low correlation between them was found in a non-DE scenario, would it be advisable to neglect spike-ins from the analysis? We considered adding this, but felt that it would make this part of the workflow somewhat difficult to follow given that we use only the deconvolution factors for normalisation of the endogenous genes. 
Nevertheless, we agree that this is an important point and are glad that the reviewer pointed out Figure 27 where we discuss this issue in some detail. Low correlations between the spike-in and deconvolution size factors are not a cause for concern. As we have mentioned, this is entirely possible due to differences in total mRNA content. In terms of normalization, the two sets of size factors simply deal with different biases, so differences between them do not provide any indication of spike-in quality. 8. As pointed by the authors, spike-in molecules have been extensively used to infer the amount of variability in the expression levels of one gene that can be explained from technical noise (e.g. Brennecke et al., 2013; Grün et al., 2014; Islam et al., 2014). Ding et al. (2015) went further on the applications of spike-in levels, by using them to explicitly remove technical noise and compute de-noised gene expression levels (R software GRM, http://wanglab.ucsd.edu/star/GRM/). I would suggest the readers such possibility that could largely benefit downstream analysis such as the detection of subpopulation of cells and cell trajectories, as they would mainly rely on biological variation. This would still be compatible with an assessment of HVG only based on biological variation by fitting the trend to the variance estimates of the endogenous genes (after technical denoising). The GRM strategy is an interesting one. However, we do not use it here because the denoising is performed based on a curve fitted to the spike-in log-FPKMs against the known concentrations. This is philosophically similar to spike-in-based normalization, in that it will preserve information about total RNA content. For example, cells with more endogenous RNA will have larger gene counts and unchanged (or smaller) spike-in counts; this results in larger de-noised expression values compared to other cells with less total RNA. 
Such behaviour may not be desirable in situations where cell size is not of interest. 9. Authors state that the technical component estimation through the fitting of a mean-variance trend to the spike-in transcripts “is compromised by the small number of spike-in transcripts, the uneven distribution of their abundances and (for low numbers of cells) the imprecision of their variance estimates”. Do the same remarks generally apply to a spike-in-specific normalization? And if so, should spike-in normalization be considered accurate enough when applied to cases with strong DE even if it is conceptually more appropriate than a deconvolution approach? In general, no, the remarks do not apply for spike-in normalization. This is because spike-in normalization computes a single size factor, using information across all spike-in transcripts. As a result, the size factor is generally quite precise. Fitting of the mean-variance trend is less stable because it uses information from each individual spike-in transcript. This is subject to the issues described in the text, thus reducing the stability of the outcome. 10. In the section “Identifying HVGs from the normalized log-expression” the authors justify their choice of \"the variance of the log-expression values because the log-transformation protects against genes with strong expression in only one or two cells. This ensures that the set of top HVGs is not dominated by genes with (mostly uninteresting) outlier expression patterns”. However, the filtering of genes with such patterns has already been proposed in a previous section, so those cases should no longer be a risk here. This depends on the type of abundance filtering that was chosen. In this workflow, we performed filtering based on the average count, which does not explicitly protect against strong outliers. Thus, some additional protection is needed during the downstream analysis. 
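As a toy illustration of that protection (a Python sketch on made-up counts, not part of the original R workflow): a gene with a single extreme cell dominates the variance on the raw scale, but after a log-transformation a gene with genuine bimodal structure has the larger variance instead.

```python
import numpy as np

# Two hypothetical genes measured across 50 cells (values made up).
outlier_gene = np.array([0] * 49 + [100])                          # one extreme cell
bimodal_gene = np.concatenate([np.full(25, 2), np.full(25, 20)])   # genuine structure

raw_var_outlier = outlier_gene.var()
raw_var_bimodal = bimodal_gene.var()
log_var_outlier = np.log2(outlier_gene + 1).var()
log_var_bimodal = np.log2(bimodal_gene + 1).var()

# On the raw scale the outlier gene has the larger variance;
# on the log scale the bimodal gene dominates instead.
print(raw_var_outlier > raw_var_bimodal)   # -> True
print(log_var_bimodal > log_var_outlier)   # -> True
```

This is only a caricature of real count data, but it shows why ranking genes by the variance of log-expression values is less sensitive to one or two outlier cells.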
If filtering was performed based on an \"at least n\" strategy, then outliers will be less of an issue during HVG detection. Of course, the \"at least n\" filter has problems of its own regarding an appropriate choice for \"n\", as we have discussed in the text and in our response to point 6, which is why we have not used it as the default filtering strategy. 11. The interpretability of the approach of \"Identifying correlated gene pairs with Spearman’s rho\" is to some extent limited without a previous analysis such as PCA, ICA or MDS, transforming the high-dimensional space into a space of independent (uncorrelated) dimensions. I would rather favor the identification of sets of genes with a high weight on each of the retained independent axes (i.e. driving the variance in such axes, and therefore disentangling sets of correlated genes for each of the orthogonal dimensions). Otherwise, the analysis could risk being dominated by the first component, probably neglecting other relevant hidden factors. The point of calculating these correlations is to provide a simple screen for genes that are likely to be involved in defining the substructure of the dataset. Interpretation of the cause of these correlations can then be performed using PCA, ICA, etc., as suggested, on the subset of interesting genes. Without some pre-selection of genes (in terms of high variance or correlation), biological and technical noise may interfere with dimensionality reduction - see our response to point 12. Our approach allows relevant genes to be selected in a statistically rigorous manner based on significant correlations. In contrast, it is unclear how selection would be performed based on the PCA weights. For example, what should be considered a \"high weight\", and from how many principal components should genes be selected? 
The simplicity of the calculation of significant pairwise correlations also provides a useful sanity check for conclusions drawn from more complex downstream analyses. Finally, if there are hidden factors, these are likely to increase the correlations and cause rejection of the null hypothesis for the relevant genes. So, genes that are affected by these factors will still be retained for downstream analysis and interpretation. 12. In any case, I advise not to restrict to correlated HVGs downstream dimensionality reduction analysis such as PCA or ICA aiming at the identification of subpopulations of cells and their gene signatures. Such methods exploit correlation patterns (linear or non-linear) in a well-grounded way and they do not require a feature selection step. The sentence \"We only use the correlated HVGs in plotPCA because any substructure should be most pronounced in the expression profiles of these genes\" could eventually not hold true in some instances: correlated HVGs were assessed without considering those independent components, the relative contribution of each dimension to the total variance, and the relative contribution of each gene to each dimension. The aim of selecting correlated HVGs is to reduce the amount of technical and (uncorrelated/uninteresting) biological noise in the data to be used for downstream analyses. This improves the performance of dimensionality reduction approaches, especially if the substructure is relatively weak. 
For example, with PCA, adding a large number of uncorrelated genes will interfere with correct placement of cells along a trajectory:\npar(mfrow=c(1,2))\nloc <- 1:100/100 # True placement of cells\na1 <- matrix(jitter(rep(loc, 50)), nrow=50, byrow=TRUE) # Correlated genes\nx1 <- prcomp(t(a1))\nplot(x1$x[,1]) # Should be on the diagonal\na2 <- rbind(a1, matrix(rnorm(100000), ncol=100)) # Adding uncorrelated noise\nx2 <- prcomp(t(a2))\nplot(x2$x[,1]) # Correct placing is disrupted\nSimilar arguments can be made for distance-based approaches like t-SNE and diffusion maps, where the nearest neighbours become more difficult to identify correctly with increasing noise. Finally, the identification of correlated HVGs does not need to consider the nature of the substructure. We only need to identify the genes that are affected by this substructure, in one way or the other - it is the function of downstream analyses to determine what the substructure actually represents. 13. In the brain dataset, correlated HVGs were assessed considering the design <- model.matrix(~sce$sex); correlatePairs(sce, design=design). It would be useful to further explain here how this function accounts for the design matrix in the assessment of Spearman’s rho. For one-way layouts, a value of rho is first computed within each group of cells. The average across all groups (weighted by the number of cells) is then used as the final value of rho for any given pair of genes. For more complex designs, a linear model is fitted to the log-normalized counts, and rho is calculated using the residuals of the model fit. (While the linear model approach also works for one-way layouts, it requires some additional assumptions that can be avoided with a simpler group-based approach.) More details can be found in the documentation for the correlatePairs() function. 14. In the brain dataset, removeBatchEffect from the limma package is used to remove the sex effect. 
Then tSNE and PCA are applied on the sex-corrected expression values restricted to correlated HVGs. Consistently, correlated HVGs were assessed considering the very same factor: design <- model.matrix(~sce$sex); correlatePairs(sce, design=design). I would further warn the reader and stress the necessity of that consistency between both steps. We have added a comment on this to the manuscript. 15. For consistency with the assessment of correlations based on Spearman's rho, in the heatmap I would recommend to assess first the dendrograms for the cells and the genes by using also a spearman correlation... Our dendrograms are constructed based on the distances between cells, which is different from the correlations between genes. Using the correlations to cluster the genes makes more sense with respect to checking consistency, but the primary aim of our analysis is to identify clusters of cells (potential subpopulations) rather than clusters of genes. The latter is certainly a worthwhile analysis (e.g. to identify gene modules) but, in the context of this workflow, the differences in clustering on correlations versus Euclidean distances would only have a minor effect. 16. I would suggest explicitly mentioning in the pipeline which approaches are based on linear or non-linear assumptions. The workflow alternates methods from both categories, which should be taken into account to understand their downstream consequences... Obviously, each computational method makes a number of assumptions. For the sake of readability and simplicity (especially for inexperienced readers), we have not discussed most of these assumptions in this workflow, except for those that are critical to choosing between methods, e.g. spike-in normalization versus deconvolution. Nonetheless, we have modified the manuscript to elaborate on the reasons for using non-linear methods such as Spearman's rho and t-SNE. 17. 
The use of hierarchical clustering for clustering cells into putative subpopulations is based on Euclidean distances (or correlations) assessed in a non-orthogonal space. I would rather favor an analytical clustering directly performed in a low-dimensional orthogonal space such us those led by PCA, ICA or MDS, in which the most-informative dimensions can be selected (e.g. through their eigen values in PCA). There are many possible approaches to clustering, each with their own advantages and disadvantages. For example, pre-selection of a low-dimensional space via PCA may reduce noise during clustering, but it may also discard subtle features present in lower-ranked PCs. Our clustering approach is simple but effective enough, which is why we have used it in this workflow. Other methods may well do better, but a discussion of the pros and cons of different clustering strategies is beyond the scope of this article. 18. A PCA analysis should be accompanied by a plot representing the % of variance explained by each principal component, so that it can be judged the number of relevant dimensions to be retained while disregarding the rest as “noise”. It could be the case that more than 2 dimensions are relevant to separate subpopulations in a finer detail. The inspection of eigenvalues would help supporting that \"PCA plot is less effective at separating cells into many different clusters (Figure 24). This is because the first two principal components are driven by strong differences between specific subpopulations, which reduces the resolution of more subtle differences between some of the other subpopulations.\" We only use PCA for visualization, rather than selection of principal components for further quantitative analysis in low-dimensional space. For this purpose, knowing the relative contributions to the total variance from non-visualized components is less helpful. 
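For readers who do want to inspect per-component contributions, a generic sketch of the calculation is below (Python/NumPy on synthetic, made-up data; a real analysis would instead use the workflow's R objects, e.g. the output of prcomp):

```python
import numpy as np

# Synthetic cells x genes matrix with one dominant axis of variation (made up).
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 20))
X[:, 0] += np.linspace(0, 10, 50)      # inject a strong signal along one gene

Xc = X - X.mean(axis=0)                # centre each gene
s = np.linalg.svd(Xc, compute_uv=False)
var_explained = s**2 / (s**2).sum()    # proportion of variance per PC
print(var_explained[:3])               # the first PC dominates here
```

The proportions always sum to one, and a scree plot of these values is the standard way to judge how many components carry signal.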
For example, even if we determined that the top 10 dimensions were \"relevant\", it is unclear how this would assist visualization. Nonetheless, we now mention in the text how this information can be generated and used. 19. In the brain analysis, three main steps are: 1) the deconvolution method is used to normalize expression levels. Here similar cells are clustered together and cells are normalized in each cluster. Authors state: “This improves normalization accuracy by reducing the number of DE genes between cells in the same cluster”. Clustering is performed here with the quickCluster function from scran package, where a distance matrix is constructed using Spearman’s correlation on the counts between cells. 2) A hierarchical clustering is then performed and a dynamic tree cut is used to define clusters of cells. Then, the batch(sex)-corrected expression values of the (Spearman’s rho) correlated HVG are used to build a dendogram assessed through hierarchical clustering on the Euclidean distances between cells, where clusters are defined. And 3) those clusters are used to assess DE with edgeR on the counts, normalized using the library size-adjusted size factors (if I well understood) and including all genes (not only correlated HVG). I personally found such procedure a bit cumbersome as it is relying on different types of expression matrices and metrics in each of the 3 steps (see also next comment). I also wonder to what extent the initial quickCluster results could be biasing the clusters detected downstream, and, if so, whether the normalization step would be biasing in turn the differentially expression results. The correspondence between the quickCluster results with the clusters from the dendograms should at least be inspected and discussed. In terms of the choice of matrices and metrics, we have chosen approaches that we feel are suitable for each step of the workflow. 
Given that each step examines a different aspect of the data, some flexibility is inevitably required in supplying the correct input to each method. Regarding quickCluster, Lun et al. (2016) show that unbiased size factor estimates are still obtained after clustering. This is because size factors computed within each cluster are explicitly corrected to be comparable between clusters. As for the similarity between the quickCluster results and the dendrograms, we do not believe that this provides a useful indication of method performance. Some agreement is expected, as the two methods should recover similar structure in the data. However, some disagreement is also expected, as quickCluster provides a quick-and-dirty clustering to reduce the amount of DE genes present during deconvolution, while the dendrograms are much more refined due to feature selection. Such incongruences are not a problem for normalization - even if quickCluster identifies the \"incorrect\" clusters, it is still adequate if it separates cells with vastly different transcriptomic profiles. 20. In line with the previous comment, in the brain analysis I wonder whether the pipeline could somehow be simplified by 1) performing spike-in normalization (which seems possible given the quality of the spike-in trend observed in Figure 21), 2) doing a PCA on the batch(sex)-corrected expression values of all genes (not only correlated HVGs), and performing clustering on the retained principal components, and 3) assessing DE with edgeR on the counts normalized using the spike-in factors. One could certainly perform such an analysis. However, we chose to use the approach described in the workflow, because feature selection can improve the results of downstream analyses, as discussed in our response to point 12; and the choice of whether or not to do spike-in normalization depends primarily on whether total RNA content is interesting, not on the quality of the spike-ins. 21. 
Some code at the beginning of the analysis to check and install all the required packages would be welcome. We have added a link to the Bioconductor workflow page, which provides instructions for installing all required packages and running the workflow. 22. Everything ran smoothly in our hands except for the gdata package when trying to read the xls file. The perl command interpreter ran for an abnormally long time and used a large amount of RAM. We finally opened the xls file in Excel and converted it into a tab-separated file, then read it using the general read.table command. We understand the suboptimality of dealing with Excel files in bioinformatics analysis. Unfortunately, the authors of this study provided the count data in Excel format on NCBI GEO. We decided to load the data directly rather than manually supplying the counts in a simpler format. The latter would make the workflow less generalisable as it would no longer use data from public, well-recognised sources. In our hands, loading of the Excel file usually requires a couple of minutes and 3-4 GB of RAM."
}
]
}
] | 1
|
https://f1000research.com/articles/5-2122
|
https://f1000research.com/articles/5-2606/v1
|
31 Oct 16
|
{
"type": "Case Report",
"title": "Case Report: X-linked recessive anhidrotic ectodermal dysplasia with immunodeficiency and an unusual Aspergillus infection",
"authors": [
"Tahaamin Shokuhfar",
"Zahra Mo’mmen",
"Elnaz Panah",
"Abdollvahhab Alborzei",
"Babak Torabi Sagvand",
"Asghar Aghamohamadi",
"Zahra Mo’mmen",
"Elnaz Panah",
"Abdollvahhab Alborzei",
"Babak Torabi Sagvand",
"Asghar Aghamohamadi"
],
"abstract": "NEMO (NF-kB essential modulator) is a regulatory factor involved in signaling pathways of the innate and adaptative immune systems. Hypomorphic mutation of the NEMO gene (also called IKBKG gene) on the X chromosome leads to X-linked recessive anhidrotic ectodermal dysplasia with immunodeficiency. Affected male children present a developmental phenotype with hypotrichosis, hypohydrosis, and hypodontia with conical incisors and susceptibility to pyogenic bacteria, mycobacteria and viruses. Most also have impaired antibody response to polysaccharide antigens. Here we present the case of a 7-year-old boy with disseminated BCGitis and unusual Aspergillus infection who was later diagnosed with a homozygous mutation of the NEMO gene. Appropriate long term anti-mycobacterial medications, prophylactic anti-fungal therapy and current monthly intravenous immunoglobulin (IVIG) stabilized the patient’s condition and has significantly improved his general health. High incidence of atypical mycobacterial infection in such cases emphasize the need for prophylaxis. In conclusion, attention to gender, pattern of infections, and precise physical exam helped us to diagnose and appropriately manage this case. We propose prophylactic therapy for mycobacterial and opportunistic infections after the confirmation of homozygous NEMO gene mutation.",
"keywords": [
"Anhidrotic Ectodermal Dysplasia",
"NF-B essential modulator",
"immunodeficiency",
"recurrent infections",
"Aspergillosis"
],
"content": "Introduction\n\nX-linked anhidrotic ectodermal dysplasia with immunodeficiency (XL-EDA-ID) is a rare congenital disease, characterized by susceptibility to infectious diseases and abnormal development of ectoderm-derived skin appendages1–5.\n\nXL-EDA-ID clinical and immunological phenotypes are highly mutation dependent. Previous reports showed the associations of specific mutations with particular phenotypes including susceptibility to poorly virulent mycobacteria, invasive pyogenic bacterial fungal and severe viral infections, due to the essential role of NEMO (nuclear factor-kappa B (NF-κB) essential modulator) in both innate and adaptive immunity3,6–14. Both cellular and humoral abnormalities were recorded in XL-EDA-ID cases15,16. Most patients bearing NEMO gene mutations (also called IKBKG, encoder of NEMO protein) have an impaired antibody response, in particular to glycans17. However, impairments in CD40-mediated B cell activation, isotype class switching, NK cell cytotoxicity, response to LPS stimulation, and production of TNF and IL-12 have been verified by in vitro studies for some NEMO-deficient patients14. In this report, we present a 7-year-old boy with XL-EDA-ID suffering from disseminated BCGitis and fungal infection with specific antibody deficiency against glycan antigens.\n\n\nCase presentation\n\nThis report describes a male child born to a non-consanguineous parents with no history of immunodeficiency in the family. The mother had a history of Behçet's disease. Birth growth parameters and mental development were in normal range. Vaccinations were up-to-date without any complication except diffuse lymphadenopathy following Bacillus Calmette-Guérin (BCG) vaccination at the age of 3 days. At 1 month of age, he was admitted to hospital with low fever, dry cough and respiratory distress and was diagnosed with pneumonia. During admission, abnormal signs such as tremor of upper and lower extremities and upward gaze were inspected. 
Cerebrospinal fluid analysis was normal (sugar: 36mg/dl; protein: 26mg/dl; no cells). He was diagnosed with suspected febrile convulsion due to a viral infection. During the first 9 months, he developed recurrent episodes of respiratory tract infections. Later on, another episode of disseminated BCGitis was detected while he had been in the continuation phase of isoniazid (INH) and rifampin (RIF) therapy (10 mg/kg daily) for one month. Physical examination found multiple cervical lymphadenopathies, which biopsy later revealed to be caseating granulomatous lymphadenitis. Spiral CT-scan of the abdomen showed hepatomegaly with inflammatory parenchyma and multiple para-aortic lymphadenopathies. Bone marrow study was normal. Continuation of antimycobacterial therapy at the maximum doses of INH (15 mg/kg/day) and RIF (20 mg/kg/day) significantly improved the disseminated BCGitis after 18 months.\n\nTwo other episodes of pneumococcal pneumonia were reported at the ages of 3.5 and 4 years. At the age of 5, the patient experienced severe Aspergillus nidulans pneumonia and was started on Voriconazole (8 mg/kg) followed by Itraconazole (5 mg/kg) twice a day for 1 year, with a favorable outcome.\n\nAt the age of 6, the patient was referred to our center for the evaluation of immunodeficiency. Further examination revealed additional features of ectodermal dysplasia including conical primary teeth without agenesis, ridged nails, sparse hair and skin abnormalities. Given the recurrent opportunistic and unusual infections, laboratory evaluation of the immune system was performed and showed normal immunoglobulin levels, an impaired response to pneumococcal vaccine and a defective reaction to PPD (<5 mm induration) (Table 1). Gene sequencing revealed a hemizygous NEMO missense mutation in exon 8, c.932A>G, which leads to the substitution of aspartic acid by glycine at residue 311 (designated D311G). 
Molecular testing of the patient’s mother, grandmother and second aunt revealed heterozygous NEMO mutations at the corresponding locus. The patient has been administered monthly courses of intravenous immunoglobulin (800 mg/kg) and a prophylactic dose of Itraconazole (5 mg/kg/day). Currently, the patient is symptom-free.\n\n\nDiscussion\n\nHypomorphic mutations in NEMO are associated with XL-EDA-ID6–8. Patients with a hypomorphic hemizygous IKBKG mutation appear to have some form of immunodeficiency, regardless of the presence or absence of EDA2–5. This could be due to abnormal NF-κB activity, which results in defective lymphocyte receptor signaling18,19. Here we describe the first patient with NEMO deficiency who presents with a classical EDA phenotype, disseminated BCGitis and pulmonary Aspergillosis. This patient carries a hemizygous NEMO mutation, D311G, which has also been reported previously in a patient with recurrent mycobacterial infections (M. avium and M. abscessus) without any history of fungal infections19. The mycobacterial susceptibility in both patients can be explained by the impairment of CD40-dependent IL-12 production20. Our patient’s defective PPD reaction is the only documented finding revealing the failure of cellular immunity that led to disseminated BCGitis.\n\nThe immunological phenotypes of these two cases are comparable, in that both patients displayed the same impaired antibody response to glycans as the only detected immunologic abnormality19. Almost all patients bearing mutations in NEMO have an impaired antibody response to glycans, in particular to pneumococcal capsules3,4. Half of them also have hypogammaglobulinemia, probably secondary to CD40 signaling impairment3. Some mutations in the IKBKG gene are associated with T-cell defects, because NEMO is an essential component of the inhibitor of NF-κB (IκB) kinase (IKK) complex, affecting the phosphorylation of IκB, which is necessary for nuclear translocation of NF-κB24. 
Signaling through the IKK complex has been shown to be essential for the production of mature/memory T-cells, which may explain the low memory T-cell phenotype observed in these patients25,26.\n\nLastly, regarding the developmental phenotype, the patient reported here displays a more severe EDA phenotype (dysmorphic conical primary teeth) compared to the patient with the same hemizygous NEMO mutation, who has only agenesis of the maxillary lateral incisors and premolars19. Hence, appropriate genetic diagnosis and genetic counseling seem essential, and testing for NEMO carriers should be considered (where applicable), as performed in our case for the patient’s mother and maternal aunts. Intravenous immunoglobulin is the treatment of choice in NEMO-deficient patients with evidence of impaired antibody production5,6. The high incidence of atypical mycobacterial infection in these cases emphasizes the need for prophylaxis. Prophylaxis against pneumocystis pneumonia should also be considered, specifically in males with low T-cell counts or severely impaired lymphocyte proliferation27–29. In conclusion, attention to gender, pattern of infections, and skin involvement helped us to diagnose and appropriately manage this case.\n\n\nConsent\n\nWritten informed consent for publication of the patient’s details was obtained from the patient’s parents.",
"appendix": "Author contributions\n\n\n\nTS, ZM conducted the study and prepared the first draft. All authors were involved in data collection and preparation of the written manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed\n\n\nGrant information\n\nThe authors declared that no grants were involved in supporting this work.\n\n\nReferences\n\nClarke A, Phillips DI, Brown R, et al.: Clinical aspects of X-linked hypohidrotic ectodermal dysplasia. Arch Dis Child. 1987; 62(10): 989–996. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDöffinger R, Smahi A, Bessia C, et al.: X-linked anhidrotic ectodermal dysplasia with immunodeficiency is caused by impaired NF-κB signaling. Nat Genet. 2001; 27(3): 277–285. PubMed Abstract | Publisher Full Text\n\nHanson EP, Monaco-Shawver L, Solt LA, et al.: Hypomorphic nuclear factor-κB essential modulator mutation database and reconstitution system identifies phenotypic and immunologic diversity. J Allergy Clin Immunol. 2008; 122(6): 1169–1177.e16. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPuel A, Picard C, Ku CL, et al.: Inherited disorders of NF-κB-mediated immunity in man. Curr Opin Immunol. 2004; 16(1): 34–41. PubMed Abstract | Publisher Full Text\n\nPicard C, Casanova JL, Puel A: Infectious diseases in patients with IRAK-4, MyD88, NEMO, or IκBα deficiency. Clin Microbiol Rev. 2011; 24(3): 490–497. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZonana J, Elder ME, Schneider LC, et al.: A novel X-linked disorder of immune deficiency and hypohidrotic ectodermal dysplasia is allelic to incontinentia pigmenti and due to mutations in IKK-gamma (NEMO). Am J Hum Genet. 2000; 67(6): 1555–1562. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAradhya S, Woffendin H, Jakins T, et al.: A recurrent deletion in the ubiquitously expressed NEMO (IKK-γ) gene accounts for the vast majority of incontinentia pigmenti mutations. Hum Mol Genet. 2001; 10(19): 2171–2179. 
PubMed Abstract | Publisher Full Text\n\nMansour S, Woffendin H, Mitton S, et al.: Incontinentia pigmenti in a surviving male is accompanied by hypohidrotic ectodermal dysplasia and recurrent infection. Am J Med Genet. 2001; 99(2): 172–177. PubMed Abstract | Publisher Full Text\n\nJain A, Ma CA, Liu S, et al.: Specific missense mutations in NEMO result in hyper-IgM syndrome with hypohydrotic ectodermal dysplasia. Nat Immunol. 2001; 2(3): 223–228. PubMed Abstract | Publisher Full Text\n\nDrögemüller C, Distl O, Leeb T: X-linked anhidrotic ectodermal dysplasia (ED1) in men, mice, and cattle. Genet Sel Evol. 2003; 35(Suppl 1): S137–145. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKobielak A, Kobielak K, Wisniewski SA, et al.: Sequence polymorphisms of the EDA and the DL genes in the patients with an X-linked and an autosomal forms of anhidrotic ectodermal dysplasia. Folia Histochem Cytobiol. Polish Academy of Sciences, Polish Histochemical and Cytochemical Society. 2001; 39(2): 113–114. PubMed Abstract\n\nCluzeau C, Hadj-Rabia S, Jambou M, et al.: Only four genes (EDA1, EDAR, EDARADD, and WNT10A) account for 90% of hypohidrotic/anhidrotic ectodermal dysplasia cases. Hum Mutat. 2011; 32(1): 70–72. PubMed Abstract | Publisher Full Text\n\nNiehues T, Reichenbach J, Neubert J, et al.: Nuclear factor κB essential modulator-deficient child with immunodeficiency yet without anhidrotic ectodermal dysplasia. J Allergy Clin Immunol. 2004; 114(6): 1456–1462. PubMed Abstract | Publisher Full Text\n\nOrange JS, Jain A, Ballas ZK, et al.: The presentation and natural history of immunodeficiency caused by nuclear factor κB essential modulator mutation. J Allergy Clin Immunol. 2004; 113(4): 725–733. PubMed Abstract | Publisher Full Text\n\nAbinun M, Spickett G, Appleton AL, et al.: Anhidrotic ectodermal dysplasia associated with specific antibody deficiency. Eur J Pediatr. 1996; 155(2): 146–147. 
PubMed Abstract\n\nSchweizer P, Kalhoff H, Horneff G, et al.: [Polysaccharide specific humoral immunodeficiency in ectodermal dysplasia. Case report of a boy with two affected brothers]. Klin Padiatr. 1999; 211(6): 459–461. PubMed Abstract\n\nDupuis-Girod S, Corradini N, Hadj-Rabia S, et al.: Osteopetrosis, lymphedema, anhidrotic ectodermal dysplasia, and immunodeficiency in a boy and incontinentia pigmenti in his mother. Pediatrics. 2002; 109(6): e97. PubMed Abstract | Publisher Full Text\n\nKeller MD, Petersen M, Ong P, et al.: Hypohidrotic ectodermal dysplasia and immunodeficiency with coincident NEMO and EDA mutations. Front Immunol. 2011; 2: 61. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHubeau M, Ngadjeua F, Puel A, et al.: New mechanism of X-linked anhidrotic ectodermal dysplasia with immunodeficiency: impairment of ubiquitin binding despite normal folding of NEMO protein. Blood. 2011; 118(4): 926–935. PubMed Abstract | Publisher Full Text | Free Full Text\n\nStark R, Hartung A, Zehn D, et al.: IL-12-mediated STAT4 signaling and TCR signal strength cooperate in the induction of CD40L in human and mouse CD8+ T cells. Eur J Immunol. 2013; 43(6): 1511–1517. PubMed Abstract | Publisher Full Text\n\nKawai T, Nishikomori R, Izawa K, et al.: Frequent somatic mosaicism of NEMO in T cells of patients with X-linked anhidrotic ectodermal dysplasia with immunodeficiency. Blood. 2012; 119(23): 5458–5466. PubMed Abstract | Publisher Full Text\n\nNishikomori R, Akutagawa H, Maruyama K, et al.: X-linked ectodermal dysplasia and immunodeficiency caused by reversion mosaicism of NEMO reveals a critical role for NEMO in human T-cell development and/or survival. Blood. 2004; 103(12): 4565–4572. PubMed Abstract | Publisher Full Text\n\nKataoka K, Muta T, Yamazaki S, et al.: Activation of macrophages by linear (1→3)-beta-D-glucans. Implications for the recognition of fungi by innate immunity. J Biol Chem. 2002; 277(39): 36825–36831. 
PubMed Abstract | Publisher Full Text\n\nOrange JS, Levy O, Geha RS: Human disease resulting from gene mutations that interfere with appropriate nuclear factor-κB activation. Immunol Rev. 2005; 203: 21–37. PubMed Abstract | Publisher Full Text\n\nSchmidt C, Peng B, Li Z, et al.: Mechanisms of Proinflammatory Cytokine-Induced Biphasic NF-κB Activation. Mol Cell. 2003; 12(5): 1287–300. PubMed Abstract | Publisher Full Text\n\nShokuhfar T, Panah E, Aghamohammadi A: Chronic Diarrhea in Hypogammaglobulinemia: A Case Report of Immunodeficiency. MOJ Immunol. 2016; 4(1): 00116. Publisher Full Text\n\nSchmidt-Supprian M, Courtois G, Tian J, et al.: Mature T cells depend on signaling through the IKK complex. Immunity. 2003; 19(3): 377–389. PubMed Abstract | Publisher Full Text\n\nOrange JS, Brodeur SR, Jain A, et al.: Deficient natural killer cell cytotoxicity in patients with IKK-gamma/NEMO mutations. J Clin Invest. 2002; 109(11): 1501–1509. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "18174",
"date": "02 Dec 2016",
"name": "Hiroyuki Nunoi",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors reported a case with hypomorphic IKBKG mutation of a 7-year-old boy with disseminated BCGitis and unusual Aspergillus infection, who was successfully treated and prohibited with INH and RIF, in addition to IVIG administration. But any evidence about molecular and functional defects were not shown in the table or Figures. They only described. I could not confirm the results. The reviewer needs some figure for them but not precisely.\n\nThe title is appropriate for the content of the article. The abstract represent a suitable and attractive summary of the work. The design, methods and analysis of the results from the study been neither explained nor shown any figure or results of functional studies. Although the detail was not required, some figure for them are necessary. Conclusions are usual and no novel findings. The data shown in Table 1 is not enough to explain patient’s symptom. More critical or positive data should be listed, like a sequence data.",
"responses": []
},
{
"id": "20187",
"date": "14 Feb 2017",
"name": "Michele Callea",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors reported an interesting clinical and molecular case of a rare entity; this is enough to find the merit to index this case; considering the pathology I would not be surprised by an infection, and not focus in the title on the “unusual Aspergillus infections”; the mutation detected has been already reported, phenotype is typical and it would be of interest knowing something about life quality and expectancy for the incoming years since XL-HED-ID is quite a challenge with all the complications which might arise from the disease. 
A reference is attached; citing it is not mandatory.\nI recommend changing the title to something more general (e.g. Clinical and molecular study of a case of X-linked hypohidrotic ectodermal dysplasia with immunodeficiency). The abstract should begin with an introduction to the Ectodermal Dysplasias (EDs) in general and, after mentioning the most common forms (XL-HED, AD-HED, etc., caused by the EDA gene or by EDAR, EDARADD or WNT10A), could go deeper into the analysis of NEMO gene features.\n\nThe design, methods and analysis of the results of the study have not been explained and do not clarify the data presented or the results.\n\nConclusions are sensible, balanced and justified on the basis of the results of the study, although no novel findings are reported.\n\nGood information has been provided, but more figures are required, such as an electropherogram of the mutation, a pedigree of the family, the OMIM number of the disease and the sequencing methodology, plus a few more references. Furthermore, when mentioning clinical features (i.e. peculiar conical-shaped primary teeth (“primary” is preferable to “lacteal”), agenesis rather than “agenesia”, sparse hair), a picture of the proband and/or a radiographic examination such as an orthopantomogram should be added.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2606
|
https://f1000research.com/articles/5-2156/v1
|
02 Sep 16
|
{
"type": "Research Note",
"title": "Analysis of morphine responses in mice reveals a QTL on Chromosome 7",
"authors": [
"Wim E. Crusio",
"Esha Dhawan",
"Elissa J. Chesler",
"Anna Delprato",
"Wim E. Crusio",
"Esha Dhawan"
],
"abstract": "In this study we identified a quantitative trait locus (QTL) on mouse Chromosome 7 associated with locomotor activity and rearing post morphine treatment. This QTL was revealed after correcting for the effects of another QTL peak on Chromosome 10 using composite interval mapping. The positional candidate genes are Syt9 and Ppfibp2. Several other genes within the interval are linked to neural processes, locomotor activity, and the defensive response to harmful stimuli.",
"keywords": [
"Opioids",
"QTL analysis",
"Locomotor activity",
"Reward pathway"
],
"content": "Introduction\n\nResponses to drugs of abuse vary among individuals and are genetically influenced (Mistry et al., 2014). Such drugs stimulate the brain reward pathway (Gardner, 2011). Analysis of genetic variation in the behavioral and neurobiological processes can reveal molecular mechanisms that regulate or mediate the response to drugs, and neurobiological changes associated with chronic drug use. Many of the genes and signaling processes involved in the reward pathway in humans are conserved in mice (Adinoff, 2004), and mouse genetic methods provide an efficient means of identifying these genes and loci.\n\nQuantitative trait loci (QTL) analysis of recombinant inbred mouse strains (RIS) integrates phenotype and genotype data and is a widely used approach for studying the genetic basis of drug effects and addiction susceptibility (Spence et al., 2005). The BXD RIS, derived from C57BL/6J (B) and DBA/2J (D), are a well-established genetic reference population used in behavioral neuroscience studies for mapping complex traits associated with drug use (Crabbe et al., 1996; Dickson et al., 2016; Gora-Maslak et al., 1991; Peirce et al., 2004; Plomin et al., 1991).\n\nIn the present study we reanalyzed data from high throughput behavioral phenotyping of BXD RIS in the presence of morphine (Philip et al., 2010). Our purpose was to identify additional morphine response related genetic loci beyond those initially reported. We employed composite interval mapping for two behavioral phenotypes, rearing and locomotion in response to morphine, and identified a QTL on mouse Chromosome 7 that was masked by a major QTL on Chromosome 10.\n\n\nMethods\n\nExperimental protocols have been described elsewhere (Philip et al., 2010). BXD data were generated in the laboratory of Dr. Charles D. 
Blaha at the University of Memphis and obtained from GeneNetwork.org.\n\nQTL mapping was performed using GeneNetwork 1.0 with the composite interval mapping function, using 2000 permutations. This is a forward regression approach in which a single locus with a major effect is included in a mapping model that scans for additional additive effects and interactions with the major locus. The technique will miss higher-order interactions among loci that are not detectable as main effects, but is effective when a large, consistently observed major-effect locus is present. Strain mean scores were Winsorized when statistical outliers were present. We first determined which marker had the highest LRS value, using a marker regression analysis, and then performed composite interval mapping, controlling for the Chromosome 10 SNP rs3721803, one of the 3 markers with the highest LRS scores.\n\nThe MGI database (http://www.informatics.jax.org/) was used to find information about SNPs and strain polymorphisms occurring within the Chromosome 7 interval (queried June 20, 2016). This database now includes variants detected in the sequencing of 17 mouse genomes (Keane et al., 2011). The 3 SNPs located at the Chromosome 7 QTL peak (rs13479451, rs3724540, rs6386601) are at ~114.5 Mb in GeneNetwork and ~107.6 Mb in the MGI and NCBI databases (http://www.ncbi.nlm.nih.gov/gene). In GeneNetwork, the relevant QTL interval on Chromosome 7 was 110–125 Mb whereas this region corresponds to 103–118 Mb in the MGI database. As finer mapping methods are developed, the exact interval location reported here may change. DAVID version 6.8 (https://david.ncifcrf.gov/) was used to obtain functional annotations and pathway information for the genes within the interval and GeneWeaver 1.0 (http://www.geneweaver.org/) to identify other drug-related phenotypes associated with the candidate genes.\n\n\nResults\n\nAs reported by Philip et al. 
(2010), whole genome scans produce robust QTLs mapping to Chromosome 10 at 0–30 Mb (Figure 1A-B). The best positional and functional gene candidate is the Oprm1 gene, which encodes the opioid G-protein coupled receptor mu 1 (Philip et al., 2010) and has been detected previously (Bergeson et al., 2001; Doyle et al., 2014).\n\n(A) Trait data for locomotion (distance traveled) for 64 BXD strains post morphine injection (Trait id 11851). (B) Whole genome scan showing a robust QTL on Chromosome 10 (Trait id 11851, SNP rs3721803). The x-axis represents chromosome number and megabase position and the y-axis represents the likelihood ratio statistic (LRS) of linkage. Blue lines indicate LRS across the genome. The pink and gray horizontal lines are approximate threshold values which are used to assess whether a peak is significant (P<0.05) or suggestive (P<0.63), respectively. Red and green lines represent the additive genetic contribution: red lines indicate negative values (C57BL/6J alleles increasing trait values) and green lines indicate positive values (DBA/2J alleles increasing trait values). Gray lines are shown when the parental strain is unknown. The yellow bars represent the relative frequency of peak LRS at a given location from 2000 bootstrap resamples. (C) Composite interval mapping of locomotion (Trait id 11851) after correcting for the peak on Chromosome 10 reveals a significant QTL on Chromosome 7.\n\nWe remapped these data for each time point using composite interval mapping to identify QTLs potentially masked by this large QTL. This revealed a second significant QTL on Chromosome 7 for locomotion (LRS = 18.0, 114.7 Mb; Figure 1C) and, in an overlapping position, for rearing (LRS = 16.5, 114.3 Mb). We also observed a time-dependent decrease of the peak LRS values for both traits after the 60–75 minute time interval post-morphine injection. 
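The conditioning idea behind composite interval mapping, i.e. keeping the major Chromosome 10 marker in the model while scanning the rest of the genome, can be sketched in Python on simulated data (the 0/1 marker coding, the simulated effect sizes, and the normal-likelihood LRS formula are simplifying assumptions, not GeneNetwork's exact implementation):

```python
import numpy as np

def lrs(y, X):
    """Likelihood ratio statistic of a linear model against the mean-only null,
    under a normal likelihood: LRS = n * ln(RSS0 / RSS1)."""
    n = len(y)
    rss0 = np.sum((y - y.mean()) ** 2)
    design = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    rss1 = np.sum((y - design @ beta) ** 2)
    return n * np.log(rss0 / rss1)

def composite_scan(y, markers, control):
    """LRS gained by each marker once the major-effect 'control' marker is
    already in the model (the forward-regression conditioning step)."""
    base = lrs(y, markers[:, [control]])
    return np.array([
        lrs(y, markers[:, [control, j]]) - base
        for j in range(markers.shape[1])
    ])

# Simulated BXD-like data: 64 strains, 20 biallelic markers coded 0/1.
rng = np.random.default_rng(2)
markers = rng.integers(0, 2, size=(64, 20)).astype(float)
# A major locus at marker 0 ("Chr 10") plus a smaller effect at marker 5 ("Chr 7").
y = 2.0 * markers[:, 0] + 1.0 * markers[:, 5] + rng.normal(0.0, 0.5, 64)
scan = composite_scan(y, markers, control=0)
```

In this simulation, the secondary locus at marker 5 gives by far the largest conditional LRS gain, mirroring how the Chromosome 7 QTL emerged once the Chromosome 10 peak was controlled for.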
A suggestive QTL for the rearing trait data was already apparent on Chromosome 7 from the whole genome scan, but this was not the case for the locomotion trait data (Philip et al., 2010).\n\nThe area under the QTL peak contains 280 protein-coding genes (MGI Chromosome 7: 103–118 Mb, GeneNetwork Chromosome 7: 110–125 Mb; database query June 20, 2016, Supplementary worksheet 2). Of these genes, 162 are associated with olfactory receptors. Of the other genes with functional annotations, 11 are connected with neural processes (Adm: neural tube development; Apbb1, Tub, Calca, Cckbr, Cnga4: sensory perception and cognition; Insc, Arntl: neurogenesis; Tpp1: neuromuscular control; Rras2: regulation of neuron death; and Pde3b: morphine addiction, KEGG pathway 05032).\n\nThere are 3 SNPs mapped at the location of the LRS peak and two positional candidates: Synaptotagmin 9 (Syt9: rs13479451, rs3724540) and PPFIA binding protein 2 (Ppfibp2, rs6386601). Both of these genes differ between the parental strains (MGI database query – June 20, 2016). Syt9 functions in vesicle traffic and Ca2+-triggered exocytosis (NCBI, 2016b) and Ppfibp2 is involved in the regulation and development of neuronal synapses (NCBI, 2016a). These two candidates are also differentially expressed in the striatum of the brains of mice from strains with distinct opioid sensitivity (Korostynski et al., 2006; GeneWeaver: Syt9, geneset #86830; Ppfibp2, geneset #86906).\n\n\nConclusions\n\nGenetic analysis of the BXD RIS resulted in the detection of an additional locus for morphine-induced locomotor activity and rearing on Chromosome 7, which may be associated with activation of the reward system pathway in response to morphine treatment. Whether this effect is opioid-specific or also occurs with other classes of drugs is not yet clear. Alternatively, this QTL may also be related to general locomotor activity or nociception, a central nervous system response to potentially harmful stimuli. 
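The Winsorizing of strain means mentioned in the Methods can be sketched as follows (the 5th/95th percentile cutoffs and the example scores are illustrative assumptions; the article does not state which limits were applied):

```python
import numpy as np

def winsorize(values, lower=5.0, upper=95.0):
    """Clamp statistical outliers to percentile limits instead of dropping them,
    limiting their leverage on the QTL scan. Cutoffs here are assumed."""
    lo, hi = np.percentile(values, [lower, upper])
    return np.clip(values, lo, hi)

# Hypothetical strain mean scores with one outlier strain.
scores = np.array([3.1, 2.8, 3.0, 2.9, 14.0, 3.2])
adjusted = winsorize(scores)
```

The outlier is pulled toward the bulk of the distribution while the other strain means, and their relative ordering, are essentially preserved.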
The best gene candidates within the QTL interval, besides the positional candidates Syt9 and Ppfibp2, are Calca and Cckbr (sensory perception of pain) and Pde3b (morphine addiction).\n\n\nData availability\n\nAll data are available in GeneNetwork (www.genenetwork.org; see supplementary worksheet 1 for trait-ids and descriptions). Trait IDs: Locomotion: 11843, 11833, 11844, 11834, 11845, 11835, 11846, 11847, 11851, 11852, 11836, 11837, 11838, Rearing: 11884, 11885, 11886, 11887, 11888, 11878, 11854.\n\nData are also available in the Supplementary material.",
"appendix": "Author contributions\n\n\n\nWEC assisted in interpreting the results and writing the manuscript; ED performed the functional annotation analysis; EJC provided the original data and assisted with interpreting the results and writing the manuscript; AD conceived the study, supervised the analyses, and wrote the manuscript. All authors approved the final submitted draft.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nBioScience Project is a 501(c)(3) research and teaching organization. EJC was supported by grants from NIAAA (AA018776) for GeneWeaver and NIDA (DA037927) for addiction genetics.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nSupplementary material\n\nGeneNetwork traits used in the QTL analysis.\n\nClick here to access the data.\n\nList of genes within the Chromosome 7 interval.\n\nClick here to access the data.\n\n\nReferences\n\nAdinoff B: Neurobiologic processes in drug reward and addiction. Harv Rev Psychiatry. 2004; 12(6): 305–320. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBergeson SE, Helms ML, O'Toole LA, et al.: Quantitative trait loci influencing morphine antinociception in four mapping populations. Mamm Genome. 2001; 12(7): 546–553. PubMed Abstract | Publisher Full Text\n\nCrabbe JC, Phillips TJ, Gallaher EJ, et al.: Common genetic determinants of the ataxic and hypothermic effects of ethanol in BXD/Ty recombinant inbred mice: genetic correlations and quantitative trait loci. J Pharmacol Exp Ther. 1996; 277(2): 624–632. PubMed Abstract\n\nDickson PE, Miller MM, Calton MA, et al.: Systems genetics of intravenous cocaine self-administration in the BXD recombinant inbred mouse panel. Psychopharmacology (Berl). 2016; 233(4): 701–714. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nDoyle GA, Schwebel CL, Ruiz SE, et al.: Analysis of candidate genes for morphine preference quantitative trait locus Mop2. Neuroscience. 2014; 277: 403–416. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGardner EL: Addiction and brain reward and antireward pathways. Adv Psychosom Med. 2011; 30: 22–60. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGill KJ, Boyle AE: Genetic basis for the psychostimulant effects of nicotine: a quantitative trait locus analysis in AcB/BcA recombinant congenic mice. Genes Brain Behav. 2005; 4(7): 401–411. PubMed Abstract | Publisher Full Text\n\nGora-Maslak G, McClearn GE, Crabbe JC, et al.: Use of recombinant inbred strains to identify quantitative trait loci in psychopharmacology. Psychopharmacology (Berl). 1991; 104(4): 413–424. PubMed Abstract | Publisher Full Text\n\nKeane TM, Goodstadt L, Danecek P, et al.: Mouse genomic variation and its effect on phenotypes and gene regulation. Nature. 2011; 477(7364): 289–294. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKorostynski M, Kaminska-Chowaniec D, Piechota M, et al.: Gene expression profiling in the striatum of inbred mouse strains with distinct opioid-related phenotypes. BMC Genomics. 2006; 7: 146. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMistry CJ, Bawor M, Desai D, et al.: Genetics of Opioid Dependence: A Review of the Genetic Contribution to Opioid Dependence. Curr Psychiatry Rev. 2014; 10(2): 156–167. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNCBI: PPFIBP2 PPFIA binding protein 2 [ Homo sapiens (human)]. Gene. National Center for Biotechnology Information, 2016a. Reference Source\n\nNCBI: Syt9 synaptotagmin IX [ Mus musculus (house mouse)]. Gene. National Center for Biotechnology Information, 2016b. Reference Source\n\nPeirce JL, Lu L, Gu J, et al.: A new set of BXD recombinant inbred lines from advanced intercross populations in mice. BMC Genet. 
2004; 5: 7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPhilip VM, Duvvuru S, Gomero B, et al.: High-throughput behavioral phenotyping in the expanded panel of BXD recombinant inbred strains. Genes Brain Behav. 2010; 9(2): 129–159. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPlomin R, McClearn GE, Gora-Maslak G, et al.: Use of recombinant inbred strains to detect quantitative trait loci associated with behavior. Behav Genet. 1991; 21(2): 99–116. PubMed Abstract | Publisher Full Text\n\nSpence J, Liang T, Foroud T, et al.: Expression profiling and QTL analysis: a powerful complementary strategy in drug abuse research. Addict Biol. 2005; 10(1): 47–51. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "16166",
"date": "14 Sep 2016",
"name": "Clarissa Parker",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe manuscript from Crusio et al. describes the use of GeneNetwork to identify QTLs associated with morphine response in a genetic reference mouse population. The authors highlight many strengths afforded by both GeneNetwork and the BXD recombinant inbred mice. Specifically, they used composite interval mapping to reanalyze publicly available phenotypic and genotypic data in the BXDs to identify a QTL on chromosome 7 that had been masked by a major QTL on chromosome 10. Next, they employed a bioinformatics approach to prioritize among candidate genes within the QTL interval. This resulted in the identification of promising candidate genes that may be associated with drug-abuse phenotypes.\nOne step that would strengthen the manuscript would be to use QTLminer in GeneNetwork to examine cis-regulation of the genes located within the QTL support interval (or even just the most promising candidate genes) and to see if any of those genes were expression QTLs in BXD mice.\nIn addition, the authors raise an interesting question when they ask whether the chromosome 7 QTL is opioid-specific, or if it also occurs with other classes of drugs. 
They could use GeneNetwork to search for genetic correlations between the expression levels of their top candidate genes and drug-related traits.\n\nMinor Points: On the GeneNetwork website, Trait ID 11851 is defined as the number of beam breaks after morphine injection, but is described as “distance traveled” and “locomotion” in the Figure 1 legend of the manuscript.",
"responses": [
{
"c_id": "2246",
"date": "19 Oct 2016",
"name": "Anna Delprato",
"role": "Author Response",
"response": "Reviewer comment: “One step that would strengthen the manuscript would be to use QTLminer in GeneNetwork to examine cis-regulation of the genes located within the QTL support interval (or even just the most promising candidate genes) and to see if any of those genes were expression QTLs in BXD mice.” Of the candidate genes there is a marginally significant cis-eQTL for Syt9 (LRS = 18), but this is with one probe and in the striatum dataset only. There are no cis-eQTLs for the other gene candidates.\n\nReviewer comment: “In addition, the authors raise an interesting question when they ask whether the chromosome 7 QTL is opioid-specific, or if it also occurs with other classes of drugs. They could use GeneNetwork to search for genetic correlations between the expression levels of their top candidate genes and drug-related traits.” Correlation analyses do not rule out or support association with other drugs. The few significant correlations that we found were based on fewer than 10 common strains."
}
]
},
{
"id": "16660",
"date": "28 Sep 2016",
"name": "Robert Gerlai",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nI have only minor comments, listed below.\n“In the present study we reanalyzed data from high throughput behavioral phenotyping of BXD RIS in the presence of morphine (Philip et al., 2010).” Although it is clear what the authors are trying to say, strictly speaking this sentence implies that the authors had morphine present (on their desks? Or in their bloodstream??) when they were analyzing the data. Please make the sentence less ambiguous.\n\n“Strain mean scores were Winsorized when statistical outliers were present”. Please elaborate on this and/or cite references to support the decision to modify as opposed to exclude data.\n\n“We remapped these data for each time point using composite interval mapping to identify QTLs potentially masked by this large QTL.” Please clarify what “time point” refers to here.\n\n“protein coding genes” this is a redundant expression. Please use either “protein coding sequences” or “genes”.\n\nIn the discussion, perhaps a few words could be mentioned about why in general reanalysis of prior phenotyping data may yield useful new pieces of information about QTLs.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2156
|
https://f1000research.com/articles/5-1366/v1
|
14 Jun 16
|
{
"type": "Data Note",
"title": "High quality, small molecule-activity datasets for kinase research",
"authors": [
"Rajan Sharma",
"Stephan C. Schürer",
"Steven M. Muskal",
"Rajan Sharma",
"Stephan C. Schürer"
],
"abstract": "Kinases regulate cell growth, movement, and death. Deregulated kinase activity is a frequent cause of disease. The therapeutic potential of kinase inhibitors has led to large amounts of published structure activity relationship (SAR) data. Bioactivity databases such as the Kinase Knowledgebase (KKB), WOMBAT, GOSTAR, and ChEMBL provide researchers with quantitative data characterizing the activity of compounds across many biological assays. The KKB, for example, contains over 1.8M kinase structure-activity data points reported in peer-reviewed journals and patents. In the spirit of fostering methods development and validation worldwide, we have extracted and have made available from the KKB 258K structure activity data points and 76K associated unique chemical structures across eight kinase targets. These data are freely available for download within this data note.",
"keywords": [
"Kinase",
"SAR",
"Bioactivity Database",
"Dataset",
"Drug Discovery",
"Bioactive Molecules",
"Kinase Knowledgebase",
"KKB"
],
"content": "Introduction\n\nSince their discovery in 1975 by Cohen et al.1, kinases are now one of the most established drug target families, second only to G-protein-coupled receptors (GPCRs). Most progress in kinase research has occurred in the last 25 years including the discovery of many new kinases2,3, identification of new isoforms of pre-existing kinases4,5, elucidation of new biological pathways, and identification of many new kinase-disease associations6,7. While kinases are well-validated anti-cancer targets8–11, kinase inhibitors also have been pursued in cardiovascular12, autoimmune13, inflammatory skin and bowel14, neurodegenerative15, and renal disease programs16. Most small-molecule kinase inhibitors target the ATP binding site of the kinase catalytic domain11. The ATP binding region of the catalytic domain is highly conserved among protein kinases, which has important consequences for drug development. Achieving selectivity of a small molecule inhibitor against kinase off-targets to avoid adverse reactions can be a major hurdle. However, the cross reactivity of many chemotypes can also open opportunities to focus on other closely related kinases. Despite the high degree of conservation in the ATP binding site, reasonably selective inhibitors with favorable pharmacological properties can be developed17. It is now common in discovery programs to profile inhibitors against an extensive set of kinase targets18. These kinase-profiling efforts have generated valuable data, providing insight into selectivity and promiscuity of clinical inhibitors19–21.\n\nMedicinal chemists can benefit significantly from well-curated databases documenting chemical structure(s) with an experimentally measured biological activity. These structure and activity databases or SAR databases help to better understand drug-target interaction, which can assist in the design of potent and selective chemical inhibitors22–25. 
A well populated, editable, easy to search and flexible SAR database is an integral part of the modern drug design process26. SAR databases provide elementary insights to researchers, including:\n\n(a) Target druggability: known small molecule binders are required to categorize a protein as druggable. High-affinity and non-promiscuous inhibitors are particularly valuable to establish druggability; and can be further validated using structure biology information. In many cases druggability can be inferred for new targets using homology models27 where similarities can be mapped via sequences, pathways or functions. Examples include the Target Informatics Platform (TIP)28 and Modbase29.\n\n(b) Scaffold selectivity: the golden principle that applies is “less selective scaffolds have more undesirable side effects.” A prior knowledge of selectivity profiles can help in making informed decisions on which chemotypes to pursue at the start of discovery programs30. Organizing data by scaffold enables classic SAR analysis in which side-chain moieties can be evaluated and considered or avoided in lead optimization31.\n\n(c) Clinical molecules: it can be very helpful to see scaffold(s) or derivatives under the study of launched drugs. This enables medicinal chemists to associate therapeutic classes with active scaffolds.\n\n(d) Development and validation of computational methods: well-curated datasets are very helpful in the development and refinement of computational methodologies. 
With a common set of data, computational researchers can also compare and contrast methods, providing additional validation32.\n\n(e) Virtual screening: high-quality, well-curated, standardized and annotated datasets are required to build predictive models for virtual screening as we have shown previously specifically for the Kinase Knowledgebase (KKB) data33.\n\n\nKinase Knowledgebase (KKB)\n\nThe KKB is a database of biological activity data, structure-activity relationships, and chemical synthesis data focused on protein kinases. Since its inception in 2001, the KKB has grown steadily with quarterly updates each year. With more than two decades of high quality SAR data, the KKB represents one of the first kinase target specific databases of biological activity and chemical synthesis data from curated scientific literature and patents. The KKB contains a large number of kinase structure-activity data points (>1.8M) reported in peer-reviewed literature covering journals and patents. The data have been curated from over 150 different journals reporting kinase inhibitors with activity data, with leading contributions from J Med Chem, Bioorg Med Chem, Bioorg Med Chem Lett and Eur J Med Chem. In addition, the KKB contains data curated from patents/applications from WO, EP and US. The scientific information is curated from the published text using a combination of automatic and manual efforts.\n\nA summary of the first quarter release for year 2016 (Q1-2016) is reported in Table 1. With the Q1-2016 KKB release, there is a total of 506 unique kinase targets with over 682K unique small molecules. 
A listing of a few “hot” kinase targets with their inhibitors (data points) is reported in Table 2.\n\nSummary Statistics – Q1 2016 Release.\n\nData Points for Selected Targets – Q1 2016 Release.\n\nKinase inhibitors are biologically active small molecules and their activity refers to experimentally measured data on a given kinase target (in enzyme or in cell based assays), using predefined experimental protocols. After curation and standardization, these measured values together with related information are indexed in the KKB. Each inhibitor entered in the KKB carries unique identifiers such as:\n\n(a) Chemical information and biological information: unique structure IDs (MR_ID) are assigned based on unique canonical SMILES. In addition, hand-drawn Cartesian coordinates are captured. Chemical compounds are associated with calculated chemical and physical properties.\n\n(b) Biological target and assay protocol: biological targets are annotated by EntrezGeneID, UniProt ID, and HUGO approved names. An assay protocol includes detailed information pertaining to the experiments performed to measure the biological activity for the compound. Each protocol has a descriptive title and a unique set of keywords. Assays are categorized by assay format (biochemical, cell-based, etc.) following standards set forth by BioAssay Ontology (BAO)34,35. Kinase targets are classified by protein and non-protein kinases and protein kinases by the typical domain-based classification into group, family, etc. We are in the process of mapping KKB targets to the Drug Target Ontology (DTO), which is in development.\n\n(c) Experimental bioactivity screening results. A bioactivity data point is a defined result/endpoint of a specified small molecule compound tested in a biological assay. 
The assay is defined in b); result types/endpoints captured include IC50, Ki, and Kd; the vast majority for biochemical and cell-based assays correspond to BAO definitions.\n\n(d) Source reference: bibliographic information and unique identifiers for journal articles and patents from which information related to the molecules was extracted include PubMedID, DOI, and standardized patent numbers. For journals, the KKB provides title, author names, journal name, volume, issue, and page numbers. For patents, their titles, patent or patent application number (along with family members), inventors’ names, assignee names, publication data and priority numbers are provided.\n\nIt is observed that a disease type can be related to multiple kinase groups, and several diseases can arise from a common set of kinase groups (Table 3)6. In the KKB, kinases are classified by protein and non-protein kinases with several sub-categories such as carbohydrate and lipid kinase and the typical protein kinase groups (such as CMGC, CAMK, TK, TKL, RGC, AGC) and further sub-groups such as families. DTO provides a functional and phylogenetic classification of kinase domains to facilitate navigation of kinase drug targets. DTO is developed as part of the Illuminating the Druggable Genome (IDG) project. Here we make datasets freely available for the research community, including to support efforts such as IDG. We also offer to run our predictive models built using KKB data to support prioritization of drug targets.\n\n\nKinase inhibitor datasets\n\nThe wealth of kinase inhibitor data presents opportunities for analysis as a whole or by integrating such data into various computational platforms to support development and validation of hypotheses of kinase inhibition. Several years ago, Eidogen-Sertanty made available 3880 pIC50 data points across three kinase targets (ABL1, SRC, and AURKA – validation sets) to foster algorithm development and validation worldwide. 
With this data note, eight additional targets comprising inhibitors for therapeutically important classes: EGFR, CDK2, ROCK2, MAPK14 and PI3K (class I catalytic) (Table 4) totaling ~258K data points (structure with standard results/endpoints such as IC50, Ki or Kd) and ~76K unique chemical structures now have been made available to further foster worldwide development, validation, and collaborative interaction (see KB_SAR_DATA_F1000.txt and KB_SAR_DATA_F1000.sdf files). These datapoints have been exported from the KKB and survey 1044 articles and 942 patents respectively.\n\n*Afatinib, Erlotinib, Gefitinib, Lapatinib, Osimertinib, Vandetanib are US-FDA approved kinase inhibitors with EGFR as one of the valid targets.\n\nThe datasets cover a broad range of biochemical and cell based studies investigating kinase inhibition; and they represent a diverse collection of pharmaceutically active scaffolds. These scaffolds can be easily examined for selectivity and specificity for the given eight kinase targets. Additionally, they can be used to infer novel target-inhibitor relationships for kinases and compounds not included in these subsets.\n\nBibliographic information is reported in the files ArticleInfo_F1000.txt and PatentInfo_F1000.txt. Experimental procedure along with metadata information for targets including EntrezGeneIDs, assay format/type (biochemical/enzyme, cell based, etc), keywords, species, and cell lines used in cell-based data are stored in AssayProtocols_F1000 (txt and xml attached).\n\nThe KKB validation sets have a maximum contribution from EGFR with nearly ~54K inhibitors molecules. This is followed by ~43K inhibitors for MAPK14; CDK2 and PIK3CA each have ~39K inhibitors. Figure 1 depicts data point distributions for each kinase in the attached subset. Moreover, 84% of the data are from biochemical enzyme based assay experiments, and 16% of the data from cell-based assays (in Figure 2). 
The datapoint measures include IC50, Ki and Kd (Figure 3).\n\nAnalysis of ~76K unique molecules for selectivity against targets reveals that ~64K inhibit only one kinase of the eight kinases extracted (Figure 4). Approximately 5K molecules show activity against two kinase targets, and ~3K molecules show activity against three kinases. A total of 79 molecules in the subset have some activity against all eight kinase targets.\n\n\nConclusions\n\nThe KKB is available in various formats such as SQL, SDF and IJC format (Instant JChem) as quarterly updates. Two mobile apps, iKinase and iKinasePro25, are also available for download, which enable basic search access into KKB content, including kinase inhibitor structures, biological data and references/patents. Simple substructure and exact structure search access into the KKB is also available. We have extracted from the KKB ~258K structure activity data points and ~76K associated unique chemical structures across eight kinase targets and made these data freely available for download within this datanote to foster algorithm development and validation worldwide.\n\n\nData availability\n\nF1000Research: Dataset 1. High quality, small molecule-activity for kinase research, 10.5256/f1000research.8950.d12459136",
"appendix": "Author contributions\n\n\n\nRS, SCS and SMM contributed equally to the work.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe work of SCS was supported by grant U54CA189205 (Illuminating the Druggable Genome Knowledge Management Center, IDG-KMC). The IDG-KMC is a component of the Illuminating the Druggable Genome (IDG) project and NIH Common Fund project, awarded by the NCI.\n\n\nReferences\n\nCohen P: The origins of protein phosphorylation. Nat Cell Biol. 2002; 4(5): E127–130. PubMed Abstract | Publisher Full Text\n\nFleuren ED, Zhang L, Wu J, et al.: The kinome 'at large' in cancer. Nat Rev Cancer. 2016; 16(2): 83–98. PubMed Abstract | Publisher Full Text\n\nMahajan K, Mahajan NP: Cross talk of tyrosine kinases with the DNA damage signaling pathways. Nucleic Acids Res. 2015; 43(22): 10588–601. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTavares MR, Pavan IC, Amaral CL, et al.: The S6K protein family in health and disease. Life Sci. 2015; 131: 1–10. PubMed Abstract | Publisher Full Text\n\nHage-Sleiman R, Hamze AB, Reslan L, et al.: The Novel PKCθ from benchtop to clinic. J Immunol Res. 2015; 2015: 348798. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChen Q, Luo H, Zhang C, et al.: Bioinformatics in protein kinases regulatory network and drug discovery. Math Biosci. 2015; 262: 147–56. PubMed Abstract | Publisher Full Text\n\nChang E, Abe J: Kinase-SUMO networks in diabetes-mediated cardiovascular disease. Metabolism.. 2016; 65(5): 623–33. PubMed Abstract | Publisher Full Text\n\nCicenas J, Cicenas E: Multi-kinase inhibitors, AURKs and cancer. Med Oncol. 2016; 33(5): 43. PubMed Abstract | Publisher Full Text\n\nHohenforst-Schmidt W, Zarogoulidis P, Steinheimer M, et al.: Tyrosine Kinase Inhibitors for the Elderly. J Cancer. 2016; 7(6): 687–93. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nGharwan H, Groninger H: Kinase inhibitors and monoclonal antibodies in oncology: clinical implications. Nat Rev Clin Oncol. 2016; 13(4): 209–27. PubMed Abstract | Publisher Full Text\n\nWu P, Nielsen TE, Clausen MH: Small-molecule kinase inhibitors: an analysis of FDA-approved drugs. Drug Discov Today. 2016; 21(1): 5–10. PubMed Abstract | Publisher Full Text\n\nCai A, Li L, Zhou Y: Pathophysiological effects of RhoA and Rho-associated kinase on cardiovascular system. J Hypertens. 2016; 34(1): 3–10. PubMed Abstract | Publisher Full Text\n\nYamaoka K: Janus kinase inhibitors for rheumatoid arthritis. Curr Opin Chem Biol. 2016; 32: 29–33. PubMed Abstract | Publisher Full Text\n\nBraegelmann C, Hölzel M, Ludbrook V, et al.: Spleen tyrosine kinase (SYK) is a potential target for the treatment of cutaneous lupus erythematosus patients. Exp Dermatol. 2016; 25(5): 375–9. PubMed Abstract | Publisher Full Text\n\nYarza R, Vela S, Solas M, et al.: c-Jun N-terminal Kinase (JNK) Signaling as a Therapeutic Target for Alzheimer's Disease. Front Pharmacol. 2016; 6: 321. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcCormack PL: Pazopanib: a review of its use in the management of advanced renal cell carcinoma. Drugs. 2014; 74(10): 1111–25. PubMed Abstract | Publisher Full Text\n\nNorman RA, Toader D, Ferguson AD: Structural approaches to obtain kinase selectivity. Trends Pharmacol Sci. 2012; 33(5): 273–8. PubMed Abstract | Publisher Full Text\n\nNoble ME, Endicott JA, Johnson LN: Protein kinase inhibitors: insights into drug design from structure. Science. 2004; 303(5665): 1800–5. PubMed Abstract | Publisher Full Text\n\nKaraman MW, Herrgard S, Treiber DK, et al.: A quantitative analysis of kinase inhibitor selectivity. Nat Biotechnol. 2008; 26(1): 127–32. PubMed Abstract | Publisher Full Text\n\nFabian MA, Biggs WH 3rd, Treiber DK, et al.: A small molecule-kinase interaction map for clinical kinase inhibitors. 
Nat Biotechnol. 2005; 23(3): 329–36. PubMed Abstract | Publisher Full Text\n\nDavis MI, Hunt JP, Herrgard S, et al.: Comprehensive analysis of kinase inhibitor selectivity. Nat Biotechnol. 2011; 29(11): 1046–51. PubMed Abstract | Publisher Full Text\n\nWillighagen EL, Waagmeester A, Spjuth O, et al.: The ChEMBL database as linked open data. J Cheminform. 2013; 5(1): 23. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBalakin KV, Tkachenko SE, Kiselyov AS, et al.: Focused chemistry from annotated libraries. Drug Discov Today Technol. 2006; 3(4): 397–403. Publisher Full Text\n\nSamwald M, Jentzsch A, Bouton C, et al.: Linked open drug data for pharmaceutical research and development. J Cheminform. 2011; 3(1): 19. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWilliams AJ, Ekins S, Clark AM, et al.: Mobile apps for chemistry in the world of drug discovery. Drug Discov Today. 2011; 16(21–22): 928–39. PubMed Abstract | Publisher Full Text\n\nOprea TI, Tropsha A: Target, chemical and bioactivity databases – integration is key. Drug Discov Today Technol. 2006; 3(4): 357–365. Publisher Full Text\n\nTuccinardi T, Martinelli A: Protein kinase homology models: recent developments and results. Curr Med Chem. 2011; 18(19): 2848–53. PubMed Abstract | Publisher Full Text\n\nHambly K, Danzer J, Muskal S, et al.: Interrogating the druggable genome with structural informatics. Mol Divers. 2006; 10(3): 273–81. PubMed Abstract | Publisher Full Text\n\nPieper U, Webb BM, Dong GQ, et al.: ModBase, a database of annotated comparative protein structure models and associated resources. Nucleic Acids Res. 2014; 42(Database issue): D336–46. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLawless MS, Waldman M, Fraczkiewicz R, et al.: Using Cheminformatics in Drug Discovery. Handb Exp Pharmacol. 2016; 232: 139–68. PubMed Abstract | Publisher Full Text\n\nKuhn B, Guba W, Hert J, et al.: A Real-World Perspective on Molecular Design. J Med Chem. 
2016; 59(9): 4087–102. PubMed Abstract | Publisher Full Text\n\nKarthikeyan M, Vyas R: Role of Open Source Tools and Resources in Virtual Screening for Drug Discovery. Comb Chem High Throughput Screen. 2015; 18(6): 528–43. PubMed Abstract | Publisher Full Text\n\nSchürer SC, Muskal SM: Kinome-wide activity modeling from diverse public high-quality data sets. J Chem Inf Model. 2013; 53(1): 27–38. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAbeyruwan S, Vempati UD, Küçük-McGinty H, et al.: Evolving BioAssay Ontology (BAO): modularization, integration and applications. J Biomed Semantics. 2014; 5(Suppl 1 Proceedings of the Bio-Ontologies Spec Interest G): S5. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVempati UD, Przydzial MJ, Chung C, et al.: Formalization, annotation and analysis of diverse drug and probe screening assay datasets using the BioAssay Ontology (BAO). PLoS One. 2012; 7(11): e49198. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMuskal S, Sharma R: Dataset 1 in: High Quality, Small Molecule-Activity Datasets for Kinase Research. F1000Research. 2016. Data Source"
}
|
[
{
"id": "14833",
"date": "08 Jul 2016",
"name": "George Nicola",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThis article describes an overview of current kinase-related databases of significance, with particular focus on the contents of the Kinase Knowledgebase (KKB). The KKB has the largest repository of high-quality kinase activity data. Providing access to over ¼ million data points on several of the most important kinases allows for an exciting insight into the relevance of these validated drug targets and the diversity of compounds affecting them. It is a promising trend that private companies are unlocking their proprietary data troves for the advancement of academic research. This is a nice Data Note that merits indexing in F1000Research.\n\nA few minor typographical corrections:\nTable 2: Three of the column names seem to be duplicated.\n\nTable 2: It is unclear what the grey vs white rows represent in ‘Kinase Classification’ and ‘Family’ columns. If only for readability, perhaps these should alternate.\n\nSection ‘Kinase inhibitor datasets’ at the end of the first paragraph: The word ‘respectively’ is not needed.\n\nSection ‘Kinase inhibitor datasets’ 4th paragraph, ‘~54K inhibitors molecules’: ‘inhibitors’ does not need to be plural.\n\nFigures 1 & 2: I would use the word ‘Breakdown’ instead of ‘Breakup’.\n\nFigure 2: Are ‘Cell-Free’ and ‘Animal Model’ truly zero percent? If so, they should be excluded; if not, the fractional percent should be listed.\n\n‘Conclusions’: ‘datanote’ should be two words, to be consistent with the F1000 article type.",
"responses": []
},
{
"id": "14358",
"date": "08 Jul 2016",
"name": "Sorin Avram",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe paper describes Kinase Knowledgebase (KKB), i.e., a database containing structure-activity data on kinases. The current data note briefly presents the KKB Q1 2016 Release and the appended eight kinase data sets, which are made hereby publicly available.\n\nKinases are valuable targets for many diseases, especially cancers. The subject is of real scientific interest. In general, the amount of bioactivity data, coming from various sources (scientific literature, high-throughput screening results, patents etc), is heterogeneous and a proper curation and standardization of the data can provide reliable activity points. These data may be employed in many ways as described by the authors. In my opinion, the main applications for a database such as KKB would be to build predictors to search the chemical space for new kinase inhibitors, and further to optimize the selectivity of kinase inhibitors. Currently, ChEMBL’s1 publicly available Kinase SARfari (https://www.ebi.ac.uk/chembl/sarfari/kinasesarfari) provides a standard source for these tasks, covering about 532,155 bioactivity data points (version 6.00, accessed June 20, 2016). This is less than one third of the 1.8 million KKB activity data points reported by the authors. In these circumstances, KKB might add valuable information for kinase research. 
Finally, the future analysis and employment of the eight data sets made freely available in the current note will provide a clearer view of the potential and versatility of KKB.\n\nThere are two minor observations:\nThe methodology used to generate the data is described in the first paragraph of the section entitled “Kinase Knowledgebase (KKB)”. In order to be more accessible for the reader, this paragraph should be encompassed in a separate section named “Materials and methods”. In Table 2 there are three columns with repeated headers; in order to remove any doubts, I would recommend that the authors clarify this issue.\n\nOtherwise, the data note is well written, kinases are indexed using the widely adopted UniProt IDs, and the references are up to date.\nI recommend this data note for indexing and would like the authors to address the minor observations.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-1366
|
https://f1000research.com/articles/5-2572/v1
|
25 Oct 16
|
{
"type": "Method Article",
"title": "Sonification of hyperspectral fluorescence microscopy datasets",
"authors": [
"Aprameya Mysore",
"Andreas Velten",
"Kevin W. Eliceiri",
"Aprameya Mysore",
"Andreas Velten"
],
"abstract": "Recent advances in fluorescence microscopy have yielded an abundance of high-dimensional spectrally rich datasets that cannot always be adequately explored through conventional three-color visualization methods. While computational image processing techniques allow researchers to derive spectral characteristics of their datasets that cannot be visualized directly, there are still limitations in how to best visually display these resulting rich spectral data. Data sonification has the potential to provide a novel way for researchers to intuitively perceive these characteristics auditorily through direct interaction with the raw multi-channel data. The human ear is well tuned to detect subtle differences in sound that could represent discrete changes in fluorescence spectra. We present a proof of concept implementation of a functional data sonification workflow for analysis of fluorescence microscopy data as an FIJI ImageJ plugin and evaluate its utility with various hyperspectral microscopy datasets. Additionally, we provide a framework for prototyping and testing new sonification methods and a mathematical model to point out scenarios where vision-based spectral analysis fails and sonification-based approaches would not. With this first reported practical application of sonification to biological fluorescence microscopy and supporting computational tools for further exploration, we discuss the current advantages and disadvantages of sonification over conventional spectral visualization approaches. We also discuss where further efforts in spectral sonification need to go to maximize its practical biological applications.",
"keywords": [
"Hyperspectral",
"Sonification",
"Fluorescence",
"Microscopy",
"Visualization",
"Multiplexing",
"FIJI",
"ImageJ"
],
"content": "Introduction\n\nThe increased availability of microscopes with multiple spectral channels and multi-colored fluorescent molecular markers has allowed life science researchers to generate datasets of higher spectral complexity than ever before from their optical imaging systems1. By selectively placing different markers throughout a specimen, a researcher can construct detailed visual narratives of different biological processes based on the spectral variations throughout the data2–5. This approach, called multiplexing, allows for functional readout of multiple genetic functions spatially and temporally, a feature unique to fluorescence imaging. These multiple fluorophores can be genetically engineered extrinsic fluorescence tags such as Green Fluorescent Protein (GFP) or endogenous fluorescence such as that emanating from cellular autofluorescence. In particular, spectral information has been used to discriminate fluorescence signals from cellular autofluorescence signals such as nicotinamide adenine dinucleotide (NADH) in metabolism studies and to understand subcellular protein trafficking via signaling protein labeling6,7. Additionally the collection of spectral data enables researchers to study other complex phenomena related to fluorescence such as fluorescence resonance energy transfer (FRET)8 or spectral shifts.\n\nAs fluorescence microscopy datasets increase in richness and dimensional complexity, designing flexible tools for researchers to explore these larger and denser datasets that strike the appropriate balance between intuitive functionality and analytic effectiveness is becoming increasingly important.\n\nAs the number of fluorescent markers used in a biological sample increases, two critical bottlenecks emerge within the current traditional analysis workflow. First, the spectral properties of fluorescent dyes often necessitate a linear unmixing (deconvolution) of the image data to discriminate one dye from the other. 
The importance of this step becomes particularly apparent when simultaneously using many fluorophores with overlapping emission spectra. Several types of spectral stripping and unmixing techniques have been borrowed from other scientific domains, such as space science, that involve analysis of hyperspectral data9,10. However, many of these techniques require precise a priori knowledge of the present spectra before the imaging5,11–13. Confirming emission spectra in experiments involving biological specimens becomes complicated due to factors such as spectral variance in the endogenous chemical environment, the developmental stage of the fluorophore, and unpredictable refractive characteristics inside the specimens. All can cause significant variation in emission spectra14.\n\nSecond, even if all spectra are successfully unmixed, visual representations of an image volume with more than three spectral channels are limited by the inability of the human visual system to effectively manage more than three channels at a time. False color images of datasets with three or more spectral dimensions can visually appear to be the same in areas where the data are vastly different, because the transcoding operation used to render the raw data visible to human eyes involves a loss of data. Therefore, visual analysis of hyperspectral imagery must always rely on analysis of multiple images or on custom computational pre-processing.\n\nWhile the human visual system is limited in natively comprehending rich hyperspectral data, the human ear and auditory system are better suited for this task15. Figure 1 shows a spectral sensitivity curve for the human eye, while Figure 2 shows the “equal loudness curve” describing the range of frequencies perceptible by the human ear. Note the logarithmic scaling on the x-axis of Figure 2 as compared to the relatively narrow interval on the x-axis of Figure 1. 
Additionally, whereas the eye has only three channels of spectral sensitivity, the ear can distinguish about 1400 pitches throughout its range16. Moreover, the ear actually has finer resolution when sensing complex, non-sinusoidal tones15. This behavior makes the ear a substantially better-suited sensor for spectrally rich signals than the eye because, as we demonstrate in Figure 1 and Figure 2, the eye actually becomes less spectrally adept with increasing spectral complexity. We intend to show how sonification allows researchers to avoid complex computational and visual techniques to best interpret their hyperspectral datasets.\n\nThis chart shows the combined spectral sensitivity of all the color cones in the human eye, which constitutes the full range of visually perceptible frequencies.\n\nThis chart shows the spectral sensitivity of the human ear in the form of multiple equal loudness contours over a frequency domain. While the eye has a domain of several hundred THz, the ear has a domain spanning several thousand Hz, as seen by the log scaling.\n\nResearch on data sonification has been conducted across a diverse variety of disciplines and has yielded results across the spectrum from purely aesthetic to highly functional. The emphasis in our project was on creating a sonification plugin as a functional and practical tool for fluorescence microscopy research. As such, we decided to design our sonification as an extension of the imaging and analysis workflow that researchers at our lab use to gather and understand multispectral fluorescence microscopy data. Sonification has been proven to be a viable means for analyzing hyperspectral or otherwise dimensionally complex datasets in many other contexts17–19. Potential microscopy sonification applications have been briefly explored but not actively pursued and implemented by the scientific community20,21. Dr. 
Thomas Hermann, in his 2002 PhD thesis “Sonification and Exploratory Data Analysis”, considers the viability of a hypothetical sonification-based multichannel image analysis tool for microscopy research22. In his thought experiment he outlines three ways that sonification would uniquely provide additional clarity and insight into dimensionally complex microscopy datasets:\n\n--Sonification intuitively allows users to perceive trend-related factors such as clustering or other distributional behavior with increased clarity by enunciating more subtle spectral differences that are missed by more conventional data display methods. This allows users to intuitively understand higher order relationships amongst the 15+ channels that might go unnoticed when visually exploring the image stack 1–3 channels at a time.\n\n--Sonification augments perception when the eyes are saturated with other information. If a user needs a visual interface to explore the structural organization of a particularly intricate specimen, the sonification can simultaneously provide information on salient characteristics from other channels co-localized to a particular region of interest without obfuscating the visual-spatial information.\n\n--Sonification enhances the salience of 'warning flags'. Rapid changes in sound naturally command human attention very effectively. It is far easier to grasp the difference in amplitude between two points in the dataset than it is to compare the intensity of two points defined by multiple overlaid color maps.\n\nCurrently, no data sonification package exists for hyperspectral image volume exploration and analysis that avoids computational dimensionality reduction. Researchers at Stanford have implemented a sonification model for hyperspectral image analysis, but this relies heavily on principal component analysis and avoids sonification of the raw multichannel pixel stream21. 
Our approach uses data sonification to reduce the amount of abstraction between the researcher and the raw fluorescence data. We explore sonification as a way of confirming a spectral deconvolution, as well as a way of navigating the fluorescence data in its full dimensionality. By allowing researchers to interact directly with the spectral composition of their datasets, we can potentially elucidate salient characteristics of the data that otherwise would have gone unnoticed.\n\n\nMaterials and methods\n\nThere are two components to the sonification. The first is the image-viewing interface that provides the user interface for the data to be explored and sonified. For this we chose a plugin context inside FIJI (http://fiji.sc/), which is a popular distribution of the Java-based image analysis program ImageJ23,24. Choosing an open-source image analysis package was important to us in order to allow maximum accessibility and to ease adoption. FIJI is already the tool of choice for many scientists doing microscopy research. Several of the data sonification implementations described in the current research literature use a standalone application which integrates the image UI and audio driver communication into one executable, but this was not the optimal strategy for our proposed use case. A FIJI plugin was the intuitive choice for development and deployment because it can easily be integrated into current research workflows instead of requiring the installation and usage of a separate standalone application. Additionally, since FIJI is used for many spectral fluorescence microscopy analysis routines, it opens up the possibility of integrating and interacting with these steps as needed by the end user.\n\nThe second component is the audio server, which is tasked with storing synths and outputting sounds to the native audio driver. 
Synths are definitions that describe the range of sounds produced by the sonification, and contain instructions on how to translate the commands received from the image host into audio samples. As we discuss in the “Sound Design” section, iteratively designing multiple mappings from data space to sound space is how we explore the capacity of sonification to allow us to interact with hyperspectral datasets. We decided to use SuperCollider (http://supercollider.sourceforge.net/) and JCollider (http://www.sciss.de/jcollider/) for the functionality of storing synths and communicating with the audio driver. SuperCollider is an open-source package for algorithmic sound synthesis, and JCollider is a library that allows Java applications, such as ImageJ, to communicate with an instance of SuperCollider’s OpenSoundControl (OSC) server, called “scsynth”. This communication entails encoding and sending audio commands, manipulating synth definitions, and checking server status. Synth definitions are encapsulated in SynthDef objects, as specified in the JCollider API, and exported to scsynth at runtime. All communications between the Java image host and the OSC server are done over User Datagram Protocol (UDP).\n\nThe signal flow diagram in Figure 3 describes the interaction process of the plugin. The user interacts with the image via the FIJI interface by clicking and dragging the cursor across the image, which sends pixel data encoded in OSC format to scsynth. The data are then sonified by the SuperCollider server in accordance with the chosen synth definition and sent to the speakers as audio samples.\n\nThis signal flow diagram describes the data sonification pipeline. 
When the user interacts with the image volume by clicking and dragging, FIJI sends encoded pixel values to the SuperCollider server, which translates the data into audio samples sent to the speakers.\n\nWhen designing the mapping from data space to sound, we first had to establish a set of heuristics to determine what qualities of the sound are desirable in a particular mapping. Grond and Berger’s chapter in the “Sonification Handbook”, entitled “Parameter Mapping Sonification”, informed our sound design methodology25. They outline the following optimal design criteria for a parameter based exploratory sonification:\n\nPolarity. Polarity constitutes the capacity of a sound to communicate both the direction and magnitude of a change in a stream of data. In the use-case of sonifying a spectrally rich signal, as an individual component of the set of spectra increases or decreases, such as the pixel intensity value for a particular channel in an image volume, the respective mapped quantity of the sound should change in a way that can be clearly identified as the increase or decrease.\n\nScale. Data from our image volumes have to be appropriately scaled in the mapping process in order to convey the full range of parameter fluctuations. Ideally the same image in two different formats should sound the same, so normalization of the pixel stream data should be implemented to minimize arbitrary differences in auditory characteristics due to image encoding and prevent signal spikes. Images encoded in a format with higher dynamic range have higher amounts of noise though, so there will invariably be some differences.\n\nContext. Context is the auditory equivalent of “axes and tick marks”--elements of the sonification that provide a baseline reference against fluctuations in the data25. If fluctuations appear to be arbitrary or ambiguously connected to observed changes in the data, the utility of the sonification as an exploratory mechanism is severely diminished. 
As such, we sought to design our synths with as little auditory clutter as possible; one should be able to selectively focus on particular modulating qualities in the sound and intuitively know how the data are changing. This rule of thumb strongly informed our preference for additive synthesis. Another issue related to context we observed was that, similar to the case with color lookup tables, there is no “one size fits all” auditory display solution. Certain synth designs are more useful for articulating subtle changes in spectral composition but ineffective for perceiving larger trends across the entire dataset, and vice versa. As such, we provide a variety of synths, each for use in a specific observed context.\n\nCalibration. We employ a series of calibration images designed to systematically screen any arbitrary synthesis algorithm for viability and identify what spectral contexts are or are not displayed clearly by a given sound. Figure 4 shows a comparison of the four volumes we generated. Each calibration image volume consists of fifteen 450 px by 450 px TIFF images representing the spectral data shown in Figure 5a–Figure 5d.\n\nSingle channel slices of each of the four calibration image volumes are compared side by side. In each volume, the same cascading gradient is interpolated with a different ‘background noise’ texture. This particular design of a calibration image allows for testing the signal-noise discrimination capabilities of a given synth for each channel in a variety of spectral contexts.\n\nCalibration Volume ‘A’ contains the cascading 30px gradient with no background noise. This is a control volume to test a particular synth’s range of sounds with no background interference.\n\nCalibration Volume ‘B’ contains the cascading 30px gradient with low background noise. 
This volume tests a particular synth’s signal-noise discrimination with low intensity homogeneous background noise across all channels.\n\nCalibration Volume ‘C’ contains the cascading 30px gradient with high background noise. This volume tests a particular synth’s signal-noise discrimination with high intensity homogeneous background noise across all channels.\n\nCalibration Volume ‘D’ contains the cascading 30px gradient with random background noise. This volume tests a particular synth’s signal-noise discrimination with pseudo-random, heterogeneous background noise across all channels.\n\nNote that in Figure 4 there is a 30 pixel thick horizontal stripe that traverses the entire width of the image in each of the volumes. This stripe starts at the top of the image in channel 1 of each volume, and incrementally shifts downward 30 pixels each channel until it is at the very bottom of the image in channel 15, as seen in Figure 5a–Figure 5d. Each of these stripes has a gradation from 8-bit grayscale intensity value 0 (black) on the leftmost side to 255 (white) on the rightmost side. Throughout the rest of the image not occupied by the stripe, each of the four calibration volumes has a different pattern, designed to simulate various types of noise that could interfere with the perception of the 0–255 gradient. Volume A is designed to be the control volume which should showcase the discriminatory function of a sonic mapping at its clearest. Volumes B and C both have noise that is homogeneous throughout the entire background; B has light noise (grayscale value 84, i.e., 33% intensity) while C has heavier noise (grayscale value 168, i.e., 66% intensity). 
Volume D has pseudorandom noise that is clamped between 20% and 80% intensity (to prevent unnecessary clipping).\n\nWhen the 15 images of a calibration volume are superimposed on top of one another in an image stack, the cascading stripe and background pattern provide a means to measure the effectiveness of a particular sonic mapping at discriminating intensities for any given channel by traversing the image horizontally. Conversely, the spectral differentiation of the mapping can be gauged at a given intensity by traversing the image vertically. Movie 1 shows how sonifying these four volumes with a given sound can give an idea of its effectiveness in a variety of spectral contexts.\n\n\n\nEach synth definition we created is outlined here as a summation defining how audio samples are generated. The following synths were used on 15 channel datasets, so there are 15 components to each synth waveform that are modulated by fluctuations in the spectral content of a given stream of pixels. Additive synthesis is the primary algorithm used to group all these fluctuations into one sound in the definitions we created.\n\nSynth 1.\n\ny1(t) = ∑_{a=1}^{15} i_a sin(2π f_a t)\n\nf = {440Hz, 466.16Hz, 493.88Hz, 523.25Hz, 554.37Hz,\n\n587.33Hz, 622.25Hz, 659.26Hz, 698.46Hz, 739.99Hz,\n\n783.99Hz, 830.61Hz, 880Hz, 932.33Hz, 987.77Hz}\n\ni_a = normalized pixel intensity value (0–1) for channel ‘a’ of a multichannel image volume\n\nThis synth maps the intensities of spectral channels 1–15 to the amplitudes of sine waves from A4 to B5, and then adds all the waves together to produce the resultant sonification for a given pixel. We found that this synth design was effective for the phantom, but was less useful for articulating subtle spectral distinctions in biological datasets.\n\nSynth 2.\n\ny2(t) = ∑_{a=1}^{15} i_a sin(2π · 300a · t)\n\nThis synth uses the same additive principle as Synth 1 (intensities mapped directly to amplitude), but for a different set of notes. 
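Synth 1's additive mapping (and, with a different frequency vector, Synth 2's) can be sketched in a few lines. This is an illustrative Python/numpy rendering only, not the plugin's actual SuperCollider SynthDef; the sample rate and duration are arbitrary choices for the sketch:

```python
import numpy as np

# Illustrative sketch of Synth 1 (not the plugin's SuperCollider SynthDef).
# The 15 channel intensities of one pixel set the amplitudes of 15 sine
# partials spaced one semitone apart from A4 (440 Hz) up to B5.
SR = 44100                                      # audio sample rate (assumed)
freqs = 440.0 * 2.0 ** (np.arange(15) / 12.0)   # 440, 466.16, ..., 987.77 Hz

def synth1(intensities, duration=0.25):
    """Additive synthesis: channel a's intensity i_a -> amplitude of partial a."""
    i = np.asarray(intensities, dtype=float)    # assumed normalized to 0..1
    t = np.arange(int(SR * duration)) / SR
    return (i[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)

# A pixel bright only in channel 1 reduces to a pure 440 Hz tone;
# Synth 2 would simply replace freqs with the harmonic series 300*a.
samples = synth1([1.0] + [0.0] * 14)
```

In the plugin itself these waveforms are generated server-side by scsynth; the sketch only makes the mapping from intensity vector to waveform explicit.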
Instead of incremented tones on a scale, this synth uses a harmonic series. We found this synth to be notably more useful when sonifying biological datasets than Synth 1, as the wider range of notes makes it more difficult for the user to mistake adjacent spectral channels for one another.\n\nSynth 3.\n\ny3(t) = ∑_{a=1}^{15} 0.1 sin(2π(300a + 150 i_a)t)\n\nThis mapping is also an additive synthesis algorithm, but instead of modulating the amplitudes of the components, the actual frequencies are modulated to create inharmonic partials. The amplitude of each component is set to 0.1 (statically), and the intensities of spectral channels 1–15 are mapped to a range of 0–150Hz deviation from each harmonic partial.\n\nSynth 4.\n\ny4(t) = ∑_{a=1}^{15} i_a sin(2π(300a + 150 i_a)t)\n\nThis mapping combines modulation of amplitude with deviation from harmonic partials. We found the combination of the two allowed for better spectral differentiation than either Synth 1 or Synth 3 alone.\n\nSynth 5.\n\ny5(t) = ∑_{a=1}^{15} i_a sin(2π((300 + 600(a−1)) + 350 i_a)t)\n\nThis is the mapping we used in all the example videos. The summation is constructed the same way as Synth 4, except the range of notes is expanded by choosing every other higher order harmonic partial of 300Hz. Additionally, the frequency deviation from the harmonics, mapped to the pixel intensity, is scaled to 350Hz as opposed to just 150Hz. We found it to display subtle spectral changes with the most clarity out of all the synths.\n\n\nResults\n\nThe human eye makes use of photoreceptors with overlapping spectral sensitivity in order to improve its spectral resolution. It is important to note that this method does not increase the total number of colors or color combinations the eye can perceive, but rather optimizes color perception in certain situations that are more likely to be relevant in a natural environment while decreasing resolution in other situations. 
Specifically, overlapping color receptors allow the eye to establish the mean frequency of a single emission peak with greater accuracy. When, however, multiple peaks are involved, this method becomes unreliable since there are always different peak configurations that will lead to the same perceived color. To illustrate this effect, we present below a simple method to generate those ambiguous peak configurations for any given photoreceptor sensitivities. We demonstrate this method by generating a dataset made up of five emitters. We produce a hyperspectral image in which every pixel has a different spectrum due to different relative fluorophore concentrations or emission wavelengths, but all appear identical to a three-channel detector with overlapping spectra such as the eye.\n\nConsider a sensor with three overlapping channels detecting a spectrum consisting of five relevant channels. The input spectrum can be described as a five-element vector I, and the detected intensities at the sensor are described by a three-element vector D. D can be computed from I by multiplying by a 3 by 5 matrix M that in its rows contains the spectral sensitivity curves of the three spectral channels.\n\nWe consider D and M fixed and are seeking all vectors I that fulfill the equation\n\nD = MI\n\nBecause M has a two-dimensional null space, the general solution can be written as I = I0 + u·n1 + v·n2, where I0 is any particular solution and n1 and n2 are vectors spanning the null space of M. By varying u and v we can create a set of five channel spectra that all result in the same vector D. These spectra I can contain negative numbers, which is not realistic. To avoid this we add a set of constants Δ1, Δ2, Δ3, Δ4, Δ5 to our spectra I such that the smallest value occurring is zero. This leads to a different detected spectrum D’ = D + MΔ that is still constant for all incident fluorophore spectra.\n\nThe result of this procedure, our 5-channel hyperspectral phantom dataset, is shown in Figure 6. Each pixel in the multichannel image volume has a different 5-fluorophore combination. They all look identical to the three-channel sensor used. 
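The null-space construction just described can be sketched in a few lines of numpy. The sensitivity matrix M below is a hypothetical placeholder (not the curves behind Figure 7); the point is only that for a full-rank 3-by-5 M, two null-space vectors can be added to any spectrum without changing what the three-channel sensor detects:

```python
import numpy as np

# Hypothetical 3x5 sensitivity matrix M: each row is one detector channel's
# response to the five spectral bands (placeholder values, not Figure 7's).
M = np.array([
    [0.9, 0.5, 0.1, 0.0, 0.0],
    [0.1, 0.5, 0.9, 0.5, 0.1],
    [0.0, 0.0, 0.1, 0.5, 0.9],
])

I0 = np.array([1.0, 0.8, 0.6, 0.4, 0.2])  # one particular 5-channel spectrum

# The last two rows of V^T from the SVD span the 2-D null space of M.
n1, n2 = np.linalg.svd(M)[2][3:]

# Every I0 + u*n1 + v*n2 satisfies M @ I = M @ I0 = D.
spectra = [I0 + u * n1 + v * n2 for u, v in [(0, 0), (0.3, -0.2), (-0.1, 0.4)]]

# Negative intensities are unphysical; shifting every spectrum by the same
# constant so the smallest value in the family is zero keeps the detected
# signal D' = D + M @ delta identical for every member of the family.
shift = -min(s.min() for s in spectra)
spectra = [s + shift for s in spectra]

D_prime = M @ spectra[0]
assert all(np.allclose(M @ s, D_prime) for s in spectra)
```

Each pixel of the phantom volume uses a different (u, v) pair, so neighboring pixels carry genuinely different spectra that a three-channel rendering collapses into one color.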
The spectra do however sound dramatically different in a simple sonification.\n\nThis figure shows a visual comparison of each channel of the phantom image volume alongside a hyperspectral false color image of the volume. While each channel has a unique gradient across the image, the hyperspectral false color image appears homogeneously textured throughout.\n\nThe sixth image seen in Figure 6 is a hyperspectral false color image of our phantom image volume. This false color image is a mapping of the 5D data to 3D space, using the spectral sensitivity curve seen in Figure 7, which has a spectral overlap similar to the cone sensitivity of the human eye shown earlier in Figure 1.\n\nThis figure shows the spectral sensitivity curve used to generate the hyperspectral false color image from the phantom image volume. In this curve the three color components are overlapping to emulate the sensitivity curve shown in Figure 1.\n\nAlthough the false color image appears to be uniform across its entire surface, the actual spectral data, shown on the left, are different for every pixel. Even though there is significant overlap between the spectral peaks used in the false color mapping, there are no apparent visual distinctions in the RGB color image. Figure 8 shows the invisible variations in spectral content that characterize our image volume.\n\nThe spectral composition of the phantom image volume is shown at various points throughout the volume and overlaid with the hyperspectral false color image. This shows the range of spectral variations that are not visible through the false color image.\n\nThis data loss does not reflect an inadequacy in the sensitivity curve, but rather the fundamental inability of a three-dimensional transcoding to convey fluctuations in a five-dimensional signal. 
In the above section detailing the generation of the phantom dataset we describe how any arbitrary spectral response curve with three peaks can be used to compute a five-dimensional image volume that varies spectrally at each pixel, and a RGB false color hyperspectral image that appears to be spectrally homogeneous across all pixels to the human eye.\n\nUsing the sonification plugin, the user traverses the image volume and generates an audio signal that elucidates the invisible spectral topology of this image volume. A log on the left displays spectral data for each pixel that is sonified, confirming the spectral difference. This process is shown in Movie 2.\n\n\n\nAfter demonstrating theoretical utility with the phantom dataset, we seek to create a test for our sounds that would more closely resemble the spectral subtleties of a real biological specimen with multiple fluorophores. However, we want a dataset that is much more spectrally controlled than a biological specimen so we can unequivocally verify our plugin’s functionality.\n\nThe FocalCheck DoubleGreen Fluorescent Microspheres Kit (Life Technologies, Carlsbad CA) provides a useful litmus test for using the sonification to differentiate two objects with very similar spectra. The three different fluorescent beads in this kit are uniformly labeled and are designed to test the spectral separation capabilities of a scope system. The subtle distinction in dye coverage and color between the three beads is designed to be visually unnoticeable, but precisely pronounced enough to be resolved by algorithmic linear unmixing. By testing our sonification with the FocalCheck beads, we can at once ascertain the capability of our plugin at confirming the results of a linear unmixing as well as differentiating spectra that are visually indistinguishable.\n\nFigure 9 shows magnified false color images of the three beads with spectral measurements taken both at the core and the edge for each bead. 
The bead in Figure 9a homogeneously emits “green 1” (512 nm) and the bead in Figure 9b homogeneously emits “green 2” (525 nm). The third bead in Figure 9c is green 1 on the outside ring and green 2 in the center. Despite the spectral differences, all three false color images appear to have the same color.\n\nThe spectral content of the FocalCheck DoubleGreen bead 1 (512nm emission) is shown at the core and at the edge of the bead.\n\nThe spectral content of the FocalCheck DoubleGreen bead 2 (525nm emission) is shown at the core and at the edge of the bead.\n\nThe spectral content of the FocalCheck DoubleGreen bead 3 (512nm emission at the edge, 525nm emission at the core) is shown at the core and at the edge of the bead.\n\nThis figure shows a hyperspectral false color image of an Arabidopsis thaliana specimen with spectral measurements of the raw multichannel image volume taken at various points of interest. The spectral differences between the cell membrane and the plastid are obscured in the hyperspectral image.\n\nThis figure shows a zoomed-in hyperspectral false color image of an Arabidopsis thaliana specimen with spectral measurements of the raw multichannel image volume taken at a cell membrane and a mitochondrion. This presents another subtle spectral distinction in a live specimen that is not made clear by the hyperspectral image.\n\nThe video in Movie 3 shows a sonification of the same three image volumes. The cursor is clicked and dragged over each volume along a path that includes the blank space outside the bead, the edge of the bead, and the core of the bead. 
Choosing this path for each volume allows us to distinguish the “edge effect” of increasing intensity between the blank space and the bead (which is a function of scope resolution) from the subtle spectral shift between the thin ring around the outermost point of the bead and the bead’s core.\n\n\n\nThe final test of our sonification was sonifying a real biological specimen that has been modified to express multiple fluorescent proteins. We imaged an Arabidopsis seedling sample that had been genetically modified, in accordance with the procedure described by Kato et al.26, to express three fluorescent proteins: RFP, YFP and GFP.\n\nIn our first figure, we explore a visible similarity in the hyperspectral false color image between the plastids and the cell membranes of the Arabidopsis root. Although the plastids express a red fluorescent protein, background noise (likely from autofluorescence and scatter) at green wavelengths causes the plastids to appear very visually similar to the regions of the specimen expressing GFP.\n\nHowever, when sonified, we can hear a very salient difference between the plastids and the cellular membranes. Additionally, the plastids that are visibly red sound similar to the plastids with more green/yellow tinting, which allows the user to clearly understand the distribution of the plastids throughout the specimen despite the counterintuitive shift in color.\n\nNext, we zoomed into an area of the specimen that has both mitochondria and cellular walls. The mitochondria express a protein (YFP) that is spectrally very similar to the protein in the cell walls (GFP). In the false color image, this distinction is not apparent at all, due to the close proximity of the two spectral peaks.\n\nThe sonification clearly articulates the difference between the two peaks, even at varying intensities. 
In this video a user sonifies several parts of the image where mitochondria are either very close to or overlapping the cellular boundary, and the spectral difference is still made apparent.\n\n\nDiscussion\n\nMoving forward, the sonification framework that we have developed can be extended to provide deeper insights into complex microscopy datasets that have been inadequately represented through current visualization-only analysis methods. In particular, it is worth considering the potential utility of sonification-based analysis in experiments that involve multiplexed functional markers, such as calcium signaling, or experiments that require differentiation between intrinsic and extrinsic fluorescence. Additionally, improvements to the functionality of our plugin would allow sonification-based analysis to be utilized in more demanding contexts. Ideas for improvements to the plugin fall under two main categories: improvements to the sonification interface, and extension of the proof of concept.\n\nImproving the sonification interface could be done on multiple levels that would amplify the utility of the plugin. First, adding functionality such as sonification over a Region of Interest, or automatic exporting of a sonification to a video format would afford the plugin a higher degree of similarity to traditional image analysis plugins, making it more usable for scientists already familiar with FIJI. On a deeper level, an external GUI could be designed to allow for full programmability of the sonification mapping process. Similar to the way that Digital Audio Workstations provide virtual instrument interfaces for programming custom sounds through a synthesis algorithm, our plugin could benefit from a visual interface that would allow users to tweak the mapping parameters connecting data space to sound space to create the perfect sonification for a given use case. This would be the audio equivalent of creating a custom lookup table for image analysis. 
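The mapping from data space to sound space described above can be sketched in a few lines. The following is a rough, standalone Python illustration of parameter-mapping sonification only — the actual plugin is implemented in Java for FIJI, and the function name, frequency band, and normalization here are our own assumptions, not the plugin's real mapping. Each spectral channel drives one sine partial whose pitch encodes the channel's wavelength and whose amplitude encodes its measured intensity:

```python
import math

def sonify_spectrum(intensities, wavelengths_nm,
                    duration_s=0.5, sample_rate=44100,
                    freq_lo=220.0, freq_hi=880.0):
    """Parameter-mapping sonification of one pixel's spectrum.

    Each spectral channel becomes a sine partial: its frequency encodes
    the channel's wavelength (placed linearly in an audible band) and
    its amplitude encodes the channel's relative intensity.
    """
    # Normalize intensities so the summed partials stay within [-1, 1].
    total = sum(intensities) or 1.0
    amps = [i / total for i in intensities]
    # Map wavelengths linearly onto the chosen audible frequency band.
    lo, hi = min(wavelengths_nm), max(wavelengths_nm)
    span = (hi - lo) or 1.0
    freqs = [freq_lo + (w - lo) / span * (freq_hi - freq_lo)
             for w in wavelengths_nm]
    # Additive synthesis: one audio buffer summing all partials.
    n_samples = int(sample_rate * duration_s)
    return [sum(a * math.sin(2 * math.pi * f * t / sample_rate)
                for a, f in zip(amps, freqs))
            for t in range(n_samples)]

# Two spectrally similar pixels (e.g. 512 nm- vs 525 nm-dominated) that
# look alike in false color map to audibly different partial mixtures.
audio_a = sonify_spectrum([0.1, 0.8, 0.1], [500, 512, 525])
audio_b = sonify_spectrum([0.1, 0.1, 0.8], [500, 512, 525])
```

A GUI of the kind proposed above would essentially expose these mapping parameters (frequency band, normalization, wavelength-to-pitch curve) for interactive editing, just as a custom lookup table exposes the data-to-color mapping.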
Combined with the calibration image volume and the documentation of our incremental sound design process we have provided, users would be able to use such an interface to intuitively find the most useful sounds for given datasets.\n\nExtending this proof-of-concept application into new use cases would also expand the functionality of our plugin. We demonstrate how sonification can be used to confirm a given computational spectral separation, and as such it would be worth considering the utility of sonification alongside other computational image analysis techniques utilized on spectrally dense datasets. Moreover, it is worth considering the utility of sonification as the size and complexity of datasets scale, and whether parallel computing could potentially be useful for generation of sounds in real time with large numbers of input parameters. Finally, we have described ways that sonification provides us insights into our particular datasets that were inaccessible with only visualization, but it would be worth exploring in the general sense whether a quantifiable difference exists between combined sonification and visualization analysis and visualization alone. Conducting an A/B test with a population of users and evaluating the variation in experience would be a good way to assess the difference sonification makes in an applied context.\n\n\nConclusion\n\nIn this paper we have provided an in-depth study of the utility of displaying spectral relationships in high-dimensional microscopy datasets. This has been the first study of sonification in microscopy without dimensionality reduction. Sonification has a wide variety of applications in image analysis, including providing data display to visually impaired users, clearly conveying patterns and relationships in complex data, and providing a novel way to aesthetically represent a dataset. 
As multiple fluorescent markers become more common and important in biological microscopy research, and interpretation of the resulting rich spectral data becomes more relevant, scientists will have to turn to new types of tools to meaningfully interact with their data. Sonification has the potential not only to serve as an alternative to conventional spectral visualization but also to complement it in use cases where sonification is especially advantageous. We have presented sonification as more than just an aesthetic novelty, but as a promising solution to address limitations in existing spectral-visualization-based image analysis techniques. Future work will need to further explore the practicality of sonification of microscopy spectra in terms of appropriate sound mapping, biological application and user interface.\n\n\nData and software availability\n\nF1000Research: Dataset 1. Raw data for ‘Sonification of hyperspectral fluorescence microscopy datasets’, Mysore et al. 2016, 10.5256/f1000research.9233.d13868227\n\nLatest source code for the sonification plugin: https://github.com/uw-loci/sonification\n\nArchived source code at the time of publication: 10.5281/zenodo.15996028\n\nThe plugin can be installed through the FIJI Updater by following the LOCI updates website (http://loci.wisc.edu/software/fiji-plugins). The calibration images utilized to test our synth designs are available on the LOCI GitHub alongside the plugin source code, so as to facilitate the development of new data-to-sound mappings by users.\n\nFigshare: Video of Calibration. doi: 10.6084/m9.figshare.4029480.v129\n\nFigshare: Sonification of phantom image volume. doi: 10.6084/m9.figshare.4029495.v130\n\nFigshare: Sonification of focalcheck beads. doi: 10.6084/m9.figshare.4029513.v131\n\nFigshare: Sonification of Arabidopsis Plastid and Cell Membrane. doi: 10.6084/m9.figshare.4029519.v132\n\nFigshare: Sonification of Arabidopsis Mitochondria and Cell Membrane. 
doi: 10.6084/m9.figshare.4029522.v133",
"appendix": "Author contributions\n\n\n\nAprameya Mysore implemented and applied the method, generated artificial data, and wrote the paper. Andreas Velten collected real data, guided the development of data generation and sonification methods, and helped write the paper. Kevin Eliceiri initiated and oversaw the project.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no supporting grants were involved in this work.\n\n\nAcknowledgements\n\nThe authors acknowledge technical support from the Laboratory for Optical and Computational Instrumentation (LOCI). The authors also thank Alexandra Chanoca and Marisa Otegui for providing the plant samples and Curtis Rueden for assistance with the FIJI implementation.\n\n\nReferences\n\nVelten AU, White JG, Mackie TR, et al.: Hyperspectral Multi-Point Confocal Microscope. In CLEO: 2013. AW1I.2. Optical Society of America, 2013. Publisher Full Text\n\nLavis LD, Raines RT: Bright ideas for chemical biology. ACS Chem Biol. 2008; 3(3): 142–155. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPinaud F, Michalet X, Bentolila LA, et al.: Advances in fluorescence imaging with quantum dot bio-probes. Biomaterials. 2006; 27(9): 1679–1687. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRizzo MA, Davidson MW, Piston DW: Fluorescent protein tracking and detection: fluorescent protein structure and color variants. Cold Spring Harb Protoc. 2009; 2009(12): pdb.top63. PubMed Abstract | Publisher Full Text\n\nZimmermann T: Spectral imaging and linear unmixing in light microscopy. Adv Biochem Eng Biotechnol. Springer, 2005; 95: 245–265. PubMed Abstract | Publisher Full Text\n\nConklin MW, Provenzano PP, Eliceiri KW, et al.: Fluorescence lifetime imaging of endogenous fluorophores in histopathology sections reveals differences between normal and tumor epithelium in carcinoma in situ of the breast. Cell Biochem Biophys. 2009; 53(3): 145–157. 
PubMed Abstract | Publisher Full Text\n\nConklin MW, Ada-Nguema A, Parsons M, et al.: R-Ras regulates beta1-integrin trafficking via effects on membrane ruffling and endocytosis. BMC Cell Biol. 2010; 11: 14. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSekar RB, Periasamy A: Fluorescence resonance energy transfer (FRET) microscopy imaging of live cell protein localizations. J Cell Biol. 2003; 160(5): 629–633. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHarris AT: Spectral mapping tools from the earth sciences applied to spectral microscopy data. Cytometry A. 2006; 69(8): 872–879. PubMed Abstract | Publisher Full Text\n\nBusch K, Howell N, Morrison GH: Elimination of interferences in flame spectrometry using spectral stripping. Anal Chem. 1974; 46(14): 2074–2079. Publisher Full Text\n\nZimmermann T, Rietdorf J, Pepperkok R: Spectral imaging and its applications in live cell microscopy. FEBS Lett. 2003; 546(1): 87–92. PubMed Abstract | Publisher Full Text\n\nDavis LM, Shen G: Extension of multidimensional microscopy to ultrasensitive applications with maximum-likelihood analysis. In Biomedical Optics (BiOS) 2007. 64430N–64430N. International Society for Optics and Photonics, 2007. Publisher Full Text\n\nNeher R, Neher E: Optimizing imaging parameters for the separation of multiple labels in a fluorescence image. J Microsc. 2004; 213(Pt 1): 46–62. PubMed Abstract | Publisher Full Text\n\nPawley J: Handbook of biological confocal microscopy. Springer, 2010. Reference Source\n\nKollmeier B, Brand T, Meyer B: Perception of Speech and Sound. In Springer Handbook of Speech Processing. Springer Science & Business Media, 2008; 65. Reference Source\n\nOlson HF: Music, Physics and Engineering. Dover Publications, 1967. Reference Source\n\nHermann T, Ritter H: Crystallization Sonification of High-dimensional Datasets. ACM Trans Appl Percept. 2005; 2(4): 550–558. 
Publisher Full Text\n\nNesbitt KV, Barrass S: Evaluation of a multimodal sonification and visualisation of depth of market stock data. 2002. Reference Source\n\nLunn P, Hunt A: Listening to the invisible: Sonification as a tool for astronomical discovery. 2011. Reference Source\n\nHermann T, Nattkemper T, Schubert W, et al.: Sonification of Multi-Channel Image Data. In Proc of the Mathematical and Engineering Techniques in Medical and Biological Sciences (METMBS 2000). (ed. Falavar, V.) CSREA Press, 2000; 745–750. Reference Source\n\nCassidy RJ, Berger J, Lee K, et al.: Auditory Display of Hyperspectral Colon Tissue Images Using Vocal Synthesis Models. In ICAD. 2004. Reference Source\n\nHermann T: Sonification for Exploratory Data Analysis. University of Bielefeld, 2002. Reference Source\n\nSchindelin J, Arganda-Carreras I, Frise E, et al.: Fiji: an open-source platform for biological-image analysis. Nat Methods. 2012; 9(7): 676–682. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchneider CA, Rasband WS, Eliceiri KW: NIH Image to ImageJ: 25 years of image analysis. Nat Methods. 2012; 9(7): 671–675. PubMed Abstract | Publisher Full Text\n\nGrond F, Berger J: Parameter Mapping Sonification. In The Sonification Handbook. Logos Publishing House, 2011; 363–397. Reference Source\n\nKato N, Reynolds D, Brown ML, et al.: Multidimensional fluorescence microscopy of multiple organelles in Arabidopsis seedlings. Plant Methods. 2008; 4: 9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMysore A, Velten A, Eliceiri K: Dataset 1 in: Sonification of hyperspectral fluorescence microscopy datasets. F1000Research. 2016. Data Source\n\nRueden C; invizibility: invizibility/sonification 1.0 [Data set]. Zenodo. 2016. Data Source\n\nMysore A, Velten A, Eliceiri KW: Video of Calibration. Figshare. 2016. Data Source\n\nMysore A, Velten A, Eliceiri KW: Sonification of phantom image volume. Figshare. 2016. 
Data Source\n\nMysore A, Velten A, Eliceiri KW: Sonification of focalcheck beads. Figshare. 2016. Data Source\n\nMysore A, Velten A, Eliceiri KW: Sonification of Arabidopsis Plastid and Cell Membrane. Figshare. 2016. Data Source\n\nMysore A, Velten A, Eliceiri KW: Sonification of Arabidopsis Mitochondria and Cell Membrane. Figshare. 2016. Data Source"
}
|
[
{
"id": "17919",
"date": "23 Nov 2016",
"name": "Paul S. Maddox",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nIn the manuscript titled “Sonification of hyperspectral fluorescence microscopy datasets”, Mysore, Velten and Eliceiri present the development of an “eye-to-ear” conversion plugin for Fiji. Modern imaging places a premium on extracting as much information as possible from images and much focus has been placed on increasing the dimensionality, and thus ability to segment and measure, of image data. These efforts have resulted in images containing 6 or more dimensions (x, y, z, intensity, time, color, fluor lifetime, etc.), making presentation and interpretation visually complex. In the case of spectral imaging, commercial instruments are fully capable of producing 32 channels in the color range (hyperspectral), further complicating the situation. Here, the authors present an innovative solution: a D to S conversion (digital to sound). The human eye is tuned to respond to a relatively narrow band in the electromagnetic spectrum, but it does so with relatively high resolution. On the other hand, the human ear has a very high (~3 orders of magnitude) dynamic range combined with high resolution. As such, the ear is better suited to interpreting hyperspectral data. The authors make this case effectively with supportive calibration and experimental data. This is an exciting window (or tune?) into the future of data presentation.\nBelow I make a few suggestions:\nThe authors do an excellent job of explaining the biology of the eye, however the ear seems to be less well defined. 
What about the biology, mechanics, or innervation of the ear allows this dynamic range? Personally, I have always considered the ear a single dimensional detector; at any given moment, the ear interprets and reports a single piece of data. Is this the case or can the ear send multiple inputs to the audio processor in our brain?\n\nCan this concept be applied to non-spectral information? Have the authors tried to use it for simply portraying depth or time? In my experience there are only a limited number of people using spectral data when compared to depth and/or time.\n\nMinor comment: Page 3; what is “image UI”?",
"responses": []
},
{
"id": "17182",
"date": "24 Nov 2016",
"name": "Andrew R. Cohen",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nReview for: Sonification of hyperspectral fluorescence microscopy datasets\nSummary:\nThis paper by Mysore et al. describes a framework and ImageJ plugin for constructing an audio signal from, or sonifying, 2-D multichannel fluorescence microscopy images. There are many positive aspects to this work. There is a pressing need for more effective approaches for human interaction with large, high-dimensional datasets. The use of audio information to enhance multi-channel image data is a key aspect of effective visualization that has not been well investigated. This is an important research topic. The manuscript is clearly written. The software is free and open source, with a broadly applicable ImageJ plugin provided. These are all commendable aspects of the manuscript.\nThere are, however, a few shortcomings in both the approach and the manuscript that should be addressed in the next revision. Most significantly, the approach proposed here seems to suggest that sonification is an alternative to visualization, rather than an enhancement to visualization. It seems clear that sound will greatly enhance visualization, but the manuscript states in multiple places that sound is superior to visualization. The result is the implication that sonification is an alternative rather than an enhancement to visualization. This is a surprising and seemingly unsupported notion. 
This point needs clarification.\nSpecific Comments:\nThe authors repeatedly refer to the discriminative limitations of the human visual system compared to auditory processing. For example, “While the human vision system is limited for natively comprehending rich hyperspectral data, the human ear and auditory system are more optimally suited for this task1”. This statement is untrue, and it is not supported by this reference that makes no relative measure in discriminative ability between human audio vs. visual capacity:\n\nRelated comment: “whereas the eye has only three channels of spectral sensitivity, the ear can distinguish about 1400 pitches throughout its range” is another misleading statement about human visual capacity. The visual system is generally superior to the auditory system at subtle discrimination tasks. The human eye is remarkably sensitive to color variations. If visual data has only three channels, then audio has at best two (stereo) and in the present approach just one channel. One key aspect of visual vs. auditory capabilities is the color map (or tonal map) used to render image (sound) data…. Please comment on the established literature quantifying relative bandwidth and discriminatory capabilities of the human visual vs. audio perception.\n\n...leading to another related point: “False color images of datasets with three or more spectral dimensions can visually appear to be the same in areas where the data are vastly different, because the transcoding operation used to render the raw data visible to human eyes involves a data loss”. Color mapped rendering is only lossy when the number of colors used exceeds the size of the color map. How many unique intensity bins come out of the microscope? Often, 8 bit per pixel images are sufficient quantization, and at this level it is straightforward to design a colormap that not only preserves intensity differences, but that enhances them in a perceptual manner. 
Even for 16 bit color images it can be possible to design effective color mappings. see e.g2\n\nThe introduction refers to spectral unmixing as a bottleneck in multi-channel image analysis, but does not clarify why or how audio should be used as part of the solution to the spectral unmixing question. Put another way, how is the mouse-driven sonification as described in the present paper used for spectral unmixing?\n\nWhat is the source for Figure 1? Figure 2 seems to be sourced from references1,3 – is this correct? Or was this somehow measured as part of the present work?\n\nDimensionality reduction is only a concern if meaningful information is lost. Ultimately, all biological image analysis involves dimensionality reduction – from the raw image dataset to a pie chart or distribution or some such representation. The manuscript states that “By allowing researchers to interact directly with the spectral composition of their datasets, we can potentially elucidate salient characteristics of the data that otherwise would have gone unnoticed.” Do you have an example of this for a real-world dataset? The example datasets (Figures 9-11) all seem to show data that is not clearly visually discriminated, but this seems a function of poor visualization rather than inherent visual limitation (see more detailed comments on color mapping above). Are there differences that can be detected via sonification that would not be detectable by a human using effective visualization, or by a computer using statistical pattern recognition techniques?\n\nHow about feature-driven sonification rather than pixel driven? 
The manuscript alludes to this in stating that “since FIJI is used for many spectral fluorescence microscopy analysis routines, it opens up the possibility of integrating and interacting with these steps as needed by the end user.” This seems a much more logical and promising approach – why isn’t it explored in more detail here?\n\nDoes the method work with 3-D multi-channel images? If so, please explain how. If not, then this seems an important limitation that should be explicitly acknowledged.\n\nThe system is driven by clicking and dragging the cursor across the image. This seems a cumbersome method, and a good illustration of the lack of spatial information in the audio signal. This also seems to introduce a new bottleneck into the analysis pipeline, particularly as the image resolutions continue to increase, e.g. to 4k resolutions and beyond. Consider this in light of the 450x450 pixel sample images. Can you comment on the scalability of the approach?\n\nThe sound design methodology fails to account for a perceptually “pleasing” sound scheme. The present tonal model becomes quite annoying after a short amount of interaction with the image data – is there a way to capture a less abrasive representation?\n\nThe manuscript states that “Another issue related to context we observed was that, similar to the case with color lookup tables, there is no “one size fits all” auditory display solution”. There are in fact a number of generally effective color mapping approaches. Specifically, there exists a significant amount of research on quantitative color spaces (e.g. HSV, perceptual spaces as mentioned above, etc.). Are there equivalent theoretical bases for audio? If not, why not? If so, please reference.\n\nFigure 6 seems misleading. 
Clearly a color map that can more effectively combine the channels could be formulated, particularly given the interactive environment that is enabled by sliding a mouse across the image for an audio comparison.\n\nFigure 8 also seems contrived – clearly it is possible to construct a multi-channel image that naively combines to any monotone. At the same time, it would be straightforward to e.g. learn a color space that maximizes separation among the image channels just from the given data.\n\nWhat would Figure 10 look like with an HSV color map? Just because the fluorophores are colored green and red does not mean that is the best color choice for representing them in a visualization context.\n\nFollowing the questions of naively designed color maps being easily defeated, are there equivalent limitations in your audio mapping that would allow images that are visually different to sound identical? Could you construct such an example?\n\nWhat about other senses in addition to sound? E.g. touch has been explored in data visualization previously4 – would such approaches add value in combination with your method?",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2572
|
https://f1000research.com/articles/4-1267/v1
|
16 Nov 15
|
{
"type": "Research Article",
"title": "Evaluation of immunologic response of salivary sIg-A in pediatric tuberculosis patients to antigen Ag38-rec of Mycobacterium tuberculosis Indonesian strain",
"authors": [
"Tri Yudani Mardining Raras",
"Diah Erma Pritta Santi",
"HMS Chandra Kusuma"
],
"abstract": "Objective: We studied the immune response of salivary secretory immunoglobulin A (sIg-A) from a pediatric tuberculosis (TB) group (score of ≥ 6) and a non-TB group (score of < 6) against recombinant Ag38 (Ag38-rec) from the Mycobacterium tuberculosis Indonesian strain. Materials and Methods: Seventy-eight children were divided into three groups: those with TB (n=26), those with suspected TB (n=26), and healthy children (n=26). Their saliva was collected, and salivary sIg-A was challenged with purified Ag38-rec using the dot blot method. A change of color from white to dark blue indicated a positive reaction. Results: The immune response of sIg-A of children with TB and those with suspected TB to Ag38-rec was not significantly different. In the TB group, Ag38-rec showed a higher sensitivity than purified protein derivative (PPD) (70.8% vs. 62.5%), but a lower specificity (26.9% vs. 34.62%). However, within both the TB group (score of ≥ 6) and the non-TB group (score of < 6), Ag38-rec was able to identify children with a positive TST (tuberculin skin test) better than PPD. Conclusion: The antigen Ag38-rec could not distinguish between children with TB scores of ≥ 6 and < 6. However, it demonstrated the potential of Ag38-rec for use in screening for TB infection among children with suspected TB (scores of < 6).",
"keywords": [
"salivary sIg-A",
"children",
"tuberculosis",
"antigen Ag38-rec"
],
"content": "Introduction\n\nTuberculosis (TB) among children has been a serious threat in Indonesia, primarily due to a lack of accurate methods to diagnose the disease. Clinical examination, the acid-fast bacillus (AFB) test, and culture are the gold standard methods for the diagnosis of TB in adults. However, these methods have limited use for diagnosing TB in children in a clinical setting because collecting sputum samples from children is difficult. Moreover, children with TB do not exhibit the typical TB symptoms observed in adults. The diagnosis of pediatric TB in Indonesia is determined according to the TB Scoring System1. This system is based on the clinical signs and symptoms found in suspected TB in children1. Children with a total score of ≥ 6 are considered to have TB and are treated for the disease. However, the accuracy of this scoring method is constrained due to the non-specificity of TB symptoms in children, and may result in over- or under-diagnosis2. Consequently, a rapid, simple, and inexpensive method to confirm Mycobacterium tuberculosis (Mtb) infection is required to reduce the spread of the disease. One such promising method, a serodiagnostic approach focusing on the detection of specific TB antibodies in patients, has been extensively studied3–5. A non-invasive diagnostic method like this would be highly valued, particularly with regard to pediatric TB patients.\n\nSecretory immunoglobulin A (sIg-A) antibodies and presentations by antigen presenting cells (APC) are the first line of defense against bacterial invasion. These antibodies are a major class of immunoglobulin in external secretions and provide specific immunological protection in all mucosal surfaces that prevent the entry of bacteria6. Saliva contains a significant amount (85%) of sIg-A. This antibody is produced by B lymphocytes found near the salivary glands7. 
The 38-kDa protein from Mtb contains B-cell epitopes and has been shown to have high specificity for the Mycobacterium complex8. A previous study by Raras et al. demonstrated an immune response of salivary sIgA against recombinant Ag38 (Ag38-rec) of the Indonesian strain in adult pulmonary TB patients with a sensitivity of up to 80%, but with a low specificity (38%)9. Another study investigating Mtb-specific antibodies, including sIgA, in saliva from children of the Warao Amerindian tribes in Venezuela reported significantly greater reactivity to the purified protein derivative (PPD) antigen, with a sensitivity of 26.5% and specificity of 97%10. In this study, the saliva from children diagnosed with TB (those scoring ≥ 6 in the TB Scoring System) and from children with suspected TB (those scoring < 6) was tested against Ag38-rec produced in our laboratory. The term ‘Indonesian strain’ is used throughout this article to refer to M. tuberculosis that was isolated from an Indonesian patient who was suffering from severe pulmonary TB, from which the pab gene was amplified. We evaluated whether Ag38-rec from the Mtb Indonesian strain could be used to differentiate between children with pulmonary TB with scores of ≥ 6 and children with suspected TB with scores of < 6.\n\n\nMethods\n\nProduction of Ag38-rec was conducted according to a previous study with slight modifications11. The pab gene coding for Ag38 was amplified via a PCR method using chromosomal DNA from M. tuberculosis which was isolated from a severe pulmonary TB patient in Malang, Indonesia. The fragment was inserted into a plasmid, producing pMBhis. We do not know the strain of this Mtb, but alignment of the nucleotide sequence of the pab gene showed 95% homology to the pab gene from M. tuberculosis H37Rv. Escherichia coli BL21(DE3) containing plasmid pMBhis was grown in Luria broth medium until an OD600 of 0.5 and induced with IPTG. After 3 h, the cells were harvested by centrifugation. 
The pellet was mixed with phosphate buffer containing the protease inhibitor phenylmethylsulphonyl fluoride (Sigma, USA) and the cells were disrupted using a sonicator. Protein purification was performed using a Protino® Ni-TED column according to the manufacturer’s protocol (Protino, Dueren, Germany). Following centrifugation for 15 min at 10,000 × g, the supernatant was loaded onto an Ni-TED 1000 column and washed twice with phosphate buffer containing 10 mM imidazole. The antigen Ag38-rec and other proteins that bound to the nickel column were eluted using phosphate buffer containing 250 mM imidazole. The eluted proteins were collected and loaded once more onto a new Ni-TED column. The eluate was then dialyzed in order to remove imidazole. The purity of Ag38-rec was analyzed using 12.5% sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE).\n\nA total of 78 children (n=78) were included in this study: 26 patients with recently diagnosed pulmonary TB (total scores of ≥ 6), 26 patients with suspected TB (scores of < 6) from primary health care clinics in Malang City, Indonesia, and 26 healthy children to serve as negative controls. All of the participants were between the ages of 6 and 15 years, and their parents signed informed consent forms. The inclusion criteria for the healthy controls included absence of illness in the three weeks prior to the study and a lack of contact with adult TB patients. The study was initiated after being approved by the Ethical Commission of the Faculty of Medicine, Universitas Brawijaya, Malang, Indonesia (No.341/EC/KEPK-OPDS/05/2014). Saliva samples were collected from June to October 2014. 
The diagnosis of TB was conducted by a pediatrician based on the eight parameters of the scoring system1, i.e., contact with TB patients; fever; cough; enlarged lymph nodes of the neck, groin, and armpits; nutritional status; bone and joint swelling; chest X-ray (CXR); and Mantoux tuberculin skin test (TST).\n\nThe dot blot method was applied according to a previous experiment9. Before mounting onto the dot blot apparatus, a nitrocellulose membrane was pre-wetted using sterile H2O. A 20 μL volume containing 1 μg of antigen Ag38-rec in Tris-Cl buffer [pH 7.4] was dropped onto the membrane following overnight incubation at 4°C with blocking buffer. PPD (purified protein derivative) from Mtb (Statens Serum Institut, Denmark) served as a control. The next day, the blocking buffer was removed and replaced with TBS and gently shaken for 10 min at 4°C. After the blocking agent was removed, 50 μL of primary antibody was applied to the membrane and incubated for 12 h at room temperature with gentle shaking. The solution was then discarded and the membrane was washed three times with 0.05% TBS-Tween-20 and subsequently shaken in Tris-Cl buffer containing secondary antibody (1:500) (mouse anti-Mycobacterium tuberculosis Ag38 monoclonal antibody, AbD Serotec, England, cat. no. 0100-0519) at room temperature for 1 h, followed by three washing steps. Finally, a chromogenic substrate (BCIP-NBT) was applied to the membrane in the darkroom at room temperature for 30 min. The reaction was stopped by the addition of H2O. The CorelDRAW Graphics Suite X4 (Corel, USA) program was used to interpret the color range of the spot(s). The gradation of color was quantified to a numerical value using Corel Photopaint 11. Positive and negative values were based on the cut-off of the median value of the positive control (PPD) after reaction with saliva of the TB group. 
A dot was regarded as positive when the value was below the cut-off point and as negative when the value was higher than the cut-off point. A positive result was defined as a dark blue or dark purple spot (>50%) on the blot.\n\nSaliva collection was conducted according to Chiappin et al.7. For patients with cough, saliva collection was preceded by induction with nebulized 3% NaCl to thin the phlegm. Subjects were asked to transfer the collected saliva (a minimum of 5 ml) from the mouth to a Falcon tube. Saliva samples were immediately placed in a thermos of ice at 4°C for up to 2 h to protect unstable analytes and prevent the growth of bacteria, prior to long-term storage in a freezer at -40°C until use.\n\nTo avoid contamination from substances in the saliva that may interfere with the immunoassay, participants were asked to take the following precautions: avoid consuming a meal 1 h before sample collection; avoid dairy products 20 min before sample collection; avoid foods with high acidity, high sugar, or caffeine immediately before sample collection; and rinse the mouth with water to remove food residue before sample collection.\n\nThe sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of the immune response to Ag38-rec in children with a TB score of ≥ 6 and those with a score of < 6 were determined and compared to the corresponding values for PPD. The statistical significance of the differences between the sensitivity and specificity of the immune response to Ag38-rec and to PPD was analyzed using Student’s t-test. A p-value of less than 0.05 was considered statistically significant. All data were analyzed using SPSS version 15.\n\n\nResults\n\nIn all three groups, the number of male patients was slightly higher than the number of female patients. The mean age of the subjects in all three groups was 8 years. 
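The diagnostic performance measures defined in the statistical analysis above (sensitivity, specificity, PPV, NPV) can be computed directly from a 2×2 contingency table. The following is a minimal R sketch with purely illustrative counts, not the study's data:

```r
## Illustrative counts only (not from this study):
## tp = test-positive with TB,    fp = test-positive without TB
## fn = test-negative with TB,    tn = test-negative without TB
tp <- 40; fp <- 10; fn <- 5; tn <- 45

sensitivity <- tp / (tp + fn)  # proportion of TB cases detected
specificity <- tn / (tn + fp)  # proportion of non-TB correctly negative
ppv <- tp / (tp + fp)          # positive predictive value
npv <- tn / (tn + fn)          # negative predictive value

round(c(sensitivity = sensitivity, specificity = specificity,
        PPV = ppv, NPV = npv), 3)
```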
In the TB group (score ≥ 6), most children had a history of contact with an adult TB patient with a positive smear test. A positive TST result was obtained from the majority of children in the TB group (22/26). In contrast, only a single positive TST result was found in the group of children with suspected TB (score of < 6). Persistent cough for more than 3 weeks predominated among the patients in both groups (scores of ≥ 6 and < 6). All subjects in the group with pulmonary TB had X-rays suggestive of pulmonary TB, whereas only 6 subjects (6/26) in the suspected TB group had X-rays suggestive of pulmonary TB.\n\nTo explore whether sIgA from the saliva of children responds to Ag38-rec, we tested two samples from each group against the purified antigens. PPD was used as a positive control, and a negative control was performed without antigen. A dark blue color indicates a reaction between the Ag38-rec Mtb antigen and sIgA, while a pale or white color indicates no reaction. After optimization by checkerboard titration, the best concentrations were 250 ng Ag38-rec, 125 ng PPD, and 25 μL of saliva. All samples from the three groups were tested, and the results of the dot blot are shown in Figure 1.\n\nThe response of salivary sIgA of the TB group with scores ≥ 6 (A), the non-TB group with scores < 6 (B), and healthy children (C) against Ag38-rec (labeled 38 rek) and PPD using the dot blot method.\n\nComparison of the sIgA response of the TB group to Ag38-rec and to the standard antigen PPD showed that sIgA recognized PPD better than Ag38-rec (Table 1).\n\nThe strength of Ag38-rec as an immunodiagnostic agent for detecting sIgA was determined based on its sensitivity and specificity. 
The sensitivity of the salivary sIgA antibody response against Ag38-rec Mtb was significantly higher than that against PPD (70.8% vs 62.5%) (p < 0.05), although with a lower specificity (26.9% vs 34.6%) (p < 0.001).\n\nComparison of sIgA responses to Ag38-rec between the confirmed TB group (score ≥ 6) and the suspected TB group (score < 6) (70.8% vs 73%) suggested that Ag38-rec can also be recognized among children with suspected TB, although the difference was not statistically significant (p = 0.861) (Table 2).\n\nWe then compared the response of sIgA towards Ag38-rec in both groups based on the TST result. Patients with a positive TST responded significantly better to Ag38-rec than did patients with a negative TST result (77.2% vs. 41%, p = 0.003) (Table 3).\n\n\nDiscussion\n\nThe diagnosis of TB in children in Indonesia is conducted using a scoring system1. This system is not a gold standard for diagnosing TB in children, but rather a consensus endorsed by the WHO for areas that lack the facilities for complete TB screening7. Therefore, it may be beneficial to add a complementary parameter to support the scoring determination.\n\nThe objective of the current study was to determine whether the Mtb Ag38-rec antigen could discriminate children diagnosed with TB (score ≥ 6) from those with suspected TB (score < 6) based on the salivary sIgA immune response. However, we found that the positive response of salivary sIgA against Ag38-rec was lower for children with pulmonary TB than for children with suspected TB, although this difference was not statistically significant (70.8% vs 73%, p = 0.861). This may be because the children with scores < 6 were recruited from Malang, Indonesia, where TB is endemic; these children may therefore be infected with TB but still in the incubation stage of the disease, resulting in negative TST results12. 
Regarding the potential of the Ag38-rec antigen as a serodiagnostic agent in pediatric TB compared to the PPD antigen, Ag38-rec had a significantly higher sensitivity than PPD (70.83% vs. 62.5%, p < 0.001).\n\nAlthough the 38-kDa Mtb antigen is the most widely studied antigen for serological purposes and is a major component of commercial TB tests13, serum antibodies show high heterogeneity against Mtb antigens, such that a specific antibody response requires antigen of high purity14. In this study, the purity of Ag38-rec was not very high (80%). Nevertheless, the sensitivity of Ag38-rec compared to PPD may indicate that the immunogenicity of Ag38-rec is slightly better than that of PPD. A study of Mtb-specific antibodies, including salivary sIgA, conducted in children of the Warao Amerindian tribe in Venezuela demonstrated that PPD and Ag38-rec had higher reactivity than other antigens10. This immunodominant antigen in patients with smear-positive TB is apparently specific for Mtb bacteria. A humoral immune response against this protein is often associated with active TB disease. Since our participants were children who could not produce sputum, we could not confirm whether the low sensitivity of our Ag38-rec antigen was a result of negative sputum smears in the children. A previous study using the same antigen demonstrated that the sIgA response of AFB-positive adult TB patients had a sensitivity of up to 80%, although with a lower specificity (38%)9. This suggests that the immunogenicity of Ag38-rec is better than that of PPD. A possible explanation is that the salivary sIgA antibody is part of the mucosal immune response, which belongs to the naive immune response. These responses are produced earlier in infection than the cellular immune response (the immune response measured by the TST). 
When the naive immune response alone is sufficient to eradicate a pathogen, the cellular immune response will not develop. TB infection is initiated via the mucosal immune response, followed by an adaptive immune response. If the Mtb bacteria have been eliminated by the mucosal immune response, no cellular immune response will be produced14.\n\nThe specificity of the sIgA response to Ag38-rec was significantly lower than that of PPD (26.9% vs. 34.6%, p < 0.001). Considering that all subjects in this study were recruited from TB-endemic areas, it is possible that the healthy subjects had contact with TB patients and produced antibodies against Mtb without developing the disease. Interestingly, variation in specificity has also been observed when study subjects came from non-endemic areas15.\n\nThe positive control PPD also showed low specificity, despite PPD being a mixture of antigens secreted by Mtb bacteria. This suggests that the problem of low specificity is not simply a property of Ag38-rec, but is also affected by the sample population and variation in antibody levels. These results are consistent with a previous study using serum samples, in which the largest proportion of positive antibody responses was detected in TB-endemic areas10.\n\nThe reactivity of antibodies against Mtb antigens in children can vary due to an immature immune system and the severity of the disease16. Therefore, their antibody response has a widely ranging sensitivity and specificity, from 14% to 85% and from 86% to 100%, respectively12,16. Moreover, several factors, such as the purity of the protein and its immunogenicity, play an important role in the antibody response. Patient characteristics, the severity of disease, the presence or absence of Mtb bacteria in sputum, and the type and origin of the antibody should also be considered13. 
Finally, the process of saliva sampling must be considered. In previous experiments, whole saliva exhibited a greater antibody response than the supernatant or pellet9, although Hägewald et al. found that centrifugation of saliva resulted in a greater sIgA antibody response in the saliva supernatant17. In this study, due to the limited volume of saliva obtainable from children, we used whole saliva. It may be worthwhile to test saliva supernatants for comparison.\n\nAn interesting pattern emerged when we related a positive immune response to the Ag38-rec antigen in children with suspected TB (score < 6) to a positive TST result. Considering that a positive TST indicates infection with Mtb bacteria18, the greater reactivity of salivary sIgA against Ag38-rec in children with a positive TST (77.2%) compared to children with a negative TST (41%) indicates that the Ag38-rec antigen is able to identify infected children within the group of suspected TB patients (score < 6). Similarly, a significant difference between the levels of anti-Ag38-rec sIgA in individuals with a positive TST and a negative TST has been reported within a control group19. However, Araujo et al. found no significant difference in the antibody response between patients with a positive TST and a negative TST19.\n\nThe specificity of Ag38-rec among healthy children compared to that of PPD suggests that Ag38 may potentially be used to identify TB-infected children among children with suspected TB (scores < 6). Most of these children had a negative TST result (90%); however, 30% had a positive response to Ag38-rec.\n\nThis study has several limitations. The ideal method for saliva collection is passive drooling; however, we were unable to use this method because the children refused it. The criteria used to define the healthy child controls are also a limitation of this study. 
The control group was determined based on the criteria laid out in the scoring parameters for TB. They were children who did not meet the criteria for suspected TB and appeared healthy.\n\n\nData availability\n\nF1000Research: Dataset 1. Raw data for Raras et al. 2015, ’Evaluation of immunologic response of salivary sIg-A in pediatric tuberculosis patients to antigen Ag38-rec of Mycobacterium tuberculosis Indonesian strain’, 10.5256/f1000research.7234.d10695520\n\n\nConsent\n\nWritten informed consent for publication of the patients’ clinical details was obtained from their parents.",
"appendix": "Author contributions\n\n\n\nTMR, DE and CK conceived the study. TMR and DE designed the experiments. DE and TMR carried out the research. CK provided expertise in clinical analysis. DE and TMR prepared the first draft of the manuscript. All authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgments\n\nWe thank Suci Megasari, MS from Central Laboratory of Biomedical Science, Faculty of Medicine, Brawijaya University for technical assistance.\n\n\nReferences\n\n\"Indonesian Pediatric Association\". National Guidance in TB Diagnosis in Children, 2008.\n\nTriasih R, Graham SM: Limitations of the Indonesian Pediatric Tuberculosis Scoring System in the context of child contact investigation. Paediatr Indones. 2011; 51: 332–7. Reference Source\n\nChiang IH, Suo J, Bai KJ, et al.: Serodiagnosis of tuberculosis. A study comparing three specific mycobacterial antigens. Am J Respir Crit Care Med. 1997; 156(3 Pt 1): 906–911. PubMed Abstract | Publisher Full Text\n\nPottumarthy S, Wells VC, Morris AJ: A comparison of seven tests for serological diagnosis of tuberculosis. J Clin Microbiol. 2000; 38(6): 2227–2231. PubMed Abstract | Free Full Text\n\nAraujo Z, Giampietro F, Cançado LC, et al.: Comparison of serological responses in two different populations with pulmonary tuberculosis. Mem Inst Oswaldo Cruz, Rio de Janeiro. 2008; 103(7): 661–667. PubMed Abstract | Publisher Full Text\n\nAlvarez N, Otero O, Camacho F, et al.: Passive administration of purified secretory IgA from human colostrum induces protection against Mycobacterium tuberculosis in a murine model of progressive pulmonary infection. BMC Immunol. 2013; 14(Suppl 1): S3. 
PubMed Abstract | Free Full Text\n\nChiappin S, Antonelli G, Gatti R, et al.: Saliva specimen: a new laboratory tool for diagnostic and basic investigation. Clin Chim Acta. 2007; 383(1–2): 30–40. PubMed Abstract | Publisher Full Text\n\nJung SB, Yang CS, Lee JS, et al.: The mycobacterial 38-kilodalton glycolipoprotein antigen activates the mitogen-activated protein kinase pathway and release of proinflammatory cytokines through Toll-like receptors 2 and 4 in human monocytes. Infect Immun. 2006; 74(5): 2686–2696. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRaras TY, Sholeh G, Lyrawati D: Salivary sIg-A response against the recombinant Ag38 antigen of Mycobacterium tuberculosis Indonesian strain. Int J Clin Exp Med. 2014; 7(1): 129–35. PubMed Abstract | Free Full Text\n\nAraujo Z, Waard JH, Fernández de Larrea C, et al.: Study of the antibody response against Mycobacterium tuberculosis antigens in Warao Amerindian children in Venezuela. Mem Inst Oswaldo Cruz. 2004; 99(5): 517–524. PubMed Abstract | Publisher Full Text\n\nRaras TY, Lyrawati D: Cloning and expression of pab gene of M. tuberculosis isolated from pulmonary TB patient in E. coli DH5 [alpha]. Med J Indones. 2011; 20(4): 247. Reference Source\n\nAbbas AK, Lichtman AH, Pillai S: Cellular and molecular immunology. 2010. Reference Source\n\nAchkar JM, Ziegenbalg A: Antibody responses to mycobacterial antigens in children with tuberculosis: challenges and potential diagnostic value. Clin Vaccine Immunol. 2012; 19(12): 1898–1906. PubMed Abstract | Publisher Full Text | Free Full Text\n\nUma Devi KR, Ramalingam B, Brennan PJ, et al.: Specific and early detection of IgG, IgA and IgM antibodies to Mycobacterium tuberculosis 38kDa antigen in pulmonary tuberculosis. Tuberculosis (Edinb). 2001; 81(3): 249–253. PubMed Abstract | Publisher Full Text\n\nRaviglione MC: Tuberculosis: the essentials. CRC Press. 2009. 
Reference Source\n\nAbebe F, Holm-Hansen C, Wiker HG, et al.: Progress in serodiagnosis of Mycobacterium tuberculosis infection. Scand J Immunol. 2007; 66(2–3): 176–191. PubMed Abstract | Publisher Full Text\n\nHägewald SJ, Fishel DL, Christan CE, et al.: Salivary IgA in response to periodontal treatment. Eur J Oral Sci. 2003; 111(3): 203–208. PubMed Abstract | Publisher Full Text\n\nCoulter JB: Diagnosis of pulmonary tuberculosis in young children. Ann Trop Paediatr. 2008; 28(1): 3–12. PubMed Abstract | Publisher Full Text\n\nSitienei JK, Kipruto H, Borus P, et al.: Predictors of Low Body Mass Index Among Patients with Smear Positive Pulmonary Tuberculosis in Kenya. International Journal of TROPICAL DISEASE & Health. 2014; 4(4): 427–436. Publisher Full Text\n\nRaras TYM, Pritta Santi DE, Kusuma HC: Dataset 1 in: Evaluation of immunologic response of salivary sIg-A in pediatric tuberculosis patients to antigen Ag38-rec of Mycobacterium tuberculosis Indonesian strain. F1000Research. 2015. Data Source"
}
|
[
{
"id": "11488",
"date": "15 Dec 2015",
"name": "Dorothy Yuan",
"expertise": [],
"suggestion": "Not Approved",
"report": "In this submission the authors wish to evaluate the efficacy of using a protein (Ag38-rec) from Mycobacterium tuberculosis to determine whether it can be used to differentiate pediatric patients who have TB from those suspected of the disease. The use of salivary IgA is a promising non-invasive approach for testing pediatric patients. They conclude that the approach may be useful for identifying presumptively infected patients. Unfortunately, the submission suffers from two major flaws that prevent any confirmation of these conclusions. Firstly, no data was presented to show the purity of the Ag38-rec produced in their laboratory. Secondly, the rationale and reliability of the method used to determine the titer of salivary IgA was not understandable to this reviewer, such that it is impossible to evaluate their results. It is unclear why they did not utilize a more traditional method1 of using anti-human IgA antibodies in an ELISA.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/4-1267
|
https://f1000research.com/articles/5-1963/v1
|
12 Aug 16
|
{
"type": "Software Tool Article",
"title": "Genomic variant annotation workflow for clinical applications",
"authors": [
"Thomas Thurnherr",
"Franziska Singer",
"Daniel J. Stekhoven",
"Niko Beerenwinkel",
"Thomas Thurnherr",
"Franziska Singer",
"Daniel J. Stekhoven"
],
"abstract": "Annotation and interpretation of DNA aberrations identified through next-generation sequencing (NGS) is becoming an increasingly important task, even more so in the context of data analysis pipelines for medical applications, where genomic aberrations are associated with phenotypic and clinical features. Here we describe a workflow to identify potential gene targets in aberrated genes or pathways and their corresponding drugs. To this end, we provide the R/Bioconductor package rDGIdb, an R wrapper to query the drug-gene interaction database (DGIdb). DGIdb accumulates drug-gene interaction data from 15 different source databases and allows filtering on different levels. The rDGIdb package makes these resources and tools available to R users. Moreover, DGIdb queries can be automated through incorporation of the rDGIdb package into NGS pipelines.",
"keywords": [
"Drug-gene interaction",
"genomics",
"next-generation sequencing",
"annotation",
"somatic variant",
"clinical application",
"Bioconductor package",
"pipeline."
],
"content": "Introduction\n\nIn recent years, next-generation sequencing (NGS) pipelines have been established and employed extensively in research settings. These efforts have helped tremendously to improve our understanding of malignancies such as cancer. More recently, joint efforts of research groups and clinics aim to further enhance our knowledge of these malignancies for better diagnostic and treatment options. For example, The Cancer Genome Atlas (TCGA)1 consortium has sequenced several thousand samples from more than 20 different cancer types. One of the aims of this project is to better characterize different cancer types, for example through identification of distinct molecular sub-types.\n\nThere are also substantial efforts to move NGS technologies and pipelines into molecular diagnostics, for example, for the characterization of somatic variants of individual tumor samples through targeted panel sequencing. Targeted panel sequencing covers a specific set of genes or locations, typically between 50 and a few hundred. Panels focus on frequently mutated or otherwise altered genes or genomic locations. Currently, several generic cancer panels and panels for specific cancer types are available2,3. Based on the panel characterization, targeted therapies for the specific genetic aberrations can be applied.\n\nThe number of targeted therapies for cancer available today is still relatively small, and their approval is typically limited to one or several cancer sub-types4. However, as the therapeutic options increase, more patients can benefit from these targeted therapies. As a consequence, several clinics and institutes have developed and implemented molecular diagnostic approaches based on whole-exome and/or whole-genome sequencing5–8. 
Unlike targeted panels, whole-exome or whole-genome sequencing is not limited to a set of pre-selected genes, but allows for the detection of somatic aberrations across all protein-coding sequences or the entire genome, respectively.\n\nAn exome- or genome-wide approach provides a great advantage over targeted gene panels: it allows for a more comprehensive picture of the mutational landscape of a specific tumor. In addition, with more such data available and a better understanding of gene-gene and drug-gene interactions, prediction of drug efficacy as well as adverse drug reactions may become feasible. However, workflows based on whole-exome or whole-genome sequencing require clinical interpretation of the identified genetic variants. The result of an NGS pipeline is generally a list of genes harboring somatic variants or other genomic aberrations. To identify clinically actionable targets, these genomic aberrations need to be associated with drugs specifically targeting them.\n\nHere we suggest a workflow to automate the identification of potential drug targets from a list of genomic aberrations, represented by a list of genes harboring them. For these genes, we mine drug-gene interactions using the drug-gene interaction database (DGIdb)9. DGIdb integrates drug-gene interactions from 15 different source databases. We provide the R/Bioconductor package rDGIdb (http://bioconductor.org/packages/rDGIdb/), which allows drug-gene annotation to be integrated efficiently with NGS pipelines. rDGIdb can query DGIdb and filter results on different levels, i.e., source databases, interaction types, and gene categories. Through the rDGIdb package, drug-gene interaction mining can be automated and incorporated easily into NGS pipelines. Moreover, the rDGIdb package also provides functionality to visualize results.\n\n\nSomatic mutation calling\n\nSomatic variants or other genomic aberrations are identified from raw sequencing data and filtered using a standard NGS pipeline. 
The number of somatic variants might vary substantially, depending on the sequencing approach used and the level of stringency of the filtering employed. Next, somatic mutations are annotated with gene names, for which interacting drugs can then be queried through rDGIdb.\n\n\nIdentification of targetable aberrations\n\nGiven a list of genes with genomic aberrations, we identify aberrations targetable with a drug or compound. The R/Bioconductor package rDGIdb provides functionality to query drug-gene interactions provided by DGIdb and to apply filtering on different levels.\n\nThe package can be installed from an open R session. Instructions are provided on the rDGIdb Bioconductor page (http://bioconductor.org/packages/rDGIdb/). After installation of the package and all its dependencies, rDGIdb needs to be attached and a gene vector prepared. Gene names can be loaded from a text file or entered manually. The code below illustrates how to load gene names from a text file called aberrated-genes.txt, assuming the text file lists one gene symbol per line.\n\nTo query DGIdb, the rDGIdb package provides a simple query function, queryDGIdb. The function takes a vector of official gene symbols for which drug-gene interactions are to be queried. This is the only required argument to the query function; all other arguments are optional.\n\nThe function returns the query result as an object of type rDGIdbResult. The result is accessible through S4 methods. These methods format the result according to the result tabs provided on the DGIdb web interface. More specifically, the package provides four methods that return result data resembling the format provided through the DGIdb web interface, namely “Results Summary”, “Detailed Results”, “By Gene”, and “Search Term Summary”.\n\nAn example output of resultSummary for the DDR2 gene is shown in Table 1. 
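A minimal sketch of the steps just described, loading the gene vector and querying DGIdb; it uses only the names given in the text (queryDGIdb, resultSummary, and the example file aberrated-genes.txt), and assumes the package is installed:

```r
## Attach the package (assumes rDGIdb was installed from Bioconductor)
library(rDGIdb)

## Read one official gene symbol per line from a text file
genes <- readLines("aberrated-genes.txt")

## Query DGIdb; the gene vector is the only required argument
result <- queryDGIdb(genes)

## Inspect the result via the S4 accessor methods described above
resultSummary(result)
```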
The data can either be further processed using R or saved to a text file for analysis with other software tools.\n\nEach number in the table indicates whether a drug-gene interaction was found in a source database, where 1 means yes and 0 means no. Drug-gene interactions are sorted by their score, which is the total number of source databases listing the interaction.\n\n\nFilter drug-gene interactions\n\nDepending on the application, it may be desirable to filter for specific drug-gene interactions. The rDGIdb package allows filtering on the level of (1) source database, (2) gene category, (3) interaction type, and (4) other criteria, applied directly to the query result.\n\nDGIdb accumulates drug-gene interactions from 15 different source databases, summarized in Table 2. Depending on the application for which drug-gene interactions are queried, one or several source databases might be more relevant. The specific database or group of databases to be queried is specified through the sourceDatabases argument; rDGIdb will then only return hits listed in the respective source databases. For example, the query below returns drug-gene interactions from the MyCancerGenome and MyCancerGenomeClinicalTrials databases only.\n\nThe package provides a helper function that prints a list of all available source databases.\n\nSimilarly, we can filter for specific gene categories. With the gene-categories filter, drug interactions for genes with a specific category label can be queried. Examples of gene categories are clinically actionable, kinase, and tumor suppressor. The optional geneCategories argument can be used to filter by gene categories.\n\nThere are 41 different gene categories available. The following command lists all available gene categories.\n\nFinally, the package provides filtering by interaction type. An interaction type is a label for the type of drug-gene interaction. 
There are 33 different interaction types; examples are activator, inhibitor, cofactor, and modulator. The code below illustrates how to filter for specific interaction types.\n\nTo print a list of all available interaction types, one can use the following method:\n\nDepending on the requirements of a specific application, additional filtering might be applied directly to the query results. For example, to increase the confidence of the results, drug-gene interactions might be filtered by setting a minimum cutoff on the score. As a result, only drug-gene interactions supported by a minimum number of source databases will be reported. Different score cutoffs may be employed, depending on whether the aim is to query interactions with support from multiple source databases or to include as many drug-gene interactions as are available in the source databases. The example below illustrates how to filter out drug-gene interactions with only a single supporting source database from the result summary table.\n\nAlthough rDGIdb returns information on the type of interacting drug (such as inhibitor) to assist the follow-up interpretation of drug-gene interactions, querying and filtering through rDGIdb has limitations. For example, it is not possible to filter for specific drug-mutation interactions. That is, mutations in different locations of the same gene might have different biological effects in a cell or tumor; however, as querying is done at the gene level, these mutations cannot be distinguished. Additional expert knowledge or other approaches will have to be employed to exclude non-relevant drug-gene interactions from the query results.\n\n\nPlotting of results\n\nThe package allows basic plotting of the results. Specifically, the number of interactions by source database can be visualized. An example plot is provided in Figure 1. 
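The filtering options described above can be sketched as follows. The sourceDatabases and geneCategories argument names are taken from the text; the interactionTypes argument name, the category and type labels, and the Score column name of the summary table are assumptions to be checked against the package vignette:

```r
library(rDGIdb)

## Example gene vector (DDR2 is the gene used in Table 1)
genes <- c("DDR2", "TP53", "BRAF")

## Restrict the query to two source databases (example from the text),
## a gene category, and an interaction type; all three filters are optional
result <- queryDGIdb(genes,
                     sourceDatabases = c("MyCancerGenome",
                                         "MyCancerGenomeClinicalTrials"),
                     geneCategories = "clinically actionable",
                     interactionTypes = "inhibitor")

## Post-filter on the result summary: drop interactions supported by
## only a single source database (assumed column name: Score)
summary <- resultSummary(result)
summary[summary$Score > 1, ]
```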
This plot indicates which source databases report particularly large or small numbers of drug-gene interactions.\n\n\nSummary\n\nWe have described a workflow to identify potentially actionable genomic aberrations. More specifically, we have introduced the R/Bioconductor package rDGIdb, which provides an interface to query DGIdb using R. Given a list of genes with genomic aberrations, rDGIdb queries drug-gene interactions. The package allows filtering on different levels and visualization of the results. The rDGIdb package further includes detailed documentation and a vignette, which provides a step-by-step description of the workflow.\n\nrDGIdb depends on jsonlite and httr, which are available for R version 3.3.1 or higher. Briefly, rDGIdb queries the API provided by DGIdb (http://dgidb.genome.wustl.edu/api) using the POST function implemented in httr. Drug-gene interactions are returned by DGIdb in JSON format. Next, the data is deserialized into an R list object using the jsonlite package. Finally, the list is parsed and stored as an object of type rDGIdbResult. In order for rDGIdb to work, jsonlite, httr, and their dependencies need to be installed. A complete sessionInfo() output is provided below, which includes the minimum version numbers of all dependencies.\n\n• R version 3.3.1 (2016-06-21), x86_64-apple-darwin13.4.0\n\n• Locale: en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8\n\n• Base packages: base, datasets, graphics, grDevices, methods, stats, utils\n\n• Other packages: rDGIdb 0.99.4\n\n• Loaded via a namespace (and not attached): httr 1.1.0, jsonlite 1.0, R6 2.1.2, tools 3.3.1\n\n\nSoftware availability\n\nSoftware available from: http://bioconductor.org/packages/rDGIdb/\n\nLatest source code: https://github.com/Bioconductor-mirror/rDGIdb\n\nArchived source code as at time of publication: http://dx.doi.org/10.5281/zenodo.5925322\n\nLicense: MIT license",
"appendix": "Author contributions\n\n\n\nTT and FS designed the query framework, tested the package, and wrote the manuscript. TT implemented the package. NB and DS supervised the work. All authors read and approved the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by EU Horizon 2020 PHC grant No. 633974 (SOUND – Statistical multi-Omics UNDerstanding of Patient Samples).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nThe authors acknowledge Anja Irmisch and Mitchell Levesque from the University Hospital Zurich (USZ) for their valuable feedback on filtering and interpretation of drug-gene interactions.\n\n\nReferences\n\nCancer Genome Atlas Research Network: Comprehensive genomic characterization defines human glioblastoma genes and core pathways. Nature. 2008; 455(7216): 1061–1068. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKhodakov D, Wang C, Zhang DY: Diagnostics based on nucleic acid sequence variant profiling: PCR, hybridization, and NGS approaches. Adv Drug Deliver Rev. 2016; pii: S0169-409X(16)30104-1. PubMed Abstract | Publisher Full Text\n\nEaston DF, Pharoah PD, Antoniou AC, et al.: Gene-panel sequencing and the prediction of breast-cancer risk. N Engl J Med. 2015; 372(23): 2243–2257. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLevy MA, Lovly CM, Pao W: Translating genomic information into clinical medicine: Lung cancer as a paradigm. Genome Res. 2012; 22(11): 2101–2108. PubMed Abstract | Publisher Full Text | Free Full Text\n\nClinical translation: NCT promotes swift translation of innovative high-throughput diagnostics into clinical practice. Accessed: 2016-06-22. Reference Source\n\nThe Caryl and Israel Englander Institute for Precision Medicine at Weill Cornell Medical College. Accessed: 2016-06-22. 
Reference Source\n\nMD Anderson Cancer Center. Accessed: 2016-06-22. Reference Source\n\nPersonalized medicine at the Mayo Clinic. Accessed: 2016-06-22. Reference Source\n\nWagner AH, Coffman AC, Ainscough BJ, et al.: DGIdb 2.0: mining clinically relevant drug-gene interactions. Nucleic Acids Res. 2016; 44(D1): D1036–D1044. PubMed Abstract | Publisher Full Text | Free Full Text\n\nShrager J, Tenenbaum JM, Travers M: Cancer Commons: Biomedicine in the internet age. In Ekins/- Collaborative Computational Technologies for Biomedical Research. Wiley-Blackwell; 2011; 161–177. Publisher Full Text\n\nBento AP, Gaulton A, Hersey A, et al.: The ChEMBL bioactivity database: an update. Nucleic Acids Res. 2014; 42(Database issue): D1083–D1090. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCIViC: Clinical Interpretations of Variants in Cancer. Accessed: 2016-06-07. Reference Source\n\nThe Clearity Foundation. Accessed: 2016-06-07. Reference Source\n\nDoCM: Database of Curated Mutations. Accessed: 2016-06-07. Reference Source\n\nLaw V, Knox C, Djoumbou Y, et al.: DrugBank 4.0: shedding new light on drug metabolism. Nucleic Acids Res. 2014; 42(Database issue): D1091–D1097. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPawson AJ, Sharman JL, Benson HE, et al.: The IUPHAR/BPS Guide to PHARMACOLOGY: an expert-driven knowledgebase of drug targets and their ligands. Nucleic Acids Res. 2014; 42(Database issue): D1098–D1106. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWhirl-Carrillo M, McDonagh EM, Hebert JM, et al.: Pharmacogenomics Knowledge for Personalized Medicine. Clin Pharmacol Ther. 2012; 92(4): 414–417. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSomaiah N, Simon NG, Simon GR: A tabulated summary of targeted and biologic therapies for non-small-cell lung cancer. J Thorac Oncol. 2012; 7(16 Suppl 5): S342–S368. 
PubMed Abstract | Publisher Full Text\n\nRask-Andersen M, Almén MS, Schiöth HB: Trends in the exploitation of novel drug targets. Nat Rev Drug Discov. 2011; 10(8): 579–590. PubMed Abstract | Publisher Full Text\n\nRask-Andersen M, Masuram S, Schiöth HB: The druggable genome: Evaluation of drug targets in clinical trials suggests major shifts in molecular class and indication. Annu Rev Pharmacol Toxicol. 2014; 54(1): 9–26. PubMed Abstract | Publisher Full Text\n\nZhu F, Han B, Kumar P, et al.: Update of TTD: Therapeutic Target Database. Nucleic Acids Res. 2009; 38(Database issue): D787–D791. PubMed Abstract | Publisher Full Text | Free Full Text\n\nThurnherr T: rDGIdb: First release. Zenodo. 2016. Data Source"
}
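The query-and-parse pipeline described in the Summary above (POST request to the DGIdb API, JSON deserialization, parsing into a result object) can be sketched outside R as well. The snippet below is a minimal Python illustration; the response layout (`matchedTerms`, `interactions`) is a simplified assumption for illustration, not the actual DGIdb schema, and no network call is made.

```python
import json

# Simplified, assumed shape of a DGIdb-style JSON response; the real
# API schema may differ. No network request is made here.
raw_response = json.dumps({
    "matchedTerms": [
        {"geneName": "DDR2",
         "interactions": [
             {"drugName": "DASATINIB", "source": "MyCancerGenome",
              "interactionType": "inhibitor"},
             {"drugName": "IMATINIB", "source": "DrugBank",
              "interactionType": "inhibitor"},
         ]},
        {"geneName": "TP53", "interactions": []},
    ]
})

def parse_interactions(payload):
    """Deserialize a JSON payload and flatten it into (gene, drug, source)
    tuples, mirroring the deserialize-then-parse step described above."""
    data = json.loads(payload)
    rows = []
    for term in data["matchedTerms"]:
        for ia in term["interactions"]:
            rows.append((term["geneName"], ia["drugName"], ia["source"]))
    return rows

print(parse_interactions(raw_response))
```

In rDGIdb itself this step is handled by jsonlite and httr; the sketch only illustrates the flattening of a nested response into per-interaction records.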
|
[
{
"id": "15657",
"date": "18 Aug 2016",
"name": "Christopher Southan",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis describes an R-based tool to query the drug-gene interactions in DGIdb. The paper is well written and the tool clearly has some utility. However, my reservations are outlined below.\nAs the application of NGS to cancer samples accelerates the resultant explosion of somatic variants threatens to swamp user’s ability to select them to input to this tool. What filters can be put in place to reduce huge aberration lists associated with passenger (i.e. probably non-causative and spurious) rather than driver mutations?\n\nGiven the latest Nature publication on the analysis of protein-coding genetic variation in 60,706 humans now available in the ExAC resource I suggest the utility emphasis for looking at germ-line vs somatic target aberrations should be re-balanced.\n\nAccording to their website, DGIdb (v2.22 - sha1 aa9170e) was last updated 2016-02-21 and not all primary sources loaded were the latest versions even then. For example DrugBank is now up to 5.0 and GtoPdb is up to 2016.3 and it is not clear if it has only ChEMBL 20 rather than 21. Unless DGIdb can be prevailed upon to update more frequently and provide the release statistics of content, the utility of this tool is constrained because users cannot trust the results to be up to date.\n\nThe main goal of this tool for the identification of targetable aberrations will be confounded by the conflation of loss vs gain of function on both the target and drug sides. 
As we know, genetic aberrations are predominantly LOF but most drugs also negatively modulate their targets. This should be discussed and perhaps even made filterable in some way?\n\nWhy does Table 1 show such an apparently inconsistent mosaic of results? Reasons for discordance between the individual sources need to be explained.\n\nGiven this tool was developed by SIB would it be possible to add in Swiss-Var as an independent source via Swiss-Prot or NeXtProt?",
"responses": [
{
"c_id": "2227",
"date": "10 Oct 2016",
"name": "Thomas Thurnherr",
"role": "Author Response",
"response": "We would like to thank Christoper Southan for his thoughtful comments. Please find our response below: There are several strategies that can be put in place to reduce the number of somatic aberrations to those which are likely relevant/functional. These include, but are not limited to: 1) Identification of recurrent mutations; 2) Prediction of the functional impact of a mutation; 3) Identification of recurrent combinations of mutations; and 4) Experimental validation. At this point, we would like to remark that the aim of rDGIdb is to annotate variants with potential drug-gene interactions and not to filter somatic variants. Other methods take care of filtering variants. Finally, rDGIdb is not limited in the number of genes to query. We query drug-gene interactions for several thousand genes at the time. We agree that germline mutations play a role in a variety of diseases, specifically in cancer. Although rDGIdb is not limited to somatic mutations, we aim to identify potential targeted therapies. That is, drugs that specifically target malignant cells. Therefore, we think that considering somatic mutations rather than germline mutations is justified in this case. We agree with the reviewer that DGIdb does not currently use the latest versions of all the resources it integrates. As a consequence, drug-gene interactions queried through rDGIdb might not agree with results from the most up-to-date resources. Results queried through rDGIdb are based on results from DGIdb and the resources it integrates. As a consequence, we have no control over how frequently resources are updated by DGIdb. However, we added a function to the package that prints the versions of all resource integrated by DGIdb. This helps the user to decide if the version available in rDGIdb/DGIdb is sufficient for the intended purpose. The function is documented in the updated manuscript and in the package vignette. 
The type of a drug-gene interaction can be filtered through an optional argument (interactionType) to the main query function. Possible values include suppressor, inhibitor, or activator. These limit reported drug-gene interactions to the interaction type of interest. Moreover, rDGIdb allows querying for specific gene categories, for instance tumor suppressor. This information and these filters may help the user interpret the results provided by rDGIdb. Finally, we would like to point out that the interpretation of mutations with regard to their suitability as targetable mutations is beyond the scope of rDGIdb. All described filters and additional information can only assist the user in the interpretation of a specific mutation or interaction. The applicability of a certain therapy depends on a number of factors: cancer type, treatment history, and many others. We selected an example that is brief enough to be presented as a table in the manuscript. DDR2 seemed reasonable, with drug interactions in five different resources. The diversity of the drug-gene interactions in Table 1 can be explained by the diversity of these resources. For example, DrugBank lists experimental and approved drugs in any disease. In contrast, MyCancerGenomeClinicalTrials and CIViC list drugs in cancer only, which have either been approved by the authorities or are currently investigated through a clinical study. Finally, in the manuscript we mention that the most appropriate resources to be queried might depend on the application. We further explain how to filter for specific resources. To our knowledge, SwissVar does not catalogue drug-gene interactions, but provides information on variants and their disease relations. At the moment, the main focus of the package is to report drug-gene interactions. However, we agree that SwissVar provides useful additional information on the genes queried through rDGIdb. 
Therefore, we consider an extension of the scope in regard to disease associations for a future release of the package."
}
]
},
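The interactionType filtering described in the author response above can be illustrated with a small sketch. The records and field names below are invented for illustration and do not reflect the actual rDGIdb implementation.

```python
# Hypothetical interaction records; the interactionType field mirrors
# the optional filter argument described in the author response, but
# the record layout here is an assumption for illustration only.
interactions = [
    {"gene": "DDR2", "drug": "DASATINIB", "interactionType": "inhibitor"},
    {"gene": "KIT",  "drug": "SCF",       "interactionType": "activator"},
    {"gene": "TP53", "drug": "APR-246",   "interactionType": "activator"},
]

def filter_by_type(records, wanted):
    """Keep only interactions whose type is in the wanted set."""
    return [r for r in records if r["interactionType"] in wanted]

print(filter_by_type(interactions, {"inhibitor"}))
```

The same pattern extends to the other filter levels mentioned in the paper (resource, gene category): each is a predicate applied to the flattened interaction list.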
{
"id": "15658",
"date": "13 Sep 2016",
"name": "Ankush Sharma",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis R Package \"rDGidb\" is of immense usability for genomics and proteomics research community for integrating drug interactions with variations obtained from NGS data and researchers studying complex multi target drug-gene/protein interactions. The research article is written clearly and well organized except for results section which has a room for improvement. The minor concerns are outlined as follows:-\n\nWe recommend authors to demonstrate results shown in Table 1 as a pictorial representation such as drug-gene interaction network to increase readability.\n\nWe suggest inclusion of the information related to Source Trust Level.\n\nIt would be nice to include query option using reference SNP ID number (“rs” ID) or by chromosomal position of genomic aberrations obtained from Next Generation Sequencing pipeline to directly identify drugs associated with these clinically actionable variations.\n\nWe encountered a problem in installation of package “rDGidb”in R (version 3.3.1, release date 2016-06-21) with a warning message i.e. Package ‘rDGIdb’ is not available (for R version3.3.1).\n\nWe recommend authors to make “rDGidb” working and if this warning message is platform dependent, then please provide detailed documentation on software’s or any updates needed in existing packages before installation of package.",
"responses": [
{
"c_id": "2226",
"date": "10 Oct 2016",
"name": "Thomas Thurnherr",
"role": "Author Response",
"response": "We would like to thank Ankush Sharma and Sahidul Islam for their time and effort to review our manuscript. The concerns are addressed below: As suggested, we added a figure to the manuscript that illustrates drug-gene interactions of DDR2. “Source trust level” is a parameter available through the DGIdb API, but not through the DGIdb web interface. The parameter can either be set to “expert curated” or “non-curated”. We did not include it as a parameter in rDGIdb for mainly two reasons: 1) We aim to reflect the web interface as consistently as possible and “Source trust level” is not available for DGIdb website queries; and 2) It is not clear which resources or drug-gene interactions are considered “expert curated” and which are not. Thank you for the suggestion. Other packages implement variant call format (VCF) file import and annotation functionality. We recommend to use those. However, we added a paragraph to the manuscript (section “R session setup”) on that topic. Moreover, in the package vignette, we show how to employ the workflow with a VCF file as input. With this, we now provide a complete annotation workflow, from variants in VCF format to drug-gene interactions. Finally, we would like to point out that DGIdb queries are currently only possible on a gene level, but not on a variant level. Therefore, the association of a mutation in a specific position of the genome with a drug requires manual curation of the results obtained through rDGIdb. Finally, the package is not yet available in the current Bioconductor release branch (version 3.3). This is likely the reason why you encountered an error while installing the package. The release is scheduled for October 2016 (version 3.4). The release will make the package available through the standard installation procedure. Installation instructions for packages in the development branch are provided on the Bioconductor website (https://www.bioconductor.org/developers/)."
}
]
}
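The VCF-to-drug-gene-interaction workflow mentioned in the response above starts by extracting gene symbols from annotated variant records, which can then be passed to the gene-level query. A rough sketch is below; the `GENE=` INFO key is an assumption made for simplicity, as real annotation pipelines encode gene symbols differently (e.g. in ANN or CSQ fields).

```python
# Minimal sketch: pull gene symbols out of annotated VCF records so
# they can be passed to a gene-level drug-gene interaction query.
# The GENE= INFO key is a simplifying assumption for illustration.
vcf_lines = [
    "#CHROM\tPOS\tID\tREF\tALT\tQUAL\tFILTER\tINFO",
    "1\t115256530\t.\tG\tT\t.\tPASS\tGENE=NRAS;DP=88",
    "7\t140453136\t.\tA\tT\t.\tPASS\tGENE=BRAF;DP=120",
]

def genes_from_vcf(lines):
    """Collect gene symbols from the INFO column of non-header lines."""
    genes = []
    for line in lines:
        if line.startswith("#"):
            continue  # skip header lines
        info = line.rstrip("\n").split("\t")[7]
        for field in info.split(";"):
            if field.startswith("GENE="):
                genes.append(field[len("GENE="):])
    return genes

print(genes_from_vcf(vcf_lines))
```

As the authors note, DGIdb queries operate at the gene level, so positional information is dropped at this step and must be re-associated manually when interpreting the returned interactions.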
] | 1
|
https://f1000research.com/articles/5-1963
|
https://f1000research.com/articles/4-89/v1
|
09 Apr 15
|
{
"type": "Research Article",
"title": "Abundance of ADAM9 transcripts increases in the blood in response to tissue damage",
"authors": [
"Darawan Rinchai",
"Chidchamai Kewcharoenwong",
"Bianca Kessler",
"Ganjana Lertmemongkolchai",
"Damien Chaussabel",
"Darawan Rinchai",
"Chidchamai Kewcharoenwong",
"Bianca Kessler",
"Ganjana Lertmemongkolchai"
],
"abstract": "Background: Members of the ADAM (a disintegrin and metalloprotease domain) family have emerged as critical regulators of cell-cell signaling during development and homeostasis. ADAM9 is consistently overexpressed in various human cancers, and has been shown to play an important role in tumorigenesis. However, little is known about the involvement of ADAM9 during immune-mediated processes. Results: Mining of an extensive compendium of transcriptomic datasets led to the discovery of gaps in knowledge for ADAM9 that reveal its role in immunological homeostasis and pathogenesis. The abundance of ADAM9 transcripts in the blood was increased in patients with acute infection but changed very little after in vitro exposure to a wide range of pathogen-associated molecular patterns (PAMPs). Furthermore it was found to increase significantly in subjects as a result of tissue injury or tissue remodeling, in absence of infectious processes. Conclusions: Our findings indicate that ADAM9 may constitute a valuable biomarker for the assessment of tissue damage, especially in clinical situations where other inflammatory markers are confounded by infectious processes.",
"keywords": [
"ADAM9",
"Data mining",
"Transcriptomics",
"RNAseq",
"Microarray"
],
"content": "Introduction\n\n“ADAM metallopeptidase 9 (ADAM9) is a member of the ADAM (a disintegrin and metalloprotease domain) family. Members of this family are membrane-anchored proteins structurally related to snake venom disintegrins, and have been implicated in a variety of biological processes involving cell-cell and cell-matrix interactions, including fertilization, muscle development, and neurogenesis. The protein encoded by this gene interacts with SH3 domain-containing proteins, binds mitotic arrest deficient 2 beta protein, and is also involved in TPA-induced ectodomain shedding of membrane-anchored heparin-binding EGF-like growth factor. Several alternatively spliced transcript variants have been identified for this gene.” (Quoted from RefSeq1).\n\nADAM9 top functions include cellular adhesion, protein cleavage and shedding. (Supplementary Figure 1). Human ADAM9 protein cleaves and releases collagen XVII from the surface of skin keratinocytes2. This activity is enhanced in the presence of reactive oxygen species. Mouse ADAM9 protein cleaves and releases epidermal growth factor (EGF) and fibroblast growth factor receptor 2IIIb (FGFR2IIIb) from the surface of prostate epithelial cells3. Following LPS treatment, ADAM9 protein catalytic domain cleaves Angiotensin-I converting enzyme (ACE) from the surface of endothelial cells4. Human ADAM9 protein disintegrin-cysteine-rich domain binds integrins and thus mediates cell adhesion5. Human ADAM9 protein enhances adhesion and invasion of non-small lung tumors which mediates tumor metastasis6. Mouse ADAM9 protein enhances tissue plasminogen activator (TPA)-mediated cleavage of CUB domain-containing protein 1 (CDCP1)7. This activity mediates lung tumor metastasis. 
Human ADAM9 protein mediates cell-cell contact interaction between stromal fibroblasts and melanoma cells at the tumor-stroma border, thus contributing to proteolytic activities required during invasion of melanoma cells8.\n\nADAM9 expression and regulation. ADAM9 has been reported as being expressed in various cell populations including monocytes9, activated macrophages10, epithelial cells, activated vascular smooth muscle cells, fibroblasts8, keratinocytes and tumor cells. The abundance of ADAM9 RNA measured by RT-PCR is decreased in vitro in human melanoma cells after culture with collagen type I or with Interleukin 1 alpha (IL1α) compared to mock stimulated conditions11.\n\nADAM9 has been implicated in disease processes including cancer, cone rod dystrophy and atherosclerosis. Homozygous mutation of the human ADAM9 gene results in severe cone rod dystrophy and cataract12. Mutation of the mouse ADAM9 gene results in no major abnormalities during development and adult life13. The abundance of ADAM9 RNA and protein measured by immunostaining and RT-PCR is increased in vivo in human prostate tumors compared to normal tissue14. This increase is predictive of Prostate Specific Antigen (PSA) relapse. The abundance of ADAM9 RNA measured by microarray and RT-PCR is increased in vivo in human advanced atherosclerotic plaque macrophages compared to normal tissue15.\n\nIt is known that ADAM9 is upregulated in some tumor cells during pathologic processes and also contributes to the formation of multinucleate giant cells from monocytes and macrophages10. 
However, little is known about the activities of ADAM9 in regulating physiologic or pathologic processes, especially during acute infection or in response to tissue damage.\n\n\nMethods\n\nExisting knowledge pertaining to ADAM9 was retrieved using NCBI's National Library of Medicine's PubMed search engine with a query that included the official gene symbol and name as well as aliases: “ADAM9 OR ADAM-9 OR \"ADAM metallopeptidase domain 9\" OR MCMP OR MDC9 OR CORD9”. As of January 2015, 287 papers were returned when running this query. By reviewing this literature, keywords were identified and classified under six categories corresponding to cell types, diseases, functions, tissues, molecules or processes. Frequencies of these keywords were then determined for the ADAM9 bibliography, as shown in Supplementary Figure 1. This literature screen identified and prioritized existing knowledge about the gene ADAM9; it was used to prepare the background section of this manuscript and provided the necessary perspective for the interpretation of ADAM9 profiles across other large-scale datasets.\n\nWe employed a resource that is described in detail in a separate manuscript (submitted) and is available publicly: https://gxb.benaroyaresearch.org/dm3/landing.gsp. Briefly: we have assembled and curated a collection of 172 datasets that are relevant to human immunology, representing a total of 12,886 unique transcriptome profiles. These sets were selected among studies currently available in NCBI's Gene Expression Omnibus (GEO, http://www.ncbi.nlm.nih.gov/geo/).\n\nThe custom software interface provides the user with a means to easily navigate and filter the compendium of available datasets (https://gxb.benaroyaresearch.org/dm3/geneBrowser/list). 
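The keyword-frequency step of the literature screen described above amounts to tallying category keywords over the retrieved bibliography. A minimal sketch is below; the mini-corpus and category lists are invented stand-ins for the actual ADAM9 bibliography and keyword scheme.

```python
from collections import Counter

# Toy stand-in for the retrieved ADAM9 bibliography; the titles are
# invented for illustration only.
abstracts = [
    "ADAM9 promotes invasion of melanoma cells",
    "ADAM9 sheds EGF from prostate epithelial cells",
    "Mutation of ADAM9 causes cone rod dystrophy",
]

# Keywords grouped into categories, as in the literature screen
# (cell types, diseases, functions, ...); two categories shown here.
categories = {
    "diseases": ["melanoma", "dystrophy"],
    "functions": ["invasion", "sheds"],
}

def keyword_frequencies(texts, cats):
    """Count how many texts mention each (category, keyword) pair."""
    counts = Counter()
    for text in texts:
        lowered = text.lower()
        for cat, words in cats.items():
            for w in words:
                if w in lowered:
                    counts[(cat, w)] += 1
    return counts

print(keyword_frequencies(abstracts, categories))
```

In the actual study, such tallies over the 287 retrieved papers were used to prioritize which aspects of ADAM9 biology to summarize in the background section.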
Datasets of interest can be quickly identified either by filtering on criteria from pre-defined lists on the left or by entering a query term in the search box at the top of the dataset navigation page.\n\nClicking on one of the studies listed in the dataset navigation page opens a viewer designed to provide interactive browsing and graphic representations of large-scale data in an interpretable format. This interface is designed to navigate ranked gene lists and display expression results graphically in a context-rich environment. Selecting a gene from the rank ordered list on the left of the data-viewing interface will display its expression values graphically in the screen’s central panel. Directly above the graphical display drop down menus give users the ability: a) To change how the gene list is ranked; this allows the user to change the method used to rank the genes, or to include only genes that are selected for specific biological interest. b) To change sample grouping (Group Set button); in some datasets, a user can switch between groups based on cell type to groups based on disease type, for example. c) To sort individual samples within a group based on associated categorical or continuous variables (e.g. gender or age). d) To toggle between the histogram view and a box plot view, with expression values represented as a single point for each sample. Samples are split into the same groups whether displayed as a histogram or box plot. e) To provide a color legend for the sample groups. f) To select categorical information that is to be overlaid at the bottom of the graph. For example, the user can display gender or smoking status in this manner. g) To provide a color legend for the categorical information overlaid at the bottom of the graph. h) To download the graph as a jpeg image.\n\nMeasurements have no intrinsic utility in absence of contextual information. It is this contextual information that makes the results of a study or experiment interpretable. 
It is therefore important to capture, integrate and display information that will give users the ability to interpret data and gain new insights from it. We have organized this information under different tabs directly above the graphical display. The tabs can be hidden to make more room for displaying the data plots, or revealed by clicking on the blue “show info panel” button on the top right corner of the display. Information about the gene selected from the list on the left side of the display is available under the “Gene” tab. Information about the study is available under the “Study” tab. Information available about individual samples is provided under the “Sample” tab. Rolling the mouse cursor over a histogram bar while displaying the “Sample” tab lists any clinical, demographic, or laboratory information available for the selected sample. Finally, the “Downloads” tab allows advanced users to retrieve the original dataset for analysis outside this tool. It also provides all available sample annotation data for use alongside the expression data in third party analysis software.\n\nAll statistical analyses were performed using GraphPad Prism software version 6 (GraphPad Software, San Diego, CA).\n\n\nResults and discussion\n\nThe seminal discovery was made while examining RNAseq transcriptional profiles. A knowledge gap was exposed when those results were interpreted in light of existing knowledge reported in the literature. Next, the initial observation was validated and further extended by examining profiles of the gene of interest, ADAM9, across a large number of independent publicly available transcriptome datasets. The completion of these tasks was aided by a custom data browsing application loaded with a curated compendium of 172 datasets relevant to human immunology sourced from the National Center for Biotechnology Information's (NCBI) Gene Expression Omnibus (GEO) (https://gxb.benaroyaresearch.org/dm3/landing.gsp, manuscript submitted). 
Briefly, ADAM9 transcript was identified as a potential early stage discovery while browsing RNA-sequencing profiles of blood leukocyte populations (https://gxb.benaroyaresearch.org/dm3/geneBrowser/show/396), with the genes being ranked in alphabetical order. In this particular dataset, whole blood samples from healthy donors, patients during acute infections (meningococcal sepsis, E. coli sepsis, C. difficile colitis), multiple sclerosis patients pre- and post-interferon treatment, patients with Type 1 diabetes and patients with amyotrophic lateral sclerosis (ALS) were obtained, and monocytes, neutrophils, CD4 T cells, CD8 T cells, B cells and NK cells were isolated prior to profiling via RNA sequencing16. The abundance of ADAM9 RNA measured by RNA-seq in human blood neutrophil and monocyte samples from subjects with sepsis was found to be markedly increased as compared to uninfected controls (Figure 1; [iFigure/GSE60424]16). By comparison, levels of abundance of ADAM9 RNA in lymphocytes and Natural Killer (NK) cells were low and no changes were observed in subjects with sepsis in these cell populations. Despite the small number of septic subjects included in the study (N=3), the robust increase in abundance that was observed prompted attempts to validate and further extend this initial observation in independent public datasets that were part of the compendium.\n\nThe graph presents the abundance of ADAM9 RNA measured by RNA-seq in this dataset: whole blood samples from healthy donors, patients during acute infections, multiple sclerosis patients pre- and post-interferon treatment, patients with Type 1 diabetes and patients with ALS were obtained, and monocytes (dark green), neutrophils (purple), CD4 T cells (blue), CD8 T cells (yellow), B cells (brown) and NK cells (maroon) were isolated prior to profiling via RNA sequencing. 
Samples are grouped by disease; thus each cluster of bars includes all cell types (as indicated by color-coded squares underneath the bars).\n\nOur data browsing tool allows the assessment of expression profiles across transcriptome datasets (https://gxb.benaroyaresearch.org/dm3/geneBrowser/list). In order to validate and extend our original observation, we looked up ADAM9 transcriptome profiles across all 172 available datasets (https://gxb.benaroyaresearch.org/dm3/geneBrowser/crossProject?probeID=ENSG00000168615&geneSymbol=ADAM9&geneID=8754studies).\n\nThe abundance of ADAM9 RNA measured by microarrays in human blood samples was significantly increased as compared to uninfected controls in subjects with sepsis [iFigure/GSE28750]17 & [iFigure/GSE29536]18, in subjects with bacterial and influenza pneumonia [iFigure/GSE34205]19, [iFigure/GSE40012]20, in subjects with respiratory syncytial virus (RSV) infection [iFigure/GSE34205]19 & [iFigure/GSE17156]19 and in subjects with tuberculosis [iFigure/GSE19439]21 & [iFigure/GSE34608]22. Aggregated findings were reported in the form of flow charts that were generated using Google Docs presentations, with links to the source interactive graphs systematically provided as hyperlinks (Figure 2, Supplementary Figure 2 and Table 1). Altogether, these data indicate that an increase in abundance of ADAM9 can be detected in blood leukocytes, including monocyte and neutrophil fractions, during bacterial and viral infection.\n\nAggregated results obtained via the screening of a large compendium of datasets are represented graphically. The flow chart indicates how data were generated. Diamonds indicate supporting data and in the interactive version are hyperlinked to context-rich interactive plots. 
Links to these plots are also provided below:\n\n❶ GSE34205: In this study gene expression profiles were obtained from the whole blood of critically ill pediatric patients19. Children hospitalized with acute RSV and influenza virus infection were offered study enrollment after microbiologic confirmation of the diagnosis. Blood samples were collected within 42–72 hours of hospitalization. Median age of subjects was 2.4 months (range 1.5–8.6). Uninfected subjects of similar demographics were recruited in the study and served as controls. Children with suspected or proven polymicrobial infections, with underlying chronic medical conditions (e.g. congenital heart disease, renal insufficiency), with immunodeficiency, or those who received systemic steroids or other immunomodulatory therapies were excluded. More details are available via the interactive data browsing application under the “study” tab.\n\nhttps://gxb.benaroyaresearch.org/dm3/miniURL/view/Ka\n\n❷ GSE19439: Whole blood was collected from patients with different spectra of tuberculosis (TB) disease and healthy controls21. All patients were sampled prior to the initiation of any anti-mycobacterial therapy. Active Pulmonary TB: all patients confirmed by isolation of Mycobacterium tuberculosis on culture of sputum or bronchoalveolar lavage fluid. Latent TB: All patients were positive by tuberculin skin test (>14mm if BCG vaccinated, >5mm if not vaccinated) and were also positive by Interferon-Gamma Release assay (IGRA).\n\nhttps://gxb.benaroyaresearch.org/dm3/miniURL/view/Kb\n\n❸ GSE29536: Whole blood was collected from culture positive patients meeting criteria for sepsis enrolled in two independent cohorts (Sepsis 1 and Sepsis 2)18. Uninfected controls recruited in this study were of similar demographics.\n\nhttps://gxb.benaroyaresearch.org/dm3/miniURL/view/Jl\n\n❹ GSE60424: Whole blood sample of healthy donors, patients during acute infections (meningococcal sepsis, E. coli sepsis, C. 
difficile colitis), multiple sclerosis patients pre- and post-interferon treatment, patients with Type 1 diabetes and patients with ALS were obtained, and monocytes, neutrophils, CD4 T cells, CD8 T cells, B cells and NK cells were isolated prior to profiling via RNA sequencing17.\n\nhttps://gxb.benaroyaresearch.org/dm3/miniURL/view/Kc\n\nStatistical significance was determined using Mann-Whitney U test. ns, not significant, * p < 0.05, *** p < 0.001 and **** p < 0.0001. The horizontal lines indicate mean ± standard errors (SE).\n\nNote: Avg = average abundance of ADAM9 within a given group. Statistical significance was determined using Mann-Whitney U test.\n\nNext, we investigated the regulation of ADAM9 transcription following leukocyte exposure to pathogens and pathogen-associated molecules. The abundance of ADAM9 RNA measured by microarrays in human blood cultures treated with Heat Killed E. coli, Heat Killed Staphylococcus aureus (HKSA) or Heat Killed Legionella pneumophila (HKLP) for 6 hours was increased marginally as compared to unstimulated conditions [iFigure/GSE30101]23. The abundance of ADAM9 RNA measured by microarrays in human blood cultures treated with Heat Killed Acholeplasma laidlawii (HKAL), E. coli LPS (E-LPS), Flagellin, PAM3, R837, Zymosan, Influenza virus, RSV, CpG, Poly:IC for 6 hours was not changed as compared to unstimulated conditions (Ex-vivo) [iFigure/GSE30101]23; IL8 [iFigure] and CXCL10 [iFigure] serve as positive controls. The abundance of ADAM9 RNA measured by microarrays in human blood samples from subjects treated with poly:IC for 1 day was marginally increased as compared to baseline samples [iFigure/GSE32862]24; CXCL10 [iFigure] serves as a positive control (Figure 3 and Supplementary Figure 3). Statistical analysis results are shown in Table 2. 
Taken together, these results showed that the abundance of ADAM9 was not changed, or changed only marginally, after stimulation with purified molecules bearing Pathogen Associated Molecular Patterns (PAMPs). These findings raised the question as to whether ADAM9 transcription might be activated instead by host-derived Damage-Associated Molecular Pattern molecules (DAMPs)25,26.\n\nAggregated results obtained via the screening of a large compendium of datasets are represented graphically. The flow chart indicates how data were generated. Diamonds indicate supporting data and in the interactive version are hyperlinked to context-rich interactive plots. Links to these plots are also provided below:\n\n❶ GSE32862 Blood was collected at multiple time points from 8 healthy volunteers following subcutaneous administration of synthetic dsRNA (poly:IC)24.\n\nhttps://gxb.benaroyaresearch.org/dm3/miniURL/view/Kd\n\n❷ GSE30101 Blood was collected from four healthy individuals and stimulated in vitro for 6 hours with a wide range of immune stimuli including PAM3, Zymosan, Poly IC, E-LPS, Flagellin, R837, CpG Type A, heat-killed Legionella pneumophila (HKLP), heat-killed Acholeplasma laidlawii (HKAL), and heat-killed Staphylococcus aureus (HKSA); IL-18, TNF-α, IFN-α2b, IFN-β, IFN-γ; heat-killed Escherichia coli, live influenza A virus and live RSV23.\n\nhttp://www.interactivefigures.com:80/dm3/miniURL/view/KB\n\n❸ See description above.\n\nhttps://gxb.benaroyaresearch.org/dm3/miniURL/view/Jr\n\n❹ See description above.\n\nhttp://www.interactivefigures.com:80/dm3/miniURL/view/Jw\n\nStatistical significance was determined using one-way ANOVA and Dunnett’s multiple comparisons test. ns, not significant, * p < 0.05, ** p < 0.01, and *** p < 0.001. The horizontal lines indicate mean ± standard errors (SE).\n\nNote: Avg = average abundance of ADAM9 within a given group. 
Statistical significance was determined using Mann-Whitney U test.\n\nIn addition, our dataset screen revealed that changes in abundance of ADAM9 could be associated with tissue remodeling. The abundance of ADAM9 RNA measured by microarrays in human skin biopsy samples of subjects with lepromatous leprosy was significantly increased as compared to that in control subjects with tuberculoid leprosy [iFigure/GSE17763]27. The abundance of ADAM9 RNA measured by microarrays in human blood samples from pregnant subjects was significantly increased as compared to controls [iFigure/GSE17449]28. The abundance of ADAM9 RNA measured by microarrays in human blood monocyte samples from subjects with filariasis was significantly increased as compared to uninfected controls [iFigure/GSE2135]29. These results are shown in Table 3, Figure 4 and Supplementary Figure 4. A common thread between these different states is that they all involve extensive tissue remodeling, whether of the skin (leprosy), placental tissue (pregnancy) or lymphatic tissues (filariasis).\n\nAggregated results obtained via the screening of a large compendium of datasets are represented graphically. The flow chart indicates how data were generated. Diamonds indicate supporting data and in the interactive version are hyperlinked to context-rich interactive plots. Links to these plots are also provided below:\n\n❶ GSE17763 Skin biopsies were obtained from patients with leprosy classified as tuberculoid leprosy (controlled disease, few skin lesions) or lepromatous leprosy (uncontrolled disease, widespread lesions)27. 
All tuberculoid and lepromatous specimens were taken at the time of diagnosis before treatment, and reversal reaction biopsies (labeled as “reaction”) were taken upon follow-up from patients originally diagnosed with borderline lepromatous leprosy.\n\nhttps://gxb.benaroyaresearch.org/dm3/miniURL/view/Ke\n\n❷ GSE17449 Peripheral Blood Mononuclear Cells were isolated from the blood of 12 women (7 MS patients and 5 healthy controls) followed during their pregnancy28. Samples were obtained before pregnancy and at 9 months.\n\nhttps://gxb.benaroyaresearch.org/dm3/miniURL/view/KD\n\n❸ GSE2135 Monocytes were isolated from the peripheral blood of patently infected filaria patients (either Wuchereria bancrofti, Mansonella perstans, or both), and from uninfected blood bank donors in Mali29. Samples were collected from infected patients prior to and after antifilarial treatment.\n\nhttps://gxb.benaroyaresearch.org/dm3/miniURL/view/KB\n\nStatistical significance was determined using Mann-Whitney U test. ns, not significant, * p < 0.05 and *** p < 0.001. The horizontal lines indicate mean ± standard errors (SE).\n\nNote: Avg = average abundance of ADAM9 within a given group. Statistical significance was determined using Mann-Whitney U test. * (Paired samples) Statistical significance was determined using Wilcoxon test.\n\nChanges in ADAM9 transcript abundance were observed in additional datasets: the abundance of ADAM9 RNA measured by microarrays in human blood samples was significantly increased as compared to healthy controls in subjects with sarcoidosis [iFigure/GSE34608]22, in subjects after severe blunt trauma [iFigure/GSE11375]30, in subjects with chronic kidney disease [iFigure/GSE15072]31, and in subjects who had undergone elective thoracic or abdominal surgery [iFigure/GSE28750]17. 
The abundance of ADAM9 RNA measured by microarrays in human blood samples from subjects treated with localized external beam radiation therapy for 42 days was significantly increased as compared to baseline samples [iFigure/GSE30174]32. The abundance of ADAM9 RNA measured by microarrays in human blood monocyte samples from obese subjects was significantly increased as compared to lean controls [iFigure/GSE32575]33. Finally, the abundance of ADAM9 RNA measured by microarrays in human blood monocyte samples from subjects after severe trauma was significantly increased as compared to healthy controls [iFigure/GSE5580]34. These results showed that an increase in ADAM9 transcript abundance was associated with tissue injury and sterile inflammation (Table 4, Figure 5 and Supplementary Figure 5) and are thus consistent with the observations reported above associating increases in ADAM9 RNA with responses to Damage-Associated Molecular Pattern molecules (DAMPs) and tissue remodeling.\n\nAggregated results obtained via the screening of a large compendium of datasets are represented graphically (https://docs.google.com/presentation/d/12ytv11_LmMOAsocziIAe8MwwKOrGgHSO60hpdK2hHsQ/edit#slide=id.g496fd210c_046). The flow chart indicates how data were generated. Diamonds indicate supporting data and in the interactive version are hyperlinked to context-rich interactive plots. 
Links to these plots are also provided below:\n\n❶ GSE34608 Blood was collected from patients with active tuberculosis and sarcoidosis as well as uninfected controls22.\n\nhttps://gxb.benaroyaresearch.org/dm3/miniURL/view/Jt\n\n❷ GSE11375 Blood was collected from patients following severe blunt trauma within 12 h of traumatic injury30.\n\nhttps://gxb.benaroyaresearch.org/dm3/miniURL/view/K8\n\n❸ GSE15072 Peripheral Blood Mononuclear Cells were isolated from the blood of patients with stage II-III chronic kidney disease (CKD), patients undergoing hemodialysis treatment (HD) and healthy controls31.\n\nhttps://gxb.benaroyaresearch.org/dm3/miniURL/view/KE\n\n❹ GSE28750 Blood was collected from sepsis patients with clinical evidence of systemic infection based on microbiology diagnoses (n=27). Participants in the post-surgical (PS) group were recruited pre-operatively and blood samples collected within 24 hours following surgery (n=38). Healthy controls (HC) included hospital staff with no known concurrent illnesses (n=20)17.\n\nhttps://gxb.benaroyaresearch.org/dm3/miniURL/view/K6\n\n❺ GSE30174 Blood samples were collected from ten subjects at 7 timepoints for microarray analysis: baseline (before External Beam Radiation Therapy - EBRT); days 1, 7, 14, 21, 42 of EBRT; and 30 days post-EBRT. Baseline data obtained from subjects were compared to data obtained from age-, race-, and gender-matched healthy controls32.\n\nhttps://gxb.benaroyaresearch.org/dm3/miniURL/view/K4\n\n❻ GSE32575 CD14+ monocytes were isolated from the blood of 18 morbidly obese subjects (BMI: 45.1±1.4 kg/m2) before and three months after bariatric surgery. Six lean age-matched females (BMI: 20.3±0.5 kg/m2, mean±SEM) were used as controls33.\n\nhttps://gxb.benaroyaresearch.org/dm3/miniURL/view/K5\n\n❼ GSE5580 Monocytes were isolated from the peripheral venous blood of seven subjects with defined multi-organ dysfunction syndrome that developed after experiencing severe traumatic injury. 
Blood was also obtained from seven age-, sex-, and ethnicity-matched healthy subjects34.\n\nhttps://gxb.benaroyaresearch.org/dm3/miniURL/view/KC\n\nStatistical significance was determined using Mann-Whitney U test or one-way ANOVA and Dunnett’s multiple comparisons test (GSE30174). ns, not significant, * p < 0.05, ** p < 0.01, and *** p < 0.001. The horizontal lines indicate mean ± standard errors (SE).\n\nNote: Avg = average abundance of ADAM9 within a given group. Statistical significance was determined using Mann-Whitney U test. ** This dataset was tested by one-way ANOVA and Dunnett’s multiple comparisons test, P value summary = 0.0042.\n\n\nConclusions\n\nThis study is the first report describing the modulation of levels of ADAM9 transcripts in human whole blood and showing restriction of its expression to neutrophils and monocytes. In addition, we observed that the abundance of ADAM9 was increased during acute infection but did not change after stimulation with pathogen-derived molecules. It was not changed in vivo following administration of synthetic double-stranded RNA (poly:IC), a treatment that mimics viral exposure. Notably, it was not increased either in patients during the early acute phase of HIV infection, when an intense immunological response is detected in the absence of clinical symptoms [iFigure/GSE29536]18. However, ADAM9 transcript abundance was increased in the blood of patients as a result of tissue damage, sterile inflammation and tissue remodeling. Therefore, in addition to its widely reported role in the pathogenesis of cancer, the constellation of findings that we are reporting points towards the involvement of ADAM9 in immune-mediated processes and suggests that ADAM9 may constitute a valuable marker for assessing tissue damage, whether it occurs as a result of acute infection, traumatic injury or medical procedures such as surgery or radiation therapy. 
Indeed, these findings may be of especially high significance in the context of acute infections since, unlike “generic” markers of inflammation that could also be used to assess tissue injury in other settings, ADAM9 would not be confounded by the host responses to the pathogen and may therefore accurately reflect damage to the patient’s tissues or organs (Figure 6). Thus ADAM9 blood transcript levels, or possibly levels of circulating proteins, could potentially be employed for triage of patients presenting with symptoms of infection in the emergency room or for monitoring of patients in intensive care units.\n\nA. Sterile inflammation resulting from tissue injury caused, for instance, by severe trauma, surgery or radiation therapy can be monitored via the use of prototypical markers of inflammation (acute phase proteins), with ADAM9 levels increasing in concert. B. Acute infection also causes a measurable inflammatory response that is the direct result of the antimicrobial response mounted by the immune system. This response can develop in the absence of substantial tissue injury and thus does not cause an increase in abundance of ADAM9. C. When substantial tissue injury occurs as a result of the infection, the abundance of ADAM9 rises, the detection of which enables the identification and triage of critically ill subjects.\n\n\nData availability\n\nAll primary data presented in this manuscript can be accessed along with contextual information via the data browsing application described above and are also available in NCBI’s GEO public repository. GEO accession numbers (starting with GSE) are provided where appropriate throughout this manuscript along with the primary reference associated with the GEO record.\n\nF1000Research: Dataset 1. Raw data of ADAM9 transcripts in blood in response to tissue damage, 10.5256/f1000research.6241.d4506135",
"appendix": "Author contributions\n\n\n\nDR and DC designed the analytic approach, mined the data, prepared figures and drafted the manuscript. CK, BK, GL participated in the mining of the dataset compendium. All authors read and approved the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nWe would like to thank Dr Laurent Chiche for constructive comments.\n\n\nSupplementary Figures\n\nThis bubble graph indicates the frequency of abstracts of papers published on ADAM9 containing curated terms (Y-axis: Abstract Term) as of January 2015. Terms are ordered alphabetically along the X-axis. The colors indicate different categories (Blue: Cell type, Green: Disease, Red: Tissue, Light blue: Cellular component, Yellow: Function). The size of the bubbles is proportional to the number of abstracts containing the term. (Interactive version: Additional file iFIGURE 1 - X-axis drop down menu, select “Order: Alphabetical”; Y-axis drop down menu, select Log Scale; Click “Play” (bottom left corner)).\n\nThese plots show data supporting the notion that the abundance of ADAM9 increases during infection as presented in Figure 2 of the manuscript, and are accessible online in an interactive format:\n\n❶ https://gxb.benaroyaresearch.org/dm3/miniURL/view/Ka\n\n❷ https://gxb.benaroyaresearch.org/dm3/miniURL/view/Kb\n\n❸ https://gxb.benaroyaresearch.org/dm3/miniURL/view/Jl\n\n❹ https://gxb.benaroyaresearch.org/dm3/miniURL/view/Kc\n\nThese plots show data supporting the notion that the abundance of ADAM9 increases only marginally following treatment with pathogen-associated molecules as presented in Figure 3 of the manuscript, and are accessible online in an interactive format:\n\n❶ https://gxb.benaroyaresearch.org/dm3/miniURL/view/Kd\n\n❷ http://www.interactivefigures.com:80/dm3/miniURL/view/KB\n\n❸ 
https://gxb.benaroyaresearch.org/dm3/miniURL/view/Jr\n\n❹ http://www.interactivefigures.com:80/dm3/miniURL/view/Jw\n\nThese plots show data supporting the notion that the abundance of ADAM9 increases during tissue remodeling as presented in Figure 4 of the manuscript, and are accessible online in an interactive format:\n\n❶ https://gxb.benaroyaresearch.org/dm3/miniURL/view/Ke\n\n❷ https://gxb.benaroyaresearch.org/dm3/miniURL/view/KD\n\n❸ https://gxb.benaroyaresearch.org/dm3/miniURL/view/KB\n\nThese plots show data supporting the notion that the abundance of ADAM9 increases following tissue injury and sterile inflammation and are accessible online in an interactive format:\n\n❶ https://gxb.benaroyaresearch.org/dm3/miniURL/view/Jt\n\n❷ https://gxb.benaroyaresearch.org/dm3/miniURL/view/K8\n\n❸ https://gxb.benaroyaresearch.org/dm3/miniURL/view/KE\n\n❹ https://gxb.benaroyaresearch.org/dm3/miniURL/view/K6\n\n❺ https://gxb.benaroyaresearch.org/dm3/miniURL/view/K4\n\n❻ https://gxb.benaroyaresearch.org/dm3/miniURL/view/K5\n\n❼ https://gxb.benaroyaresearch.org/dm3/miniURL/view/KC\n\n\nReferences\n\nPruitt KD, Brown GR, Hiatt SM, et al.: RefSeq: an update on mammalian reference sequences. Nucleic Acids Res. 2014; 42(Database issue): D756–763. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFranzke CW, Bruckner-Tuderman L, Blobel CP: Shedding of collagen XVII/BP180 in skin depends on both ADAM10 and ADAM9. J Biol Chem. 2009; 284(35): 23386–23396. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPeduto L, Reuter VE, Shaffer DR, et al.: Critical function for ADAM9 in mouse prostate cancer. Cancer Res. 2005; 65(20): 9312–9319. PubMed Abstract | Publisher Full Text\n\nEnglish WR, Corvol P, Murphy G: LPS activates ADAM9 dependent shedding of ACE from endothelial cells. Biochem Biophys Res Commun. 2012; 421(1): 70–75. 
PubMed Abstract | Publisher Full Text\n\nZigrino P, Steiger J, Fox JW, et al.: Role of ADAM-9 disintegrin-cysteine-rich domains in human keratinocyte migration. J Biol Chem. 2007; 282(42): 30785–30793. PubMed Abstract | Publisher Full Text\n\nShintani Y, Higashiyama S, Ohta M, et al.: Overexpression of ADAM9 in non-small cell lung cancer correlates with brain metastasis. Cancer Res. 2004; 64(12): 4190–4196. PubMed Abstract | Publisher Full Text\n\nLin CY, Chen HJ, Huang CC, et al.: ADAM9 promotes lung cancer metastases to brain by a plasminogen activator-based pathway. Cancer Res. 2014; 74(18): 5229–5243. PubMed Abstract | Publisher Full Text\n\nZigrino P, Nischt R, Mauch C: The disintegrin-like and cysteine-rich domains of ADAM-9 mediate interactions between melanoma cells and fibroblasts. J Biol Chem. 2011; 286(8): 6801–6807. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNamba K, Nishio M, Mori K, et al.: Involvement of ADAM9 in multinucleated giant cell formation of blood monocytes. Cell Immunol. 2001; 213(2): 104–113. PubMed Abstract | Publisher Full Text\n\nIzumi Y, Hirata M, Hasuwa H, et al.: A metalloprotease-disintegrin, MDC9/meltrin-gamma/ADAM9 and PKCdelta are involved in TPA-induced ectodomain shedding of membrane-anchored heparin-binding EGF-like growth factor. EMBO J. 1998; 17(24): 7260–7272. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZigrino P, Mauch C, Fox JW, et al.: Adam-9 expression and regulation in human skin melanoma and melanoma cell lines. Int J Cancer. 2005; 116(6): 853–859. PubMed Abstract | Publisher Full Text\n\nEl-Haig WM, Jakobsson C, Favez T, et al.: Novel ADAM9 homozygous mutation in a consanguineous Egyptian family with severe cone-rod dystrophy and cataract. Br J Ophthalmol. 2014; 98(12): 1718–1723. PubMed Abstract | Publisher Full Text\n\nWeskamp G, Cai H, Brodie TA, et al.: Mice lacking the metalloprotease-disintegrin MDC9 (ADAM9) have no evident major abnormalities during development or adult life. 
Mol Cell Biol. 2002; 22(5): 1537–1544. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFritzsche FR, Jung M, Tolle A, et al.: ADAM9 expression is a significant and independent prognostic marker of PSA relapse in prostate cancer. Eur Urol. 2008; 54(5): 1097–1106. PubMed Abstract | Publisher Full Text\n\nOksala N, Levula M, Airla N, et al.: ADAM-9, ADAM-15, and ADAM-17 are upregulated in macrophages in advanced human atherosclerotic plaques in aorta and carotid and femoral arteries--Tampere vascular study. Ann Med. 2009; 41(4): 279–290. PubMed Abstract | Publisher Full Text\n\nLinsley PS, Speake C, Whalen E, et al.: Copy number loss of the interferon gene cluster in melanomas is linked to reduced T cell infiltrate and poor patient prognosis. PLoS One. 2014; 9(10): e109760. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSutherland A, Thomas M, Brandon RA, et al.: Development and validation of a novel molecular biomarker diagnostic test for the early detection of sepsis. Crit Care. 2011; 15(3): R149. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPankla R, Buddhisa S, Berry M, et al.: Genomic transcriptional profiling identifies a candidate blood biomarker signature for the diagnosis of septicemic melioidosis. Genome Biol. 2009; 10(11): R127. PubMed Abstract | Publisher Full Text | Free Full Text\n\nIoannidis I, McNally B, Willette M, et al.: Plasticity and virus specificity of the airway epithelial cell immune response during respiratory virus infection. J Virol. 2012; 86(10): 5422–5436. PubMed Abstract | Publisher Full Text | Free Full Text\n\nParnell GP, McLean AS, Booth DR, et al.: A distinct influenza infection signature in the blood transcriptome of patients with severe community-acquired pneumonia. Crit Care. 2012; 16(4): R157. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBerry MP, Graham CM, McNab FW, et al.: An interferon-inducible neutrophil-driven blood transcriptional signature in human tuberculosis. Nature. 
2010; 466(7309): 973–977. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMaertzdorf J, Weiner J, Mollenkopf HJ, et al.: Common patterns and disease-related signatures in tuberculosis and sarcoidosis. Proc Natl Acad Sci U S A. 2012; 109(20): 7853–7858. PubMed Abstract | Publisher Full Text | Free Full Text\n\nObermoser G, Presnell S, Domico K, et al.: Systems scale interactive exploration reveals quantitative and qualitative differences in response to influenza and pneumococcal vaccines. Immunity. 2013; 38(4): 831–844. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCaskey M, Lefebvre F, Filali-Mouhim A, et al.: Synthetic double-stranded RNA induces innate immune responses similar to a live viral vaccine in humans. J Exp Med. 2011; 208(12): 2357–2366. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFeldman N, Rotter-Maskowitz A, Okun E: DAMPs as mediators of sterile inflammation in aging-related pathologies. Ageing Res Rev. 2015; pii: S1568-1637(15)00005-7. PubMed Abstract | Publisher Full Text\n\nKang R, Lotze MT, Zeh HJ, et al.: Cell death and DAMPs in acute pancreatitis. Mol Med. 2014; 20(1): 466–477. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMontoya D, Cruz D, Teles RM, et al.: Divergence of macrophage phagocytic and antimicrobial programs in leprosy. Cell Host Microbe. 2009; 6(4): 343–353. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGilli F, Lindberg RL, Valentino P, et al.: Learning from nature: pregnancy changes the expression of inflammation-related genes in patients with multiple sclerosis. PLoS One. 2010; 5(1): e8962. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSemnani RT, Keiser PB, Coulibaly YI, et al.: Filaria-induced monocyte dysfunction and its reversal following treatment. Infect Immun. 2006; 74(8): 4409–4417. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWarren HS, Elson CM, Hayden DL, et al.: A genomic score prognostic of outcome in trauma patients. Mol Med. 
2009; 15(7–8): 220–227. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGranata S, Zaza G, Simone S, et al.: Mitochondrial dysregulation and oxidative stress in patients with chronic kidney disease. BMC Genomics. 2009; 10: 388. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSaligan LN, Hsiao CP, Wang D, et al.: Upregulation of α-synuclein during localized radiation therapy signals the association of cancer-related fatigue with the activation of inflammatory and neuroprotective pathways. Brain Behav Immun. 2013; 27(1): 63–70. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHulsmans M, Geeraert B, De Keyzer D, et al.: Interleukin-1 receptor-associated kinase-3 is a key inhibitor of inflammation in obesity and metabolic syndrome. PLoS One. 2012; 7(1): e30414. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLaudanski K, Miller-Graziano C, Xiao W, et al.: Cell-specific expression and pathway analyses reveal alterations in trauma-related human T cell and monocyte pathways. Proc Natl Acad Sci U S A. 2006; 103(42): 15564–15569. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRinchai D, Kewcharoenwong C, Kessler B, et al.: Dataset 1 in: Abundance of ADAM9 transcripts increases in the blood in response to tissue damage. F1000Research. 2015. Data Source"
}
|
[
{
"id": "8789",
"date": "16 Jul 2015",
"name": "Adaikalavan Ramasamy",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nRinchai et al. suggest a novel role for ADAM9 by mining existing datasets. This clever re-use of existing datasets is a demonstration of how scientists can test new hypotheses quickly, inexpensively and with more robustness. They also provide a web tool based on 172 curated datasets (https://gxb.benaroyaresearch.org/dm3/geneBrowser) which makes it a practical resource. All sections of the article are extremely well written and I strongly recommend the article be indexed subject to the following comments. Introduction: The introduction starts with the Refseq definition of ADAM9 and a thorough review of existing literature on gene function of ADAM9. It left me wondering what motivated them to study ADAM9 until the first section of Results (Knowledge gap assessment). It would be useful to the reader if a brief sentence or two on the motivation to study this gene was at the beginning of the Introduction section. Figure 1: I find it very difficult to color match the Cell type on the x-axis of Figure 1, especially when it appears legend colors are sorted differently. A plot with seven smaller panels (one for each cell type) or even just 2 panels (neutrophils and monocytes) might be clearer. Can you add GSE60424 to the title of Figure 1? General comment on Figures 2 - 5 and Tables 1 - 4: a) There is an inconsistency in the number of datasets stated in text and demonstrated in the figure. E.g. 
the text for Figure 2 talks about seven datasets but the figure only shows three, and Table 1 also talks about three datasets but includes SOJIA vs Control and HIV vs Control from GSE29536. b) I find the process diagrams (top half of figures) distracting and redundant with text and legend. This space could be used to incorporate the other studies. I suggest incorporating the cell type and measurement type after the study names on plot (e.g. GSE34205 \\n microarray on whole blood; GSE29536 \\n RNA-seq on neutrophils). Legend is well described. c) The column for \"Avg A - Avg B\" is meaningless especially when comparing different platforms. The fold change (Avg A / Avg B) is more meaningful and would be worth stating to two decimal points. d) If possible, combine Tables 1 - 4 into one page, possibly a large table with subheadings for during infection, after treatment with PAMPs, during tissue remodelling etc... Forest plot: An alternative/additional suggestion to 3d is to present the data visually in the form of a forest plot with subheadings (e.g. https://www.nichd.nih.gov/cochrane_data/mcguirew_13/fig2019799225306621155.png; a test of heterogeneity might not be necessary). This allows the readers to visually scan all of the results in one page. There are many ways of doing this, but I suggest calculating the Glass effect size (see R code below) followed by the forestplot function from R package rmeta or forestplot. 
The authors might also find such a plot on their webtool useful in the long run but this is beyond the scope of the current paper. effectSize <- function(baseline, posttest){ ## Source http://www.meta-analysis.com/downloads/Meta-analysis%20Effect%20sizes%20based%20on%20means.pdf stopifnot( length(posttest)==length(baseline) ) ## assume the data is in same patient order w <- which( !is.na(posttest) & !is.na(baseline) ) posttest <- posttest[w] baseline <- baseline[w] r <- cor( posttest, baseline ) diff <- posttest - baseline n <- length(diff) S.diff <- sqrt( var(baseline) + var(posttest) - 2*cov(baseline, posttest) ) # note: var(x - y) = var(x) + var(y) - 2cov(x, y) S.within <- S.diff/sqrt(2*(1-r)) # same as S.pooled ## Cohen's d ## d <- mean(diff)/S.within var.d <- ( 1/n + (d^2)/(2*n) ) * 2 * (1-r) ## Hedge's g estimate ## cf <- 1 - 3/(4*n -5) g <- cf*d var.g <- cf^2 * var.d se.g <- sqrt(var.g) rm(posttest, baseline, w, r, diff, n, S.diff, S.within, d, var.d, cf, var.g) return( c( g=g, se.g=se.g, LCL=( g - 1.96*se.g), UCL=( g + 1.96*se.g) ) ) } I am unclear what Supplementary Figures 2, 3, 4 and 5 add to the paper. There is emerging evidence that the monocyte to lymphocyte ratio has relevance to susceptibility to infectious diseases (e.g. Wang et al., 2015; Naranbhai et al., 2014; Warimwe et al., 2013). Could you speculate/demonstrate how you could potentially use your resource to test this hypothesis? Perhaps using cell deconvolution methods on whole blood?",
"responses": [
{
"c_id": "2233",
"date": "11 Oct 2016",
"name": "Damien Chaussabel",
"role": "Author Response",
"response": "We would like to thank the reviewer for the kind comments and suggestions to improve our manuscript. Rinchai et al. suggest a novel role for ADAM9 by mining existing datasets. This clever re-use of existing datasets is a demonstration of how scientists can test new hypotheses quickly, inexpensively and with more robustness. They also provide a web tool based on 172 curated datasets (https://gxb.benaroyaresearch.org/dm3/geneBrowser) which makes it a practical resource. All sections of the article are extremely well written and I strongly recommend the article be indexed subject to the following comments. 1. Introduction: The introduction starts with the Refseq definition of ADAM9 and a thorough review of existing literature on gene function of ADAM9. It left me wondering what motivated them to study ADAM9 until the first section of Results (Knowledge gap assessment). It would be useful to the reader if a brief sentence or two on the motivation to study this gene was at the beginning of the Introduction section. Authors: Thank you for raising this point, we agree that it would be better to start with such a description. So we have now added a paragraph explaining the “Collective data to knowledge” approach as the first paragraph of the introduction section. 2. Figure 1: I find it very difficult to color match the Cell type on the x-axis of Figure 1, especially when it appears legend colors are sorted differently. A plot with seven smaller panels (one for each cell type) or even just 2 panels (neutrophils and monocytes) might be clearer. Can you add GSE60424 to the title of Figure 1? Authors: We agree with this suggestion. We initially wanted to use the plot as it would appear to the reader when accessing the GXB via the link provided, but it is rather difficult to interpret without the interactive features built into the software tool (overlay of sample information, sample sorting, pop-ups etc…). 
Also, per your and another reviewer’s suggestion, we changed the plot of Figure 1 to show only neutrophil and monocyte data. 3. General comment on Figures 2 - 5 and Tables 1 - 4: a) There is an inconsistency in the number of datasets stated in text and demonstrated in the figure. E.g. the text for Figure 2 talks about seven datasets but the figure only shows three, and Table 1 also talks about three datasets but includes SOJIA vs Control and HIV vs Control from GSE29536. Authors: Thanks for pointing this out. Not all datasets analyzed were plotted on the figures, but all are now listed in the tables and in the textual figure legends, are represented on the graphical figure legends, and can be accessed via the links provided. b) I find the process diagrams (top half of figures) distracting and redundant with text and legend. This space could be used to incorporate the other studies. I suggest incorporating the cell type and measurement type after the study names on plot (e.g. GSE34205 \\n microarray on whole blood; GSE29536 \\n RNA-seq on neutrophils). Legend is well described. Authors: This point has been raised by another reviewer as well and is obviously important. We did not properly communicate the purpose of these diagrams, which are meant as “graphical figure legends”. We aimed to structure the information communicated and also to help readers navigate the many findings that are reported, while providing links to interactive figures and making details regarding study design more readily accessible (which a third reviewer deemed particularly important). In addition to providing the rationale for including those graphical figure legends we also moved them to the bottom of each figure, which is really the most logical spot for them to be. Rationale: Diagrams have been incorporated within each figure. 
These have a dual purpose: first, providing readers with a graphical summary of the findings and, second, constituting an attempt at structuring information for future computational applications. Indeed, an important limitation of communicating biomedical knowledge in the form of research articles is that it consists of unstructured information (free text). This type of information is notoriously difficult to extract by computational means [Chaussabel D. Am J Pharmacogenomics 2004; 4: 383-93]. Standardized graphical summaries such as the ones provided in this manuscript constitute structured information that is both human readable and computationally tractable. The need for such solutions will become more pressing as the biomedical literature continues to grow exponentially, to such scales that it can only be very narrowly apprehended by research investigators. c) The column for \"Avg A - Avg B\" is meaningless especially when comparing different platforms. The fold change (Avg A / Avg B) is more meaningful and would be worth stating to two decimal points. Authors: We agree that it indeed cannot be compared across platforms, which we did not intend to do, since rather than a meta-analysis our approach consists of a “meta-interpretation” across publicly available datasets. However, it is a good indication of the robustness of the changes that are measured. We have used this criterion for many years to weed out genes that show a high fold change but whose expression levels are close to background, which we have found to be poorly reproducible. For example, for a fold change of 3: if A=30 and B=10, the difference is 20, which might be about twice the background intensity of the chip; whereas if A=300 and B=100, A/B is still 3 but A-B is 200, or twenty times the background intensity. So having this information can help decide whether the changes that are observed are likely to be robust. 
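This filtering heuristic, requiring both a fold change and an absolute difference that clears the chip background, can be sketched in a few lines; the 2× background threshold below is an illustrative assumption, not a value stated in the response:

```python
def robust_change(avg_a, avg_b, background):
    """Return fold change, absolute difference, and a robustness flag that
    requires the difference to exceed twice the chip background intensity
    (the 2x threshold is an assumption for illustration)."""
    fold = avg_a / avg_b
    diff = avg_a - avg_b
    return fold, diff, diff > 2 * background

# The worked example from the response: both pairs have fold change 3,
# but only the second difference clears twice the background (here 10):
print(robust_change(30, 10, 10))    # fold 3.0, diff 20  -> not flagged robust
print(robust_change(300, 100, 10))  # fold 3.0, diff 200 -> flagged robust
```

The point of keeping both quantities is that fold change alone cannot distinguish a 10-vs-30 change near background noise from a 100-vs-300 change well above it.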
d) If possible, combine Tables 1 - 4 into one page, possibly a large table with subheadings for during infection, after treatment with PAMPs, during tissue remodelling etc. Authors: Thank you for raising this point; we initially considered this possibility but found that it would be too much information in one table. We were also concerned that it would make it too difficult for the reader to locate this information. 4. Forest plot: An alternative/additional suggestion to 3d is to present the data visually in the form of a forest plot with subheadings (e.g. https://www.nichd.nih.gov/cochrane_data/mcguirew_13/fig2019799225306621155.png; test of heterogeneity might not be necessary). This allows the readers to visually scan all of the results on one page. There are many ways of doing this but I suggest calculating the Glass effect size (see R code below) followed by the forestplot function from the R package rmeta or forestplot. The authors might also find such a plot on their webtool useful in the long run but this is beyond the scope of the current paper. 
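For readers working outside R, the reviewer's paired effect-size computation (the R function shown just below) can be ported to Python; this is an illustrative sketch of the same formulas (Cohen's d with small-sample correction giving Hedge's g), not code from the manuscript:

```python
import math

def effect_size(baseline, posttest):
    """Paired Hedge's g with a 95% CI, following the reviewer's R sketch.
    Pairs with a missing value (None) are dropped, as in the R version."""
    pairs = [(b, p) for b, p in zip(baseline, posttest)
             if b is not None and p is not None]
    b = [x for x, _ in pairs]
    p = [y for _, y in pairs]
    n = len(b)
    mb, mp = sum(b) / n, sum(p) / n
    var_b = sum((x - mb) ** 2 for x in b) / (n - 1)   # sample variances,
    var_p = sum((y - mp) ** 2 for y in p) / (n - 1)   # as in R's var()/cov()
    cov = sum((x - mb) * (y - mp) for x, y in zip(b, p)) / (n - 1)
    r = cov / math.sqrt(var_b * var_p)
    s_diff = math.sqrt(var_b + var_p - 2 * cov)       # sd of paired differences
    s_within = s_diff / math.sqrt(2 * (1 - r))        # same as S.pooled
    d = (mp - mb) / s_within                          # Cohen's d
    var_d = (1 / n + d ** 2 / (2 * n)) * 2 * (1 - r)
    cf = 1 - 3 / (4 * n - 5)                          # small-sample correction
    g = cf * d                                        # Hedge's g
    se = cf * math.sqrt(var_d)
    return g, se, g - 1.96 * se, g + 1.96 * se
```

Each gene's (g, se, LCL, UCL) tuple could then feed a forest-plot routine, as the reviewer suggests.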
effectSize <- function(baseline, posttest){
  ## Source: http://www.meta-analysis.com/downloads/Meta-analysis%20Effect%20sizes%20based%20on%20means.pdf
  stopifnot( length(posttest) == length(baseline) )  ## assume the data are in the same patient order
  w <- which( !is.na(posttest) & !is.na(baseline) )
  posttest <- posttest[w]
  baseline <- baseline[w]
  r <- cor( posttest, baseline )
  diff <- posttest - baseline
  n <- length(diff)
  S.diff <- sqrt( var(baseline) + var(posttest) - 2*cov(baseline, posttest) )  ## note: var(x - y) = var(x) + var(y) - 2cov(x, y)
  S.within <- S.diff/sqrt(2*(1-r))  ## same as S.pooled
  ## Cohen's d ##
  d <- mean(diff)/S.within
  var.d <- ( 1/n + (d^2)/(2*n) ) * 2 * (1-r)
  ## Hedge's g estimate ##
  cf <- 1 - 3/(4*n - 5)
  g <- cf*d
  var.g <- cf^2 * var.d
  se.g <- sqrt(var.g)
  rm(posttest, baseline, w, r, diff, n, S.diff, S.within, d, var.d, cf, var.g)
  return( c( g=g, se.g=se.g, LCL=( g - 1.96*se.g ), UCL=( g + 1.96*se.g ) ) )
}
Authors: We thank the reviewer for this valuable suggestion and for sharing the R code! Our GXB tool is still in development and we hope to add additional options for data visualization in the future, so this is perfect. 5. I am unclear what Supplementary Figures 2, 3, 4 and 5 add to the paper. Authors: We intended to show in Supplementary Figures 2-5 the data exactly as they can be visualized interactively in the GXB. We could remove these figures but since this information is merely added as a supplement we felt that there is no harm in leaving it as is. 6. There is emerging evidence that the monocyte to lymphocyte ratio has relevance to susceptibility to infectious diseases (e.g. Wang et al., 2015; Naranbhai et al., 2014; Warimwe et al., 2013). Could you speculate/demonstrate how you could potentially use your resource to test for this hypothesis? Perhaps using cell deconvolution methods on whole blood? 
Authors: This is a very interesting hypothesis but perhaps beyond the scope of this manuscript; it might be possible to identify relevant datasets, but such an outcome measure (i.e. susceptibility to infection) is not easily ascertained, at least in humans."
}
]
},
{
"id": "9961",
"date": "13 Aug 2015",
"name": "Andreas Ludwig",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nRinchai and co-workers nicely present a re-analysis of existing genomic datasets, demonstrating a useful tool for quick establishment of functional hypotheses. By this, they suggest a novel function of ADAM9 as biomarker for tissue damage. The article is well written, but several concerns should be addressed before indexing. Introduction: The authors give a nice review of the current literature. However, a link to their own study is missing. The introduction should include the motivation (“knowledge gap assessment”). Otherwise, readers could expect a detailed physiologic analysis of ADAM9 in tissue damage. Methods: The description is very nice but could be adapted to journal style and shortened. The possibilities offered by the software could be summarized in a table. Figure 1: The colour scheme in the figure and the legend are sorted differently. In general, the figure is overloaded and the colour scheme not helpful. Differences were only observed for monocytes and neutrophils. These results should be included in figure 1, whereas the other results should be included as supplementary figure. Supplementary Figure 2 to 5: I don’t see any additional information by this second plot type. What is the difference between Suppl. Figures 2-4 and Figure 2? Why are not all datasets mentioned in the text also shown and listed within the figures? This is very obvious for Figure 2. 
The diagrams within the figures are very redundant, especially due to the detailed description within the text. This space should be used to present more original datasets. The tables should be summarized within one table. Further, the table should include all datasets analysed and mentioned in the text. Figure 3: It would be helpful to mark the values for the different individuals, maybe by different colours, to avoid the impression of a general outlier. Otherwise, changes after PAMP treatment could be possible. Conclusion: To address the point of infection the authors include a stimulation of blood samples. However, this is not sufficient to draw the conclusion of a biomarker for tissue damage (also as a result of infection). Experiments with tissue cells, including scratch assays, stimulations with cytokines, and conditioned media from blood samples would provide further information and address the tissue damage effect in comparison to the infection effect.",
"responses": [
{
"c_id": "2232",
"date": "11 Oct 2016",
"name": "Damien Chaussabel",
"role": "Author Response",
"response": "We thank the reviewers for their valuable feedback and suggestions to improve our manuscript. Rinchai and co-workers nicely present a re-analysis of existing genomic datasets, demonstrating a useful tool for quick establishment of functional hypotheses. By this, they suggest a novel function of ADAM9 as biomarker for tissue damage. The article is well written, but several concerns should be addressed before indexing. 1. Introduction: The authors give a nice review of the current literature. However, a link to their own study is missing. The introduction should include the motivation (“knowledge gap assessment”). Otherwise, readers could expect a detailed physiologic analysis of ADAM9 in tissue damage. Authors: As suggested we added a paragraph at the beginning of the introduction section to present the rationale behind the data mining approach that was employed. We experienced issues with one of our servers for some time. We checked the links and references provided in the introduction and they seem to be working fine now. 2. Methods: The description is very nice but could be adapted to journal style and shortened. The possibilities offered by the software could be summarized in a table. Authors: We also took care of this. Since the description of the GXB tool has now been published and the code made openly available on Github, we now point readers to these resources and have shortened the paragraph describing the features of the software accordingly (Speake C, et al., J Transl Med 2015; Rinchai D, et al., F1000R 2016). A link to a tutorial video has also been added to the methods (https://www.youtube.com/playlist?list=PLtx3tvfIzJ9XkRKUz6ISEJpAhqKyuiCiD). 3. Figure 1: The colour scheme in the figure and the legend are sorted differently. In general, the figure is overloaded and the colour scheme not helpful. Differences were only observed for monocytes and neutrophils. 
These results should be included in figure 1, whereas the other results should be included as supplementary figure. Authors: Thank you for pointing this out. In the original version of the Figure we used the graphic exported directly from GXB. However, we agree that it is difficult to read, especially without the interactive features that allow overlay of sample information, sorting and mouse-overs. Another reviewer also suggested retaining only the neutrophil and monocyte data in the plot for Figure 1 and we have made these changes accordingly. 4. Supplementary Figure 2 to 5: I don’t see any additional information by this second plot type. Authors: Supplementary Figures 2-5 represent the data exactly as they can be visualized interactively in the GXB. We felt this might be helpful given the fact that we provide links throughout the manuscript that lead to interactive versions of these plots. We are now providing this rationale in the legend of the supplementary figures. 5. What is the difference between Suppl. Figures 2-4 and Figure 2? Authors: Same rationale as stated above: the Supplementary Figures represent the original data exactly as they can be visualized interactively in the GXB. The links to the interface of each graph are provided in the legends of each Supplementary Figure. 6. Why are not all datasets mentioned in the text also shown and listed within the figures? This is very obvious for Figure 2. Authors: We in fact initially tried to show the results from all of the studies. But since some of the Figures make reference to a rather large number of datasets, and since we are able to provide links to interactive graphs, we decided to only select a subset of the key studies that best support the points that we were making. In response to the reviewers' comments we are now listing all the studies in the Figure legend and accompanying Table. Readers can access the data for each study by clicking the associated hyperlinks. 
All the studies mentioned in the text are also represented on the graphical abstract. 7. The diagrams within the figures are very redundant especially due to the detailed description within the text. This space should be used to present more original data sets. Authors: We did not properly communicate the purpose of these diagrams, which constitute graphical legends and allow presentation of the data in a semi-structured format that is both human and machine readable. We are now providing a rationale (see below) and have re-positioned them at the bottom of the Figure, which will hopefully work better. Rationale: “Diagrams have been incorporated within each Figure. These have a dual purpose: first, they provide readers with a graphical summary of the findings; second, they constitute an attempt at structuring information for future computational applications. Indeed, an important limitation of communicating biomedical knowledge in the form of research articles is that it consists of unstructured information (free text). This type of information is notoriously difficult to extract by computational means [e.g. Chaussabel D. Am J Pharmacogenomics 2004; 4: 383-93]. Standardized graphical summaries such as the ones provided in this manuscript constitute structured information that is both human readable and computationally tractable. The need for such solutions will become more pressing as the biomedical literature continues to grow exponentially to such scales that it can only be very narrowly apprehended by research investigators.” 8. The tables should be summarized within one table. Further, the table should include all datasets analysed and mentioned in the text. Authors: We thank the reviewer for raising this point. As mentioned earlier, we have added studies that had previously been omitted. 
But as far as merging all datasets in one table, we were concerned after making an attempt that it would be difficult for readers to track down information about a given dataset if the list is too extensive. We also reverted to the original format where separate tables are linked to each individual figure. 9. Figure 3: It would be helpful to mark the values for the different individuals, maybe by different colours to avoid the impression of a general outlier. Otherwise, changes after PAMP treatment could be possible. Authors: As suggested by the reviewer, we labeled the values of the different individuals using different colors in the PAMPs treatment dataset (GSE30101). We found that ADAM9 levels did not show a significant outlier response, with the exception of the green subject, who shows a low response to HKSA in comparison to the other subjects. This could be explained by donor-specific variation in the subject’s ability to respond. Overall, the magnitude of the ADAM9 response to such stimuli remains low, especially when compared to CXCL10, which served as a positive control, and did not reach significance. Donor information was not available for GSE32862. 10. Conclusion: To address the point of infection the authors include a stimulation of blood samples. However, this is not sufficient to draw the conclusion of a biomarker for tissue damage (also as a result of infection). Experiments with tissue cells, including scratch assays, stimulations with cytokines, and conditioned media from blood samples would provide further information and address the tissue damage effect in comparison to the infection effect. Authors: Indeed we are reporting an association and direct evidence will have to be obtained experimentally. We have changed the title to better reflect this fact, and it now reads: “Increased abundance of ADAM9 transcripts in the blood is associated with tissue damage”. 
Confirmation experiments will require the analysis of a large number of samples for validation and further investigation; we are working to secure the necessary funding and ethics approval for follow-on studies, which is taking more time than anticipated. Some datasets available in GEO include experiments that are relevant; this is for instance the case for GSE30101, where whole blood was stimulated with microbe-derived as well as host-derived factors such as pathogen-associated molecular patterns (PAMPs), inflammatory cytokines (IL18, TNF), and type I and type II interferons. Prompted by your comment we have also checked expression of ADAM9 transcript in additional datasets. Transcript abundances were for instance measured in the lungs of C57BL/6 mice in a model of lung inflammation and injury (GSE2411) [Altemeier WA. et al., J Immunol 2005]. The results showed that the abundance of ADAM9 was significantly higher in mice that developed acute lung injury after exposure to low-dose LPS and mechanical ventilation (GSE2411). In an experimental model of epidermal injury (GSE30355) the abundance of ADAM9 was significantly higher in injured epidermis (sorted keratinocytes (KC)) in comparison to the uninjured condition (laser capture microscopy or in vitro cultured keratinocytes) [Kennedy-Crispin M. et al., J Invest Dermatol 2012]. Additionally, in a murine dermal burn wound model (GSE460), ADAM9 transcript levels increased over time after thermal injury (at 0 hours, 2 hours and 3 days). We have added a description of these findings in the results section under the paragraph entitled “The abundance of ADAM9 increases following tissue injury and sterile inflammation” and are also including a new Supplementary Figure 6."
}
]
},
{
"id": "10054",
"date": "24 Aug 2015",
"name": "Caroline A. Owen",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe title and abstract of the manuscript: Both are appropriate. The design, methods and analysis of the results from the study: The methods and design have been explained, and the analyses are appropriate for the topic being studied. The results show impressive increases in ADAM9 gene expression in blood leukocytes in some disease states. However there are issues regarding the design/content of the study: It is not clear from reading only the manuscript whether the controls and disease groups are matched with respect to age, sex, and/or race/ethnicity, and/or whether the disease groups studied have co-morbidities that might have contributed to the differences in ADAM9 gene expression observed between the groups. It would be necessary to read many of the cited papers in order to obtain this information. The microarray results do not appear to have been validated by performing real-time qPCR studies (for example) on any of the samples. In the in vitro studies, details on the concentrations and incubation times for the agonists have not been provided in the methods, text or legends. It is possible that the concentrations and time points studied were not optimal for detecting increases in ADAM9 gene expression. Data presentation: All of the results have been presented in the manuscript. However, in general more details about the experimental conditions in the figure legends would have been helpful to the reader. 
Discussion and conclusions: The discussion section could be expanded to include a discussion of the limitations of the study. The discussion could also have included a section on how the changes in ADAM9 gene expression detected in blood leukocytes might influence the pathogenesis or progression of the diseases that were studied, based upon the known activities of this proteinase.",
"responses": [
{
"c_id": "2231",
"date": "11 Oct 2016",
"name": "Damien Chaussabel",
"role": "Author Response",
"response": "We thank the reviewer for the valuable feedback and suggestions to improve our manuscript. The title and abstract of the manuscript: Both are appropriate. The design, methods and analysis of the results from the study: The methods and design have been explained, and the analyses are appropriate for the topic being studied. The results show impressive increases in ADAM9 gene expression in blood leukocytes in some disease states. However there are issues regarding the design/content of the study: 1. It is not clear from reading only the manuscript whether the controls and disease groups are matched with respect to age, sex, and/or race/ethnicity, and/or whether the disease groups studied have co-morbidities that might have contributed to the differences in ADAM9 gene expression observed between the groups. It would be necessary to read many of the cited papers in order to obtain this information. Authors: This is a good point, as such factors may indeed potentially confound analysis and undermine the conclusions advanced by the authors of those studies (since they would presumably affect not only ADAM9 but all the transcripts being measured). Mechanisms exist that should help ensure that the study design and selection of control subjects are appropriate, at least in most studies. One is IRB review, which to some extent will evaluate study design elements such as inclusion and exclusion criteria for case and control groups, and will help ensure that the results of the study will be meaningful and justify risk to the study population. The second mechanism is peer review. Having conducted such studies ourselves and reviewed submissions of others, we know that concerns often come up regarding factors that might potentially confound analyses and that need to be addressed before publication. In addition, the process of loading dataset, sample and study information in GXB, as well as QC checks, provides an additional opportunity to identify “faulty” designs. 
These steps can of course only mitigate risk, since even study investigators may not be aware of all the factors that could potentially confound the analysis. One of the potential advantages of the analytic strategy that we have employed is that it factors in results not from one but several studies carried out by different investigators in different geographic locations, often using different technology platforms. Thus the conclusions we derive from such “meta-interpretation” are likely to be rather robust, with only a minority of the studies potentially being affected by study design. It should also be noted that details concerning study design have been incorporated in GXB and therefore the reader does not have to access the original manuscript in order to locate the relevant information. Sample information is also available via GXB and can be accessed by 1) hovering the mouse cursor over individual data points; 2) overlaying the information on the interactive bar graph; 3) accessing the table listing all available sample information. We have acknowledged the point raised in the review and include some of the considerations outlined above in the conclusion of our manuscript: “Concerns with regards to the quality of the public data used as input for meta-interpretation, for instance the introduction of uncontrolled confounding factors that may be technical (batch effects) or biological (demographics, treatment), should be mitigated by the fact that conclusions are based on findings from not one but multiple studies, and that all of them were vetted by institutional review boards and peer review. These mechanisms should ensure that only a small minority of those studies would be affected by critical design or technical flaws.” 2. The microarray results do not appear to have been validated by performing real-time qPCR studies (for example) on any of the samples. 
Authors: Confirmation by real-time PCR was not available for all studies, and when it was available it had not been performed for ADAM9, since this gene was not a focus of the systems-scale analyses. We did not have direct access to the study samples and could not check levels of ADAM9 transcript ourselves. It should be noted that doing so would in any case only serve to validate the accuracy of the technology platform that the authors employed rather than the intrinsic value of ADAM9 as a biomarker. The approach that we employ provides a means for in silico validation of findings from an initial study across additional independent patient cohorts. However, we recognize that it does not ultimately obviate the need for follow-on studies/experimentation. We added the following sentence in the conclusion: “We also recognize that such in silico cross-validation of our seminal observation does not obviate the need for follow on studies or experimentation.” 3. In the in vitro studies, details on the concentrations and incubation times for the agonists have not been provided in the methods, text or legends. It is possible that the concentrations and time points studied were not optimal for detecting increases in ADAM9 gene expression. Data presentation: All of the results have been presented in the manuscript. However, in general more details about the experimental conditions in the figure legends would have been helpful to the reader. Authors: As requested by the reviewer we have added details for each dataset in the Figure legends throughout the manuscript. As mentioned above, we have employed GXB as an interface between the readers and the papers that originally described the study and its findings. Information regarding study design or samples has been structured within GXB and can be accessed directly from the manuscript in only a few clicks. It can also be represented graphically. 
We have also added a few sentences to highlight this point in the manuscript: “Finally, the fact that the approach presented relies on interpretation of transcriptional profiles derived from a relatively large number of transcriptional studies presents another challenge, given that the amount of background information that can be provided for each study cannot be exhaustive. The data browsing web application that we have used attempts to address this limitation by providing readers access to interactive Figures that they can drill into to access detailed sample and study information.” Furthermore, we selected CXCL10 as a positive control to show that large levels of induction could be obtained for genes known to respond to those stimuli. Expression values for this gene range from nearly 10 up to nearly 40,000 units. And although this dataset is indeed publicly available, we happen to have been the contributors (as is the case for a number of the datasets being reanalyzed here) and had performed dose-ranging and time-course experiments prior to selection of the stimulation conditions. Discussion and conclusions: The discussion section could be expanded to include a discussion of the limitations of the study. The discussion could have included a section on how the changes in ADAM9 gene expression detected in blood leukocytes might influence the pathogenesis or progression of the diseases that were studied based upon the known activities of this proteinase. Authors: A new section has been added in the conclusion specifically to discuss limitations of the study (see additions mentioned above). We are as yet unsure of the functional significance of the elevation in levels of ADAM9, which on one hand may be beneficial in mediating tissue repair; on the other hand, the fact that ADAM9 proteins or transcripts are found elevated in blood may be an indication of extensive tissue damage and be associated with poor outcome. 
Indeed, we now report in the context of GSE11375 (profiling of responses in the blood of trauma patients) that the abundance of ADAM9 in patients who did not survive was significantly higher than in those who survived. In another dataset, GSE34205/GSE38900 (viral infections), we now show that the abundance of ADAM9 is correlated with the degree of severity in pediatric viral infection (RSV, influenza and HRV infection); moreover, levels of ADAM9 transcript in patients who were ventilated were significantly higher than in those who were not ventilated. We have added these statements in the discussion."
}
]
}
] | 1
|
https://f1000research.com/articles/4-89
|
https://f1000research.com/articles/5-1964/v1
|
12 Aug 16
|
{
"type": "Research Article",
"title": "South-East Asian strains of Plasmodium falciparum display higher ratio of non-synonymous to synonymous polymorphisms compared to African strains",
"authors": [
"Gajinder Pal Singh",
"Amit Sharma",
"Amit Sharma"
],
"abstract": "Resistance to frontline anti-malarial drugs, including artemisinin, has repeatedly arisen in South-East Asia, but the reasons for this are not understood. Here we test whether evolutionary constraints on Plasmodium falciparum strains from South-East Asia differ from African strains. We find a significantly higher ratio of non-synonymous to synonymous polymorphisms in P. falciparum from South-East Asia compared to Africa, suggesting differences in the selective constraints on P. falciparum genome in these geographical regions. Furthermore, South-East Asian strains showed a higher proportion of non-synonymous polymorphism at conserved positions, suggesting reduced negative selection. There was a lower rate of mixed infection by multiple genotypes in samples from South-East Asia compared to Africa. We propose that a lower mixed infection rate in South-East Asia reduces intra-host competition between the parasite clones, reducing the efficiency of natural selection. This might increase the probability of fixation of fitness-reducing mutations including drug resistant ones.",
"keywords": [
"artemisinin resistance",
"resistance evolution",
"Plasmodium falciparum",
"non-synonymous polymorphism"
],
"content": "Introduction\n\nArtemisinin combination therapy (ACT) is the frontline treatment for malaria caused by Plasmodium falciparum and has played a major role in reducing malaria mortality from an estimated 840,000 deaths in the year 2000 to 440,000 deaths in the year 20151. The emergence and spread of artemisinin resistance in South-East Asia, however, poses a serious threat to malaria control, and the containment of artemisinin resistance is thus a global public health priority2–8.\n\nOne of the most important unanswered questions in anti-malarial drug resistance is why it has repeatedly emerged in South-East Asia3,4,6,9. The resistance to chloroquine was first reported in South-East Asia in 1957 before spreading to India and Africa, where it resulted in a significant increase in childhood malaria mortality, possibly killing millions of children10–12. The resistance to sulphadoxine-pyrimethamine also emerged in South-East Asia in the late 1960s following a similar route to India and Africa9. Worryingly, the resistance to artemisinin has emerged independently at multiple places in South-East Asia13–17 and is now present 25 km from the Indian border17, threatening to follow the same trajectory as resistance to previous anti-malarial drugs. Improved understanding of the process of how and why anti-malarial drug resistance emerges in South-East Asia could provide critical information in developing strategies to prevent the spread of the current wave of artemisinin resistance.\n\nHere we ask whether there are evolutionary constraints on P. falciparum strains from South-East Asia that differ from those on African strains and thus might explain the higher predisposition of South-East Asian strains to evolve drug resistance. To answer this question we utilized recent large-scale global genome sequencing data from ~3400 clinical samples, which identified nearly a million high-quality single nucleotide polymorphisms (SNPs) in the exonic regions of P. 
falciparum18.\n\n\nResults\n\nResistance to anti-malarial drugs often involves changes in the amino-acid sequence within specific proteins. Thus, we tested whether the ratio of non-synonymous (amino acid changing) to synonymous polymorphism is higher in South-East Asia (SEA). Figure 1 shows a significantly higher ratio of non-synonymous to synonymous polymorphism (N/S) in SEA samples compared to African samples, with almost no overlap in their distributions. The mean and median N/S for samples from SEA were 2.33, compared to 2.06 for Africa (Wilcoxon test p-value 0, number of SEA samples 1600, number of African samples 1647). The higher N/S in SEA compared to Africa was also evident at the gene level, with a larger number of genes showing higher N/S in SEA than in Africa (Figure 2). Mean and median N/S for genes in SEA samples were 2.1 and 1.9 respectively, while for African samples the mean and median N/S were 1.9 and 1.8 respectively (paired t-test p-value 1E-43, paired Wilcoxon test p-value 4E-27, n = 4792). There were 75 genes with more than 3-fold higher N/S in SEA samples relative to African samples and N/S of more than four in SEA. Interestingly, most of these genes were not related to antigenic variation (Supplementary Table 1), but perform basic housekeeping functions, suggesting that the higher N/S of these genes in SEA might not be primarily driven by differential host immune selection. In addition to Kelch13 (the only gene known to be causally associated with artemisinin resistance), the list includes CRT (chloroquine-resistance transporter), which shows an 8-fold higher N/S in SEA samples compared to African samples and has previously been shown to be associated with artemisinin resistance in a genome-wide association study (GWAS)15. In summary, P. 
falciparum strains from SEA show a higher ratio of non-synonymous to synonymous polymorphisms than African strains.\n\nThe ratios of non-synonymous to synonymous polymorphism (N/S) for 3394 samples from 22 countries are shown. The y-axis is truncated at the top with 13 samples not shown.\n\nThe scatter-plot shows N/S in SEA and Africa for 4792 genes. Genes previously associated with artemisinin resistance in a GWAS study15 are shown in red, with Kelch13 and chloroquine-resistance transporter (CRT) labelled. The diagonal line is shown and the numbers of genes on both sides of the diagonal are indicated. The x and y-axes are truncated with 28 genes not shown.\n\nHighly conserved proteins in P. falciparum show a much lower N/S, indicating lower tolerance for non-synonymous polymorphism18. We tested whether the correlation between N/S and protein conservation might be different in SEA and Africa. The correlation between N/S and conservation was much weaker in SEA (Figure 3), with a Pearson correlation of -0.43 (95% CI: -0.46 to -0.41) compared to -0.69 (95% CI: -0.71 to -0.68) in Africa. The lower correlation in SEA suggests a higher ratio of non-synonymous to synonymous changes at conserved positions. Indeed, non-synonymous polymorphisms specifically observed in SEA are more likely to occur at conserved positions compared to those specific to Africa (Figure 4). Samples from SEA show higher N/S compared to Africa when considering only conserved positions (Figure 5). These results suggest a lower efficiency of negative selection in SEA in removing potentially deleterious mutations. This may be important for the acquisition of antimalarial drug resistance since these mutations preferentially occur at the conserved sites19, e.g. 
artemisinin resistance mutations in Kelch13 occur in the conserved region of the protein18, and resistance mutations also occur in the conserved regions of DHFR (dihydrofolate reductase), DHPS (dihydropteroate synthase), and CRT (chloroquine-resistance transporter)19. In summary, P. falciparum strains from SEA show a higher ratio of non-synonymous to synonymous polymorphisms at conserved sites in protein sequences than African strains.\n\nA) Scatter-plot of N/S in Africa versus percent protein conservation and B) scatter-plot of N/S in SEA versus percent protein conservation. Percent conservation for each protein is the percentage of residues identical across orthologs in seven Plasmodium species (P. berghei, P. chabaudi, P. cynomolgi, P. knowlesi, P. reichenowi, P. vivax, P. yoelii). Only proteins with orthologs in all Plasmodium species are shown (4075 proteins). The y-axis is truncated, with 7 points not shown in Figure 3a and 112 points not shown in Figure 3b.\n\nA) Conserved sites were defined as sites identical across orthologs in Plasmodium species (P. berghei, P. chabaudi, P. cynomolgi, P. knowlesi, P. reichenowi, P. vivax, P. yoelii) in multiple sequence alignment. B) Conserved sites were defined as sites identical across orthologs in diverse eukaryotes (S. cerevisiae, D. melanogaster, C. elegans, H. sapiens) in multiple sequence alignment. Error bars indicate 95% confidence intervals of the mean from 1,000 bootstrap samples.\n\nA) Box-plot showing N/S for 3394 samples from 22 countries at sites identical across orthologs in seven Plasmodium species in multiple sequence alignment. B) Box-plot showing the N/S ratio at sites identical across orthologs in diverse eukaryotes (S. cerevisiae, D. melanogaster, C. elegans and H. sapiens) in multiple sequence alignment. The y-axis is truncated at the top, with 10 samples not shown in both panels.\n\nBlood samples may contain more than one haploid parasite clone due to mixed infections by multiple genotypes. 
The rate of mixed infection is generally lower in areas of low transmission such as SEA20. The lower efficiency of negative selection in removing potentially deleterious mutations at conserved positions in SEA could result from lower competition between parasite clones within hosts. Indeed, the estimated rate of mixed infections, detected by a high proportion of heterozygous calls in the sequencing data, was much lower in South-East Asia compared to Africa (Figure 6).\n\nMixed infections were defined as samples with >10% of SNP calls heterozygous. This cut-off was determined from the distribution of heterozygous SNPs across the samples (Supplementary Figure 2). The numbers of samples from each country are shown at the top of the bar plots.\n\n\nDiscussion\n\nBased on our observations that P. falciparum from SEA shows 1) a higher ratio of non-synonymous to synonymous polymorphism, 2) a higher proportion of non-synonymous polymorphism at conserved sites, and 3) a lower rate of mixed infections, we propose a model for the higher propensity of SEA populations to acquire drug resistance (Supplementary Figure 1). Lower mixed infections in SEA may allow even less-fit parasites to be transmitted to the next set of hosts due to a reduced level of intra-host competition between multiple genotypes. Thus, fitness-reducing mutations, including drug-resistance mutations, might have a higher chance of spreading in SEA. In contrast, the higher mixed infection rate in Africa may drive more intense competition between genotypes within the host, and may therefore reduce the probability of transmission of less-fit parasites.\n\nThis model can explain some surprising observations and predicts conditions under which resistance might spread to Africa. The much larger population size of P. falciparum in Africa, as also evidenced by the high rate of mixed infection (Figure 6), should make it easier for resistance mutations to appear. 
Indeed, artemisinin resistance mutations in Kelch13 were observed in 10 samples from Africa, including two samples with the most common artemisinin resistance mutation, C580Y18. The C580Y mutation is capable of generating artemisinin resistance in vitro in the NF54 parasite strain, which is considered to be of African origin21. This raises an important question as to why artemisinin resistance is not spreading in Africa. Since artemisinin resistance is likely to incur a fitness cost in the drug-free environment22–24, we propose that strains with these mutations are continuously arising in Africa but get competitively removed by the more fit drug-sensitive strains23. However, once a strain acquires compensatory mutations that may reduce the fitness cost of the original mutation, it may be able to spread even in the more competitive environment of Africa. While compensatory mutations can occur anywhere in the genome and may even spread in South-East Asia, these could be unlinked by recombination in areas with a high transmission rate, such as Africa25. Thus, compensatory mutations in the same gene might be more likely to spread in high-transmission areas. Indeed, drug-resistance genes often acquire multiple mutations before spreading to Africa; e.g., the pyrimethamine-resistance gene dhfr acquired at least three different mutations in South-East Asia before it spread to Africa9. All chloroquine-resistant strains have the K76T mutation in CRT (chloroquine-resistance transporter), accompanied by a number of additional mutations in the same gene26. While at present Kelch13 does not appear to have multiple mutations27, it would be critical to monitor the acquisition of additional mutations in the Kelch13 protein which might compensate for the fitness cost of Kelch13 resistance mutations in the drug-free environment.\n\nIt is important to note that higher N/S in SEA populations does not necessarily imply a higher mutation rate. Brown et al. 
previously found similar substitution rates in samples from Africa and SEA28. Thus, the substitution rate in SEA populations appears to be similar to that of African populations, but a higher fraction of those substitutions occurs at conserved non-synonymous positions in SEA. The MalariaGEN study, from which we obtained the dataset, reported a much higher density (per sample) of both synonymous and non-synonymous polymorphisms in Africa compared to SEA18. It is also important to note that a higher density of SNPs per sample does not imply a higher substitution rate in Africa; rather, it reflects the higher rate of mixed infection in Africa, i.e. more SNPs are identified in samples from Africa because of the higher number of distinct parasite clones per sample (Figure 6). While our data suggest that differences in the selective constraints on P. falciparum strains from SEA and Africa may contribute towards the higher propensity for resistance emergence and spread in SEA, other factors such as differences in drug pressure, usage of artemisinin mono-therapy, social factors and host immune factors may also be important3,4.\n\nResistance to chloroquine and sulphadoxine-pyrimethamine spread from SEA to India to Africa4. Interestingly, we observed higher mixed infection rates in Bangladesh than in neighboring SEA. The Indian subcontinent has areas with widely variable transmission rates1. This might allow drug-resistant P. falciparum that evolved in low-transmission areas of SEA to gradually adapt to higher-transmission areas of the Indian subcontinent, from where it could then spread to the high-transmission areas of Africa. Therefore, it would be critical to track the spread of artemisinin resistance in the Indian subcontinent.\n\n\nMethods\n\nThe SNP data for P. falciparum were obtained from the MalariaGen community webpage (https://www.malariagen.net/data/p-falciparum-community-project-jan-2016-data-release)18. 
The SNP data consist of 939,687 filtered, high-quality exonic SNPs (631,715 non-synonymous and 307,972 synonymous). The data comprised 3,394 samples from 22 countries, with roughly equal numbers of samples from South-East Asia (1,600 samples) and Africa (1,647 samples). The N/S ratio for each sample was obtained by dividing the number of non-synonymous SNPs by the number of synonymous SNPs in that sample. Proteome sequences of P. falciparum, P. berghei, P. chabaudi, P. cynomolgi, P. knowlesi, P. reichenowi, P. vivax and P. yoelii were downloaded from the PlasmoDB database, and proteome sequences of S. cerevisiae, D. melanogaster, C. elegans and H. sapiens were downloaded from the European Bioinformatics Institute (EBI) database. Orthologous sequences were identified using the best bidirectional hit algorithm29 and aligned using ClustalO30. The conservation score for P. falciparum proteins was calculated as the percentage of positions identical across all orthologous proteins from Plasmodium species. The N/S ratio for each gene in South-East Asia and Africa was calculated by dividing the number of unique non-synonymous SNPs by the number of unique synonymous SNPs across samples from the two geographical areas. The Pearson correlation between N/S for each gene and the conservation score was calculated in R. All figures were created in R version 3.2.3. Mixed infection samples were defined as samples with >10% of SNP calls heterozygous. This cut-off was determined from the distribution of heterozygous SNPs across the samples (Supplementary Figure 2). 
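The quantities described in the Methods reduce to a few simple counts. A minimal sketch is given below (in Python rather than the R used in the study; the per-sample data layout and function names are illustrative assumptions, not the MalariaGEN pipeline):

```python
# Sketch of the Methods computations on a hypothetical data layout:
# each sample is a list of SNP calls recording the coding effect of the
# SNP and whether the genotype call at that site is heterozygous.

def ns_ratio(calls):
    """Per-sample N/S: count of non-synonymous SNPs / count of synonymous SNPs."""
    n = sum(1 for c in calls if c["effect"] == "non-synonymous")
    s = sum(1 for c in calls if c["effect"] == "synonymous")
    return n / s

def is_mixed_infection(calls, het_cutoff=0.10):
    """Flag a sample as polyclonal if >10% of its SNP calls are heterozygous."""
    het = sum(1 for c in calls if c["heterozygous"])
    return het / len(calls) > het_cutoff

def conservation_score(alignment_columns):
    """Percent of alignment columns identical across all aligned orthologs."""
    identical = sum(1 for col in alignment_columns if len(set(col)) == 1)
    return 100.0 * identical / len(alignment_columns)

# Toy sample: 3 non-synonymous calls, 1 synonymous, 1 heterozygous call.
sample = [
    {"effect": "non-synonymous", "heterozygous": False},
    {"effect": "non-synonymous", "heterozygous": True},
    {"effect": "non-synonymous", "heterozygous": False},
    {"effect": "synonymous", "heterozygous": False},
]
print(ns_ratio(sample))            # 3.0
print(is_mixed_infection(sample))  # True (1/4 = 25% heterozygous > 10%)
print(conservation_score([("M", "M", "M"), ("K", "R", "K")]))  # 50.0
```

The same counting, applied to the unique SNPs pooled across all samples from a region, gives the gene-level N/S used in the correlation with the conservation score.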
It is important to note that this method is not likely to accurately classify each sample as polyclonal (mixed infection) or monoclonal, but the overall trend of a higher rate of mixed infection in African samples compared to SEA samples is likely to be robust.\n\n\nData availability\n\nThis publication uses data from the MalariaGEN Plasmodium falciparum Community Project as described in Genomic epidemiology of artemisinin resistant malaria, eLife, 2016 (DOI: http://dx.doi.org/10.7554/eLife.08714). The data are also available from the MalariaGEN website (https://www.malariagen.net/data/p-falciparum-community-project-jan-2016-data-release).",
"appendix": "Author contributions\n\n\n\nG.P.S. and A.S. conceived and designed the study. G.P.S. performed the research. G.P.S. and A.S. wrote the manuscript. All authors reviewed the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe work is supported by a J. C. Bose Fellowship to A.S. by the Department of Science and Technology, Govt. of India, and by an Early Career Fellowship to G.P.S. by the Wellcome Trust/DBT India Alliance (IA/E/15/1/502297).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nGenome sequencing was performed by the Wellcome Trust Sanger Institute, and the Community Project is coordinated by the MalariaGEN Resource Centre with funding from the Wellcome Trust (098051, 090770).\n\n\nSupplementary material\n\nLower mixed infections in SEA may allow even less-fit parasites to be transmitted to the next set of hosts due to a reduced level of intra-host inter-parasite competition. Thus, fitness-reducing mutations, including drug-resistance mutations, might have a higher chance of spreading in SEA. In contrast, the higher mixed infection rate in Africa may drive more intense inter-clone competition within the host, thereby reducing the probability of transmission of the less-fit clones, including potentially drug-resistant clones. The public domain image of the outline of the human body was obtained from https://en.wikipedia.org/wiki/File:Outline-body.png.\n\nAbout half of the 3,394 samples showed less than ten percent heterozygous calls and were defined as monoclonal. The rest of the samples showed a roughly uniform distribution of percentage heterozygous calls and were defined as polyclonal (mixed infection).\n\n\nReferences\n\nWorld Health Organization: World Malaria Report 2015. 2015. 
Ashley EA, Dhorda M, Fairhurst RM, et al.: Spread of artemisinin resistance in Plasmodium falciparum malaria. N Engl J Med. 2014; 371(5): 411–423.\n\nRoberts L: Malaria wars. Science. 2016; 352(6284): 398–402, 404–5.\n\nDondorp AM, Yeung S, White L, et al.: Artemisinin resistance: current status and scenarios for containment. Nat Rev Microbiol. 2010; 8(4): 272–280.\n\nDondorp AM, Ringwald P: Artemisinin resistance is a clear and present danger. Trends Parasitol. 2013; 29(8): 359–360.\n\nWhite NJ: Artemisinin resistance--the clock is ticking. Lancet. 2010; 376(9758): 2051–2052.\n\nWorld Health Organization: Emergency response to artemisinin resistance in the Greater Mekong subregion. 2013.\n\nWorld Health Organization: Update on artemisinin and ACT resistance. 2015.\n\nRoper C, Pearce R, Nair S, et al.: Intercontinental spread of pyrimethamine-resistant malaria. Science. 2004; 305(5687): 1124.\n\nKorenromp EL, Williams BG, Gouws E, et al.: Measurement of trends in childhood malaria mortality in Africa: an assessment of progress toward targets based on verbal autopsy. Lancet Infect Dis. 2003; 3(6): 349–358.\n\nTrape JF: The public health impact of chloroquine resistance in Africa. Am J Trop Med Hyg. 2001; 64(1–2 Suppl): 12–17.\n\nTrape JF, Pison G, Preziosi MP, et al.: Impact of chloroquine resistance on malaria mortality. C R Acad Sci III. 1998; 321(8): 689–697.\n\nTakala-Harrison S, Jacob CG, Arze C, et al.: Independent emergence of artemisinin resistance mutations among Plasmodium falciparum in Southeast Asia. J Infect Dis. 2015; 211(5): 670–679.\n\nYe R, Hu D, Zhang Y, et al.: Distinctive origin of artemisinin-resistant Plasmodium falciparum on the China-Myanmar border. Sci Rep. 2016; 6: 20100.\n\nMiotto O, Amato R, Ashley EA, et al.: Genetic architecture of artemisinin-resistant Plasmodium falciparum. Nat Genet. 2015; 47(3): 226–234.\n\nMiotto O, Almagro-Garcia J, Manske M, et al.: Multiple populations of artemisinin-resistant Plasmodium falciparum in Cambodia. Nat Genet. 2013; 45(6): 648–655.\n\nTun KM, Imwong M, Lwin KM, et al.: Spread of artemisinin-resistant Plasmodium falciparum in Myanmar: a cross-sectional survey of the K13 molecular marker. Lancet Infect Dis. 2015; 15(4): 415–421.\n\nMalariaGEN Plasmodium falciparum Community Project: Genomic epidemiology of artemisinin resistant malaria. eLife. 2016; 5: e08714.\n\nGardner KB, Sinha I, Bustamante LY, et al.: Protein-based signatures of functional evolution in Plasmodium falciparum. BMC Evol Biol. 2011; 11: 257.\n\nAnderson TJ, Haubold B, Williams JT, et al.: Microsatellite markers reveal a spectrum of population structures in the malaria parasite Plasmodium falciparum. Mol Biol Evol. 2000; 17(10): 1467–1482.\n\nGhorbal M, Gorman M, Macpherson CR, et al.: Genome editing in the human malaria parasite Plasmodium falciparum using the CRISPR-Cas9 system. Nat Biotechnol. 2014; 32(8): 819–821.\n\nHott A, Tucker MS, Casandra D, et al.: Fitness of artemisinin-resistant Plasmodium falciparum in vitro. J Antimicrob Chemother. 2015; 70(10): 2787–2796.\n\nWilairat P, Kumpornsin K, Chookajorn T: Plasmodium falciparum malaria: Convergent evolutionary trajectories towards delayed clearance following artemisinin treatment. Med Hypotheses. 2016; 90: 19–22.\n\nChotivanich K, Tripura R, Das D, et al.: Laboratory detection of artemisinin-resistant Plasmodium falciparum. Antimicrob Agents Chemother. 2014; 58(6): 3157–3161.\n\nVolkman SK, Sabeti PC, DeCaprio D, et al.: A genome-wide map of diversity in Plasmodium falciparum. Nat Genet. 2007; 39(1): 113–119.\n\nWellems TE, Plowe CV: Chloroquine-resistant malaria. J Infect Dis. 2001; 184(6): 770–776.\n\nSingh GP, Goel P, Sharma A: Structural mapping of Kelch13 mutations associated with artemisinin resistance in malaria. J Struct Funct Genomics. 2016; 1–6.\n\nBrown TS, Jacob CG, Silva JC, et al.: Plasmodium falciparum field isolates from areas of repeated emergence of drug resistant malaria show no evidence of hypermutator phenotype. Infect Genet Evol. 2015; 30: 318–322.\n\nWolf YI, Koonin EV: A tight link between orthologs and bidirectional best hits in bacterial and archaeal genomes. Genome Biol Evol. 2012; 4(12): 1286–1294.\n\nSievers F, Wilm A, Dineen D, et al.: Fast, scalable generation of high-quality protein multiple sequence alignments using Clustal Omega. Mol Syst Biol. 2011; 7: 539."
}
|
[
{
"id": "15669",
"date": "30 Aug 2016",
"name": "François H. Nosten",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn this paper the authors analyze the data produced by the MalariaGen community that provides high quality data on over 900,000 SNPs in more than 3000 samples from 22 countries. The authors report a significantly higher ratio of non-synonymous to synonymous polymorphisms in P. falciparum from South-East Asia compared to Africa and propose that this could explain the emergence of resistance in Asia. However, this observation (and also the higher multiplicity of infections in Africa) is not novel.\nFurthermore the large number of K13 alleles, their dynamics and relative contributions to the clinical phenotype are likely to also be important. I think they could improve by showing that their analysis helps to understand the spread of a specific allele of K13 such as the C580Y.\nIt would be useful if the authors could provide more explanations as to why they believe their findings are innovative and how they help us to understand the evolution of artemisinin resistance, since the mechanism is clearly different to that of other antimalarials.",
"responses": [
{
"c_id": "2242",
"date": "17 Oct 2016",
"name": "Gajinder Singh",
"role": "Author Response",
"response": "Thank you for your review and comments. In this paper the authors analyze the data produced by the MalariaGen community that provides high quality data on over 900,000 SNP in more than 3000 samples from 22 countries. The authors report a significantly higher ratio of non-synonymous to synonymous polymorphisms in P. falciparum from South-East Asia compared to Africa and propose that this could explain the emergence of resistance in Asia. However this observation (and also the higher multiplicity of infections in Africa) is not novel. Author response: We are not aware of any manuscript that reported a higher N/S ratio of SEA strains of P. falciparum compared to African strains. If the reviewer is referring to the MalariaGen study (Elife. 2016), the authors in that manuscript did not report or suggest these results. In fact the authors of the MalariaGEN study wrote that at the gene level “we found virtually identical distributions of the ratio of non-synonymous to synonymous mutations (N/S ratio) in the two regions”, however, no statistical test was performed by the authors. Furthermore, no comparison of N/S at the sample level was performed in the MalariaGEN study. We have added these statements to the Discussion section of the revised manuscript. We agree with the reviewer that the general conclusion of a higher multiplicity of infection in Africa is not novel, indeed we wrote “The rate of mixed infection is generally lower in areas of low-transmission such as SEA20” before we reported our results. However, we have confirmed the higher mixed infection rate in Africa compared to SEA in the MalariaGEN dataset, which is the largest whole-genome dataset on P. falciparum to date. We have added these statements to the Discussion section of the revised manuscript. Furthermore the large number of K13 alleles, their dynamics and relative contributions to the clinical phenotype are likely to also be important. 
I think they could improve by showing that their analysis helps to understand the spread of a specific allele of K13 such as the C580Y. Author response: Our manuscript is an attempt to understand why in general anti-malarial resistance often arises and spreads in SEA, and it is beyond the scope of this manuscript to understand why a specific allele of K13 is dominating in certain parts of SEA. It would be useful if the authors could provide more explanations as to why they believe their findings are innovative and how they help us to understand the evolution of artemisinin resistance, since the mechanism is clearly different to that of other antimalarials. Author response: We have modified the Discussion section of the manuscript to clarify how our results differ from the MalariaGEN study. We have also discussed the implications of our results for the current wave of artemisinin resistance. These findings have not been previously reported. As stated previously, our manuscript is an attempt to understand why in general anti-malarial resistance often arises and spreads in SEA, but whether and how the evolutionary mechanism of artemisinin resistance is different from that of other antimalarials is beyond the scope of this manuscript. We would again like to thank you for comments on our manuscript, and would be happy to address any further concerns that you may have about the revised version of the manuscript."
}
]
},
{
"id": "15966",
"date": "08 Sep 2016",
"name": "Daniel N. Hupalo",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThis is a compelling manuscript from Gajinder Pal Singh and Amit Sharma, which conducts an analysis of the accrual of non-synonymous changes within Plasmodium falciparum (Pf) genes across the breadth of the MalariaGen population genomic dataset. This work builds upon a trend observed in the initial MalariaGen publication(s) that South East Asian (SEA) populations exhibited a general trend of higher Non-synonymous to synonymous mutation ratio (N/S) compared to African populations. The authors replicate and expand this analysis and then compare this trend to a protein dataset comprised of Eukaryotic sequences to show that even among the most conserved sites SEA continues to accrue more NS polymorphisms than African isolates. Lastly the authors set out an argument for the mechanism behind this apparent relaxed negative selection within SEA Pf, attributing it to the lower rate of mixed genotype infections across SEA and the subsequent lack of competition within host. This would create the circumstances that would allow for multiple independent resistance mutations arising and fixing within the SEA population. Exploring general patterns of genome architecture within P. falciparum is vital to understanding the spread and prevention of resistance to anti-malarial drugs, and as a result this work is of interest to the malaria community. 
There are some clarifications required, and two significant areas of improvement for the paper:\n\nMajor clarifications:\n\nMore detailed results/discussion sections, and deeper discussion of previous literature. While the authors cite the MalariaGen publication (citation 18) they should more explicitly describe the relationship between that publication and this analysis. Specifically, highlighting Figure 5 of citation 18 and its supplemental material, which shows a similar genome-wide analysis of N/S ratio between Africa/SEA. It would be good to directly acknowledge this article/figure as the first description of this African/SEA pattern. Furthermore, it would be constructive for the reader if the authors added a deeper justification for their extension of this work.\n\nLanguage used to describe the accrual of Nonsynonymous sites overstates the general trend of NS SNPs at conserved sites. Figure 4 shows that NS SNPs accrue at a higher rate in SEA across the set of conserved sites within Plasmodium and at the set of conserved sites across Eukaryotes. However, the language used to describe this figure in the legend states that “NS polymorphisms… in SEA are more likely to occur at conserved positions.” This statement implies that this is a general genome-wide trend of higher rate of fixing NS SNPs at sites with high conservation. Perhaps extending this legend to mention that NS SNPs are “are more likely to occur at conserved positions when compared to Africa” would avoid this implied generalization. Additionally, on page 3 the authors state “these mutations preferentially occur at the conserved sites”. This statement repeats the highlighted issue above, and adds the word “preferentially” which may create an unintentional teleological statement about the occurrences of NS mutations. 
NS SNPs occur at a higher rate within the pre-selected subset of conserved sites when compared to Africa, but the figures do not show a general genome-wide preference for NS mutations at conserved sites.\n\nMinor clarifications are as follows:\n“Kelch” is misspelled multiple times throughout the article.\n\n“Cambodia” is misspelled in multiple figures.\n\n“PNG” Abbreviation should be written out fully.",
"responses": [
{
"c_id": "2241",
"date": "17 Oct 2016",
"name": "Gajinder Singh",
"role": "Author Response",
"response": "This is a compelling manuscript from Gajinder Pal Singh and Amit Sharma, which conducts an analysis of the accrual of non-synonymous changes within Plasmodium falciparum (Pf) genes across the breadth of the MalariaGen population genomic dataset. This work builds upon a trend observed in the initial MalariaGen publication(s) that South East Asian (SEA) populations exhibited a general trend of higher Non-synonymous to synonymous mutation ratio (N/S) compared to African populations. The authors replicate and expand this analysis and then compare this trend to a protein dataset comprised of Eukaryotic sequences to show that even among the most conserved sites SEA continues to accrue more NS polymorphisms than African isolates. Lastly the authors set out an argument for the mechanism behind this apparent relaxed negative selection within SEA Pf, attributing it to the lower rate of mixed genotype infections across SEA and the subsequent lack of competition within host. This would create the circumstances that would allow for multiple independent resistance mutations arising and fixing within the SEA population. Exploring general patterns of genome architecture within P. falciparum is vital to understanding the spread and prevention of resistance to anti-malarial drugs, and as a result this work is of interest to the malaria community. There are some clarifications required, and two significant areas of improvement for the paper: Author response: We thank you for your review and comments. We would like to clarify that higher N/S in P. falciparum strains from SEA compared to African strains has not been stated or suggested in the MalariaGen manuscript (Elife. 2016). In fact the authors of the MalariaGEN wrote that at the gene level “we found virtually identical distributions of the ratio of non-synonymous to synonymous mutations (N/S ratio) in the two regions”, however, no statistical test was performed by the authors. 
Furthermore, no comparison of N/S at the sample level was performed in the MalariaGEN study. We have added these statements to the Discussion section of the revised manuscript. Major clarifications: More detailed results/discussion sections, and deeper discussion of previous literature. While the authors cite the MalariaGen publication (citation 18) they should more explicitly describe the relationship between that publication and this analysis. Specifically, highlighting Figure 5 of citation 18 and its supplemental material, which shows a similar genome-wide analysis of N/S ratio between Africa/SEA. It would be good to directly acknowledge this article/figure as the first description of this African/SEA pattern. Furthermore, it would be constructive for the reader if the authors added a deeper justification for their extension of this work. Author response: The aim of Figure 5 of the MalariaGen manuscript (Elife. 2016) was to test whether neutral evolution could account for the pattern of mutations in kelch13. Figure 5a plots genic N/S vs. protein conservation in Africa and SEA. The Figure shows that the kelch13 gene is an outlier in the scatter-plot for SEA, but follows a general trend in Africa. Figure 5b plots the ratio of N/S ratios in SEA and Africa, for all genes (with ≥5 synonymous and ≥5 non-synonymous SNPs) and shows again that kelch13 is an outlier. Figure 5—Supplementary Figure 1 is the same as Figure 5a, but also highlights other drug resistance genes. Figure 5—Supplementary Figure 2a plots the number of non-synonymous SNPs/kbp vs. protein conservation in Africa and SEA and shows that kelch13 follows the trend among genes in Africa, but has excess non-synonymous polymorphisms in SEA, while Figure 5—Supplementary Figure 2b plots the number of synonymous SNPs/kbp vs. protein conservation in Africa and SEA and shows that kelch13 follows the normal trend among genes in Africa, but has far fewer synonymous SNPs than expected in SEA. 
Thus the authors conclude that high prevalence of non-synonymous SNPs in kelch13 in SEA is not explainable by neutral evolution, but is consistent with neutral evolution in Africa. We would like to reemphasize that nowhere in the manuscript do the authors state or imply that strains from SEA have a higher N/S than strains from Africa. Language used to describe the accrual of Nonsynonymous sites overstates the general trend of NS SNPs at conserved sites. Figure 4 shows that NS SNPs accrue at a higher rate in SEA across the set of conserved sites within Plasmodium and at the set of conserved sites across Eukaryotes. However, the language used to describe this figure in the legend states that “NS polymorphisms… in SEA are more likely to occur at conserved positions.” This statement implies that this is a general genome-wide trend of higher rate of fixing NS SNPs at sites with high conservation. Perhaps extending this legend to mention that NS SNPs are “are more likely to occur at conserved positions when compared to Africa” would avoid this implied generalization. Additionally, on page 3 the authors state “these mutations preferentially occur at the conserved sites”. This statement repeats the highlighted issue above, and adds the word “preferentially” which may create an unintentional teleological statement about the occurrences of NS mutations. NS SNPs occur at a higher rate within the pre-selected subset of conserved sites when compared to Africa, but the figures do not show a general genome-wide preference for NS mutations at conserved sites. Author response: We would like to thank the reviewer for pointing out this mistake. We have modified the Figure legend in the revised manuscript to “Non-synonymous polymorphisms specifically observed in SEA are more likely to occur at conserved positions when compared to non-synonymous polymorphisms specifically observed in Africa” as suggested. 
We have also modified the statement “This may be important for the acquisition of antimalarial drug resistance since these mutations preferentially occur at the conserved sites” to “This may be important for the acquisition of antimalarial drug resistance since drug-resistance mutations preferentially occur at the conserved sites”. Minor clarifications are as follows: “Kelch” is misspelled multiple times throughout the article. “Cambodia” is misspelled in multiple figures. The “PNG” abbreviation should be written out fully. Author response: Thanks for pointing out these mistakes; we have corrected them in the revised manuscript. We would be happy to address any further concerns that you may have about the revised version of the manuscript."
}
]
},
{
"id": "15909",
"date": "20 Sep 2016",
"name": "Georges Snounou",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn this article the authors conduct a detailed analysis that focused on the frequency of non-synonymous mutations across an extensive genome set derived by the MalariaGEN consortium from an extensive set of P. falciparum isolates collected mainly in Africa and SE Asia. The analyses extend the work presented recently (DOI: 10.7554/eLife.08714) where a trend was noted for a higher level of non-synonymous mutations in the SE Asian parasites as compared to that in the African parasites. Specifically, the authors have presented the analysis for the parasites from individual countries and focused on SNPs that occur at residues that are highly conserved in orthologues within a variety of Plasmodium species and across eukaryotes in general. The authors have thus provided convincing evidence for statistically significant albeit relatively subtle differences between the parasites from SE Asia and Africa (it is unclear to me why parasites from Peru and Colombia were included in the analysis, all the more as their numbers were relatively low).\n\nWhereas the quality of the data, the analyses and the results are not in question, I have some misgivings and indeed frustrations as to the conclusions put forward in an attempt to account for the differences observed. The scenario inspired by the observations that the authors propose as an explanation for the propensity of antimalarial drug resistance to emerge and spread in P. 
falciparum from SE Asia first before doing so in Africa is plausible and could be supported by the data presented. However, this scenario is based on a set of assumptions whose limitations are not discussed, without any justification as to why this particular scenario is the most likely.\n\nWhat is the nature of the intra-host competition between the genotypes of P. falciparum that circulate in humans, and is there any evidence that this actually alters transmissibility? Immunity is the most obvious selective intra-host factor. To what extent do differences in the levels of immunity among the patients that contributed the P. falciparum isolates analysed here alter the multiplicity of the infection or indeed the likelihood of genetic crosses?\n\nIs there any evidence that parasites with the higher levels of non-synonymous mutations across the genome are biologically less fit? There is evidence that this is the case for drug resistant parasites in vitro; that for parasites growing in vivo is less straightforward and varies for different drugs.\n\nTo what extent do the isolates collected from a limited number of febrile patients reflect the overall parasite population across a whole country? Could a bias be introduced simply because of differences in the degree of acquired immunity on admission, the time between the onset of the infection and the time treatment was sought, or the level of admission parasitaemia?\n\nExposure to drug is clearly a strong selective constraint for those genes implicated in overcoming the effects of the drug. What can be the nature of the selective constraints that maintain the higher levels of non-synonymous mutations across numerous genes spread throughout the genome? Clearly not all can be considered to provide some speculative compensatory effect to a potential reduction in fitness following drug selection.\n\nIt should be pointed out that in the 1950’s resistance to chloroquine first appeared not only in SE Asia but also in Colombia. 
Similarly, resistance to pyrimethamine did not first appear in SE Asia, but in all areas where the use of this drug became widespread and systematic (this was the case in both SE Asia and African countries in the 1950’s). The “unanswered” question as to the reason why drug resistance has repeatedly emerged in SE Asia might have more to do with drug usage than with some special property of the parasites.\n\nUltimately, I would suggest that the authors should present the limitations of their conclusions as well as alternative scenarios to account for their observations (and rank or discount them if they can). This will surely be welcomed by readers. Advances in knowledge are seeded by diversity in speculation (conservative or less so) and genome-wide analyses are a rich source for this.",
"responses": [
{
"c_id": "2240",
"date": "17 Oct 2016",
"name": "Gajinder Singh",
"role": "Author Response",
"response": "We are grateful for your valuable comments, which we believe have significantly improved our manuscript. In this article the authors conduct a detailed analysis that focused on the frequency of non-synonymous mutations across an extensive genome set derived by the MalariaGEN consortium from an extensive set of P. falciparum isolates collected mainly in Africa and SE Asia. The analyses extend the work presented recently (DOI: 10.7554/eLife.08714) where a trend was noted for a higher level of non-synonymous mutations in the SE Asian parasites as compared to that in the African parasites. Specifically, the authors have presented the analysis for the parasites from individual countries and focused on SNPs that occur at residues that are highly conserved in orthologues within a variety of Plasmodium species and across eukaryotes in general. The authors have thus provided convincing evidence for statistically significant albeit relatively subtle differences between the parasites from SE Asia and Africa (it is unclear to me why parasites from Peru and Colombia were included in the analysis, all the more as their numbers were relatively low). Author response: We would like to clarify that the MalariaGEN authors did not write or suggest that the N/S ratios of P. falciparum strains from SEA and Africa are different. We have discussed the difference between the work reported in the MalariaGEN study and our manuscript in the Discussion section of the revised manuscript. We added data from Peru and Colombia in Figure 1, Figure 5 and Figure 6 only for completeness, but we have refrained from making any firm conclusions based on these samples. We have nevertheless commented upon the observations from South-American samples in the Discussion section in response to your comment. 
Whereas the quality of the data, the analyses and the results are not in question, I have some misgivings and indeed frustrations as to the conclusions put forward in an attempt to account for the differences observed. The scenario inspired by the observations that the authors propose as an explanation for the propensity of antimalarial drug resistance to emerge and spread in P. falciparum from SE Asia first before doing so in Africa is plausible and could be supported by the data presented. However, this scenario is based on a set of assumptions whose limitations are not discussed, without any justification as to why this particular scenario is the most likely. What is the nature of the intra-host competition between the genotypes of P. falciparum that circulate in humans, and is there any evidence that this actually alters transmissibility? Immunity is the most obvious selective intra-host factor. To what extent do differences in the levels of immunity among the patients that contributed the P. falciparum isolates analysed here alter the multiplicity of the infection or indeed the likelihood of genetic crosses? Author response: We thank the reviewer for pointing this out. There is indeed evidence of intra-host competition between genotypes of P. falciparum in humans that reduces parasite density1. The association between parasite density and gametocyte density in humans has also been previously shown2. The reasons for intra-host competition however remain unknown. Strain-transcending immunity, resource competition (e.g. RBCs), and direct interference between strains have been proposed as possible mechanisms responsible for within-host competition3-6. It is also possible that lower recombination rates due to a lower rate of mixed infections in SEA may reduce the removal of deleterious mutations. We have added these sentences to the Discussion section of the revised manuscript. 
Is there any evidence that parasites with the higher levels of non-synonymous mutations across the genome are biologically less fit? There is evidence that this is the case for drug resistant parasites in vitro; that for parasites growing in vivo is less straightforward and varies for different drugs. Author response: There is currently no evidence that P. falciparum strains from SEA are biologically less fit than strains from Africa. Indeed, that is the prediction of our model. It would be fascinating to test this hypothesis. We have added these sentences to the Discussion section of the revised manuscript. To what extent do the isolates collected from a limited number of febrile patients reflect the overall parasite population across a whole country? Could a bias be introduced simply because of differences in the degree of acquired immunity on admission, the time between the onset of the infection and the time treatment was sought, or the level of admission parasitaemia? Author response: It is possible that the samples collected may not be representative of the whole country; however, the consistent results we obtain across countries in Africa and SEA suggest that our observation of higher N/S in SEA compared to Africa is robust to random variation introduced by sampling. It is possible that some biases in sampling may be introduced that could lead to differences in the observed rate of polyclonal infections between Africa and SEA. We separately analysed predicted monoclonal and polyclonal samples and find results similar to those obtained using all samples (Supplementary Figure 1 in the revised manuscript). We thus currently have no hypothesis of how biased sampling could actually lead to higher genome-wide N/S in SEA. Exposure to drug is clearly a strong selective constraint for those genes implicated in overcoming the effects of the drug. 
What can be the nature of the selective constraints that maintain the higher levels of non-synonymous mutations across numerous genes spread throughout the genome? Clearly not all can be considered to provide some speculative compensatory effect to a potential reduction in fitness following drug selection. Author response: As further explained in the Discussion section, the higher genome-scale N/S in SEA is consistent with relaxed negative selection. We hypothesise that relaxed negative selection could be due to a low level of polyclonal infection and thus lower within-host competition in SEA. It should be pointed out that in the 1950’s resistance to chloroquine first appeared not only in SE Asia but also in Colombia. Similarly, resistance to pyrimethamine did not first appear in SE Asia, but in all areas where the use of this drug became widespread and systematic (this was the case in both SE Asia and African countries in the 1950’s). The “unanswered” question as to the reason why drug resistance has repeatedly emerged in SE Asia might have more to do with drug usage than with some special property of the parasites. Author response: Indeed, resistance to chloroquine and sulphadoxine-pyrimethamine also appeared independently in South America (but not in Africa7). We have very few samples from South America (27 samples from 2 countries) to make robust conclusions. However, it is interesting that samples from South America do show a low rate of polyclonal infections and higher N/S compared to Africa (Figure 1 and Figure 6). We have added these sentences to the Discussion section of the revised manuscript. Ultimately, I would suggest that the authors should present the limitations of their conclusions as well as alternative scenarios to account for their observations (and rank or discount them if they can). This will surely be welcomed by readers. 
Advances in knowledge are seeded by diversity in speculation (conservative or less so) and genome-wide analyses are a rich source for this. Author response: We have significantly expanded the Discussion section, which we hope addresses the reviewer’s concerns. We would be happy to address any further concerns that you may have in the revised version of the manuscript. Reference List 1. Bushman, M. et al. Within-host competition and drug resistance in the human malaria parasite Plasmodium falciparum. Proc. Biol. Sci. 283, 20153038 (2016). 2. Ross, A., Killeen, G. & Smith, T. Relationships between host infectivity to mosquitoes and asexual parasite density in Plasmodium falciparum. Am. J. Trop. Med. Hyg. 75, 32-37 (2006). 3. Metcalf, C. J. et al. Partitioning regulatory mechanisms of within-host malaria dynamics using the effective propagation number. Science 333, 984-988 (2011). 4. Raberg, L. et al. The role of immune-mediated apparent competition in genetically diverse malaria infections. Am. Nat. 168, 41-53 (2006). 5. Bruce, M. C. & Day, K. P. Cross-species regulation of malaria parasitaemia in the human host. Curr. Opin. Microbiol. 5, 431-437 (2002). 6. Yap, G. S. & Stevenson, M. M. Blood transfusion alters the course and outcome of Plasmodium chabaudi AS infection in mice. Infect. Immun. 62, 3761-3765 (1994). 7. Roper, C. et al. Intercontinental spread of pyrimethamine-resistant malaria. Science 305, 1124 (2004)."
}
]
},
{
"id": "16468",
"date": "21 Sep 2016",
"name": "Tim J. Anderson",
"expertise": [],
"suggestion": "Not Approved",
"report": "Not Approved\n\nTitle and Abstract\nThe title is appropriate.\nThe abstract makes some jumps in logic: e.g “Furthermore, South-East Asian strains showed a higher proportion of non-synonymous polymorphism at conserved positions, suggesting reduced negative selection.” This pattern can equally well be explained by stronger positive selection in SEA than in Africa.\n\nArticle content\n\nMajor points\nResults differ from the prior analysis of this dataset. The article comes to different conclusions from those reported by the MalariaGEN consortium in eLife (ref 18) who write: \"Accordingly, we found virtually identical distributions of the ratio of non-synonymous to synonymous mutations (N/S ratio) in the two regions (Figure 3c)” page 6 bottom: MalariaGEN Plasmodium falciparum Community Project. eLife 2016;5:e08714. DOI: 10.7554/eLife.08714). It is unclear to me why this F1000 submission and the published eLife paper reach very different conclusions using the same dataset? I suggest that the authors should cite the conclusions of the eLife paper and explain why their analysis comes to a very different conclusion. This point urgently needs satisfactory resolution as this is the central conclusion of the paper.\n\nMethodology/analysis biases. Reduced ability to score non-synonymous variants in African populations could generate the N/S ratios observed. Such a bias could be generated because infections are more complex in Africa. 
Non-synonymous mutations tend to be at lower frequency than synonymous mutations. In complex mixed African infections low frequency non-synonymous mutations may be filtered out of the dataset (because very few reads show the mutation) but scored in SE Asian samples, where infections are simpler (so multiple reads show non-synonymous mutations). To eliminate this potential methodological bias, the authors should filter their data to minimize mixed infections prior to analysis. This could be done by setting a threshold for numbers of mixed base calls. Again, it is important to understand why the conclusions reached by this analysis differ from the published eLife paper.\n\nAs pointed out in the eLife paper there is a dramatic difference in the allele frequency spectrum in the two locations. The excess of rare variants in Africa could also potentially alter the N/S ratios.\n\nPopulation genetics explanation. Assuming the central result is correct, a simpler model explains these results: The effective population size (Ne) of parasite populations is lower in SEA than in Africa. Therefore purifying selection is weaker in SEA than in Africa resulting in less efficient removal of deleterious mutations. Discussion of this simple alternative explanation would improve the paper. Also Cheeseman et al, MBE 2016 (Mol Biol Evol. 2016 Mar;33(3):603-20. doi: 10.1093/molbev/msv282.), who make this argument to explain patterns of global copy number polymorphism in Plasmodium falciparum.\n\nStatistics used: The paper counts numbers of non-synonymous and synonymous mutations and estimates the ratio (N/S). This is an unusual way to present such data – it would be more informative to show dN/dS – (Nonsynonymous changes per non-synonymous site/synonymous variants per synonymous site). This statistic is more useful because the expected ratio under neutrality is 1. 
This should not change the results but will be more easily interpretable with reference to a neutral model.\n\nRelated questions: How is the N/S ratio determined for genes in which there are NO synonymous mutations? Details of the methods for calculating this ratio, and how they differ from those used in the eLife paper, should be provided.\n\nMore support needed for intrahost competition explanation: The authors argue that the observed pattern may result from more within-infection competition in Africa relative to Asia. If competition is invoked to explain the results, then it would be useful to examine the categories of genes contributing to the excess of NS variations. This argument would imply that this effect should be seen in subsets of genes involved in within-host competition. On the other hand, if there is no particular enrichment of particular gene classes, then the simple population genetics explanation seems more likely.\n\nConclusions\n\nLack of balance. The authors argue that drug resistance may arise in SEA rather than Africa because intrahost competition prevents emergence of resistance mutations associated with fitness costs. I agree in part – this may certainly contribute. However, greater selection for resistance to drugs in SE Asia is likely to be a critical factor. Most infections in SEA are symptomatic and infected individuals seek treatment. However, most infections in Africa are asymptomatic, so people do not seek treatment. Discussion of the difference in selection strength between continents would add balance to this paper.\n\nOther issues. I was very surprised to see this analysis of MalariaGEN data without involvement from MalariaGEN authors. My understanding is that access to these data requires a \"Fort Lauderdale\" type agreement with MalariaGEN (https://www.malariagen.net/data/terms-use/pf3k-terms-use). 
The authors may have already discussed this with relevant people at MalariaGEN - I would suggest that the authors directly contact Dominic Kwiatkowski, if they have not already done so.",
"responses": [
{
"c_id": "2239",
"date": "17 Oct 2016",
"name": "Gajinder Singh",
"role": "Author Response",
"response": "We are grateful for your valuable comments, which we believe have significantly improved our manuscript. The abstract makes some jumps in logic: e.g “Furthermore, South-East Asian strains showed a higher proportion of non-synonymous polymorphism at conserved positions, suggesting reduced negative selection.” This pattern can equally well be explained by stronger positive selection in SEA than in Africa. Author response: Positive selection can indeed increase (though not in all cases) N/S for the gene under selection, as has been observed for the kelch13 gene in SEA. However, the pattern of higher N/S in strains in SEA, which can be observed at the genome-wide level, is much more consistent with relaxed negative selection in SEA than with higher positive selection in SEA, which typically acts at a few sites. Furthermore, reduced constraints at conserved sites at the genome-wide level support relaxed negative selection rather than positive selection. We thus believe that our use of the word “suggesting” is appropriate in the statement. Article content Major points Results differ from the prior analysis of this dataset. The article comes to different conclusions from those reported by the MalariaGEN consortium in eLife (ref 18) who write: \"Accordingly, we found virtually identical distributions of the ratio of non-synonymous to synonymous mutations (N/S ratio) in the two regions (Figure 3c)” page 6 bottom: MalariaGEN Plasmodium falciparum Community Project. eLife 2016;5:e08714. DOI: 10.7554/eLife.08714). It is unclear to me why this F1000 submission and the published eLife paper reach very different conclusions using the same dataset? I suggest that the authors should cite the conclusions of the eLife paper and explain why their analysis comes to a very different conclusion. This point urgently needs satisfactory resolution as this is the central conclusion of the paper. 
Author response: The authors in the MalariaGEN study did not perform any statistical test to support their statement that N/S in SEA and Africa is similar at the gene level. While the difference is subtle, it is highly statistically significant, as we report in our manuscript. Furthermore, the authors in the MalariaGEN study did not analyse differences in N/S at the sample level, where we clearly see the differences between SEA and African strains (Figure 1). We have now added these statements to the Discussion section of the revised manuscript. Methodology/analysis biases. Reduced ability to score non-synonymous variants in African populations could generate the N/S ratios observed. Such a bias could be generated because infections are more complex in Africa. Non-synonymous mutations tend to be at lower frequency than synonymous mutations. In complex mixed African infections low frequency non-synonymous mutations may be filtered out of the dataset (because very few reads show the mutation) but scored in SE Asian samples, where infections are simpler (so multiple reads show non-synonymous mutations). To eliminate this potential methodological bias, the authors should filter their data to minimize mixed infections prior to analysis. This could be done by setting a threshold for numbers of mixed base calls. Again, it is important to understand why the conclusions reached by this analysis differ from the published eLife paper. Author response: Separately analysing predicted monoclonal and polyclonal samples did not change the conclusions (Supplementary Figure 1 in the revised manuscript). This is not surprising given that we observed almost no overlap in the distributions of N/S in SEA and African samples (Figure 1), as we wrote in the manuscript. We would like to emphasize that the authors in the MalariaGEN study did not perform N/S analyses at the sample level. As pointed out in the eLife paper there is a dramatic difference in the allele frequency spectrum in the two locations. 
The excess of rare variants in Africa could also potentially alter the N/S ratios. Author response: As proposed by the reviewer in the previous comment, the excess of rare variants in Africa might increase N/S in Africa; thus the higher N/S in SEA cannot be explained by excess rare variants in Africa. Population genetics explanation. Assuming the central result is correct, a simpler model explains these results: The effective population size (Ne) of parasite populations is lower in SEA than in Africa. Therefore purifying selection is weaker in SEA than in Africa resulting in less efficient removal of deleterious mutations. Discussion of this simple alternative explanation would improve the paper. Also Cheeseman et al, MBE 2016 (Mol Biol Evol. 2016 Mar;33(3):603-20. doi: 10.1093/molbev/msv282.), who make this argument to explain patterns of global copy number polymorphism in Plasmodium falciparum. Author response: Indeed, relaxed negative selection in SEA due to a lower effective population size (reflected in lower polyclonal infection rates) is exactly what we propose in our manuscript for our observation of higher N/S in SEA. A lower Ne (effective population size) is bound to be associated with a lower polyclonal infection rate, which has been shown to lead to within-host competition in P. falciparum1. Thus we do not believe that the two explanations are alternative independent explanations, one of which needs to be discounted. Rather, the observations made by Cheeseman et al. and in our manuscript are convergent, which further supports reduced negative selection in P. falciparum in SEA, and we have added these statements to the Discussion section of the revised manuscript. Statistics used: The paper counts numbers of non-synonymous and synonymous mutations and estimates the ratio (N/S). This is an unusual way to present such data – it would be more informative to show dN/dS – (non-synonymous changes per non-synonymous site/synonymous variants per synonymous site). 
This statistic is more useful because the expected ratio under neutrality is 1. This should not change the results but will be more easily interpretable with reference to a neutral model. Author response: As pointed out by the reviewer, a dN/dS comparison would not change the results. P. falciparum is unusual in having a very high N/S compared to other organisms2; thus the usual expectation of dN/dS of 1 for neutral evolution is not applicable to P. falciparum. Related questions: How is the N/S ratio determined for genes in which there are NO synonymous mutations? Details of the methods for calculating this ratio, and how they differ from those used in the eLife paper, should be provided. Author response: There were 136 genes with zero synonymous SNPs in SEA (none in Africa), and these were thus excluded from the analyses. We have added this statement to the Methods section of the revised manuscript. More support needed for intrahost competition explanation: The authors argue that the observed pattern may result from more within-infection competition in Africa relative to Asia. If competition is invoked to explain the results, then it would be useful to examine the categories of genes contributing to the excess of NS variations. This argument would imply that this effect should be seen in subsets of genes involved in within-host competition. On the other hand, if there is no particular enrichment of particular gene classes, then the simple population genetics explanation seems more likely. Author response: There is no dataset of genes known to be involved in within-host competition in P. falciparum. We do note however that many housekeeping genes show higher N/S in SEA than Africa (Supplementary Table 1). As discussed in response to previous comments by the reviewer, the lower Ne, lower rate of polyclonal infections and within-host competition are not mutually exclusive explanations, one of which needs to be discounted. Conclusions Lack of balance. 
The authors argue that drug resistance may arise in SEA rather than Africa because intrahost competition prevents emergence of resistance mutations associated with fitness costs. I agree in part – this may certainly contribute. However, greater selection for resistance to drugs in SE Asia is likely to be a critical factor. Most infections in SEA are symptomatic and infected individuals seek treatment. However, most infections in Africa are asymptomatic, so people do not seek treatment. Discussion of the difference in selection strength between continents would add balance to this paper. Author response: The higher rate of asymptomatic infections, as well as of untreated patients, in Africa would mean higher competition between drug-resistant and drug-sensitive clones in the absence of drug, further decreasing the spread of drug resistance mutations with a fitness cost. Thus higher immunity, lower treatment rates, and higher polyclonal infections are likely to work synergistically. We have added these statements to the Discussion section in the revised manuscript. Other issues. I was very surprised to see this analysis of MalariaGEN data without involvement from MalariaGEN authors. My understanding is that access to these data requires a \"Fort Lauderdale\" type agreement with MalariaGEN (https://www.malariagen.net/data/terms-use/pf3k-terms-use). The authors may have already discussed this with relevant people at MalariaGEN - I would suggest that the authors directly contact Dominic Kwiatkowski, if they have not already done so. Author response: We have obtained permission from the authors of the MalariaGEN study, and their contribution has been appropriately acknowledged in our publication. We would again like to thank you for your critical comments on our manuscript, and would be happy to address any further concerns that you may have in the revised version of the manuscript. Reference List 1. Bushman, M. et al. 
Within-host competition and drug resistance in the human malaria parasite Plasmodium falciparum. Proc. Biol. Sci. 283, 20153038 (2016). 2. Chang, H. H. et al. Malaria life cycle intensifies both natural selection and random genetic drift. Proc. Natl. Acad. Sci. U. S. A. 110, 20129-20134 (2013)."
}
]
}
] | 1
|
https://f1000research.com/articles/5-1964
|
https://f1000research.com/articles/5-2549/v1
|
21 Oct 16
|
{
"type": "Research Article",
"title": "Evaluation of the Postoperative Quality of Recovery Scale test and re-test in Swedish among healthy volunteers",
"authors": [
"Pether Jildenstål",
"Johan Eriksson",
"Margareta Warren Stomberg",
"Jan G. Jakobsson",
"Pether Jildenstål",
"Johan Eriksson",
"Margareta Warren Stomberg"
],
"abstract": "Introduction\nPatient outcome measures are required to assess the quality of healthcare. Tools for patients’ self-assessment of quality of recovery, during perioperative care, have been developed during the last decade. The Postoperative Quality of Recovery Scale (PostopQRS) questionnaire is one of the most well-accepted and validated tools available. Here we assess the PostopQRS questionnaire in Swedish. Methods Sixty-one students from the Bachelor Program in Nursing, (50 female and 11 male; mean age, 25; range, 21-46) filled in the Swedish translation of the PostopQRS questionnaire twice. They also evaluated whether they found the queries easy to understand and respond to.\nResults The participants found the Swedish translation of the PostopQRS questionnaire easy to read and understand. There were minor differences in test responses between the initial test and the re-test 48 hours later. We found that the PostopQRS questionnaire has some background noise; 12 out of 61 participants (20%) reported mild pain, 25 (41%) scored some depression and 33 (54%) scored mild anxiety. The cognitive domain showed a learning effect between tests in “word recall” and “word generation”, while “digit recall forward” and “digit recall backward” showed no change. We found a difference in cognitive test performance with age; younger participants had higher mean cognitive test scores compared to participants >30 years. Overall, nine participants showed a decrease in re-test scores; two experienced a mild increase in pain; one experienced a mild increase in anxiety; and six performed more poorly on cognitive tests. Conclusion The Swedish translation of the PostopQRS was found to be adequate for use in the assessment of quality of recovery, and the questions were well understood by participants. Our study shows the importance of baseline testing for assessment of recovery, since recovery is assessed as a return to or improvement in each individual’s baseline score.",
"keywords": [
"anaesthesia",
"recovery",
"quality of recovery",
"PostopQRS",
"volunteers",
"re-test"
],
"content": "Introduction\n\nThere is an increasing interest in identifying patient outcome measures1,2 to best assess the quality of patient recovery3. Several tools have been developed to achieve this goal4. The Postoperative Quality of Recovery Scale (PostopQRS) questionnaire was developed in 2010 and has been widely accepted as an effective tool for the self-assessment of patients’ quality of recovery3,5. The PostopQRS homepage provides detailed information on how to use the test and the available languages (http://www.postopqrs.com/). The PostopQRS assesses patient recovery compared to a unique individual preoperative baseline score. This is somewhat in contrast to other tests, e.g. the Quality of Recovery Scale, where the absolute score is commonly used to describe recovery. Moreover, Myles et al. recently published recommendations on the minimal clinically important difference4. The PostopQRS questionnaire addresses multiple domains, including nociception, emotion, day-to-day activities, cognition and satisfaction. The cognitive domain consists of five tests: orientation, digit recall forwards, digit recall backwards, word recall and word generation. These tests have been shown to be effective in assessing cognitive performance6. There has been discussion of how individual cognitive re-test scores should be evaluated, and an amended technique, including a tolerance factor of -1 to -3, is now recommended7,8. The PostopQRS is an attractive tool for assessing the quality of recovery following general, as well as local, anaesthesia9.\n\nThe aim of the present study was to validate the Swedish translation of the PostopQRS in a test and re-test study in healthy volunteers.\n\n\nMethods\n\nEthical approval was obtained from the Ethics Committee of Stockholm (January 20th 2016; approval no. Dnr 20152015/2163-31/4, Sweden) prior to the start of the study. 
Oral informed consent was obtained from 65 adult students at the University of Gothenburg, Sahlgrenska Academy, aged between 21 and 39 years. The study was conducted between 29th August 2016 and 2nd September 2016.\n\nExclusion criteria included the inability to complete the questionnaire due to mental disabilities, hearing impairment, any form of substance abuse or not having Swedish as their native language.\n\nFour anaesthetic nurses and senior lecturers trained in interview techniques performed the tests.\n\nThe PostopQRS tests (Data availability; http://www.postopqrs.com/) were performed on two occasions. Tests were initially undertaken face-to-face on the day of inclusion to establish a baseline, and then on day 2, 48 hours after the initial baseline test. The second test was performed by telephone.\n\nThe tests were all performed in a quiet environment, free from distraction. The participants completed all sections of the PostopQRS on both occasions. The questions and answers were read from the prescribed PostopQRS script. During the initial test, all participants also read the questions by themselves, with the exception of the cognitive tests.\n\nAll participants were explicitly asked whether they had any hesitation regarding the understanding of the questions, both when provided verbally and by reading. The question about understanding the queries was asked on both occasions, although during the telephone re-test only verbal evaluation was possible.\n\nAll questions were translated into Swedish. The letters used for the word generation test were D and S; D for the initial baseline test and S for the 48-hour re-test.\n\nThe physiological domain, which assesses vital signs, was not included in this study.\n\nData were collected from the interview personnel before submission to the data administrator for analysis. Descriptive statistics in the demographics section are expressed as numbers, mean±SD and range (minimum to maximum values). 
Participants were divided into three groups depending on age (20–24 years, 25–29 years, and >30 years). Significance testing was performed using the Wilcoxon signed-rank test or the Kruskal-Wallis test, as appropriate, and data were analysed using SPSS version 23.0 for Windows (SPSS Inc, Chicago, IL, USA). A two-tailed P value <0.05 was considered significant. Results are presented for the entire cohort and the three age groups.\n\n\nResults\n\nWe included 65 students in the study. The initial test was performed face-to-face and the re-test was by phone interview. Four (n=4) students could not be reached by phone for the re-test; thus, only 61 subjects were included in the result analysis. The mean age for the cohort was 25 years and the majority of the participants were female (Table 1).\n\nAll participants explicitly expressed that the questions were easy to read and understand, and also easy to understand when asked orally.\n\nThe overall results of the test and re-test for the different domains [nociceptive, emotion, day-to-day activities (ADL domain), and cognition] are presented in Table 2.\n\n*Using the correction factor suggested by Royse et al., 2013.\n\nIn the initial test (baseline), 12 out of 61 (20%) participants reported mild pain, 25 (41%) scored mild depression and 33 (54%) scored mild anxiety.\n\nAll participants in the study scored the maximum score in orientation, at baseline and at the 48-hour re-test. Digit recall forward, digit recall backwards, word recall and word generation had median scores of 5, 3, 7 and 8, respectively, at baseline, with ranges 3, 5, 9 and 12 (see Table 2). Word recall and word generation both showed a significant improvement in the re-test (Table 3), which may be characterised as a ‘learning effect’. 
When cognitive test performance was separated by age, a numeric difference was seen in overall performance (Table 4–Table 6); the absolute scores decreased with age, and the learning effect diminished (Table 7, Table 8).\n\n*Kruskal-Wallis Test\n\n*Kruskal-Wallis Test\n\nAll participants, except one, scored full capacity in day-to-day activities and overall perception on both test occasions.\n\nA decrease in pain and depression was seen in four participants, and 16 participants scored lower in anxiety in the re-test (Table 2). A learning effect was seen in word recall and word generation. Overall, nine participants (15%) showed a decrease in test scores; two participants experienced a mild increase in pain, one experienced a mild increase in anxiety, and six demonstrated a poorer performance on the cognitive tests (Table 2).\n\n\nDiscussion\n\nWe observed that the Swedish translation of the PostopQRS is easy to read and understand. The test and the re-test 48 hours later showed minor alterations. There was background noise, and a learning effect in the word recall and word generation tests between the two test occasions. The cognitive tests showed considerable individual variability in scores, emphasising the importance of baseline testing, since the PostopQRS defines recovery as whether an individual has regained their baseline score. The PostopQRS does not assess an absolute value; it defines recovered/not recovered as a return to baseline performance or a better test score.\n\nThe PostopQRS was developed by an international team in 2010, and has since been validated and is now considered a robust test tool3,5,10. The questionnaire has been translated into several languages (http://www.postopqrs.com/). The Japanese translation was studied in patients by Naito et al.11, who considered it a feasible tool for assessing recovery after surgery, despite some limitation due to a ceiling effect in a high number of questions. 
The questionnaire was also recently translated into Chinese by Bu et al.12, who concluded that the translated PostopQRS tool was robust, but showed that the Quality of Recovery scale13 had a higher validity and was faster to perform.\n\nThe importance of a baseline test was also shown by Lindqvist et al.14, who studied patients scheduled for breast cancer surgery and showed that the baseline test was significantly affected. In addition, several patients did not have a baseline score that was sufficient for the assessment of the recovery process. The learning effect was also seen in a volunteer study by Royse et al.7.\n\nThe impact of age has also been addressed by Royse et al. The authors found only minor age effects on the recovery process; however, they did not investigate the explicit test results9. Both age and time of day have been shown to influence more complex word tests, with older patients performing better in the morning, in contrast to younger patients, who exhibit better results in the evening15. In this study, we used the letters ‘D’ and ‘S’ for word generation. It has been suggested that the letters should be changed for the word generation test, and that different words, letters and numbers should be chosen for the recall domain tests in re-tests, in order to reduce any learning effect. In agreement with Royse et al., we used face-to-face interviews for the initial test and phone interviews for the re-test7. Royse et al. did not find that phone interviews had a significant impact on the results.\n\nThere are several limitations of the present study. The participants in our study were young healthy volunteers; thus, the participants were not exposed to any form of intervention, such as surgery and anaesthesia. We made only one follow-up after 48 hours and did not perform re-tests at any additional time-points. It should also be acknowledged that we did not compare the PostopQRS questionnaire with any other assessment tool. 
We do consider the PostopQRS a well-established recovery assessment tool3,13, and also find that the Swedish version could work well in determining patients’ quality of recovery.\n\nIn summary, we observed that the Swedish translation of the PostopQRS is easy to read and understand, and preoperative baseline testing is important to define each individual’s score profile. The test and re-test showed low variability, with the majority of participants performing equally on both tests. However, a learning effect in the word recall and word generation cognitive tests was observed. In conclusion, we believe that the PostopQRS is a feasible tool for the assessment of recovery.\n\n\nData availability\n\nF1000Research: Dataset 1. Raw data from the test and retest, 10.5256/f1000research.9740.d13966016\n\nF1000Research: Dataset 2. Swedish translation of the PostopQRS questionnaire, 10.5256/f1000research.9740.d13966117",
"appendix": "Author contributions\n\n\n\nJGJ led the paper work-up, study design, protocol writing and ethical committee communication. Translation was performed by PJ. Interviews and data collection were led by PJ and MWS. Compilation of data and statistics was led by PJ in collaboration with JE and JGJ. JGJ is the lead investigator for the paper.\n\n\nCompeting interests\n\n\n\nJGJ has received research grants from Abbott, Baxter, MSD, Maquet, Nycomed, Pfizer, Astra-Zeneca and MundiPharma, and he has lectured and taken part in advisory boards for Abbott, Baxter, MSD, Grunenthal and Pfizer. He is employed as a consultant safety physician at Linde Healthcare.\n\n\nGrant information\n\nThis study has been supported by the Department of Anaesthesia Danderyds Hospital and University of Gothenburg, Sahlgrenska Academy. The study has not received any further economic support.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe would like to thank Daniel Norbeck, CRNA, MSc, for support during the interviews.\n\n\nReferences\n\nKanatas A, Velikova G, Roe B, et al.: Patient-reported outcomes in breast oncology: a review of validated outcome instruments. Tumori. 2012; 98(6): 678–688. PubMed Abstract | Publisher Full Text\n\nShida D, Wakamatsu K, Tanaka Y, et al.: The postoperative patient-reported quality of recovery in colorectal cancer patients under enhanced recovery after surgery using QoR-40. BMC Cancer. 2015; 15: 799. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBowyer A, Jakobsson J, Ljungqvist O, et al.: A review of the scope and measurement of postoperative quality of recovery. Anaesthesia. 2014; 69(11): 1266–1278. PubMed Abstract | Publisher Full Text\n\nMyles PS, Myles DB, Galagher W, et al.: Minimal Clinically Important Difference for Three Quality of Recovery Scales. Anesthesiology. 2016; 125(1): 39–45. 
PubMed Abstract | Publisher Full Text\n\nRoyse CF, Newman S, Chung F, et al.: Development and feasibility of a scale to assess postoperative recovery: the post-operative quality recovery scale. Anesthesiology. 2010; 113(4): 892–905. PubMed Abstract | Publisher Full Text\n\nNewman S, Wilkinson DJ, Royse CF: Assessment of early cognitive recovery after surgery using the Post-operative Quality of Recovery Scale. Acta Anaesthesiol Scand. 2014; 58(2): 185–191. PubMed Abstract | Publisher Full Text\n\nRoyse CF, Newman S, Williams Z, et al.: A human volunteer study to identify variability in performance in the cognitive domain of the postoperative quality of recovery scale. Anesthesiology. 2013; 119(3): 576–581. PubMed Abstract | Publisher Full Text\n\nLindqvist M, Royse C, Brattwall M, et al.: Post-operative Quality of Recovery Scale: the impact of assessment method on cognitive recovery. Acta Anaesthesiol Scand. 2013; 57(10): 1308–1312. PubMed Abstract | Publisher Full Text\n\nRoyse CF, Williams Z, Ye G, et al.: Knee surgery recovery: Post-operative Quality of Recovery Scale comparison of age and complexity of surgery. Acta Anaesthesiol Scand. 2014; 58(6): 660–667. PubMed Abstract | Publisher Full Text\n\nRoyse CF, Williams Z, Purser S, et al.: Recovery after nasal surgery vs. tonsillectomy: discriminant validation of the Postoperative Quality of Recovery Scale. Acta Anaesthesiol Scand. 2014; 58(3): 345–351. PubMed Abstract | Publisher Full Text\n\nNaito Y, Tanaka Y, Sasaoka N, et al.: Feasibility, reliability, and validity of the Japanese version of the Postoperative Quality of Recovery Scale: a first pilot study. J Anesth. 2015; 29(3): 463–466. PubMed Abstract | Publisher Full Text\n\nBu XS, Zhang J, Zuo YX: Validation of the Chinese Version of the Quality of Recovery-15 Score and Its Comparison with the Post-Operative Quality Recovery Scale. Patient. 2016; 9(3): 251–259. 
PubMed Abstract | Publisher Full Text\n\nBowyer AJ, Royse CF: Postoperative recovery and outcomes--what are we measuring and for whom? Anaesthesia. 2016; 71(Suppl 1): 72–77. PubMed Abstract | Publisher Full Text\n\nLindqvist M, Granstrom A, Schening A, et al.: Cognitive testing with the Post-Operative Quality of Recovery Scale in pre-surgery cancer patients--a controlled study. Acta Anaesthesiol Scand. 2015; 59(6): 763–772. PubMed Abstract | Publisher Full Text\n\nIskandar S, Murphy KJ, Baird AD, et al.: Interacting effects of age and time of day on verbal fluency performance and intraindividual variability. Neuropsychol Dev Cogn B Aging Neuropsychol Cogn. 2016; 23(1): 1–17. PubMed Abstract | Publisher Full Text\n\nJildenstål P, Eriksson J, Warren Stomberg M, et al.: Dataset 1 in: Evaluation of the Postoperative Quality of Recovery Scale test and re-test in Swedish among healthy volunteers. F1000Research. 2016. Data Source\n\nJildenstål P, Eriksson J, Warren Stomberg M, et al.: Dataset 2 in: Evaluation of the Postoperative Quality of Recovery Scale test and re-test in Swedish among healthy volunteers. F1000Research. 2016. Data Source"
}
|
[
{
"id": "17147",
"date": "26 Oct 2016",
"name": "Colin F. Royse",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe authors have translated the PostopQRS into Swedish and conducted a validation study in normal volunteers. The study is well conducted and supports the translated version. Further, they have shown a small learning effect as well as considerable variation in baseline performance of the Scale, which emphasizes the need to always compare postoperative recovery against the individual's own baseline scores.\nThe authors should mention in their discussion that their cognitive scores were very similar to the cognitive scores reported by Royse et al in their study on volunteers, indicating that the value of the scores in Swedish should be comparable to English version publications.1",
"responses": []
},
{
"id": "20172",
"date": "13 Feb 2017",
"name": "Stanton P. Newman",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThis is a well conducted study examining changes over time in a newly translated Swedish version of the PostopQRS Scale. Sixty one students were assessed twice on the scale with a 48 hour gap between assessments. The physiological aspects of the PostopQRS Scale were not used. On the first occasion participants were assessed face to face and the second via telephone. The PostopQRS Scale has been validated for telephone use in its English version.\n\nThe authors report some learning on the tests of word recall and word generation. It is not uncommon to find some learning taking place on cognitive tests, especially between the first and second occasions. This has been attributed in part to the fact that on the first occasion the tests are new and the participant does not know what to expect. On the second occasion the experience of having done the test enables some participants to adjust their strategy and obtain an improved score. The use of parallel forms is widely used to reduce learning of the actual items on any of the tests, and the authors note that they may have increased the learning observed on these two tests by not using a parallel set of words on the word recall task and not counterbalancing the letter on the Word Generation task.\n\nThe authors note the baseline levels of pain, anxiety and depression, along with highly variable cognitive performance, all of which attest to the importance of having baseline scores before surgery in order to account for individual differences. 
Overall the Swedish version of the PostopQRS appears to perform well.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2549
|
https://f1000research.com/articles/5-2319/v1
|
14 Sep 16
|
{
"type": "Research Article",
"title": "“Science Fiesta!” Combining student-led community outreach with local culture",
"authors": [
"Travis J. Block",
"Milos Marinkovic",
"Jodie Gray",
"Paul E. Dowell",
"Charlotte Anthony",
"Ryan Daly",
"LaShauna Evans",
"Chase Fordtran",
"Elizabeth Hassan",
"Tara Holmgren",
"Aaron Horning",
"Sabrina Martinez-Anz",
"Rosemary Riggs",
"Thomas Vanasse",
"Mikaela Sifuentes",
"Jonathan M. Berman",
"Teresa M. Evans",
"Paul E. Dowell",
"Charlotte Anthony",
"Ryan Daly",
"LaShauna Evans",
"Chase Fordtran",
"Elizabeth Hassan",
"Tara Holmgren",
"Aaron Horning",
"Sabrina Martinez-Anz",
"Rosemary Riggs",
"Thomas Vanasse",
"Mikaela Sifuentes",
"Jonathan M. Berman"
],
"abstract": "Science outreach improves science literacy among the public and communication skills of scientists. However, despite the array of well-documented benefits, robust outreach efforts are often absent from communities which stand to benefit the most from these initiatives. Here, we introduce “Science Fiesta,” a graduate student-led outreach initiative which utilizes cultural traditions of South Texas as a vehicle to establish self-sustaining interactions between scientists and their local community. Event assessment surveys indicated that attendees found the event both fun and educational. At the same time, graduate students who organized the event and participated in outreach reported that they strengthened a variety of professional skills important for their future careers. More importantly, the event had a substantial positive impact on enthusiasm for science outreach. Both public attendees and graduate students reported that they were likely to participate in future science outreach events, even though a majority of both groups had not been previously involved in outreach efforts. Science Fiesta is a model for a highly effective graduate student-led outreach initiative that is capable of 1) improving public scientific literacy, 2) reinforcing graduate education and career development and 3) creating a culture of science engagement within local communities.",
"keywords": [
"science outreach",
"science communication",
"graduate education",
"career development"
],
"content": "Introduction\n\nThere is a growing appreciation for the importance of community outreach among scientists1. However, there is not yet a consensus regarding the most effective method of engaging the public. Instead, a number of public outreach models have emerged, each with a share of supporters and critics from within the scientific community2,3. Even the meaning of “outreach” has proved contentious among scientists, alternatively defined as either unidirectional communication from scientists to the public or as bidirectional exchanges intended to foster dialogue2. In our initiative, we used “outreach” to broadly describe all efforts performed by scientists to communicate scientific information to non-expert audiences. Finally, perceptions regarding the benefits of outreach for scientists are continuing to change3. Recently, the traditional view of outreach as a “duty,” which solely benefits public literacy, has been challenged by the view that scientists, and particularly graduate students, also benefit from interactions with diverse public audiences4. Science Fiesta was conceived as a platform for scientists to inform, engage, and excite the public, while providing graduate students a unique opportunity for practical training in communication, leadership and project management.\n\nSan Antonio is a rapidly-growing metropolis and the largest city in South Texas5. The city features a vibrant biotechnology sector, with nearly 1 in 6 jobs in healthcare or the biosciences. The city is home to multiple public and private research universities, and has two large, private, world-class, non-profit research foundations6. Furthermore, South Texas can play a critical role in bolstering diversity in science, technology, engineering and math (STEM) education at all levels. A number of colleges and universities in the region are federally-designated “Hispanic-Serving Institutions” (HSIs)7 and receive funding from the U.S. 
Department of Education for programs in STEM and healthcare education and NIH-funding for biomedical science training through the Research Centers in Minority Institutions (RCMI) program8.\n\nStill, South Texas is faced with specific challenges relating to science education. San Antonio lags behind other major cities in educational attainment with less than one-quarter of adults holding a 4-year degree and the region as a whole is affected by rates of poverty above the national average9. Moreover, many of the counties in the region have been designated as “Healthcare Professional Shortage Areas” (HPSAs) by the U.S. Department of Health and Human Services, indicating a need for more STEM professionals in the region10,11. These challenges represent an opportunity for science education, outreach and workforce development to improve quality of life in the entire region. While normative public engagement efforts can be constrained by specific goals, such as educating the public on a particular topic, Science Fiesta’s ultimate aim was to spark a culture of outreach and demonstrate the value of science to the San Antonio community.\n\nScience Fiesta was developed by the Graduate Student Association (GSA) of the University of Texas Health Science Center at San Antonio (UTHSCSA) in order to fulfill the need for broader science outreach and to create awareness and appreciation for the central role of science in the future of South Texas. To achieve this goal, the GSA sought to integrate its outreach efforts with a widely-attended, yearly event. Held each April, “Fiesta” is an over century-old San Antonio tradition, featuring a full two weeks of parades, festivals and community service activities across the city12. 
As a major local tradition, Fiesta presented a unique opportunity for scientists to engage with the San Antonio community.\n\nSeamlessly integrated within this larger cultural tradition, Science Fiesta aimed to preserve both the aesthetic qualities and festive environment of Fiesta events. Science Fiesta was held in venues used for traditional Fiesta events and featured a mixture of science outreach and local entertainment. The event featured dozens of booths representing San Antonio science institutions and industry, interactive science stations, discussion panels, a “Q&A” poster session (featuring science from undergraduate and graduate-level scientists), science-themed art exhibits, food vendors, Mexican folk dancers and musicians. Through these diverse offerings, Science Fiesta was an event embraced by the entire community. By providing activities that engaged both children and adults, trainees in attendance were challenged to practice multiple levels of science communication and tailor their scientific messages to a variety of diverse audiences. Furthermore, graduate students who organized the event further refined their scientific communication skills through selecting the modes of outreach best suited for the different groups in attendance. For example, the student organizers developed a poster template for the “Q&A” poster session accessible to a broad age group but selected questions for the discussion panel on science policy that engaged a more sophisticated audience. For trainees, Science Fiesta was more than an opportunity to practice science outreach and communication, but a unique educational opportunity to design and direct interactions between scientists and the general public for the maximum benefit of both groups. 
Importantly, organizing an event of this type allowed graduate students to develop career-specific skills not typically exercised in graduate-level science education.\n\n\nMethods\n\nThe educational evaluation and assessment of Science Fiesta outlined below was deemed to be Not Regulated Research by the UT Health Science Center Institutional Review Board Office (Protocol Number: HSC20160219N).\n\nScience Fiesta was promoted through local magazines, as well as via public service announcements on Texas Public Radio, flyers posted in local businesses, and a social media campaign managed by graduate students. The social media campaign consisted of regular updates on the planning through Facebook and Twitter, using the hashtags #ScienceFiesta and #ScienceinoldSanAntonio.\n\nMore than a dozen interactive science stations hosted “hands-on” activities that allowed attendees to conduct simple science fair-style experiments. In order to draw foot traffic into Science Fiesta, interactive science stations were positioned immediately adjacent to the entrances of the venue. All groups were welcome to participate and either responded to direct invitations or applied to host a booth using a link on the event website. These groups were provided with tables, chairs, table cloths, and electrical outlets, but were expected to supply activities and information about their group.\n\nOver two dozen informational booths were hosted by local universities, professional societies, scientific organizations, and local industry partners. The booths were intended to promote the awareness of local research and provide the public with ways to engage with science. While informal groups such as astronomy and botany clubs promoted science as recreation, representatives from local universities described educational opportunities to aspiring scientists and representatives from local companies educated the community about the scientific enterprise in San Antonio. 
Informational booths were recruited in the same manner as interactive stations.\n\nThe poster session served to engage the public with basic science that is being conducted in San Antonio and allow trainees to explain their research directly to the public. The session included more than 50 posters, representing several Texas universities. However, unlike traditional research posters, those featured at Science Fiesta were specifically designed by trainees to maximize their accessibility to a broad audience (Figure 1). The graduate students organizing the event produced a template to guide presenters in tailoring their posters to the public. In communicating their research, trainees were instructed to keep their talks concise while allowing for plenty of time for questions from the public. Students could sign up to present posters on the event website. An invitation for no-cost registration was sent to leaders of all local universities with the template for tailoring posters for public consumption.\n\nThe template stresses simplicity.\n\nThe graduate student planning team organized a discussion panel featuring local, state and national authorities on science policy and communication. The goal of the panel was to enrich the appeal of Science Fiesta to the public and scientists alike. The panel discussed topics related to supporting scientific education and research in South Texas, as well as sharing views on the importance of outreach as a driver of science in the region. Texas Public Radio recorded the panel and posted the audio to their website (www.tpr.org).\n\nWe hosted a keynote speaker in order to increase interest in attending the event, entertain attendees, and set an example of effective science communication for students and career scientists. Therefore, we invited a nationally-recognized science journalist and New York Times® Bestselling Author.\n\nAn anonymous feedback survey that contained six brief questions was collected via convenience sampling. 
The survey featured one nominal response, two binary responses, two ordinal responses, and one free-response question. Surveys were distributed at a table where Science Fiesta memorabilia was being sold. People who approached the table were offered a free Science Fiesta pen in exchange for completing the survey. Surveys were handed out and collected at this table from the start of the event, at 12pm, until approximately 5pm; the survey was thus available for the duration of the event. Attendees also had the option of scanning a QR code and completing the survey online. We collected a total of 198 surveys.\n\nAn anonymous graduate student survey was given to participants in the week following Science Fiesta. For the purpose of administering surveys and analyzing results, graduate student participation was defined as having played a substantive role in the planning and execution of any part of the event in one’s own judgement. The survey contained twenty-two ordinal- and free-response questions. Some questions asked the respondent to compare perceived personal abilities before and after participation in Science Fiesta in a retrospective fashion. Out of fifteen surveys distributed, twelve were returned.\n\nData were manually input into Excel (Version 15.24) for graphing. When appropriate, Fisher’s exact test was computed using Graphpad Prism (Version 6.0d) software in order to compare student confidence in their own abilities before and after participation in Science Fiesta.\n\n\nResults\n\nScience Fiesta was attended by approximately 3,500 people, most of whom had never previously attended science outreach events (73.7%). Survey results indicate that the abundance of hands-on/family-friendly activities was the most popular element of the event, with 40.2% of respondents listing in an open-ended response that those were their favorite aspects of the event. 
Next most commonly-cited (37.3%) were the educational activities of the event, including informational booths, learning about science, and meeting local scientists. The least common responses were aspects such as the venue, music and food offerings (3%). A large fraction (19.5%) of attendees answered more broadly, saying that they enjoyed the diverse offerings (Figure 2A). More importantly, Science Fiesta substantially increased interest in science events within the community. Even though only 26.3% of attendees said they had attended a science event before, over 90% of respondents said they were likely or very likely to attend other science events (Figure 3).\n\n(A) A survey of attendees’ favorite aspects of Science Fiesta shows that people enjoyed a variety of aspects (n=169). These responses were further broken down into general themes of family friendliness, educational value, and entertainment value. (B) The vast majority of attendees reported that they learned about science as a result of attending Science Fiesta (n=198).\n\nTrainees participating in Science Fiesta were characterized by varying degrees of prior engagement in science outreach (Figure 4). However, after taking part in Science Fiesta, 100% of participating graduate students reported the highest level of enthusiasm for future outreach initiatives. In addition to improved appreciation for public engagement, trainees reported perceived improvements in a number of professional skills (Figure 5). Most notably, trainees were significantly more likely to characterize themselves as confident communicating science to the public, and setting and following through with goals (p=0.047, for both) (Figure 4A,D). 
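The Fisher’s exact comparisons reported here were computed in Graphpad Prism; for readers without Prism, the same two-sided test can be hand-rolled from the hypergeometric distribution. A minimal sketch (the 2×2 counts below are hypothetical, not the survey data):

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table sharing the
    observed margins whose probability does not exceed the observed one.
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)

    def prob(k):
        # P(k counts in the top-left cell) under fixed margins
        return comb(row1, k) * comb(n - row1, col1 - k) / denom

    p_obs = prob(a)
    lo = max(0, col1 - (n - row1))
    hi = min(row1, col1)
    # Tiny tolerance guards against floating-point ties
    return sum(prob(k) for k in range(lo, hi + 1)
               if prob(k) <= p_obs * (1 + 1e-9))

# Hypothetical confident-vs-not counts, before vs. after the event
p = fisher_exact_2x2(3, 1, 1, 3)
assert 0.48 < p < 0.49  # two-sided p = 34/70 for this table
```

With only twelve respondents, an exact test of this kind is preferred over a chi-squared approximation, since it computes the exact tail probability under fixed margins rather than relying on large-sample behaviour.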
Overall, trainees derived considerable value from participating in science outreach and consider the skills gained from the experience to be highly relevant to their career development (Figure 5).\n\nScience Fiesta facilitated student training in a number of skill-areas associated with professional success, including communication to the public (A) and other scientists (B), (C) leadership and (D) organizational management. In each case, results suggest that students benefitted from participation in Science Fiesta (n=12).\n\n(A) Career development adds considerable value to student training through outreach. (B) Science outreach provides valuable training for early-career scientists (n=12).\n\n\nDiscussion\n\nScience communication and public engagement are fundamental roles of scientists. Here, we describe “Science Fiesta,” a student-led outreach initiative tailored to engage the local community of San Antonio, Texas. Science Fiesta combined science outreach, entertainment and local culture in order to attain an exceptionally diverse appeal within the community.\n\nScience Fiesta succeeded in attracting segments of the public who either were not previously inclined, or lacked the opportunity, to engage with science. This assertion is supported by data provided by public attendees, citing Science Fiesta’s diversity of activities as the primary source of satisfaction with the event (Figure 2A). Interestingly, while relatively few respondents specifically cited the educational activities as their favorite component, more than 80% reported receiving a positive science learning experience at the event (Figure 2B). While educational activities were not the primary focus for attendees, learning outcomes were achieved through immersion in science. 
The vast majority of attendees indicated that they learned “some” or “a lot” about science at the event (87.3%).\n\nThrough Science Fiesta, we planned and executed an event that was perceived by all parties to be enjoyable, family-friendly, and educational. In doing so, we created enthusiasm for engaging in future science outreach efforts amongst graduate students and attendees. Thus, Science Fiesta-“like” events could achieve a sustained improvement in the culture of scientific engagement within a community, such as that of San Antonio. This is supported by evidence from public attendees, as approximately 90% expressed a likelihood of attending future science events (Figure 3). This result is even more remarkable when considering that over 70% of respondents had not previously attended science outreach events. Similarly, trainees participating in Science Fiesta were not uniformly involved in science outreach prior to the event. Before Science Fiesta, 42% of the graduate students involved in organizing the event had not previously engaged in outreach. However, following participation in this initiative, trainee enthusiasm for science outreach increased dramatically. All participating trainees reported that they were “very likely” to engage in future outreach initiatives as a result of their experience with Science Fiesta. It is our hope that the increased interest in science engagement among both the public and graduate students will seed a culture of science-awareness and appreciation in the community.\n\nImportantly, as the job market for scientists with post-secondary training continues to diversify into areas outside of academic research (e.g. public policy, scientific journalism and education) and become increasingly competitive within academic research, increased emphasis on developing programs to promote the career preparedness of trainees is required. 
We present evidence that graduate student-led outreach initiatives not only instill an awareness of the importance of public engagement, but offer an effective method for career training. Learning to communicate research to diverse audiences, as well as organizing outreach initiatives, develops career skills not taught through laboratory training. Importantly, the graduate students reported positive impacts in key professional skills: communication (Figure 4A-B), leading teams of peers (Figure 4C), as well as setting and achieving project objectives (Figure 4D). As indicated by student responses, these skills are valued by trainees as an important part of their professional growth (Figure 5A). As a whole, the trainees deemed their participation in outreach to be a beneficial learning experience that added value to their education (Figure 5B).\n\nIt is clear that planning and participating in Science Fiesta addressed professional development gaps in graduate education. Science Fiesta engaged every level of the STEM pipeline, developed career skills among graduate students, and connected local industry and academia with the larger community. However, the most promising outcome of Science Fiesta was enthusiasm for science within the community, as reflected by survey results citing increased interest in science outreach, among both the public and graduate students. This enthusiasm has the potential to lead to sustained improvement in science awareness and literacy. We propose that Science Fiesta, and events which employ its model, may address issues such as STEM workforce development, barriers to diversity, low educational attainment, and public investment in the sciences. We assert that weaving science into the local cultural fabric may establish a tradition of engagement between scientists and the community.\n\n\nData availability\n\nF1000Research: Dataset 1. 
Results of Science Fiesta attendee surveys, 10.5256/f1000research.9560.d13525813\n\nF1000Research: Dataset 2. Results of student organizer surveys, 10.5256/f1000research.9560.d13525914",
"appendix": "Author contributions\n\n\n\nAll authors contributed to, concept and design, critical review and edits, resources and data collection, and data interpretation. TJB, MM, and TME were primarily responsible for manuscript drafting and data analysis. PD also helped with data analysis. JD was primarily responsible for the organization and execution of key fiesta events.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe event described was paid for by San Antonio Science (www.sascience.org), the City of San Antonio (www.sanantonio.gov), and the University of Texas Health Science Center at San Antonio (www.uthscsa.edu). No funds were allocated to complete the surveys described in this manuscript.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nScience Fiesta was supported by the UTHSCSA Graduate School of Biomedical Sciences, the Office of the Vice-President for Research, and the Office of the President, as well as donations from number of local businesses, non-profit organizations, individuals, and the City of San Antonio.\n\n\nReferences\n\nLaursen S, Liston C, Thiry H, et al.: What good is a scientist in the classroom? Participant outcomes and program design features for a short-duration science outreach intervention in K-12 classrooms. CBE Life Sci Educ. 2007; 6(1): 49–64. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBraun M, Starkbaum J, Dabrock P: Safe and Sound? Scientists’ Understandings of Public Engagement in Emerging Biotechnologies. PLoS One. 2015; 10(12): e0145033. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBurns TW, O’Connor DJ, Stocklmayer SM: Science Communication: A Contemporary Definition. Public Underst Sci. 2003; 12(2): 183–202. 
Publisher Full Text\n\nClark G, Russell J, Enyeart P, et al.: Science Educational Outreach Programs That Benefit Students and Scientists. PLoS Biol. 2016; 14(2): e1002368. PubMed Abstract | Publisher Full Text | Free Full Text\n\nU.S. Census Bureau: Quickfacts. San Antonio, TX. Retrieved May 25, 2016. 2015. Reference Source\n\nButler RV, Stefl ME: 2014 San Antonio Healthcare & Bioscience: Economic Impact Study. Trinity University, 2014. Reference Source\n\nU.S. Department of Education: Lists of Postsecondary Institutions Enrolling Populations with Significant Percentages of Undergraduate Minority Students. Office of Postsecondary Education, 2016. Reference Source\n\nU.S. Department of Health and Human Services: Research Centers in Minority Institutions Program. National Institutes of Health, 2015. Reference Source\n\nU.S. Department of Agriculture: Geography of Poverty. Economic Research Service, 2015. Reference Source\n\nU.S. Department of Health and Human Services: Shortage Designation: Health Professional Shortage Areas & Medically Underserved Areas/Populations. Health Resources and Services Administration, 2015. Reference Source\n\nTexas Department of State Health Resources: Health Profession Shortage Area Designation. Health Resources and Services Administration, 2014. Reference Source\n\nHernández-Ehrisman L: Inventing the Fiesta City: Heritage and Carnival in San Antonio. Albuquerque: U of New Mexico, 2008. Print. Reference Source\n\nBlock T, Marinkovic M, Dowell P, et al.: Dataset 1 in: “Science Fiesta!” Combining student-led community outreach with local culture. F1000Research. 2016. Data Source\n\nBlock T, Marinkovic M, Dowell P, et al.: Dataset 2 in: “Science Fiesta!” Combining student-led community outreach with local culture. F1000Research. 2016. Data Source"
}
|
[
{
"id": "16280",
"date": "03 Oct 2016",
"name": "Greg B. Clark",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis manuscript describes a recent science outreach event which was held in San Antonio. The authors provide the results of surveys completed by the public as well as the graduate student scientists that participated in the event. Overall this is a well-written manuscript and provides an exciting model for combining outreach with a long-standing cultural event which is to my knowledge a novel approach. It deserves to be published and will hopefully inspire other groups to try this approach in their science outreach. However, I have listed revisions which the authors need to address before this manuscript is indexed.\n\nBoth surveys are dependent on self-reporting of the positive impacts of participation in Science Fiesta. However perceived outcomes do not always match actual results as discussed in Brownell and Kloser (2015). Please add references and text discussing this issue to the Discussion section.\n\nThe n-value of 12 for the surveys of the graduate students is relatively low so please also add a statement to the Discussion section acknowledging that this data is based on input from a small number of graduate students.\n\nPlease add text to the Discussion section describing previous literature discussing limits to the benefits of short-duration outreach programs (see Laursen et al., 2007). 
In general the Discussion section needs to be modified to include the perspective that although the surveys indicate multiple positive outcomes may have occurred from this outreach program due to the short duration of the program, the main accomplishment may have been as a starting point towards the goals of increasing public science understanding and enthusiasm and science communication and outreach participation by graduate student scientists.\n\nPlease indicate how many of the participants were Ph.D. students and how many were Masters’ students. Did all the students have the opportunity to communicate science or were some more involved in organizing and greeting the public? This might also be an important point to make regarding your data, if a student did not present a scientific topic then he or she did not likely improve their science communication skills.\n\nI had difficulty determining if this event was completely separate from the other Fiesta events physically and/or temporally? Please make it clear if the public participants in Science Fiesta were only participating in science outreach or if there were other non-science outreach activities occurring at the same time in close proximity.\n\nI am curious if there is additional background as to how the authors arrived at this outreach model. For example, is there literature to support the idea that pairing scientific outreach with culture events is an effective outreach model? 
Please add references for any similar models found in the science communication literature and emphasize this approach in the Discussion section.\n\nAt the beginning of the Results section, the following two statements are misleading and refer to data not shown in the Figures: (1) \"Science Fiesta was attended by approximately 3,500 people, most of whom had never previously attended science outreach events (73.7%)\"; (2) \"Even though only 26.3% of attendees said they had attended a science event before, over 90% of respondents said they were likely or very likely to attend other science events.\" Please revise the text to indicate that this data is found in the Dataset 2 link and that the 73.7% and 26.3% is for only the attendees that participated in the survey not all 3,500 people.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2319
|
https://f1000research.com/articles/5-2541/v1
|
20 Oct 16
|
{
"type": "Method Article",
"title": "Improving data transparency in clinical trials using blockchain smart contracts",
"authors": [
"Timothy Nugent",
"David Upton",
"Mihai Cimpoesu",
"David Upton",
"Mihai Cimpoesu"
],
"abstract": "The scientific credibility of findings from clinical trials can be undermined by a range of problems including missing data, endpoint switching, data dredging, and selective publication. Together, these issues have contributed to systematically distorted perceptions regarding the benefits and risks of treatments. While these issues have been well documented and widely discussed within the profession, legislative intervention has seen limited success. Recently, a method was described for using a blockchain to prove the existence of documents describing pre-specified endpoints in clinical trials. Here, we extend the idea by using smart contracts - code, and data, that resides at a specific address in a blockchain, and whose execution is cryptographically validated by the network - to demonstrate how trust in clinical trials can be enforced and data manipulation eliminated. We show that blockchain smart contracts provide a novel technological solution to the data manipulation problem, by acting as trusted administrators and providing an immutable record of trial history.",
"keywords": [
"Clinical Trials",
"Missing Data",
"Blockchain",
"Ethereum",
"Smart Contract"
],
"content": "Introduction\n\nData from clinical trials is routinely withheld from researchers, doctors, and patients, leading to a lack of trust in the process and highlighting the need for greater transparency1. While there have been efforts by the World Health Organization (WHO) requiring all trials to make their methods and results available2, a view supported by the UK Medicines and Healthcare products Regulatory Agency (MHRA), it remains to be seen how effectively such statements can be enforced. For example, while United States Food and Drug Administration (FDA) regulations require that methods and results of all clinical trials be made available, a recent study suggests that more than half of trials have failed to do so3 Clearly, legislation alone will not solve these problems. Technological solutions such as the use of blockchains for record management may therefore provide an alternative strategy with which to address these challenges.\n\nA blockchain serves as a distributed database which maintains a continuously growing list of transactional records organised into blocks, using consensus algorithms allowing untrusted parties to agree on a common state while ensuring tamper resistance. Valid transactions stored in a blockchain are digitally signed and timestamped by their sender, providing cryptographically irrefutable evidence of both the provenance and existence of a record at a given point in time. These qualities were recently leveraged by Carlisle and Irving and Holden to address endpoint switching in clinical trials [10]4. Using the public Bitcoin blockchain5 - perhaps the best known example of a blockchain - they generated a hash of a study protocol document, and used this as a public address to which they sent a transaction. This process serves as a “proof-of-existence” - verification that the document exists at the timestamp indicated by the transaction. 
Since Nakamoto’s seminal Bitcoin paper, blockchains have moved into the 2.0 era with the advent of smart contracts - code, and data, that resides at a specific address in a blockchain, and whose execution is cryptographically validated by the network. Here, we introduce a system built using smart contracts which addresses a number of the data manipulation issues common to clinical trials. We show that smart contracts can act as trusted administrators, able to improve the transparency of data reporting in clinical trials, by immutably capturing all aspects of data that might be subject to manipulation including trial registration, protocol, subject registration, and clinical measurements.\n\n\nMethods\n\nWe propose a private, permissioned Ethereum blockchain network maintained by regulators (e.g. MHRA, FDA), pharma and contract research organisations (CROs), to be used in parallel with traditional clinical data management systems (CDMS), framing the process as a transactional inter-organisational record keeping model between untrusted participants (Figure 1). Ethereum is a blockchain protocol that features smart contract functionality, and has been described as a next-generation cryptocurrency and decentralised application platform6,7. Rather than validating just the balances and transfer of digital tokens, smart contracts allow the state of arbitrary data and logic to be agreed on by the network using the same cryptographic principles. A hierarchical arrangement of two core types of smart contract is required:\n\n(i) A regulator contract, holding a data structure containing clinical trial authorisation (CTA) details. This contract is owned and updated by regulators based on off-chain licensing agreements, and includes a container used to store trial contracts.\n\n(ii) A trial contract, deployed by CROs using a function within the regulator contract, dependent on permissioning logic determined using the CTA data structure. 
Contains a data structure used to store the trial protocol, using IPFS8 or Ethereum’s native Swarm protocol where large file storage is required, with permissioning logic requiring protocol deposition and endpoint definition prior to the storage of subjects within a container.\n\nSubjects are added by CROs using a function within the trial contract, with permissioning logic restricting the calling of this function outside of the recruitment period defined in the protocol. The subject data structure contains anonymised subject information, consent documentation, and a container allowing storage of successive clinical measurements. Individual measurements are recorded, with full timestamping, in a format such as string-encoded JavaScript Object Notation (JSON), providing a flexible schema that can be adapted to any study type. Should data privacy be required, strings can be encrypted using public key encryption, with regulators holding a distinct private key for each trial contract, or using more elaborate techniques such as zero-knowledge proofs and homomorphic encryption as they become available.\n\nSource code written in JavaScript and the Solidity smart contract programming language is provided under Data and software availability, allowing contracts to be implemented, and data to be written to and read from the blockchain. The scripts perform the following steps:\n\nStart JavaScript implementations of Ethereum and IPFS nodes, each connecting to local private networks.\n\nDeploy a regulator contract. A trial proposal, including protocol documentation, is subsequently submitted to this contract by a CRO, with the documentation being stored using IPFS.\n\nIf the proposal is accepted by the regulator, a trial contract is created. This contract is owned and administered by the CRO.\n\nSubjects are appended to the trial contract up until the trial start date. 
Synthetic data is then appended for each of the subjects, up until the trial end date.\n\nFinally, a script is provided to read all the data from the blockchain, providing a summary of each trial, and details of each subject and data points that have been added, with full timestamping.\n\nThe system is composed of a hierarchical arrangement of two core types of smart contract - regulator contracts and trial contracts - with subjects and their associated clinical measurements appended to a container within the trial contract. The logic within the trial contract effectively enforces aspects of the trial protocol, ensuring that neither subjects nor measurements are appended outside of the predetermined trial timelines, while the tamper resistant characteristics of the blockchain prevent data manipulation.\n\n\nResults\n\nContracts were deployed onto a private Ethereum blockchain and used to record synthetic data representing clinical trials of Tamiflu, an influenza drug stockpiled by the British government at a cost of £424m despite 60% of trial data remaining unpublished at the time the decision was taken9, totalling thousands of individual transactions. Ethereum’s block time is significantly shorter than the Bitcoin blockchain’s, with transactions used to deploy contracts or update data taking an average of 14 seconds to be accepted by the network, although confirmation of 12 blocks is recommended to ensure finality. With the Ethereum roadmap anticipating the processing of 10,000 transactions per block by release 2.0, the network should scale well for the task in hand. At all points during the test, we were able to query the number of trials underway, the number of subjects recruited to each one, the address of the transaction sender (resolvable to a CRO) and the timestamp at which the transaction was processed. 
Due to the append-only nature of blockchains, we were also able to query the state of the data at any historic block.\n\n\nConclusions\n\nHere, we have demonstrated that smart contracts running on the Ethereum blockchain can be used to improve the transparency of data management in clinical trials. We have shown that the cryptographic guarantees that modern protocols provide can go beyond “proof-of-existence”, and be used for complex clinical trial data management that prevents all forms of manipulation due to the tamper resistant characteristics of blockchains. Systems built using smart contracts should help to increase trust in the data they hold and the credibility of trials findings, allowing medical professionals to make better-informed decisions that have the potential to reduce both patient risk and the financial strain placed on health services that data manipulation issues contribute to.\n\n\nData and software availability\n\nF1000Research: Dataset 1. Data and full source code required to repeat the experiment., 10.5256/f1000research.9756.d13864711",
"appendix": "Author contributions\n\n\n\nTN conceived the experiment and wrote the manuscript. All authors contributed to the source code.\n\n\nCompeting interests\n\n\n\nThe authors are employees of Thomson Reuters.\n\n\nGrant information\n\nThis research was funded by Thomson Reuters Global Resources.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nThe authors wish to thank Chris Bassett for providing Figure 1, Sam Chadwick and James Cross for their help at the hackathon, and Ash Rahman for his helpful comments.\n\n\nReferences\n\nGoldacre B: Bad pharma: how drug companies mislead doctors and harm patients. Macmillan. 2014. Reference Source\n\nMoorthy VS, Karam G, Vannice KS, et al.: Rationale for WHO’s new position calling for prompt reporting and public disclosure of interventional clinical trial results. PLoS Med. 2015; 12(4): e1001819. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAnderson ML, Chiswell K, Peterson ED, et al.: Compliance with results reporting at ClinicalTrials.gov. N Engl J Med. 2015; 372(11): 1031–1039. PubMed Abstract | Publisher Full Text | Free Full Text\n\nIrving G, Holden J: How blockchain-timestamped protocols could improve the trustworthiness of medical science [version 2; referees: 3 approved]. F1000Res. 2016; 5: 222. Publisher Full Text\n\nNakamoto S: Bitcoin: A peer-to-peer electronic cash system. White Paper. 2008. Reference Source\n\nButerin V: A next-generation smart contract and decentralized application platform. White Paper. 2014. Reference Source\n\nWood G: Ethereum: A secure decentralised generalised transaction ledger. Yellow Paper. 2014. Reference Source\n\nBenet J: IPFS - content addressed, versioned, P2P file system.CoRR, abs/1407.3561. 2014. Reference Source\n\nJefferson T, Jones MA, Doshi P, et al.: Neuraminidase inhibitors for preventing and treating influenza in healthy adults and children. 
Cochrane Database Syst Rev. 2012; 1: CD008965. PubMed Abstract | Publisher Full Text\n\nCarlisle BG: Proof of prespecified endpoints in medical research with the bitcoin blockchain. 2014. Reference Source\n\nNugent T, Upton D, Cimpoesu M: Dataset 1 in: Improving data transparency in clinical trials using blockchain smart contracts. F1000Research. 2016. Data Source"
}
|
[
{
"id": "17773",
"date": "29 Nov 2016",
"name": "William J. Knottenbelt",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis article proposes a smart-contract based system for improving the transparency of clinical trials. This represents an original extension to existing proposals which apply blockchain technology to prove the historical existence of certain documents and protocols. Specifically, a private blockchain network with an hierarchical arrangement of smart contracts gives the various stakeholders (e.g. regulators, clinical research organisations etc.) the ability to manage and record trial data in a natural - but also immutable and auditable - fashion.\nThe paper itself is well-written and focused and adequately referenced. I have a few minor queries which could be clarified in any final version:\n\"Untrusted parties\" - are these actually \"untrusting parties\" or \"untrusted and untrusting\" parties?\n\n\"ensuring tamper resistance\" - should this be \"ensuring a very high degree of tamper resistance\" (since 51% attacks are still possible)? Being a private, permissioned blockchain, does it need any special arrangements to ensure its integrity, or does the fact that runs on top of the Ethereum network ensure this?\n\n\"includes a container\" - can you please elaborate on what you mean by \"container\" here? It is a very overloaded term in computer science, and can refer to data structures, a lightweight virtual environment for code execution, a file store etc.\n\nOverall I enjoyed reading the paper and I think it makes a worthy contribution to the literature. 
I am happy to recommend it be indexed.",
"responses": []
},
{
"id": "20706",
"date": "15 Mar 2017",
"name": "Sönke Bartling",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis article represents the first description of a real-world implementation of blockchain, smart contracts and decentralized/floating cloud storage for medical approval studies.\nIt logically extends the application of blockchain beyond the pure ‘notarization’ functionality, as earlier described, into the realm of post-processing and data analyses. The source code is provided, so future groups can work on that.\nIt can be hoped that this example will be picked up and used as a model in real-world studies. I ask the author to consider crypto-assured study blinding in future designs. Furthermore, it would be great to compare the practical possibilities of such a study system to today's system on a step-by-step analysis.\nThe author mentions the potential of zero-knowledge proofs and homomorphic encryption. I’m looking forward to reading more from this author.",
"responses": []
},
{
"id": "20163",
"date": "16 Mar 2017",
"name": "Jesse Yli-Huumo",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThis article discusses and proposes a blockchain-based smart contract system to improve the transparency of clinical trials. The current issues in the scientific credibility of clinical trial findings include missing data, endpoint switching, data dredging, and selective publication. There have been efforts to solve these issues with legislation, but it can be seen that legislation does not solve them alone. Prior to this paper, a blockchain method was described to solve these issues. This paper extends the idea by using smart contracts with blockchain.\n\nThe paper is well-written and referenced. I have a few questions that need more clarification:\n\nRegarding FDA regulations, “more than half of trials have failed to do so”. What were the key reasons why this happened? This could be contrasted with the issues that this type of blockchain would solve.\n\nHow is “proof-of-existence” verified by the network? How is verification orchestrated in the network? What resources are required from participants to join this blockchain?\n\nUntrusted participants. Can you elaborate on what you mean by this term?\n\nEMA. What is EMA in this picture, and does it have some special role compared to other participants?\n\nCDMS. How is data synchronized and standardized between the various participants? What does it require from the participant perspective?\n\nChecklist\nThe appropriateness of the title: OK\n\nWhether the abstract provides an adequate summary of the article: OK\n\nIf there is a comprehensive explanation of study design, methods and analysis, and their suitability to the investigation: OK\n\nWhether the conclusions are balanced and justified on the basis of the results: OK\n\nWith regards to the data (if applicable), whether sufficient information has been provided for replication of the experiment, and/or if the data are in a usable format: OK\n\nOverall, I think this study is really interesting and I enjoyed reading it. Blockchain 2.0 and its applications are now emerging in various fields and it is important that research is being conducted. Looking forward to more results from this study in the future.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2541
|
https://f1000research.com/articles/5-2537/v1
|
19 Oct 16
|
{
"type": "Opinion Article",
"title": "Puzzles in modern biology. IV. Neurodegeneration, localized origin and widespread decay",
"authors": [
"Steven A. Frank"
],
"abstract": "The motor neuron disease amyotrophic lateral sclerosis (ALS) typically begins with localized muscle weakness. Progressive, widespread paralysis often follows over a few years. Does the disease begin with local changes in a small piece of neural tissue and then spread? Or does neural decay happen independently across diverse spatial locations? The distinction matters, because local initiation may arise by local changes in a tissue microenvironment, by somatic mutation, or by various epigenetic or regulatory fluctuations in a few cells. A local trigger must be coupled with a mechanism for spread. By contrast, independent decay across spatial locations cannot begin by a local change, but must depend on some global predisposition or spatially distributed change that leads to approximately synchronous decay. This article outlines the conceptual frame by which one contrasts local triggers and spread versus parallel spatially distributed decay. Various neurodegenerative diseases differ in their mechanistic details, but all can usefully be understood as falling along a continuum of interacting local and global processes. Cancer provides an example of disease progression by local triggers and spatial spread, setting a conceptual basis for clarifying puzzles in neurodegeneration. Heart disease also has crucial interactions between global processes, such as circulating lipid levels, and local processes in the development of atherosclerotic plaques. The distinction between local and global processes helps to understand these various age-related diseases.",
"keywords": [
"cancer",
"neurodegeneration",
"heart disease",
"genetics",
"epidemiology"
],
"content": "Introduction\n\nInitial symptoms of motor neuron disease present as localized muscle weakness. Motor loss often progresses to widespread paralysis over a few years1.\n\nThe onset of this disease poses a puzzle. Does the disease arise in a localized focus of neural tissue and then spread from that focal lesion? Or does the decay arise independently in diverse spatial locations?\n\nSuppose that disease begins from a localized origin2–6. Then onset may start by local changes in a tissue microenvironment, by somatic mutation, or by various epigenetic or regulatory fluctuations in a few cells. Those local processes may transform a small piece of tissue into a focal lesion that can spread disease to other cells. The widespread decay that ultimately follows happens by local transformation and then spread.\n\nBy contrast, suppose that widespread decay originates independently in each small site across the broad spatial domain of diseased tissue. Then localized genetic, epigenetic and regulatory changes in a single site cannot be the origin of the disease. Instead, spatially separated positions must progress independently.\n\n\nClues from sporadic versus inherited disease\n\nConsider the pattern of onset and spread in the most common motor neuron disease, amyotrophic lateral sclerosis (ALS).\n\nThe majority of cases occur sporadically7. Sporadic means that there is no direct evidence of predisposing inherited mutations. These apparently random cases typically occur after age 40, with incidence increasing up to age 75 and then declining at later ages8.\n\nInherited mutations predispose individuals to ALS, causing familial occurrence1. For example, individuals carrying an inherited mutation in SOD1 or C9orf72 often have greatly increased risk of disease.\n\nThe age of onset in genetically predisposed cases typically occurs several years earlier than sporadic disease9,10. 
Genetically predisposed individuals also have much higher incidence than those without genetic predisposition.\n\nThe puzzle is whether disease begins with a local change that triggers global spread or with dispersed decay over a broad spatial range. The observed shift in age and incidence associated with inherited mutations provides clues.\n\nInterpreting the clues from the age-incidence shift between familial and sporadic cases requires attention to two aspects. First, the puzzle concerns the dynamics of disease progression. To understand dynamics, we must consider the time-related aspects of the disease. Second, we must frame the clues in relation to the alternatives of localized versus dispersed origin.\n\n\nTime from onset to full disease\n\nIndividuals with certain inherited mutations have a high probability of developing ALS. However, the age at which symptoms first appear varies widely, even for carriers of the same mutation9. In sporadic cases, the age of first onset also varies widely.\n\nOnce initial symptoms arise, most individuals progress to final widespread paralysis within a few years. What could explain variable age for the first appearance of localized symptoms and the subsequent relatively rapid development of widely dispersed disease?\n\n\nLocalized versus dispersed origin\n\nI mentioned two possible solutions. First, disease may originate locally in a small piece of tissue and then spread from that origin. Second, degeneration may happen nearly simultaneously and independently across diverse spatial locations.\n\nThe first solution of local origin and spread fits nicely with the observed pattern of variable age of onset and rapid subsequent progression.\n\nHowever, the second solution of parallel distributed decay could be true. For example, each individual might be prone to a particular timing of decay across the broad neural landscape. 
Approximate synchrony may arise because of the common genetic background or environmental exposures shared by all locations.\n\nFor example, a global change in a widely circulating factor may initiate simultaneous decay across spatial locations. That global process shifts the locus of causality to the origin of the widely circulating trigger and to the susceptibility of the distributed sites across the neural landscape.\n\n\nTrigger versus spread\n\nInherited cases have an earlier age of onset than sporadic cases. That fact refines the alternative solutions of local versus dispersed origin8–10.\n\nIn the local origin solution, a shared mutation across all locations may increase the rate at which the first localized origin arises. An origin may require several local changes before it can act as a trigger to initiate spatial spread. If all locations share a mutation that moves progression ahead, then the first trigger will happen at an earlier age.\n\nAlternatively, the shared mutation across all locations may reduce the threshold for spread. A lower threshold may induce spread in response to a weaker local trigger.\n\n\nSeed and soil\n\nA reduced threshold for spread suggests a variant of the dispersed origin solution. A reduced global threshold expresses distributed decay, but one that still requires an additional local origin trigger.\n\nThe interaction between local origin and dispersed decay echoes an old idea from cancer research about seed and soil11. 
In that theory, the metastatic spread of cancer requires both a transformed cell that can act as a seed and a transformed tissue that can act as a soil in which the seed may grow.\n\n\nCandidate mechanisms\n\nAlternative explanations focus attention on different mechanisms of disease.\n\nLocal triggers may arise from various processes: localized environmental insults, tissue microenvironment fluctuations such as infection or inflammation, local vascular changes, local hypoxia, and local changes in other kinds of environmental factors. Changes within one or few cells also initiate local changes: somatic mutation, epigenetic changes, fluctuations in regulatory state, phenotypic responses to altered environments, and so on.\n\nSpread may follow from intercellular transfer of RNA or cytoplasmic components, transmissible misfolding of proteins, diffusible signals, attraction of inflammatory responses, and so on.\n\nDispersed origin may arise from wider environmental changes, including extrinsic insults, inflammation, broad vascular changes, and so on.\n\nDispersed origin seems less likely to follow from localized somatic mutation, random epigenetic changes in cells, or random fluctuations in cellular regulatory states. This limitation and the absence of important mechanisms of spread provide the clearest distinction between local versus dispersed origin.\n\nMuch research focuses on these kinds of alternative mechanisms. However, mechanistic studies often do not explicitly frame analysis of cause in terms of the variety of potential mechanisms for local triggers and spread versus the variety of potential mechanisms for dispersed origin. My only purpose here is to clarify the relation between different mechanisms and the broader framework in which we must understand the puzzles of disease onset and progression.\n\nIn the study of mechanism, one must also distinguish rate of onset versus physiological function12. 
An inherited mutation may increase the rate at which disease-causing changes arise in physiological function, but the inherited mutation itself may have no direct physiological role in disease.\n\nFor example, inherited defects in modulators of protein folding or in clearance of misfolded proteins may raise the rate at which misfolded proteins act as local triggers of global spread. Similarly, an inherited increase in somatic mutation may raise the rate at which local triggers arise.\n\nAlternatively, an inherited mutation may directly initiate a disease-causing change in a physiological function. For example, a mutation in a protein coding gene may increase the tendency for misfolding of that particular protein. The increased tendency for misfolding may act as a local trigger or may lower the global threshold in response to external triggers.\n\n\nNeurodegenerative diseases\n\nI have used ALS to illustrate the puzzle of local versus dispersed origin of disease. Similar puzzles arise in Parkinson’s disease, Alzheimer’s disease and other neurodegenerative diseases.\n\nWithin each disease, there will likely be different mechanisms of origin and timing of spread. Between diseases, there will also likely be different aspects of origin and spread. The similarities and differences help to understand broader aspects of disease.\n\n\nCancer\n\nAt first glance, cancer and neurodegenerative disease seem very different. Cancer arises at a localized site. One thinks about the origin of cancer in terms of the local changes in a few cells and the surrounding tissue microenvironment. Global factors such as immune system status or hormone levels may play a role, but they do so to the extent that they influence local changes at the site of cancer origin.\n\nProgression of cancer depends on the factors that promote spread. The interactions between local triggers and global spread dominate all aspects of cancer research. 
The study of prevention, early detection, treatment, and basic understanding depends on the local-global interaction.\n\nBy contrast, most studies of neurodegeneration are vague about the origin and spread of disease. If a neurodegenerative disease does arise locally and then spread, then such a disease shares with cancer its general causal structure and dynamics.\n\nRecently, several studies of neurodegeneration have focused on the spread of misfolded proteins in a prion-like manner5,6. However, those studies remain vague about the variety of mechanisms that influence local triggers and about the broader conceptual framing of interactions between local and global processes.\n\nCertainly, different neurodegenerative syndromes vary in their causal structure, and various aspects of cancer and neurodegeneration differ in significant ways. It would be useful to understand explicitly the broad conceptual similarities and differences between the diseases. It would also be useful to understand the broader ways in which we can analyze the dynamics of interactions between local and global processes.\n\n\nHeart disease\n\nHeart disease typically arises from an interaction of local and global processes. Initially, global factors such as lipid levels set the preconditions for localized plaque formation in the inner lining of artery walls.\n\nAlthough widespread conditions for plaque formation may occur, severe disease often requires a series of local changes at individual plaque sites13. For example, the early stages of local site progression typically include recruitment of leukocytes that mature into macrophages, which take up lipid.\n\nChanges in the local tissue microenvironment associate with proliferation of nearby muscle cells and tumor-like expansion and physiological transformation. An advanced plaque may rupture, attracting platelets and wound healing processes that make a clot. 
The clot may block local blood flow or break off to block flow at a distant site.\n\nOnce again, a strong interaction between local and global processes drives disease progression. The particular timing of the local and global factors differs between heart disease, cancer and neurodegeneration. However, these age-related diseases share a common frame of interacting local and global processes that cause disease onset14,15.\n\n\nConclusions\n\nWhy does emphasis on interacting local and global processes matter? Consider the basic understanding of disease onset in neurodegeneration.\n\nIf a local trigger starts the process, then a localized microenvironmental change or a local somatic mutation can be the event that initiates disease2–4. By contrast, if a global change initiates disease, then we must look for a factor that can circulate or diffuse widely and that can alter conditions over dispersed spatial sites.\n\nWith either initial local or global changes to start disease, progression typically depends on further interactions between subsequent local and global processes. For example, a high global level of certain lipids may be an important trigger of heart disease. Subsequent progression depends on local changes at plaque sites.\n\nMuch biological research hunts for the causes of disease. With better basic understanding of cause, one may improve prevention, detection and treatment. However, the notion of cause is always slippery and requires careful thought to frame properly.",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nNational Science Foundation grant DEB–1251035 supports my research.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nBäumer D, Talbot K, Turner MR: Advances in motor neurone disease. J R Soc Med. 2014; 107(1): 14–21. PubMed Abstract | Publisher Full Text\n\nArmon C: Acquired nucleic acid changes may trigger sporadic amyotrophic lateral sclerosis. Muscle Nerve. 2005; 32(3): 373–377. PubMed Abstract | Publisher Full Text\n\nArmon C: Accrued somatic mutations (nucleic acid changes) trigger ALS: 2005–2016 update. Muscle Nerve. 2016; 53(6): 842–849. PubMed Abstract | Publisher Full Text\n\nFrank SA: Evolution in health and medicine Sackler colloquium: Somatic evolutionary genomics: mutations during development cause highly variable genetic mosaicism with risk of cancer and neurodegeneration. Proc Natl Acad Sci U S A. 2010; 107(suppl 1): 1725–1730. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPrusiner SB: Cell biology. A unifying role for prions in neurodegenerative diseases. Science. 2012; 336(6088): 1511–1513. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJucker M, Walker LC: Self-propagation of pathogenic protein aggregates in neurodegenerative diseases. Nature. 2013; 501(7465): 45–51. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTalbott EO, Malek AM, Lacomis D: The epidemiology of amyotrophic lateral sclerosis. In F. Boller M. J. Aminoff and D. F. Swaab, editors, Neuroepidemiology. volume 138 of Handbook of Clinical Neurology, Elsevier. 2016; 225–238.Publisher Full Text\n\nAlonso A, Logroscino G, Jick SS, et al.: Incidence and lifetime risk of motor neuron disease in the United Kingdom: a population-based study. Eur J Neurol. 2009; 16(6): 745–751. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nCudkowicz ME, McKenna-Yasek D, Sapp PE, et al.: Epidemiology of mutations in superoxide dismutase in amyotrophic lateral sclerosis. Ann Neurol. 1997; 41(2): 210–221. PubMed Abstract | Publisher Full Text\n\nIngre C, Roos PM, Piehl F, et al.: Risk factors for amyotrophic lateral sclerosis. Clin Epidemiol. 2015; 7: 181–93. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFidler IJ: The pathogenesis of cancer metastasis: the 'seed and soil' hypothesis revisited. Nat Rev Cancer. 2003; 3(6): 453–458. PubMed Abstract | Publisher Full Text\n\nFrank SA: Puzzles in modern biology. III. Two kinds of causality in age-related disease [version 1; referees: awaiting peer review]. F1000Research. 2016; 5: 2533. Publisher Full Text\n\nLibby P, Ridker PM, Hansson GK: Progress and challenges in translating the biology of atherosclerosis. Nature. 2011; 473(7347): 317–325. PubMed Abstract | Publisher Full Text\n\nDoherty TM, Asotra K, Fitzpatrick LA, et al.: Calcification in atherosclerosis: bone biology and chronic inflammation at the arterial crossroads. Proc Natl Acad Sci U S A. 2013; 100(20): 11201–11206. PubMed Abstract | Publisher Full Text | Free Full Text\n\nShah NR, Mahmoudi M: The role of DNA damage and repair in atherosclerosis: A review. J Mol Cell Cardiol. 2015; 86: 147–157. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "17107",
"date": "08 Nov 2016",
"name": "Christos Proukakis",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nIn this topical and well argued paper, Steve Frank proposes the framework within which one should consider causes of disparate disease processes as either primarily local, or primarily generalized. The focus is mainly on neurodegeneration, and specifically ALS. This condition does appear to start in one location, at least clinically, and spread in an apparently contiguous manner in most cases. A helpful contrast is drawn between the possibilities of a local event which may include a somatic mutation (on which he has eloquently written in the past) [1], and spread, which is an idea that has certainly spread over recent years. Importantly, the agents of spread may not be proteins in all cases, and this is stated here, as RNA and others are also mentioned.\n\nOne point that deserves some discussion is the assertion that somatic mutations would generally be limited to causing local onset. This is true if one accepts that a single cell, or handful of neighbouring cells, could be the trigger. A more widespread onset, but still due to somatic mutations, could be caused by widespread dispersion of early somatic mutations. 
There is now clear evidence of extensive mixing of cells in early development, with work particularly from the Chris Walsh lab showing how a somatic mutation in brain could also be present throughout a broad region, and perhaps even further afield at a lower level.[2] Apparently synchronous or near-synchronous onset in disparate locations could be due to such a phenomenon, with multifocal onset reported in Parkinson’s disease.[3] This would lead to spatially separated positions progressing independently as stated, but the origin could still be a somatic mutation, which happened early enough in development to allow its progeny to be spatially separated.\n\nIt is implicit that neurodegenerative diseases affect specific brain cell types, which differ in each one, with selective vulnerability being a key determinant of pathology, along with initiation / spread process. Most readers will be aware of this, but this could be explicitly clarified e.g. to page 2 “the broad neural landscape”.\n\n“Trigger versus spread”. I note the suggestion that an inherited mutation, present therefore in all cells, may lead to earlier onset by allowing spread to start earlier. This is in line with a staging pathology scheme proposed in a genetic subset of ALS, which claims to describe the sequence of spread.[4] I do not conceptually understand why one has to invoke spread in situations where every cell carries a mutation which can act in a direct or indirect pathogenic way locally. If the mutation has a pathogenic effect, then surely no spread is required when it is present in all cells? 
I accept that this does not exclude spread, which could still underlie a temporal sequence of events, but differential vulnerability could also determine the sequence of events, particularly if it is stereotyped.\n\nThe author finally proceeds to compare this potential dichotomy with cancer and cardiovascular diseases, as situations where, after local initiation, pathology can spread through relevant factors. While it may seem obvious, it should be stated that in the case of cancer the agent of spread is the cell, which clearly is not the case in neurodegenerative disorders. Intriguingly perhaps one could actually invoke the cell as the agent of spread of somatic mutations through migration at the early neurodevelopmental stage, resulting in the spatially disparate somatic mutation situation I outlined above. Furthermore, the medium through which distant spread occurs in these disease categories, circulating blood, does not appear relevant to neurodegeneration, which (if spread is involved) would have to spread through physically connected neurons. Whether this occurs through trans-synaptic spread, secretion (which may or may not involve exosomes), or tunneling nanotubes, is an ongoing debate, albeit beyond the scope of this valuable opinion piece.",
"responses": [
{
"c_id": "2345",
"date": "06 Dec 2016",
"name": "Steven Frank",
"role": "Author Response F1000Research Advisory Board Member",
"response": "Christos Proukakis has provided an excellent commentary on the origins and spread of neurodegenerative disease, placing my article in that broader context. F1000Research includes referee reports as part of the final publication, and I am very pleased to have this report included. I add a few replies here, which will also be included as part of the final publication. I list referee comments in italics. One point that deserves some discussion is the assertion that somatic mutations would generally be limited to causing local onset. This is true if one accepts that a single cell, or handful of neighbouring cells, could be the trigger. A more widespread onset, but still due to somatic mutations, could be caused by widespread dispersion of early somatic mutations... I agree. However, a key issue concerns the more or less synchronous decay across spatial locations. Dispersed mutations by themselves might lead to parallel independent decay in different locations, but the near synchrony of the decay remains a puzzle. For example, in cancers associated with an inherited mutation carried by all cells, aggressive tumors typically do not appear simultaneously in diverse spatial locations. Instead, different locations progress at different rates, leading to different foci that typically progress to aggressive disease at different times. Of course, there may be cases of approximate synchrony, but I had in mind the likely situation in which particular mutations predispose to disease but are not by themselves sufficient. I did discuss how dispersed mutations may play a key role as spatially distributed altered \"soil\" that would enhance the spread of a local trigger. It is implicit that neurodegenerative diseases affect specific brain cell types, which differ in each one, with selective vulnerability being a key determinant of pathology, along with initiation / spread process. Most readers will be aware of this, but this could be explicitly clarified e.g. 
to page 2 “the broad neural landscape”. The role of different cell types is likely to be important. I suspect that following up on this point by careful reading of the current literature and further thought would lead to useful hypotheses and perhaps some insight. However, I do not have a properly detailed response at present, and so I will simply agree that this is a topic worth pursuing. “Trigger versus spread”. I note the suggestion that an inherited mutation, present therefore in all cells, may lead to earlier onset by allowing spread to start earlier. This is in line with a staging pathology scheme proposed in a genetic subset of ALS, which claims to describe the sequence of spread.[4] I do not conceptually understand why one has to invoke spread in situations where every cell carries a mutation which can act in a direct or indirect pathogenic way locally. If the mutation has a pathogenic effect, then surely no spread is required when it is present in all cells? I accept that this does not exclude spread, which could still underlie a temporal sequence of events, but differential vulnerability could also determine the sequence of events, particularly if it is stereotyped. This comment includes the answer that I favor and also emphasized in my article. With regard to triggers, the answer is given by the referee as *I accept that this does not exclude spread, which could still underlie a temporal sequence of events,* emphasizing the point I made above that a mutation likely predisposes but by itself does not change a cell to the diseased state. With regard to spread, *differential vulnerability could also determine the sequence of events,* that is the point of my emphasis in the text on the seed and soil hypothesis, in which differential vulnerability relates to an altered, receptive soil. 
The author finally proceeds to compare this potential dichotomy with cancer and cardiovascular diseases, as situations where, after local initiation, pathology can spread through relevant factors. While it may seem obvious, it should be stated that in the case of cancer the agent of spread is the cell, which clearly is not the case in neurodegenerative disorders. The agent of spread in cancer is perhaps a bit more complex than stated here. Many mechanistic aspects of cancer transcend single cells. For example, secretion of digestive factors that help to penetrate tissue barriers may often be crucial. Similarly, tumors may often secrete a variety of immunomodulatory factors that act both locally and globally, and the various mechanisms that stimulate angiogenesis can be crucial. In cancer, cells may be the key factor for triggering distant metastatic spread, but a variety of extracellular processes may be important in all phases of carcinogenesis. Furthermore, the medium through which distant spread occurs in these disease categories, circulating blood, does not appear relevant to neurodegeneration, which (if spread is involved) would have to spread through physically connected neurons. Whether this occurs through trans-synaptic spread, secretion (which may or may not involve exosomes), or tunneling nanotubes, is an ongoing debate, albeit beyond the scope of this valuable opinion piece. With regard to neurodegeneration, I think the issue may again be a bit more subtle. Diet, overall health, immune status, and many other global factors will influence the variety of potential ways in which circulating blood may carry the agents that change spatially distributed aspects of neural tissue. Those changes may act in two ways. First, such changes may interact with a local somatic mutation to transform one or a few cells, which can then act as a trigger. 
Second, such changes may alter many spatially distributed sites in parallel, fertilizing the soil to be more receptive to triggering seeds when they arise."
}
]
},
{
"id": "17108",
"date": "22 Nov 2016",
"name": "Anya Plutynski",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nI had only a few minor comments and a suggested reference:\n\nFirst, there was some redundancy in the paper: local v. spatially distributed origins as possibilities was mentioned at least twice.\n\nSecond, I really liked the following point, and wondered if this might be developed further, “mechanistic studies often do not explicitly frame analysis of cause in terms of the variety of potential mechanisms for local triggers and spread versus the variety of potential mechanisms for dispersed origin. My only purpose here is to clarify the relation between different mechanisms and the broader framework in which we must understand the puzzles of disease onset and progression.” The idea here seems to be that mechanistic approaches must always be supplemented or contextualized in specific ways, if our aim is to differentiate between alternative hypotheses about origins of disease? Perhaps a firmer and more general statement to this effect might be worth making. As a general point, this is worth emphasizing!\n\nLast: are local v. spatially distributed initiations of the disease mutually exclusive options for a given disease, or could one be in play in some cases, and another be in play in other cases? To explain, perhaps in some cases a local lesion may advance so quickly that it clearly is a single origin story. In other cases (perhaps for other cancer types or subtypes), typically, many independent populations of proto-cancer cells may be arising simultaneously. 
For cancers of epithelial origin, the latter seems likely; Martincorena et al., (2015) provide independent evidence in favor of the notion that mutations are always accumulating in healthy tissue, most epithelial cells slough off and do not become cancers, but some acquire sufficient mutations to advance to disease.",
"responses": [
{
"c_id": "2344",
"date": "06 Dec 2016",
"name": "Steven Frank",
"role": "Author Response F1000Research Advisory Board Member",
"response": "I appreciate Anya Plutynski's thoughtful and encouraging comments. On F1000Research, the referee reports and author comments are part of the final publication, so I will respond here. Plutynski's second point mentions: The idea here seems to be that mechanistic approaches must always be supplemented or contextualized in specific ways, if our aim is to differentiate between alternative hypotheses about origins of disease? Perhaps a firmer and more general statement to this effect might be worth making. As a general point, this is worth emphasizing! I think this is likely to be true. However, to address the \"always\" or even to say \"usually,\" I would first have to give a lot of thought to other diseases and, for each, the variety of potential mechanisms. Perhaps this would be a good long-term project to develop, leading to a variety of insights about how to study the possible alternative causes of disease. For now, I will only say \"thank you.\" Plutynski's third point mentions: Last: are local v. spatially distributed initiations of the disease mutually exclusive options for a given disease, or could one be in play in some cases, and another be in play in other cases? I agree. Once one is thinking along these lines, my main point has been made successfully. As emphasized by the referee, what appears to be the \"same\" disease endpoint can arise from a variety of mechanistic pathways. Although the pathways may vary, it seems likely that distinguishing local and spatial aspects along particular trajectories will help to parse how the variety of causes interact, and the ways in which distinct pathways differ."
}
]
}
] | 1
|
https://f1000research.com/articles/5-2537
|
https://f1000research.com/articles/4-407/v1
|
28 Jul 15
|
{
"type": "Research Article",
"title": "Adult attention-deficit/hyperactivity disorder: Associations between subtype and lifetime substance use – a clinical study.",
"authors": [
"Michael Liebrenz",
"Alex Gamma",
"Iliyan Ivanov",
"Anna Buadze",
"Dominique Eich",
"Alex Gamma",
"Iliyan Ivanov",
"Anna Buadze",
"Dominique Eich"
],
"abstract": "ADHD is the one of the most prevalent childhood disorders and has been associated with impairments persisting into adulthood. Specifically, childhood ADHD is an independent clinical risk factor for the development of later substance use disorders (SUD). Moreover, adults who meet diagnostic criteria for ADHD have shown high rates of comorbid SUDs. Few studies, however, have reported on the relationship between ADHD subtypes and SUD in adult samples. The purpose of this study was to characterize a clinical sample of adults with ADHD and to identify possible associations between ADHD subtypes, lifetime substance use, and if ADHD subtypes may be preferentially associated with specific substances of abuse. We recruited 413 adult ADHD patients, performed an evaluation of their ADHD and conducted an interview on their use of psychotropic substances. Complete data was obtained for 349 patients. Lifetime substance abuse or dependence was 26% and occasional use was 57% in this sample. The inattentive subtype was significantly less likely to abuse or be dependent on cocaine than the combined subtype. Our findings underscore the high rate of comorbidity between substance use and ADHD in adults. The more frequent abuse/dependence of cocaine by adult patients with hyperactive-impulsive symptoms should be kept in mind when treating this patient group.",
"keywords": [
"Attention deficit hyperactivity disorder (ADHD)",
"subtype",
"presentation",
"substance-use disorder",
"cocaine",
"dual-diagnosis"
],
"content": "Introduction\n\nAttention-deficit/hyperactivity disorder (ADHD) is a complex neuropsychiatric syndrome that is common not only in childhood and adolescence, but in adulthood1–4. It is characterized by symptoms of inattention (distractibility), hyperactivity, and impulsivity, which all contribute to significant psychosocial impairment in affected individuals of all age groups5–7. In order to make a diagnosis of ADHD, the two diagnostic manuals, the American Psychiatric Association’s (APA) Diagnostic and Statistical Manual of Mental Disorders Third Edition Revision (DSM-III-R) and the World Health Organisation’s (WHO) International Statistical Classification of Diseases and Related Health Conditions (ICD-10), require the presence of both inattentive and hyperactive-impulsive symptoms8,9.\n\nIn 1994, the introduction of the DSM Fourth Edition (IV)10 marked a diversion from this route by allowing for a diagnosis of ADHD when either hyperactive-impulsive or inattentive behaviors were present, and thereby defined three subtypes of ADHD: a) a predominantly inattentive type, b) a predominantly hyperactive-impulsive type, and c) a combined type11. The following years saw a significant amount of research in which the importance of these subtypes in a clinical and epidemiological context was debated. For example, the clinical response to pharmacologic treatment by subtype or symptom clusters was investigated12–14, as were subtype differences in psychosocial functioning15,16, and the rate of comorbidity17 in different age groups.\n\nIn both pediatric and adult populations, ADHD is significantly comorbid with a wide range of other DSM-IV disorders, irrespective of subtype. The most prevalent of these are mood, anxiety, impulse control, and substance use disorders (SUD)18–20. 
Data that stem primarily from clinical and population-based studies suggest that up to 89% of all adults with ADHD suffer from a psychiatric comorbidity during their lifetime16, and that the comorbidity of SUD in adolescents and adults with ADHD might range from 16%–79%16,21–23. The heterogeneity of these data is also reflected in research focused on the association between specific ADHD subtypes and SUDs. While some authors find no evidence of such an association, others have concluded that the hyperactive-impulsive subgroup is more likely to suffer from a comorbid SUD than is the inattentive subgroup15,22,24.\n\nTo our knowledge, few studies have reported on the relationship between ADHD subtypes and SUD in adult samples. Furthermore, the limited data available stems primarily from America, while the few European studies focused on several comorbid factors, not solely on SUD16,25. The purpose of this study was therefore to characterize a clinical sample of adults with ADHD and to identify possible associations between ADHD subtypes, lifetime substance use, and preferences for specific substances.\n\n\nMethods\n\nOut of all consecutive referrals to the ADHD consultation service of the Zurich University Psychiatric Hospital26 between 2002 and 2011, we included adults with a confirmed diagnosis of ADHD and with available information on substance use (N=413). There were no other inclusion or exclusion criteria.\n\nThe diagnosis of ADHD was based on the Utah criteria for diagnostic assessment with the Wender Reimherr Interview (WRI)27, and translated into German and validated for the German language by Rösler et al. and Retz-Junginger et al.28–30. According to DSM-IV Text Revision (TR)31 specifications, three ADHD subtypes were identified: a predominantly inattentive subtype, a predominantly hyperactive-impulsive subtype, and a combined subtype. 
Subtypes were derived from the Attention Deficit-/Hyperactivity Self-Report Scale (ADHS-SB) questionnaire (see Supplementary material) by first summing the respective items (items 1–9 for \"inattentive\", items 10–18 for \"hyperactive-impulsive\"). Then, a cut-off value of 6 had to be exceeded in order for the respective subtype to be assigned. Subjects exceeding the threshold for both the inattentive and hyperactive-impulsive type were assigned to the combined subtype. As reported elsewhere32, patients also received a number of questionnaires, including German versions of the Symptom Check List 90-Revised (SCL-90-R)33, the Wender Utah Rating Scale (WURS-k)28, and the ADHS-SB34. If patients did not answer all questions on the questionnaire items, they were approached again and asked to supply the missing information. When patients had difficulty answering a question, their therapist helped to clarify it and enable them to provide an answer. In addition, third-party information was sought from family members, spouses, school reports, and childhood medical reports to support the diagnostic procedure.\n\nAssessment of substance use was based on ICD-10 criteria (F10-F19)9. Subjects reported on the lifetime use of alcohol, opioids, cannabinoids, sedatives, cocaine, (non-cocaine) stimulants, hallucinogens, and tobacco. Substance use was differentiated into abuse/dependence and sub-threshold, i.e. non-dependent and non-abusive, but more-than-singular, use.\n\nFisher's exact tests were used to compare frequency of substance abuse/dependence and comorbidity rates between ADHD subtypes, since small cell sizes were frequent. Kruskal-Wallis tests were used to compare questionnaire scores. Bonferroni correction was applied to all substance-related significance tests. A total of 26 tests were conducted, resulting in a Bonferroni-corrected significance threshold of p ≤.002. P-values surviving this threshold are printed in boldface in the results section. 
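The subtype-assignment rule and the Bonferroni-corrected threshold described above can be sketched in Python; this is a minimal illustration of the stated rules, and the function name and return labels are ours, not taken from the authors' Stata code:

```python
# Sketch of the ADHS-SB subtype rule described in the Methods (illustrative
# names; not the authors' code). Items 1-9 index inattention, items 10-18
# hyperactivity-impulsivity; a cut-off of 6 must be *exceeded* per dimension.

def adhd_subtype(items, cutoff=6):
    """items: 18 ADHS-SB item scores (each 0-3). Returns the DSM-IV-TR subtype."""
    inattentive = sum(items[:9]) > cutoff
    hyperactive = sum(items[9:]) > cutoff
    if inattentive and hyperactive:
        return "combined"
    if inattentive:
        return "inattentive"
    if hyperactive:
        return "hyperactive-impulsive"
    return "subthreshold"

# Bonferroni-corrected threshold for the 26 substance-related tests:
alpha_corrected = 0.05 / 26  # ~0.0019, reported in the paper as p <= .002
```

For example, a patient scoring 1 on every inattention item (sum 9, exceeding the cut-off) and 0 on all hyperactive-impulsive items would be classed as the inattentive subtype.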
The study has low power: assuming a power of 80%, the minimal detectable difference in substance use frequency among subtypes is between 25–36%, while the power to detect a difference of 10% ranges from 28–48%. Analyses were carried out in Stata 11.2 and Stata 13.135.\n\nAuthorization by the local ethics committee (Cantonal Ethics Committee Zurich; Kantonale Ethik Kommission Zürich (KEK)) was obtained before the study was conducted (04/2005). All participants received a written description of the study procedure and signed a consent form.\n\n\nResults\n\nA total of 64 subjects had no questionnaire data whatsoever and were dropped from further analysis. These \"drop-outs\" were compared with the remaining 349 subjects and found not to differ in age and gender distribution. Drop-outs more often had affective disorders (24.9% vs. 12.7%, p=.05). They tended to have less overall substance abuse or dependence (14.1% vs. 27.8%, p=.02). Total substance use excluding abuse and dependence was clearly lower in drop-outs (23.4% vs. 63.6%, p<.001).\n\nThe average age of the included sample was 38.7 years (SD = 11.28), with a gender distribution that was 56% male and 44% female. Other than substance use, the most common comorbidities included affective disorders (25%); neurotic, stress-related and somatoform disorders (15%); and personality disorders (6%).\n\nIn the sample with questionnaire data (N=332–345, depending on questionnaire), participants reached average test scores of 35.4 (SD=14.51) on the WURS-k, 28.5 (SD=9.77) on the ADHS-SB and 17.6 (SD=7.87) on the newly developed SCL-ADHD scale18. 
A total of 233 subjects were identified as belonging to the combined subtype of ADHD (test scores: ADHS-SB 32.9 [SD=7.69], WURS-k 37.5 [SD=13.91], SCL-ADHD 19.4 [SD=7.62]), 70 belonged to the predominantly inattentive type (test scores: ADHS-SB 20.7 [SD=5.57], WURS-k 30.1 [SD=13.59], SCL-ADHD 14.2 [SD=6.52]), and 24 belonged to the predominantly hyperactive-impulsive type (test scores: ADHS-SB 23.9 [SD=6.68], WURS-k 40.8 [SD=16.16], SCL-ADHD 16.4 [SD=7.28]). WURS-k (p<.04) and ADHS-SB (p<.0001) scores were different between inattentive and hyperactive-impulsive subtypes, while all scores were different at p<.004 for the comparison of inattentive vs. combined subtype.\n\nAccording to ICD-10 F1x, 26% of all participants at the time of the study, regardless of subtype, fulfilled the criteria for abuse of or dependence on psychotropic substances other than nicotine. The most frequently misused substances consisted of alcohol (8.9%), opioids (6.0%), cannabinoids (8.3%), and cocaine (8.0%). Nicotine abuse/dependence was found in 20.3% of participants.\n\nSubtype-specific analyses revealed that 36.9% of the combined subgroup, 44.3% of the predominantly inattentive subgroup, and 41.7% of the hyperactive-impulsive subgroup currently suffered from a comorbid psychiatric disorder. Additionally, 31.3% of the combined-type individuals, 15.7% of the predominantly inattentive subjects and 41.7% of hyperactive-impulsive patients were diagnosed with abuse or dependence on a psychotropic substance other than nicotine. Table 1 summarizes the results.\n\nb Fisher's exact test\n\ninatt-hyp = inattentive vs. hyperactive-impulsive subtype, inatt-combined = inattentive vs. combined subtype\n\n\nDiscussion\n\nThe present study investigated associations between the combined and predominantly inattentive subtypes of adults with ADHD and lifetime substance use, within a clinical sample. 
The most clinically significant result is the finding that the inattentive subtype showed a statistically significantly smaller rate of cocaine abuse/dependence compared to the combined subtype.\n\nThese results are in line with earlier work by Sobanski et al., who had characterized a sample of 118 adults with ADHD and found that the combined type suffered significantly more from lifetime SUDs (48.4%) than did patients with a predominantly inattentive type (23.3%)16. On the other hand, our findings contrast with results published by Clure et al., who reported on 43 patients with adult ADHD but found no differences in ADHD subtypes when divided by substance of choice (cocaine, alcohol, and multiple substances)36.\n\nThe most frequently consumed substance among all study participants was nicotine. This finding is in accord with results from prior studies37–39. With regard to subtype-specific differences, some authors have reported that, at least in young adolescents, the inattentive subtype of ADHD is more likely to correlate with higher levels of nicotine use than does the combined subtype40. It was suggested that nicotine might primarily improve attention but have less influence on hyperactive-impulsive behavior, which might explain this finding41,42. Other researchers, however, suggest that hyperactive-impulsive symptoms present a greater risk for frequent nicotine use than do inattentive symptoms at a later age, and argue that the relationship between ADHD symptoms and nicotine use might change between adolescence and adulthood43.\n\nOur hypothesis that findings would show continuing preferences for the use of specific substances in adulthood according to subtype (beyond cocaine), remains open due to lack of statistical significance. 
In line with earlier reports of (non-cocaine) stimulants being used as self-medication by patients with ADHD, we had also expected to find a higher rate of non-prescribed lifetime stimulant abuse/dependence in the hyperactive-impulsive type, but not in the inattentive one44,45. In this sample, however, we found no evidence for this assumption, but lack of statistical power precludes interpreting this as evidence of no difference. We suspect that adults with both hyperactive-impulsive and inattentive symptoms might initially prefer cocaine to stimulants for self-medication, but there is no direct evidence for this assessment46–48.\n\nThe possibility of using cocaine as an attempt to self-medicate for ADHD symptoms was originally proposed in the early ’90s49,50. More recently, Saules et al. compared the symptom profile among adult ADHD smokers with and without cocaine dependence, and found that when they corrected for the use of nicotine, adults who used cocaine exhibited a more severe adult ADHD symptom profile, as accounted for by the presence of elevated hyperactive-impulsive but not inattentive symptoms. They therefore suggested that cocaine use in smokers with ADHD might be driven by excesses in hyperactivity50. Despite differences in sampling, our results are in accord with this finding.\n\nThe main limitation of this study is low power. This means, in particular, that non-significant findings cannot be interpreted as evidence of no difference. A further limitation is that our sample was recruited entirely within a university setting, which might contribute to a selection bias. As a result, this clinical sample might have different characteristics from patients in treatment with a physician in private practice. 
Nevertheless, the ADHD consultation service of the Psychiatric University Hospital Zurich is the largest institution of its kind in Switzerland and attracts patients from diverse psychosocial backgrounds.\n\nIn conclusion, our findings underscore the high rate of comorbidity between substance use and ADHD in adults. The more frequent abuse/dependence of cocaine by adult patients with hyperactive-impulsive symptoms should be kept in mind when treating this patient group. Although a limited number of evidence-based treatment strategies currently exist for the concurrent treatment of ADHD and SUD, some studies suggest that stimulant medication remains an efficacious pharmacological treatment option that improves symptoms of ADHD without increasing the likelihood of relapse into SUD22,51.\n\n\nData availability\n\nZENODO: Dataset 1. Contains all the variables necessary to reproduce the results of Adult attention-deficit/hyperactivity disorder: Associations between subtype and lifetime substance use – a clinical study, Liebrenz et al., doi: 10.5281/zenodo.1962352\n\nZENODO: Stata source code to reproduce analysis, doi: 10.5281/zenodo.1962253\n\n\nConsent\n\nWritten informed consent was obtained from patients.",
"appendix": "Author contributions\n\n\n\nML, AB and DE conceived the study. AB, ML, AG and DE carried out the research. AG provided statistical expertise and conducted analysis. ML and II prepared the first drafts of the manuscript. All authors contributed to the preparation of the manuscript. All authors were involved in the revision of the draft manuscript.\n\n\nCompeting interests\n\n\n\nThe author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.\n\n\nGrant information\n\nThis work was funded by a grant of the Swiss Federal Office of Public Health (FOPH) (05.000383). The Swiss Federal Office of Public Health had no further role in the study design, in the analysis and interpretation of data, in the writing of the report, or in the decision to submit the paper for publication.\n\nMichael Liebrenz was financially supported by the Prof. Dr. Max Cloëtta foundation, Zurich, Switzerland and the Uniscientia foundation, Vaduz, Principality of Liechtenstein.\n\nI confirm that the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nWe want to acknowledge the work of Corinna Fales (New York) who copyedited and clarified our content.\n\n\nSupplementary material\n\nThe ADHS-SB questionnaire (German).\n\nClick here to access the data.\n\n\nReferences\n\nBloom B, Cohen RA, Freeman G: Summary health statistics for U.S. children: National Health Interview Survey, 2010. Vital Health Stat 10. 2011; (250): 1–80. PubMed Abstract\n\nSimon V, Czobor P, Bálint S, et al.: Prevalence and correlates of adult attention-deficit hyperactivity disorder: meta-analysis. Br J Psychiatry. 2009; 194(3): 204–211. PubMed Abstract | Publisher Full Text\n\nMichielsen M, Semeijn E, Comijs HC, et al.: Prevalence of attention-deficit hyperactivity disorder in older adults in The Netherlands. Br J Psychiatry. 2012; 201(4): 298–305. 
PubMed Abstract | Publisher Full Text\n\nde Zwaan M, Gruss B, Muller A, et al.: The estimated prevalence and correlates of adult ADHD in a German community sample. Eur Arch Psychiatry Clin Neurosci. 2012; 262(1): 79–86. PubMed Abstract | Publisher Full Text\n\nBarkley RA, Fischer M, Edelbrock CS, et al.: The adolescent outcome of hyperactive children diagnosed by research criteria: I. An 8-year prospective follow-up study. J Am Acad Child Adolesc Psychiatry. 1990; 29(4): 546–557. PubMed Abstract | Publisher Full Text\n\nRösler M, Casas M, Konofal E, et al.: Attention deficit hyperactivity disorder in adults. World J Biol Psychiatry. 2010; 11(5): 684–698. PubMed Abstract | Publisher Full Text\n\nBiederman J, Faraone SV, Spencer T, et al.: Patterns of psychiatric comorbidity, cognition, and psychosocial functioning in adults with attention deficit hyperactivity disorder. Am J Psychiatry. 1993; 150(12): 1792–1798. PubMed Abstract | Publisher Full Text\n\nAssociation AP: Diagnostic criteria from DSM-III-R. American Psychiatric Association. 1987.\n\nDilling H: Internationale Klassifikation psychischer Störungen: ICD-10 Kapitel V (F). Klinisch-diagnostische Leitlinien. Huber Hans. 2013.\n\nAmerican Psychiatric Association: Diagnostic criteria from DSM-IV. The Association, 1994. Reference Source\n\nBatstra L, Frances A: DSM-5 further inflates attention deficit hyperactivity disorder. J Nerv Ment Dis. 2012; 200(6): 486–488. PubMed Abstract | Publisher Full Text\n\nSolanto M, Newcorn J, Vail L, et al.: Stimulant drug response in the predominantly inattentive and combined subtypes of attention-deficit/hyperactivity disorder. J Child Adolesc Psychopharmacol. 2009; 19(6): 663–671. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBarbaresi WJ, Katusic SK, Colligan RC, et al.: Long-term stimulant medication treatment of attention-deficit/hyperactivity disorder: results from a population-based study. J Dev Behav Pediatr. 2006; 27(1): 1–10. 
PubMed Abstract | Publisher Full Text\n\nMattingly G, Weisler R, Dirks B, et al.: Attention deficit hyperactivity disorder subtypes and symptom response in adults treated with lisdexamfetamine dimesylate. Innov Clin Neurosci. 2012; 9(5–6): 22–30. PubMed Abstract | Free Full Text\n\nMurphy KR, Barkley RA, Bush T: Young adults with attention deficit hyperactivity disorder: subtype differences in comorbidity, educational, and clinical history. J Nerv Ment Dis. 2002; 190(3): 147–157. PubMed Abstract | Publisher Full Text\n\nSobanski E, Brüggemann D, Alm B, et al.: Subtype differences in adults with attention-deficit/hyperactivity disorder (ADHD) with regard to ADHD-symptoms, psychiatric comorbidity and psychosocial adjustment. Eur Psychiatry. 2008; 23(2): 142–149. PubMed Abstract | Publisher Full Text\n\nWilens TE, Biederman J, Faraone SV, et al.: Presenting ADHD symptoms, subtypes, and comorbid disorders in clinically referred adults with ADHD. J Clin Psychiatry. 2009; 70(11): 1557–1562. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKessler RC, Adler L, Barkley R, et al.: The prevalence and correlates of adult ADHD in the United States: results from the National Comorbidity Survey Replication. Am J Psychiatry. 2006; 163(4): 716–723. PubMed Abstract | Free Full Text\n\nFayyad J, De Graaf R, Kessler R, et al.: Cross-national prevalence and correlates of adult attention-deficit hyperactivity disorder. Br J Psychiatry. 2007; 190: 402–409. PubMed Abstract | Publisher Full Text\n\nGillberg C, Gillberg IC, Rasmussen P, et al.: Co-existing disorders in ADHD -- implications for diagnosis and intervention. Eur Child Adolesc Psychiatry. 2004; 13(Suppl 1): I80–92. PubMed Abstract | Publisher Full Text\n\nBukstein O: Substance use disorders in adolescents with attention-deficit/hyperactivity disorder. Adolesc Med State Art Rev. 2008; 19(2): 242–253, viii. 
PubMed Abstract\n\nTamm L, Adinoff B, Nakonezny PA, et al.: Attention-deficit/hyperactivity disorder subtypes in adolescents with comorbid substance-use disorder. Am J Drug Alcohol Abuse. 2012; 38(1): 93–100. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKonstenius M, Larsson H, Lundholm L, et al.: An epidemiological study of ADHD, substance use, and comorbid problems in incarcerated women in Sweden. J Atten Disord. 2015; 19(1): 44–52. PubMed Abstract | Publisher Full Text\n\nElkins IJ, McGue M, Iacono WG: Prospective effects of attention-deficit/hyperactivity disorder, conduct disorder, and sex on adolescent substance use and abuse. Arch Gen Psychiatry. 2007; 64(10): 1145–1152. PubMed Abstract | Publisher Full Text\n\nRodgers S, Müller M, Rössler W, et al.: Externalizing disorders and substance use: empirically derived subtypes in a population-based sample of adults. Soc Psychiatry Psychiatr Epidemiol. 2015; 50(1): 7–17. PubMed Abstract | Publisher Full Text\n\nFrei A, Hornung R, Eich D: Tobacco consumption of adults diagnosed with ADHD. Der Nervenarzt. 2010; 81(7): 860–866. PubMed Abstract | Publisher Full Text\n\nWender PH: Attention-deficit hyperactivity disorder in adults. Oxford University Press, New York. 1995. Reference Source\n\nRetz-Junginger P, Retz W, Blocher D, et al.: Reliability and validity of the Wender-Utah-Rating-Scale short form. Retrospective assessment of symptoms for attention deficit/hyperactivity disorder. Nervenarzt. 2003; 74(11): 987–993. PubMed Abstract | Publisher Full Text\n\nRetz-Junginger P, Retz W, Blocher D, et al.: Wender Utah rating scale. The short-version for the assessment of the attention-deficit hyperactivity disorder in adults. Nervenarzt. 2002; 73(9): 830–838. PubMed Abstract | Publisher Full Text\n\nRösler M, Retz W, Thome J, et al.: Psychopathological rating scales for diagnostic use in adults with attention-deficit/hyperactivity disorder (ADHD). Eur Arch Psychiatry Clin Neurosci. 2006; 256(Suppl 1): i3–11. 
PubMed Abstract | Publisher Full Text\n\nAssociation AP: Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition: DSM-IV-TR®. American Psychiatric Association. 2000.\n\nEich D, Angst J, Frei A, et al.: A new rating scale for adult ADHD based on the Symptom Checklist 90 (SCL-90-R). Eur Arch Psychiatry Clin Neurosci. 2012; 262(6): 519–528. PubMed Abstract | Publisher Full Text\n\nDerogatis LR: SCL-90: administration, scoring & procedures manual for the R (evised) version and other instruments of the psychopathology rating scale series. s.n., s.l. 1977. Reference Source\n\nRösler M, Retz W, Retz-Junginger P, et al.: Tools for the diagnosis of attention-deficit/hyperactivity disorder in adults. Self-rating behaviour questionnaire and diagnostic checklist. Nervenarzt. 2004; 75(9): 888–895. PubMed Abstract | Publisher Full Text\n\nStataCorp: Stata Statistical Software: Release 11. 2 edn. StataCorp LP College Station, TX. 2009.\n\nClure C, Brady KT, Saladin ME, et al.: Attention-deficit/hyperactivity disorder and substance use: symptom pattern and drug choice. Am J Drug Alcohol Abuse. 1999; 25(3): 441–448. PubMed Abstract | Publisher Full Text\n\nLambert NM, Hartsough CS: Prospective study of tobacco smoking and substance dependencies among samples of ADHD and non-ADHD participants. J Learn Disabil. 1998; 31(6): 533–544. PubMed Abstract | Publisher Full Text\n\nPomerleau OF, Downey KK, Stelson FW, et al.: Cigarette smoking in adult patients diagnosed with attention deficit hyperactivity disorder. J Subst Abuse. 1995; 7(3): 373–378. PubMed Abstract | Publisher Full Text\n\nBurke JD, Loeber R, Lahey BB: Which aspects of ADHD are associated with tobacco use in early adolescence? J Child Psychol Psychiatry. 2001; 42(4): 493–502. PubMed Abstract | Publisher Full Text\n\nTercyak KP, Lerman C, Audrain J: Association of attention-deficit/hyperactivity disorder symptoms with levels of cigarette smoking in a community sample of adolescents. 
J Am Acad Child Adolesc Psychiatry. 2002; 41(7): 799–805. PubMed Abstract | Publisher Full Text\n\nLerman C, Audrain J, Tercyak K, et al.: Attention-Deficit Hyperactivity Disorder (ADHD) symptoms and smoking patterns among participants in a smoking-cessation program. Nicotine Tob Res. 2001; 3(4): 353–359. PubMed Abstract | Publisher Full Text\n\nLevin ED, Conners CK, Sparrow E, et al.: Nicotine effects on adults with attention-deficit/hyperactivity disorder. Psychopharmacology (Berl). 1996; 123(1): 55–63. PubMed Abstract | Publisher Full Text\n\nKollins SH, McClernon FJ, Fuemmeler BF: Association between smoking and attention-deficit/hyperactivity disorder symptoms in a population-based sample of young adults. Arch Gen Psychiatry. 2005; 62(10): 1142–1147. PubMed Abstract | Publisher Full Text\n\nFalck RS, Carlson RG, Wang J, et al.: Psychiatric disorders and their correlates among young adult MDMA users in Ohio. J Psychoactive Drugs. 2006; 38(1): 19–29. PubMed Abstract | Publisher Full Text\n\nSoyka M, Sievers E, Fischer-Erlewein E: Amphetamine misuse in suspected adult ADHS syndrome. Gesundheitswesen. 2009; 71(4): 207–209. PubMed Abstract | Publisher Full Text\n\nPérez de Los Cobos J, Siñol N, Puerta C, et al.: Features and prevalence of patients with probable adult attention deficit hyperactivity disorder who request treatment for cocaine use disorders. Psychiatry Res. 2011; 185(1–2): 205–210. PubMed Abstract | Publisher Full Text\n\nDelavenne H, Ballon N, Charles-Nicolas A, et al.: Attention deficit hyperactivity disorder is associated with a more severe pattern of cocaine consumption in cocaine users from French West Indies. J Addict Med. 2011; 5(4): 284–288. PubMed Abstract | Publisher Full Text\n\nDarredeau C, Barrett SP, Jardin B, et al.: Patterns and predictors of medication compliance, diversion, and misuse in adult prescribed methylphenidate users. Hum Psychopharmacol. 2007; 22(8): 529–536. 
PubMed Abstract | Publisher Full Text\n\nCarroll KM, Rounsaville BJ: History and significance of childhood attention deficit disorder in treatment-seeking cocaine abusers. Compr Psychiatry. 1993; 34(2): 75–82. PubMed Abstract | Publisher Full Text\n\nSaules KK, Pomerleau CS, Schubiner H: Patterns of inattentive and hyperactive symptomatology in cocaine-addicted and non-cocaine-addicted smokers diagnosed with adult attention deficit hyperactivity disorder. J Addict Dis. 2003; 22(2): 71–78. PubMed Abstract | Publisher Full Text\n\nKlassen LJ, Bilkey TS, Katzman MA, et al.: Comorbid attention deficit/hyperactivity disorder and substance use disorder: treatment considerations. Curr Drug Abuse Rev. 2012; 5(3): 190–8. PubMed Abstract | Publisher Full Text\n\nLiebrenz M, Gamma A, Ivanov I, et al.: Dataset 1. Contains all the variables necessary to reproduce the results of Liebrenz et al. Zenodo. 2015. Data Source\n\nGamma A: Stata source code to reproduce analysis. Zenodo. 2015. Data Source"
}
|
[
{
"id": "11890",
"date": "17 Feb 2016",
"name": "Marc Auriacombe",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn this clinical study, Liebrenz et al. addressed the association between Adult attention-deficit/hyperactivity disorder (ADHD) subtypes and lifetime substance-use disorders (SUD). This topic is of particular interest (Fatseas et al 2012), as there is a strong literature supporting a relationship between ADHD and SUD (Wilens 2004). An emerging issue is the relationship between SUD and ADHD subtypes (Inattentive, Hyperactive-Impulsive, and Combined). For instance, Tamm et al. found more severe SUDs for the Combined subtype (Tamm et al. 2012). Our team suggested that Combined and Inattentive subtypes might be associated with more severe addictive disorders (Hurmic et al., CPDD annual meeting 2015). As stated in the manuscript, other authors reported the Hyperactive-impulsive subtype more likely to be associated to a comorbid SUD, or no difference. Data remains limited and this topic is still in debate.In this perspective, Liebrenz et al. conducted a cross sectional study with validated questionnaires assessing ADHD symptomatology and substance use, among 413 adults diagnosed with ADHD. They hypothesized that results would show statistical differences of lifetime SUD prevalence for specific substances, according to ADHD subtype. The main finding of this study was that the Inattentive ADHD subtype was significantly less associated than the Combined subtype to lifetime cocaine abuse/dependence (there was no difference for other substances). 
However, Hyperactive-impulsive and Inattentive subtypes were not statistically different in their association with any substance disorder. Liebrenz et al.’s work is a significant contribution – but not a breakthrough – to research on the relationships of ADHD subtypes and addiction. As reviewers, we noted the clarity of their manuscript, the justification of the study and the definition of their purpose in the introduction section. The method seemed valid to us and missing data (64 subjects) were correctly managed. However, we think that the strategy of analysis should have been more detailed in a specific paragraph. For instance, the usefulness of the WURS-K questionnaire is not explained. The Wender Reimherr Interview (WRI) and the Attention Deficit-/Hyperactivity Self-Report Scale (ADHS-SB) would also have deserved a short presentation and explanation of their use in this study. Another point to argue is the choice of the WRI to diagnose ADHD, as Rosler et al. (2006) stated that “on the basis of the WRI, DSM-IV diagnoses cannot be made”, even if “considerable amount of overlap exists” (Rosler et al. 2006). Results are presented and discussed appropriately. The authors mentioned that assessment of substance use was based on ICD-10 criteria. However, for their analysis they pooled substance abuse and dependence in one category without giving the rationale for that choice. In ICD-10 and DSM-IV these are distinct entities. However, in DSM-5 these distinct categories have been combined with other adjustments into one unique category (Hasin et al 2013). The authors might clarify this use of ICD-10 in a DSM-5 perspective. The ICD-10 substance use categories might have allowed a more specific study of the association of addiction and ADHD-subtypes? The absence of statistical difference between Hyperactive-impulsive and Inattentive subtypes for association with any substance disorder was unexpected, especially for stimulants. Liebrenz et al. 
discussed the limited statistical power of their study, which could explain the absence of significant differences. Thus, the hypothesis of a link between specific SUDs and specific ADHD subtypes remains open. In conclusion, the study of Liebrenz et al. illustrates the high rate of comorbidity between substance use disorders and ADHD in adults, and reports a specific relationship between cocaine disorder and the ADHD Combined subtype. Further research with more statistical power may highlight other specificities of ADHD subtypes in the field of addiction.",
"responses": [
{
"c_id": "2214",
"date": "19 Oct 2016",
"name": "Michael Liebrenz",
"role": "Author Response",
"response": "We appreciate the time and effort of Dr. Auriacombe, Dr. Alexandre and Dr. Fatseas and respond to their concerns below: 1.The reviewers ask us to explain the usefulness of the WURS-K questionnaire. The WURS-k retrospectively assesses symptoms of ADHD in childhood and as such is part of the diagnostic process for adult ADHD. It is mentioned here for completeness, not because it was a target of analysis. 2.The reviewers ask for more detailed presentations of the Wender Reimherr Interview (WRI) and the Attention Deficit-/Hyperactivity Self-Report Scale (ADHS-SB). We have now added the following two paragraphs on these tests: WRI: \"The Wender-Reimherr Interview is the German version of the American Wender-Reimherr Adult Attention Deficits Disorders Scale (WRAADDS) for the assessment of adult ADHD. It allows a diagnosis of adult ADHD to be made. It contains seven scales for: attention difficulties, persistent motor hyperactivity, temper, affective lability, emotional overreactivity, disorganization, and impulsivity. Each scale is represented by 3–5 items. A sum score is formed per scale, and each scale has a diagnostic threshold. A diagnosis requires that sum scores for scales 1–2 must each exceed their threshold, and that for scales 3–7, 2 out of 5 sum scores must exceed their threshold.\" ADHS-SB: \" The ADHS-SB is a self-rating instrument for the assessment of adult ADHD in German. It consists of 18 symptoms of ADHD derived from the DSM-IV and ICD-10 criteria for ADHD. The degree of endorsement is rated on four levels: 0 = not at all, 1 = slightly, 2 = moderately, and 3 = severely. The total score is obtained by summing up the 18 individual item scores. Subsyndrome scores for ‘‘attention deficit’’, ‘‘hyperactivity’’, and ‘‘impulsivity’’ can also be obtained.\" 3. The reviewers question the choice of the WRI to diagnose ADHD, stating that according to Rosler et al. (2006), DSM-IV diagnoses cannot be made based on the WRI. This is true. 
However, the DSM-IV does not contain a specific diagnosis for adult ADHD, while the WRI is specifically targeted to adults. As Rosler also states, \"the question remains whether the DSM-IV criteria are adequate to characterize adult ADHD\" (p. I/4) and \"An advantage of the WRI could be the extended spectrum of adult psychopathology\" (p. I/10). There is certainly room to argue for either diagnostic approach. Reassuringly, as the reviewers also note, the two approaches show considerable overlap.\n\n4. The reviewers ask for the rationale for pooling substance abuse and dependence. The reason is primarily statistical: estimates of percentages would have been small and noisy if we had separated dependence and abuse. Combining them gives a more robust estimate. This also seems justified to us because both abuse and dependence describe a serious level of substance use that requires treatment."
}
]
},
{
"id": "16482",
"date": "21 Sep 2016",
"name": "Swantje Matthies",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe paper \"Adult attention-deficit/hyperactivity disorders: Associations between subtype and lifetime substance use – a clinical study” covers, besides the well-known comorbidity of ADHD with substance use disorders (SUDs), a very interesting field of research concerning the associations between ADHD subtypes and substance use disorders. It adds to the knowledge and differentiates knowledge in focusing on ADHD subtypes. Particularly we noted the large sample size, as it allows more reliable results, the valid assessment of ADHD by using standardized instruments and third-party information and the well detailed statistical analysis. The applied methods appear overall reasonable and valid to us.\n\nWe have some remarks that should be addressed:\n\nThe authors mentioned that assessment of substance use was based on ICD-10 criteria. However they didn’t exactly explain their method to gather this data. We noted here the lack of standardized instruments. Furthermore they pooled substance abuse and dependence in one category. The rational for using this procedure should be given in more detail.\n\nWe missed information about psychopharmacological treatment. This information might provide potential correlations of treatment/no treatment with substance use.\n\nAnother question regards the comorbid disorders: It should be discussed that comorbidities, particularly personality disorders might have confounded the results. 
For instance, Borderline personality disorder, which often co-occurs with ADHD1, is also known to co-occur with SUD. Further, we were surprised that only 6% of the patients had a comorbid personality disorder, which is low in contrast to some studies reporting prevalence rates of personality disorders between 25% and 78% among individuals with ADHD2-5. It is also important to make clear how the authors assessed comorbid disorders. Did they use standardized instruments?\n\nThe sample size of 327 represented in Table 1, page 4, does not correspond to the sample sizes described in the paragraph above “Results” (“…were compared with the remaining 349 subjects…” or “N=332-345, depending on questionnaire participants…”).\n\nIn the discussion it might be interesting to explain the use and effects of cocaine among individuals with ADHD, especially with regard to neurobiology. It would also be interesting to know about the authors' possible hypotheses and assumptions concerning the subtype difference in cocaine abuse/dependence. Are there neuropsychological or clinical differences between the subtypes which protect the inattentive subtype from cocaine dependence?\n\nThe discussion should also comment on the relevance of the presented findings for therapeutic approaches.",
"responses": [
{
"c_id": "2213",
"date": "19 Oct 2016",
"name": "Michael Liebrenz",
"role": "Author Response",
"response": "We appreciate the dedicated and useful comments of Dr. Swantje and Dr. Hoxhaj and the editor and the chance they gave us to improve the manuscript. The reviewer asks us to elaborate on how ICD-10 criteria for substance use were assessed and why abuse and dependence were pooled. ICD-10 criteria were applied by a highly experienced clinician (DE) in a semi-structured interview. No official instrument was used. Estimates of percentages would have been small and noisy if we had separated dependence and abuse. Combining them gives a more robust estimate and seems justified to us also because both describe a serious level of substance use that requires treatment. The reviewer asks for information on psychopharmacological treatment. Unfortunately, such information is not available. The reviewer asks us to discuss and to clarify assessment of comorbidities. We have addressed these questions in the Methods and in the Discussion (Limitations) Section. We now state: “Comorbid disorders were diagnosed according to ICD-10 by DE in a semi-structured interview, again without an official instrument.” and “Furthermore comorbidities, particularly personality disorders might have confounded the results. For instance, Borderline personality disorder, which often co-occurs with ADHD and is difficult to differentiate, is also known to be associated with SUD. However in this sample that relied for diagnosis of comorbidity on a semi-structured clinical interview, but not on additional instruments, we found only 6% of patients suffering from a comorbid personality disorder. This is low in comparison to some studies reporting prevalence rates between 25 – 78%.” The N=327 reflects the fact that not all patients diagnosed with ADHD fulfilled subtype criteria. Remember that in this study, ADHD was diagnosed according to Utah (Wender-Rheimherr), not DSM, criteria, and subtypes were constructed post-hoc using scores on the ADHS-SB questionnaire. 
This means that the diagnosis itself does not necessarily entail a subtype assignment, and vice versa. Thus, in about 20 participants ADHS-SB scores were not high enough to yield a subtype classification. This is now explained in the methods section under the heading \"Assessment of ADHD symptomatology\". The reviewer suggests discussing the use and effects of cocaine among individuals with ADHD, especially with regard to neurobiology. We have addressed these aspects in an expanded discussion section. We now state: “On a different note van Wingen et al investigated structural brain abnormalities in this population and reported of significantly smaller grey matter volumes in the occipital cortex as well as smaller volumes in the putamen in ADHD patients with comorbid cocaine dependence when compared to those without this lifetime diagnosis. The authors of aforementioned study suggested that the differences in putamen volumes may reflect alterations in the availability of striatal dopamine transporters that are available for interaction with methylphenidate, thus giving some explanation for the finding that methylphenidate is less effective in patients with ADHD and a comorbid cocaine dependence.” The reviewer asks us to comment on the relevance of the presented findings for therapeutic approaches. We now point towards research by Levin et al and have expanded the conclusion section: “Furthermore a study among patients with ADHD and a comorbid cocaine dependence receiving methylphenidate, demonstrated an advantage over placebo with regard to reduction in cocaine use in individuals who responded to ADHD treatment.”"
}
]
},
{
"id": "17021",
"date": "17 Oct 2016",
"name": "Joseph O'Neill",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe investigators determined the effect of ADHD subtype on the prevalence of substance use disorders (SUDs) in a large (413 patients) sample of (medicated?) adult ADHD. In line with expectations, the overall sample evinced high rates of lifetime substance abuse or dependence (26%) and occasional use (57%). The inattentive subtype was significantly less likely to abuse or be dependent on cocaine than the combined subtype. As indicated by the Bordelais reviewers, while not a major advance, this study is a perfect example of normal science in the sense of Kuhn and represents a worthwhile gain in our knowledge of ADHD.\nThe authors list multiple clinical scales used to assess ADHD and state that SUD was assessed based on ICD-10 criteria. Like the Freiburger reviewers, the first question that occurred to me was whether any standardized instrument was administered to assess SUDs?\n\nAgain, as asked by other reviewers, were the patients in the sample undergoing current pharmacological treatment? Presumably, many were on methylphenidate or other agents.\n\nIt would be a good idea to cite some statistics for prevalence substance abuse and dependence in the general population around Zürich. Then the reader would have an idea how much higher prevalence is in the adult ADHD clinical sample.\n\nThe investigators were commendably conservative in claiming significant results, restricting themselves to p-values ≤ 0.002, based on Bonferroni correction for multiple comparisons. 
It might be informative to discuss some of the other results that would have been significant under a less stringent cut-off. For example, nicotine abuse/dependence is twice as frequent and alcohol abuse/dependence is nearly ten times as frequent in the Combined as in the Inattentive subgroup. These may be worth commenting on. But I wouldn’t discuss any comparisons involving the Hyperactive-Impulsive subgroup, since they are too few (only 24 subjects).\n\nIn the Introduction, the authors retell the diagnosis and subtyping of ADHD in DSM-III-R, ICD-10, and DSM-IV. They should add a line or two about DSM-5, just to round out their brief historical review.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/4-407
|
https://f1000research.com/articles/5-683/v1
|
14 Apr 16
|
{
"type": "Research Article",
"title": "The effects of an editor serving as one of the reviewers during the peer-review process",
"authors": [
"Marco Giordan",
"Attila Csikasz-Nagy",
"Andrew M. Collings",
"Federico Vaggi",
"Marco Giordan",
"Attila Csikasz-Nagy",
"Andrew M. Collings"
],
"abstract": "Background Publishing in scientific journals is one of the most important ways in which scientists disseminate research to their peers and to the wider public. Pre-publication peer review underpins this process, but peer review is subject to various criticisms and is under pressure from growth in the number of scientific publications.\n\nMethods Here we examine an element of the editorial process at eLife, in which the Reviewing Editor usually serves as one of the referees, to see what effect this has on decision times, decision type, and the number of citations. We analysed a dataset of 8,905 research submissions to eLife since June 2012, of which 2,750 were sent for peer review, using R and Python to perform the statistical analysis.\n\nResults\nThe Reviewing Editor serving as one of the peer reviewers results in faster decision times on average, with the time to final decision ten days faster for accepted submissions (n=1,405) and 5 days faster for papers that were rejected after peer review (n=1,099). There was no effect on whether submissions were accepted or rejected, and a very small (but significant) effect on citation rates for published articles where the Reviewing Editor served as one of the peer reviewers.\n\nConclusions An important aspect of eLife’s peer-review process is shown to be effective, given that decision times are faster when the Reviewing Editor serves as a reviewer. Other journals hoping to improve decision times could consider adopting a similar approach.",
"keywords": [
"peer review",
"decision times",
"eLife"
],
"content": "Background\n\nAlthough pre-publication peer review has been strongly criticised – for its inefficiencies, lack of speed, and potential for bias (for example, see 1 and 2) – it remains the gold standard for the assessment and publication of research3. eLife was launched to “improve [...] the peer-review process”4 in the life and biomedical sciences, and one of the journal’s founding principles is that “decisions about the fate of submitted papers should be fair, constructive, and provided in a timely manner”5. However, peer review is under pressure from the growth in the number of scientific publications, which increased by 8–9% annually from the 1940s to 20126, and growth in submissions to eLife would inevitably challenge the capacity of their editors and procedures.\n\neLife’s editorial process has been described before7,8. In brief, each new submission is assessed by a Senior Editor, usually in consultation with one or more members of the Board of Reviewing Editors, to identify whether it is appropriate for in-depth peer review. Traditionally, editors recruit peer reviewers and, based on their input, make a decision about the fate of a paper. Once a submission is sent for in-depth peer review, however, the Reviewing Editor at eLife has extra responsibility. First, the Reviewing Editor is expected to serve as one of the peer reviewers. Second, once the reviews have been submitted independently, the Reviewing Editor should engage in discussions with the other reviewers to reach a decision they can all agree with. Third, when asking for revisions, the Reviewing Editor should synthesise the separate reviews into a single set of revision requirements. Fourth, wherever possible, the Reviewing Editor is expected to make a decision on the revised submission without re-review. 
At other journals, the Reviewing Editor may instead be known as an Academic Editor or Associate Editor.\n\nSince editors have extra responsibility in eLife’s peer-review process, here we focus our analysis on the effect of the Reviewing Editor serving as one of the peer reviewers, and we examine three outcomes: 1) the effect on decision times; 2) the effect on the decision type (accept, reject or revise); and 3) the citation rate of published papers. The results of the analysis are broken down by the round of revision and the overall fate of the submission. We do not consider the effect of the discussion between the reviewers or the effect of whether the Reviewing Editor synthesizes the reviews or not.\n\n\nMethods\n\nWe analyzed a dataset containing information about 9,589 papers submitted to eLife since June 2012 in an anonymised format. The dataset contained the date each paper was first submitted, and, if it was sent for peer review, the dates and decisions taken at each step in the peer-review process. Information about authors had been removed, and the identity of reviewers and editors was obfuscated to preserve confidentiality.\n\nAs a pre-processing step, we removed papers that had been voluntarily withdrawn, or where the authors appealed a decision, as well as papers where the records were corrupted or otherwise unavailable. After clean up, our dataset consisted of a total of 8,905 submissions, of which 2,750 were sent for peer review. For the rest of the paper, we focus our analysis on this subset of 2,750 papers, of which 1,405 had been accepted, 1,099 had been rejected, and the rest were still under consideration. The article types included are Research Articles (MS type 1), Short Reports (MS type 14), Tools and Resources (MS type 19), and Research Advances (MS type 15). 
Registered Reports are subject to a slightly different review process and have not been included.\n\nBefore discussing the results, we introduce a few definitions: the “eLife Decision Time” is the amount of time taken by eLife from that version of the submission being received until a decision has been reached for a particular round of review. The “Author Time” is the amount of time taken by the authors to revise their article for that round of revision. The “Total Time” is the time from first submission to acceptance, or the amount of time taken for eLife to publish a paper from the moment it was first received for consideration. By definition, the “Total Time” is equal to the sum of the “eLife Decision Time” and the “Author Time” across all rounds, including the initial submission step. “Revision Number” indicates the round of revision. We distinguish between Reviewing Editors who served as one of the reviewers during the first round of review and Reviewing Editors who did not serve as one of the reviewers (i.e., those who undertook more of a supervisory role during the review process) with the “Editor_As_Reviewer” variable (True or False).\n\nWe illustrate the variables with a real example taken from the dataset (Table 1).\n\nThe example submission from Table 1 was received as an “initial submission” (MS TYPE 5) on 20 June 2012. One day later, the authors were encouraged to submit a “full submission” (MS TYPE 1) that would be sent for in-depth peer review. The full submission was received on 27 June 2012, when the Reviewing Editor was assigned and reviewers were contacted. In this example, the Reviewing Editor also served as one of the reviewers (indicated by the “Editor_As_Reviewer” variable).\n\nOn 25 July (28 days later), the Reviewing Editor sent out a decision asking for revisions to the authors, who submitted their revised manuscript on 5 September. The paper was accepted on the same day that it was resubmitted. 
In this case, the total eLife Decision Time was 29 days (including the pre-review stage), the Author Time was 48 days, and the Total Time (eLife Decision Time plus Author Time) was 77 days. Total Time refers only to the total time across all rounds and revisions for each paper, and does not vary across rounds. Since we are focusing on the role of the editors in the peer review process, in the rest of the paper we will ignore the time spent in the pre-review stage.\n\nAll of the statistical analyses were performed using R and Python. On the Python side, we used statsmodels, scipy, numpy, and pandas for the data manipulation and analysis. To plot the results we used bokeh, matplotlib, and seaborn. Details of all the analysis, together with code to reproduce all images and tables in the paper, are available in the companion repository of this paper here: https://github.com/FedericoV/eLife_Editorial_Process.\n\nTo obtain the citation numbers, we used BeautifulSoup to scrape the eLife website, which provides detailed information about citations for each published paper.\n\n\nResults and discussion\n\nFirst, we examined the effect of the Reviewing Editors serving as one of the reviewers on the time from submission to acceptance or from submission to rejection after peer review (Total Time). When the Reviewing Editor served as a reviewer (Editor_As_Reviewer = True), the total processing time was 10 days faster in the case of accepted papers and more than 5 days faster in the case of papers rejected after peer review (Figure 1). Both differences are statistically significant (see Table 2 for details). Intuitively, regardless of the role of the Reviewing Editor, rejection decisions are typically much faster than acceptance decisions, as they go through fewer rounds of revision, and are not usually subject to revisions from the authors.\n\nWe compare the total time from submission to acceptance and submission to rejection after peer review. 
Orange indicates submissions where the Reviewing Editor served as one of the peer reviewers, while light blue indicates submissions where the Reviewing Editor did not serve as one of the reviewers (i.e., the editors had more of a supervisory role).\n\nOne possible reason why submissions reviewed by the Reviewing Editor have a faster turnaround is that fewer people are involved (e.g., the Reviewing Editor in addition to two external reviewers, rather than the Reviewing Editor recruiting three external reviewers), and review times are limited by the slowest person. To test this, we built a linear model to predict the total review time as a function of editor type (whether the Reviewing Editor served as a reviewer or not), decision (accept or reject), and the number of unique reviewers across all rounds (see Table S1). Indeed, the total review time did increase with each reviewer (7.4 extra days per reviewer, p < 0.001) and the effect of a Reviewing Editor serving as one of the reviewers remained significant (–9.3 days when a Reviewing Editor served as one of the reviewers, p < 0.0001).\n\nNext, we examined this effect across all rounds of review (rounds 0, 1, 2) and decision types (accept, reject and revise). The results are shown in Figure 2 and summarized in Table 2. Again, we see that processing times are consistently faster across almost every round when the editor serves as one of the peer reviewers, except in the cases where the sample size was very small.\n\nBoxplots showing decision times for different rounds of review, depending on decision type and whether the Reviewing Editor served as one of the reviewers (orange) or not (light blue).\n\nInterestingly, when the Reviewing Editor serves as one of the peer reviewers, the eLife Decision Time is reduced, but the time spent on revisions (Author Time) does not change. 
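A linear model of the kind described above (total review time as a function of editor type, decision, and the number of unique reviewers) could be fit with statsmodels, one of the Python packages the authors list. This is a minimal sketch on synthetic data: the column names and simulated effect sizes are illustrative, not taken from the eLife dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500

# Synthetic stand-in for the review dataset (illustrative columns only).
df = pd.DataFrame({
    "editor_as_reviewer": rng.integers(0, 2, n).astype(bool),
    "accepted": rng.integers(0, 2, n).astype(bool),
    "n_reviewers": rng.integers(2, 5, n),
})
# Simulate the direction of the reported effects: extra days per reviewer,
# fewer days when the editor also reviews (the magnitudes are invented).
df["total_time"] = (
    60.0
    + 7.4 * df["n_reviewers"]
    - 9.3 * df["editor_as_reviewer"]
    + 15.0 * df["accepted"]
    + rng.normal(0, 10, n)
)

# OLS with a patsy formula; boolean columns are encoded as categorical terms.
model = smf.ols(
    "total_time ~ editor_as_reviewer + accepted + n_reviewers", data=df
).fit()
print(model.params)
```

On data with this structure, the fitted coefficient on `editor_as_reviewer` is negative and the coefficient on `n_reviewers` positive, mirroring the signs reported in Table S1.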
This suggests that the actual review process is more efficient when the Reviewing Editor serves as a reviewer, but the extent of revisions being requested from the authors remains constant.\n\nWe next examined the chances of a paper being accepted, rejected or revised when a Reviewing Editor served as one of the reviewers. We found no significant difference when examining the decision type on a round-by-round basis (Table 3) (chi-squared test, p = 0.33).\n\nTo test whether eLife’s acceptance criteria changed over time, we built a logit model including as predictive variables the number of days since eLife began accepting papers and whether the Reviewing Editor served as one of the reviewers. The number of days since publication had a very small (–0.003) but significant effect (p < 0.02) while the effect of the Reviewing Editor serving as a reviewer was not significant (see Table S2). We also tested whether a Reviewing Editor serving as a reviewer had an effect on the number of rounds of revision before the final decision and found no significant effect (see Table S3).\n\nThe final outcome we examined was the number of citations (as tracked by Scopus) received by papers published by eLife. Papers accumulate citations over time, and, as such, papers published earlier tend to have more citations (Figure 3).\n\nlog1p of the number of citations (indexed by Scopus) received by each paper accepted by eLife versus the number of days since eLife first started accepting papers. Green dots represent papers where the Reviewing Editor served as one of the reviewers, while blue dots represent papers where the Reviewing Editor did not serve as one of the reviewers.\n\nWe examined this effect using a generalized linear model. As variables, we considered whether the Reviewing Editor served as a reviewer (Editor_As_Reviewer, true or false), as well as the number of days between eLife publishing its first manuscript and the day the Scopus database was queried. 
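The citation analysis described above (log1p-transformed citation counts regressed on editor-as-reviewer and time since eLife's first publication) can be sketched along the following lines. Again this is a hedged sketch on synthetic data, not the authors' code; the variable names and the simulated citation process are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 400

# Synthetic stand-in: older papers have had longer to accumulate citations.
df = pd.DataFrame({
    "editor_as_reviewer": rng.integers(0, 2, n).astype(bool),
    "days_since_start": rng.uniform(0, 1200, n),
})
# Citation counts drawn from a Poisson whose mean grows with paper age;
# the editor-as-reviewer effect is simulated as negligible.
lam = np.exp(0.5 + 0.002 * df["days_since_start"])
df["citations"] = rng.poisson(lam)
df["log1p_citations"] = np.log1p(df["citations"])

model = smf.ols(
    "log1p_citations ~ editor_as_reviewer + days_since_start", data=df
).fit()
print(model.params)
```

As in Figure 3, the coefficient on the time variable is positive (earlier papers have more citations), while the editor term stays near zero on data simulated this way.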
The presence of a Reviewing Editor serving as a reviewer had no significant effect on the number of citations (see Table S4). Papers with longer total review times tended to be cited less (this effect is small but significant).\n\nOne of the most noticeable effects of a Reviewing Editor serving as one of the peer reviewers at eLife is the faster decision times. However, serving as a Reviewing Editor and one of the reviewers for the same submission is a significant amount of work. As the volume of papers received by eLife has increased, the fraction of editors willing to serve as a reviewer has decreased. While in 2012 almost all editors also served as reviewers, that percentage decreased in 2013 and 2014. There are signs of a mild increase in the percentage of editors willing to serve as reviewers in 2015 (Figure 4).\n\nTime series plots for the number of active editors who served as one of the reviewers (Unique RE, blue) or not (Unique SE, green) in a given month. The total number of papers sent for peer review is shown in red. The % of papers is in black.\n\n\nConclusions\n\nDue to an increasingly competitive funding environment, scientists are under immense pressure to publish in scientific journals, yet the peer-review process remains relatively opaque at many journals. In a systematic review from 2012, the authors conclude that “Editorial peer review, although widely used, is largely untested and its effects are uncertain”9. Recently, journals and conferences (e.g., 10) have launched initiatives to improve the fairness and transparency of the review process. eLife is one such example. Meanwhile, scientists are frustrated by the time it takes to publish their work11.\n\nWe report the analysis of a dataset consisting of articles received by eLife since launch and examine factors that affect the duration of the peer-review process, the chances of a paper being accepted, and the number of citations that a paper receives. 
In our analysis, when an editor serves as one of the reviewers, the time taken during peer review is significantly decreased. Although there is additional work and responsibility for the editor, this could serve as a model for other journals that want to improve the speed of the review process.\n\nJournals and editors should also think carefully about the optimum number of peer reviewers per paper. With each extra reviewer, we found that an extra 7.4 days are added to the review process. Editors should of course consider subject coverage and ensure that reviewers with different expertise can collectively comment on all parts of a paper, but where possible there may be advantages, certainly in terms of speed and easing the pressure on the broader reviewer pool, in using fewer reviewers per paper overall.\n\nInsofar as the editor serving as a reviewer is concerned, we did not observe any difference in the chances of a paper being accepted or rejected, but we did notice a modest increase in the overall number of citations that a paper receives when an editor serves as one of the reviewers, although this effect is very small. An interesting result from our analysis is that a longer peer-review process or more referees does not lead to an increase in citations, so this is another reason for journals and editors to carefully consider the impact of the number of reviewers involved, and to strive to communicate the results presented in a timely manner for others to build upon. 
As eLife is a relatively young journal, it remains to be seen whether the citation trend we observe will hold over longer periods as different papers accumulate citations.\n\n\nData and software availability\n\nAll code for the analysis as well as the datasets: https://github.com/FedericoV/eLife_Editorial_Process\n\nArchived version as at the time of publication: http://dx.doi.org/10.5281/zenodo.4854415\n\nTo reproduce Figure 4, we pre-processed the raw dataset that contained the identity of the editors to avoid disclosing any information about the identity of reviewers.",
"appendix": "Author contributions\n\n\n\nFV and AC conceived the study. FV, MG, AC-N designed the analysis. FV carried out the analysis. FV and MG prepared the first draft of the figures and the manuscript. FV and AC wrote the final version of the manuscript. All authors have agreed to the final content.\n\n\nCompeting interests\n\n\n\nAndy Collings is Executive Editor at eLife. The other authors declare that they have no competing interests.\n\n\nGrant information\n\nAndy Collings is employed by eLife Sciences Publications Ltd. eLife is supported by the Howard Hughes Medical Institute, the Max Planck Society and the Wellcome Trust.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe gratefully acknowledge discussions and input from Mark Patterson (eLife’s Executive Director) and Peter Rodgers (eLife’s Features Editor). We thank James Gilbert (Senior Production Assistant at eLife) for extracting data from the submission system for analysis.\n\n\nSupplementary material\n\nWe built a linear model of the Total Time as a function of whether the Reviewing Editor served as a reviewer (Editor_As_Reviewer) (categorical variable, two levels), the final decision made on a paper (Decision_No), and the number of unique reviewers. The revision time increased with the number of reviewers, but it decreased when a Reviewing Editor served as one of the reviewers.\n\nWe used logit regression to estimate the chances of a paper being accepted as a function of whether the Reviewing Editor served as one of the reviewers (Editor_As_Reviewer), the number of unique reviewers, and the number of days between when a paper was published and the first published paper by eLife. 
The only significant variable is the days since eLife started accepting papers for publication (although the effect on the chances of a paper being accepted is very small).\n\nWe used a GLM with a log link function to model the number of revisions that a paper undergoes prior to a final decision as a function of whether a Reviewing Editor served as one of the reviewers (Editor_As_Reviewer), the number of unique reviewers, the decision type, and the number of days since eLife started accepting papers. The only variable that had a significant effect was the decision type, as papers that are rejected tend to be overwhelmingly rejected early on and thus undergo fewer rounds of revision.\n\nWe fitted a linear model for the citations (as indexed by Scopus) after a log1p transform. Publications where the Reviewing Editor served as one of the reviewers have a slightly higher number of citations, although this effect is very moderate. Papers also accumulate citations over time, so papers published earlier (Publication_Since_Start) tend to have accumulated more citations. The number of unique reviewers, as well as the total decision time before acceptance, had no significant effect.\n\n\nReferences\n\nSmith R: Peer review: a flawed process at the heart of science and journals. J R Soc Med. 2006; 99(4): 178–182.\n\nSmith R: Classical peer review: an empty gun. Breast Cancer Res. 2010; 12(Suppl 4): S13.\n\nMayden KD: Peer Review: Publication’s Gold Standard. J Adv Pract Oncol. 2012; 3(2): 117–122.\n\nSchekman R, Patterson M, Watt F, et al.: Scientific publishing: Launching eLife, Part 1. eLife. 2012; 1: e00270.\n\nSchekman R, Watt F, Weigel D: Scientific publishing: Launching eLife, Part 2. eLife. 2012; 1: e00365.\n\nBornmann L, Mutz R: Growth rates of modern science: A bibliometric analysis based on the number of publications and cited references. 2014; arXiv:1402.4578.\n\nSchekman R, Watt F, Weigel D: Scientific publishing: The eLife approach to peer review. eLife. 2013; 2: e00799.\n\nSchekman R, Watt FM, Weigel D: Scientific publishing: A year in the life of eLife. eLife. 2013; 2: e01516.\n\nJefferson T, Alderson P, Wager E, et al.: Effects of editorial peer review: a systematic review. JAMA. 2002; 287(21): 2784–2786.\n\nFrancois O: Arbitrariness of peer review: A Bayesian analysis of the NIPS experiment. 2015; arXiv:1507.06411.\n\nPowell K: Does it take too long to publish research? Nature. 2016; 530(7589): 148–151.\n\nJones E, Oliphant T, Peterson P, et al.: SciPy: Open Source Scientific Tools for Python. 2001.\n\nWaskom M, Botvinnik O, Okane D, et al.: Seaborn Plotting Library. 2016.\n\nSeabold S, Perktold J: Statsmodels: Econometric and statistical modeling with python. Proceedings of the 9th Python in Science Conference. 2010.\n\nVaggi F: eLife_Editorial_Process: Review_Version. Zenodo. 2016."
}
|
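The supplementary material above fits a linear model of Total Time on whether the Reviewing Editor served as a reviewer and on the number of unique reviewers. A minimal sketch of that kind of fit on synthetic data follows; the variable names mirror the supplement (Editor_As_Reviewer, unique reviewers), but the data and effect sizes are invented for illustration, not taken from the paper's dataset:

```python
import numpy as np

# Synthetic stand-in for the eLife dataset; names follow the supplementary
# material, but values and effect sizes below are invented for illustration.
rng = np.random.default_rng(0)
n = 500
editor_as_reviewer = rng.integers(0, 2, n)   # 1 if the editor also reviewed
unique_reviewers = rng.integers(2, 5, n)     # reviewers per paper

# Assumed effects: each extra reviewer adds ~7.4 days, an editor-reviewer
# shortens the process (figures in this range are quoted in the reviews).
total_time = (60.0 - 10.0 * editor_as_reviewer
              + 7.4 * unique_reviewers
              + rng.normal(0.0, 5.0, n))

# OLS fit: Total_Time ~ 1 + Editor_As_Reviewer + Unique_Reviewers
X = np.column_stack([np.ones(n), editor_as_reviewer, unique_reviewers])
coef, *_ = np.linalg.lstsq(X, total_time, rcond=None)
intercept, editor_effect, reviewer_effect = coef
```

With enough samples the recovered reviewer_effect lands near the assumed 7.4 days per extra reviewer; the statsmodels formula API cited in the references provides the same fit with standard errors and p-values.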
[
{
"id": "13794",
"date": "12 May 2016",
"name": "Alesia Zuccala",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.\n\nApproved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper’s academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.\n\nI enjoyed reading this article, and I liked the fact that it was concisely written. I would only like to share a few comments. The authors of this paper have chosen to examine the peer review process for eLife, which is an electronic journal for the life and biomedical sciences. They note that two papers have previously described the editorial process of eLife; however, for their own study, I think it would have been useful to include a link to the eLife website (https://elifesciences.org/).\n\nI was curious about whether or not it was an open access journal, and I looked for the website to obtain this information.\n\nMore importantly, I also looked at the website because I wanted to know how many persons serve as Senior Editors or sit on the editorial board of eLife. What is interesting here is that the term “editor” for this journal is stretched to include one Editor-in-Chief, three Deputy Editors, thirty-two Senior Editors and a 282-member Board of Reviewing Editors.\n\nThe peer review system for this journal is quite different from that of the ‘traditional’ journal, but to be more precise, it differs because “Reviewing Editors” are specialists who have agreed to review for the journal on a regular basis, and may in some cases call upon additional ‘outside’ reviewers. What we do not know from this paper is whether or not two or more of the 282 Reviewing Editors sometimes choose to review the same paper. 
At the eLife website, the following is noted: “The Reviewing editor usually reviews the article him or herself, calling on one or two additional reviewers as needed”. Are the additional reviewers always from the outside? If not, how would this change the authors’ hypothesis related to the ‘effects of an editor serving as one of the reviewers’? The methods used for the data analysis are explained very well, with the exception of one detail: How did the authors acquire the initial dataset of 9,589 papers?\n\nThis information is presented in the ‘Acknowledgements’ section, but could have also been added to the Methods section, for more clarity. The graphs related to the authors’ findings are clear and present interesting information, but I am not sure how the citation data were collected from Scopus for the peer-reviewed papers in eLife and whether or not ‘citation windows’ were used for the papers depending on the year in which they were published. Essentially the authors are correct in saying that “papers accumulate citations over time, and, as such, papers published earlier tend to have more citations”; hence citation windows are used to correct for this. The highest rates of citation (especially in the life sciences and biomedicine) will appear within three to five years following an article’s date of publication. For this reason, bibliometricians usually count citations within this three-to-five-year time frame to determine an article’s initial impact.\n\nSince the articles used in this study had been “submitted to eLife since June 2012”, the authors should have focused on three things: 1) the involvement of a Reviewing Editor as a peer reviewer or not, 2) the number of days between the submitted paper’s acceptance and its publication, and 3) the papers’ citation rate 3–5 years after final publication.",
"responses": []
},
{
"id": "14223",
"date": "05 Jul 2016",
"name": "Bernd Pulverer",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nGiordan et al. analyzed 2,750 manuscripts sent out for peer review at the journal eLife (of which 1,405 ended up published). The authors compare papers in which the editor functions as ‘reviewing editor’, that is, as one of three referees. Globally, and at almost every decision stage, the process is accelerated significantly if the reviewing editor functions as one of the referees, with no or very small impact on author revision time and citation rates, respectively.\nThe authors calculate that every additional external referee adds 7.4 days to the process and suggest that journals strive to carefully balance the need for covering all the required expertise with the negative effect on the speed of evaluation.\n\nThe quality and speed of the peer review process are topics of active debate. Despite widespread criticism, publication in certain peer-reviewed journals continues to directly impact research assessment by both funders and institutions. The quality and fairness of the process is therefore paramount not only to assure the reliability of the literature, but also to inform research assessment in a balanced manner. 
Notwithstanding the slow delivery of this particular referee report, speed matters in particular in fast-moving and highly competitive research areas like the biosciences.\nQuantitative evidence that well-defined aspects of an editorial process have a positive effect on quality and/or speed is therefore of significant importance.\n\nThe authors have carefully analyzed a decent-sized dataset and report a statistically significant effect of a well-defined change in the editorial process, while also showing evidence that this change has no detrimental effect on the quality of the editorial assessment, at least as far as the outcome is analyzed (here, in terms of two parameters: revision time and citation rate).\n\nWhile this manuscript makes a significant contribution, I have a number of suggestions I would invite the authors to consider in revision:\n\nTextual:\nAbstract/main text; Background: It is not merely the growth of the number of publications that puts the system under pressure (after all, in principle the editorial/peer review process may well be able to scale with increased research output), but rather the increased pressure to publish in a small number of high Impact Factor journals in an effort to optimize chances of a positive impact on research assessment.\n\nPlease introduce the journal eLife, including its scientific scope, as different communities have widely different peer review and citation cultures and this will likely affect the findings reported here.\n\nAbstract: Results. As presented, I found it confusing that the first sentence describes an apparently sizeable difference between accepted (10 days faster) and rejected (5 days faster), while the next sentence states ‘there was no effect on whether submissions were accepted or rejected’.\n\nAbstract: the dataset is described as consisting of an analysis of 8,905 submissions, when in reality the 2,750 papers sent for review were analyzed. 
This could be formulated more clearly.\n\nFor clarity, I would suggest removing the ‘False’ and ‘True’ nomenclature and changing to ‘reviewing editor’ and ‘editor’ assessment or similar.\n\nAnalysis:\nIt is unclear to me if the authors can exclude any biases in terms of which manuscripts were selected for formal review by an editor vs. outside-only refereeing. Have the authors attempted to assess possible bias? For example, maybe the reviewing editors tend to review themselves the manuscripts that strike them as the more interesting ones, or maybe certain subject areas are preferentially subject to one approach.\n\nI am missing a clear definition of what editorial peer review is and what is not. It is clear that this is likely not a completely binary situation and the authors do not describe how the decisions were parsed so clearly into the two groups (on p3, ‘more of a supervisory role’ vs. less of such a role sounds quite vague).\n\nWhy were appealed manuscripts removed from the analysis? This may introduce a bias as possibly erroneous decisions are excluded. How many appeals were excluded?\n\nIt is unclear to me if all papers invariably had three referees (i.e. 3 outside or 2 outside + reviewing ed.). I assume some papers only had two referees. Were they excluded? If not, how did these score for speed, revision time and citation?\n\nThe definition of ‘Total Time’ on p3 is unclear: it is stated to be both ‘first submission-acceptance’ and ‘first submission to publication’: which one is it?\n\nPlease state that Scopus citations were assessed when first introducing the topic. I would recommend using the same time window for all papers (e.g. 12 months after publication) as this renders citation rates more comparable. Why was the eLife website scraped for citation rates, and not the primary Scopus database – that data may in principle be more reliable.\n\nThe 10 days (accepted) vs. 5 days (rejected) faster: is this simply the additive effect of 2 rounds vs. 
1 round of review?\n\nPlease include basic stats information in the figure legends – in particular fig 2, where the numbers will decrease dramatically for ‘revision 1’ and ‘revision 2’.\n\nIt is unclear to me if reviewing editors were invariably faster than the outside referees. It would be useful to quantify this and, assuming there is a striking difference, to speculate why – is it the individuals selected by eLife, or the policing or incentives provided by the journal? After all, similar strategies could be applied to outside referees. On a related point, it would be useful to quantify if the reports by the reviewing editors were qualitatively different (e.g. length). One assumes the ultimate decision on the manuscript was also much better correlated with the reports by reviewing editors than with those of the outside referees.\n\nI am confused: in fig 2 ‘Reject full submission’ in revision 0 and revision 1 is slower than ‘accept’. This seems to be the opposite to fig 1 and in fact less intuitive than the results in fig 1. Since manuscripts are rarely re-reviewed (see p1), are all the datapoints displayed in ‘revision 1’ and ‘revision 2’ for re-review processes?\n\nFor fig 4, I would suggest plotting the manuscript load/editor.\n\nNon-essential further-reaching analysis (suggestions):\nit would have been useful to measure and present the acceptance/rejection rates of manuscripts assessed by three outside referees compared with two referees + reviewing editor.\n\nit would have been useful to quantify the % of agreement between the reviewing editor and the outside referees, compared with agreement between the outside referees.",
"responses": [
{
"c_id": "2212",
"date": "03 Oct 2016",
"name": "Federico Vaggi",
"role": "Author Response",
"response": "We thank the reviewer for some very constructive comments, and for appreciating the manuscript. We are now in the process of submitting a revised draft that we hope addresses all the major concerns. Speaking as the corresponding author, I also completely agree that the current publishing system causes very major distortions in the behaviour of scientists who seek high impact publications for grants/tenure. Unfortunately, given the current economic climate and the funding situation for science, this is unlikely to change in the near future. By publishing this report, as well as making all the data available, we hope to at least make the process a bit more transparent, and give authors more information on how decisions are carried out. Textual Comments: 1. We agree completely, and emphasize this in the discussions. 2. We have added more discussion in the text about eLife and its mission. 3. We tried to reword this sentence to make it clearer. 4. We rewrote this section. 5. We tried to have a more consistent nomenclature throughout the paper, including in the figures. Analysis Comments: 1. The reviewer is completely correct on this, as other reviewers have also pointed out. We tried addressing this in the text: the short answer is that while eLife tries to encourage all editors to act as reviewers on papers where they are qualified to do so, we cannot completely exclude that there is a selection process taking place. We explain this in the text as well. 2. This was an unfortunate phrase; the situation is boolean. Either the editor reviews the paper as a normal peer reviewer in addition to their duty as editor, or they purely play the role of editor. 3. Appeals were excluded because they were systematically corrupted in the database. The dates for appealed papers were nearly all inconsistent - in the end, we removed them all because we couldn't be sure that the dates were correct even for the papers without obvious inconsistencies. 4. 
We discuss this at length in the text. Some papers had only 2 reviewers. They were not excluded from the analysis. We examined the effect of the number of unique reviewers on citation rates as well as review time, including the papers with only 2 reviewers. 5. Fixed in the text. 6. We explained the use of Scopus more clearly. We have also changed the figure showing citation rates more clearly: by using rates instead of total number of citations, we can normalize across time more easily. 7. Papers that were rejected had significantly fewer rounds of revision, so that's likely to be a significant cause of the shorter review time. We also break down the data in detail in Figure 2, as well as Table 2. 8. As there is a lot of data condensed in those figures, we include a table in the text with all the numbers that are available. Putting all the numbers inside the figure legend would make the text very hard to read. 9. This would be absolutely interesting to investigate further, but would require extensive resources to score individual reviews for quality. We think this would be an excellent candidate for a follow-up study, but falls slightly outside the purpose of this work. 10. For the few papers that end up rejected after round 0, the review time is slightly longer for reject decisions than accept decisions. Looking at Table 2 shows that the number of papers that make it to that stage is very small. 11. Great suggestion. We have vastly changed Figure 4 to incorporate this feedback. Additional Suggestions: 1. This was analyzed in the linear model in Table S2. We did not explicitly separate out the different categories, but treated each variable as continuous as we didn't find strong discontinuities in the data."
}
]
},
{
"id": "13397",
"date": "26 Jul 2016",
"name": "Ivan Oransky",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThank you for the opportunity to review this manuscript. This is a well-done study, and the conclusions follow from the results. We would recommend accepting the article once all clarifications and revisions have been made, or their absence adequately justified.\nA. While brevity is generally to be admired, we would recommend a bit more detail about the statistical analyses. These are critical, but are reduced to 3 sentences and a referral to the programming language through an external link. We would suggest that the main text include the (brief) discussion of the analyses done, and rationale for them, rather than have those relegated to the external link.\nB. The interpretation of the findings seems to be attributing causal factors - an A leads to B consideration - for which the control of variables is too limited. We believe that interpreting these as associations would be more consistent with the findings.\nConsider the statement: \"Journals and editors should also think carefully about the optimum number of peer reviewers per paper. With each extra reviewer, we found that an extra 7.4 days are added to the review process.\" Given that there appeared to be no inclusion of either article quality or complexity in the evaluation, is it not possible that issues within the article itself required the use of additional reviewers (i.e. a B leads to A perspective)? 
Perhaps extra reviewers with specific expertise were required, or concerns with potential problems in the manuscript led to consultations with other reviewers. It does not seem safe to assume that it was the addition of the reviewer that added extra days.\nSimilarly, the study centers on the role of the editor in the reviewing process, and the discussion suggested that the involvement of the reviewing editor as a peer reviewer expedited the process. There was little discussion of other factors that could have accounted for the statistical results. For example, perhaps the reviewing editor selected articles that piqued his or her interest, or were more clearly presented. Perhaps the reviewing editor chose to review at times more convenient to his or her workload, while other reviewers did not have such an option. The reviewing editor might select to review articles perceived to be of greater or timelier value to the journal itself, which may increase the speed of the review.\n\nSpecific questions:\nA. According to their Methods section, the authors state that they began with an initial N=9,589. After purging other articles they had an N=8,905. They then isolated a total of 2,750 articles subjected to the peer review process for the study: \"For the rest of the paper, we focus our analysis on this subset of 2,750 papers, of which 1,405 had been accepted, 1,099 had been rejected, and the rest [which would equal 246] were still under consideration.\"\nLooking at the Excel spreadsheet for citation counts, there are 1,407 lines with entry numbers. For peer-reviewed papers, the Excel spreadsheet has 2,747 entries (after removing duplicate entries based on the MS NO column) for manuscripts numbered up to 12621. The Excel spreadsheet for unique reviewers has 2,747 entries, with a final MS NO of 12621.\nThe numbers do not appear to match, and there is no explanation for that in the Methods. 
Exactly how many manuscripts were reviewed, how many were rejected and why, and how many were tracked?\nB. In the Excel spreadsheet for citations, the second column was titled \"Citations,\" but these figures do not appear to have any relation to the Scopus citation numbers. What numbers were used for the actual citation counts?\nWe also note that we find the suggestions by other reviewers compelling, and would be happy to review a revision of this manuscript should that be considered useful.",
"responses": [
{
"c_id": "2211",
"date": "03 Oct 2016",
"name": "Federico Vaggi",
"role": "Author Response",
"response": "We thank the reviewers for a very detailed examination of our manuscript. They raise important issues, in particular, that certain findings which were exclusively correlations were treated as causative. They also caught a minor mistake in the reported number of papers. We are currently in the process of submitting a revised manuscript that we believe addresses most of the issues they raise. For the generic comments: A) We tried to expand in more detail on what variables and models were used for the different analyses. However, we believe that detailed descriptions are not as useful as mathematical formulas and the computer code that allows anyone to reproduce the analysis. As a companion to the paper, we made a literate programming document (an IPython notebook) that shows and reproduces all the statistical analysis in the paper. B) We agree completely. In the revised version of the paper, we tried to better explain the process through which an editor decides whether or not to serve as a reviewer. Unfortunately, as this is a purely observational study without direct intervention, we cannot identify causal factors. We now address the specific questions: A) - As the reviewers correctly point out, the correct number of papers in the dataset is 2,747, not 2,750. There were 3 other corrupted papers, discarded from the analysis, that we accidentally included in the original count. The papers that were dropped were those for which the database entries were corrupted (the date of resubmission was prior to the date when the original decision was made) or where data was missing. - For the citation file, we only added rows for papers that received citations (in Scopus or otherwise). When we merge the citation data with the other information about the paper, we implicitly treat all missing values as zeros (this can be seen in our script). 
We did not imagine that people would try to manually reproduce the analysis using Excel - so we apologize if this caused additional difficulties. B) We downloaded all the different metrics that eLife makes available for all published papers (Citations,Likes,en.search.wordpress.com,en.wikipedia.org,europepmc.org,f1000.com,scholar.google.com,twitter.com,www.citeulike.org,Pubmed_Citations,Scopus). The column we used for all the analysis in the paper is Scopus. We were surprised to find out that the different citation sources (Scopus, PubMed, CiteULike, etc.) can have significantly different values. This is important to take into account, as, unless discussed, this gives researchers a significant number of degrees of freedom to pick the metric that best supports their hypothesis."
}
]
}
] | 1
|
https://f1000research.com/articles/5-683
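The author response in the reviews above notes that the citation file only contains rows for papers that received at least one citation, and that missing papers are implicitly treated as zeros when merged with the rest of the dataset. A small sketch of that merge logic (the manuscript numbers and counts here are invented for illustration):

```python
# Sketch of the merge described in the author response: the citation file
# only lists papers with at least one citation, so papers absent from it
# are treated as having zero citations. Manuscript numbers are invented.
papers = ["MS-0001", "MS-0002", "MS-0003", "MS-0004"]
scopus_citations = {"MS-0001": 12, "MS-0003": 3}  # sparse table: no zero rows

def merge_citations(paper_ids, citations):
    """Left-join papers against the sparse citation table, filling zeros."""
    return {ms: citations.get(ms, 0) for ms in paper_ids}

merged = merge_citations(papers, scopus_citations)
# MS-0002 and MS-0004 are implicitly zero after the merge.
```

This is the same behaviour a pandas left merge followed by `fillna(0)` would give; a reviewer reproducing the spreadsheet by hand would otherwise see fewer citation rows than papers.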
|
https://f1000research.com/articles/5-2529/v1
|
18 Oct 16
|
{
"type": "Method Article",
"title": "A sensitive 301V BSE serial PMCA assay",
"authors": [
"Kevin C. Gough",
"Keith Bishop",
"Robert A. Somerville",
"Nora Hunter",
"Ben C. Maddison",
"Kevin C. Gough",
"Keith Bishop",
"Robert A. Somerville",
"Nora Hunter"
],
"abstract": "The prion strain 301V is a mouse-passaged form of bovine spongiform encephalopathy (BSE). It has been used as a model of BSE for more than 20 years, in particular in the investigation of the tissue distribution of infectivity, the molecular phenotype and transmission properties of BSE, strain typing assays and prion inactivation studies. Most 301V experiments have required murine bioassay as a method for the quantitation of infectivity. To date this model strain has not been studied with the protein misfolding cyclic amplification assay (PMCA), which detects prion-associated PrPSc protein. The detection of BSE PrPSc by PMCA can be more sensitive than mouse bioassay and is carried out in a much shorter time frame of days as opposed to months/years. Here, we describe the development of a new highly sensitive and specific PMCA assay for murine 301V and assess the sensitivity of the assay in direct comparison with murine bioassay of the same material. This in vitro assay detected, in a few days, 301V at a brain dilution of at least 1x10-9, compared to bioassay of the same material in VM mice that could detect down to a 1x10-8 dilution and took >180 days. The 301V PMCA may therefore offer a faster and more sensitive alternative to live animal bioassay when studying the BSE agent in VM mice.",
"keywords": [
"BSE",
"bioassay",
"301V",
"sPMCA",
"in vitro",
"prion"
],
"content": "Introduction\n\nThe transmissible spongiform encephalopathies (TSE or prion diseases) form a group of infectious and fatal neurodegenerative diseases affecting several species of mammals for which there is no available treatment or cure. The cause is thought to be a novel infectious agent (the prion), itself a misfolded isomer (PrPSc) of a benign cell-associated protein known as PrP or PrPC. This group of diseases includes scrapie in sheep and goats, Creutzfeldt-Jakob disease (CJD) in humans and bovine spongiform encephalopathy (BSE) in cattle. The UK BSE epizootic of the mid-1980s to early 1990s was the result of recycling BSE through the cattle food chain via a high-protein feed additive known as meat and bone meal. It is thought that over 460,000 BSE-infected UK cattle entered the human food chain before a ban on feeding specified risk materials to cattle came into force1. As a consequence of this, a new human disease referred to as vCJD, the human form of BSE, began to present in a number of young adults from the mid-1990s. This demonstration of the zoonotic potential of prion diseases generated an accelerated program of research into these diseases and much of this has required animal models. The murine-passaged BSE strain known as 301V was first described by Bruce and colleagues2 during transmission studies of cattle BSE to wild-type mice. BSE 301V is the product of serial passage within the VM mouse line and this combination of 301V/VM has been well characterised and used in numerous studies, including those aimed at understanding the fundamental brain pathology during neuropathogenesis3. In addition, the 301V/VM model has also been important in experiments analysing the effectiveness of various decontamination measures for BSE infectivity. For example, Taylor et al.4 demonstrated the effectiveness of formic acid in inactivating both 301V and scrapie in the context of occupational exposure to histological samples. 
This BSE model has also been used to show the lack of sufficient inactivation of BSE prions during historical rendering processes which resulted in the BSE outbreak in UK cattle5. For development of safe procedures in the context of human health, 301V has been used to model vCJD in the fractionation of plasma and the safe manufacture of blood products6, and additionally has been used to estimate BSE infectivity that is likely to remain after processes in the derivation of bone gelatine from bovine products7. A further study by McLeod et al.8 screened a number of different proteases for their ability to reduce the infectious titre of 301V as a novel method for the decontamination of sensitive surgical instruments. More recently, transgenic mice expressing the bovine PRNP transgene have become available, and their high susceptibility to bovine BSE prions has complemented the use of 301V in these types of experiments. A study published by Giles et al.9 directly compared the effectiveness of decontamination of both bovine BSE and 301V in transgenic and VM mice, respectively. 301V was more sensitive to both heat and chemical denaturation than cattle BSE, suggesting that the physical properties of the 301V BSE strain have diverged slightly from those of cattle BSE. Despite the more recent availability of these transgenic rodent strains, the 301V/VM infection system remains a useful, well-characterised model for BSE in TSE research and allows direct comparison with numerous previous studies.\n\nIn the last 15 years or so, prion research has been revolutionised by the demonstration of in vitro assays that are thought to replicate the molecular events occurring in vivo during prion infection and the conversion of PrPC to the disease isomer PrPSc. 
First reported by Saborio and colleagues in 200110, the protein misfolding cyclic amplification (PMCA) assay is able to replicate prions in vitro within a source of PrPC (generally produced from a healthy brain homogenate) during cycles of PrPC to PrPSc seeded conversion followed by sonication with high-frequency sound waves that break up aggregates of PrPSc to form new seeds or sites of nucleation. The products of this sensitive in vitro assay retain the biochemical characteristics of the prion seed and are infectious11. The sensitivity of the PMCA assay was improved by including the dilution of the reaction into fresh PrPC substrate after an optimal period of amplification. This modification, known as serial PMCA (sPMCA)12, has been widely adopted by the research community and has been applied to several rodent prion strains13, scrapie in sheep14,15, BSE in cattle16, and CWD of cervids17. sPMCA can achieve levels of sensitivity significantly beyond those of animal bioassay18 and these experiments take days or weeks to perform compared to the months to years of animal bioassay, and at a fraction of the cost. As such, amplification of prions by sPMCA can be used as a surrogate for measuring infectivity in vivo. To date, the 301V strain of BSE has not been used in sPMCA-based studies. Here, we describe a high-sensitivity 301V sPMCA that can, over a period of 5 days, detect higher dilutions of infectivity than are attained by a 170–200 day bioassay within the VM mouse line.\n\n\nResults and discussion\n\nFigure 1. A. Reactions were seeded with 10 µl of 10-4 to 10-13 dilutions of 301V brain (as indicated). Unspiked samples had 10 µl VM brain substrate only. Sc, scrapie-positive brain sample used as a blotting control. B. Assessment of reproducibility of the 301V sPMCA using three separate batches of VM substrate, each seeded with 301V brain dilutions 10-7 to 10-11 (as indicated). Unspiked PMCA samples were always negative (a total of 15 replicates are shown). 
Western blots were probed with the anti-PrP antibody SHa31; M, molecular mass markers at 41, 30 and 22 kDa.\n\nA pool of 301V mouse brain homogenate was used to assess the efficacy of a new sPMCA assay alongside conventional 301V/VM bioassay. For the in vitro assay we used a 5 day amplification method and a murine VM substrate. This 301V sPMCA assay demonstrated assay sensitivity to a 1×10-9 dilution of brain homogenate (Figure 1A). The batch-to-batch variability test of a further 3 substrate preparations detected 301V to at least the same level (Figure 1B). The observed variations in sensitivity within these limiting dilution experiments (between 10-9 and 10-11) are likely a reflection of differences in individual substrate preparations that will be seen when making small volume preparations from limited numbers of brains.\n\nThe same 301V sample was also analysed in a VM mouse bioassay by limiting dilution, and this bioassay detected infectivity in 1 out of 12 mice at the 10-8 dilution of 301V brain material (Table 1), equating to a 301V titre of the original brain pool of 108.5 LD50/g (as determined by Kärber methodology19).\n\n*Number of clinical and pathology positive mice/total injected. Mouse numbers exclude intercurrent deaths, i.e. animals dying earlier than the 1st clinical case (there was one intercurrent death of a mouse receiving the 10-7 brain dilution, and two receiving the 10-4 dilution). Total numbers of challenged mice were 6 per group for 10-4 and 10-5, and 12 per group for 10-6 to 10-9.\n\nThe 301V sPMCA assay can therefore detect PrPSc at a level at least tenfold more sensitive than the VM mouse bioassay, in a total assay time of little over a week. Whilst we report sensitivity of the assay at 5 days of amplification, it is very likely that much higher levels of sensitivity could be attained with additional rounds of amplification. 
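The Kärber endpoint estimate cited above can be sketched in a few lines of Python. This is a minimal illustration of the Spearman-Kärber formula, not the study's own calculation, and the dilution/response numbers below are hypothetical:

```python
def karber_log10_endpoint(responses, d=1.0):
    """Spearman-Karber estimate of the 50% endpoint dilution.

    responses: list of (log10_dilution, proportion_responding) pairs ordered
               from least dilute (proportion 1.0) to most dilute (0.0).
    d: log10 spacing between successive dilutions (1.0 for ten-fold steps).

    Uses log10(endpoint) = L - d * (S - 0.5), where L is the log10 of a
    dilution giving 100% response and S is the sum of proportions from
    that dilution onward.
    """
    # first (least dilute) level at which every animal responded
    start = next(i for i, (_, p) in enumerate(responses) if p == 1.0)
    L = responses[start][0]
    S = sum(p for _, p in responses[start:])
    return L - d * (S - 0.5)

# hypothetical series: full response at 10^-6, half at 10^-7, none at 10^-8
data = [(-6, 1.0), (-7, 0.5), (-8, 0.0)]
print(karber_log10_endpoint(data))  # -7.0 -> 50% endpoint at the 10^-7 dilution
```

Converting the endpoint dilution to LD50 per gram of tissue would additionally scale for the inoculum volume and homogenate concentration, which is omitted here.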
The highest dilution of 301V infectivity that could be detected within the VM bioassay was a 1×10-8 dilution of brain at 184 days post inoculation, or 26 times longer than the sPMCA assay. Maintaining animals within a bioassay, including their category 3 containment, makes these kinds of titration experiments very costly and time-consuming to carry out. That, coupled with the ethical implications of the use of animals, means sPMCA could be the method of choice unless there is a good scientific reason to demonstrate infectivity (the ability to cause disease), or a requirement to monitor strain phenotype, as opposed to the surrogate marker of disease, PrPSc protein. A useful way of incorporating these two assays into future studies could be to assess 301V seeding activity within a wide range of samples to identify those that contain PrPSc. Bioassay could then be used on a limited number of sPMCA-positive samples to confirm the presence of BSE infectivity. Another example of the routine application of sPMCA for the detection of prions in a rodent prion model is cervid CWD17. In this instance, CWD amplification within cervid CNS tissue substrate is notoriously inefficient, and transgenic mice have been used as an animal bioassay model for infectivity studies and also to provide substrate to facilitate efficient in vitro amplification by sPMCA.\n\nIn summary, we have developed a reliable in vitro method (sPMCA) for the detection of PrPSc resulting from infection with 301V (mouse-passaged BSE). The assay is at least as sensitive as mouse bioassay and can derive data on the presence of PrPSc in a fraction of the time. 
This will be useful in studies such as those looking at BSE decontamination, where the screening of large numbers of samples is required.\n\n\nMaterials and methods\n\nAll use of animals, the collection of animal tissues and the use of such tissue was carried out in accordance with the Animals (Scientific Procedures) Act 1986 (ASPA), under licences from the UK Government Home Office (Project licence 60/2544). All animal experiments were subject to review and approval (01-124) by The Roslin Institute Ethical Review Committee and euthanasia methods were approved by the UK Home Office.\n\nA serial dilution of pooled murine VM brains taken from 301V-challenged animals was made as previously described20. A dilution series of this brain homogenate from 10-1 to 10-10 was made up in saline and used to inoculate groups of VM mice, bred in house at The Roslin Institute and of mixed sex, 6 weeks old (groups of 6 mice at 10-4 and 10-5, 12 mice from 10-6 to 10-10), with 20 µl of each dilution intracerebrally, as previously described20. Animals were observed daily for signs of ill health and euthanised by cervical dislocation when clinical signs of neurological disease or any intercurrent illness were observed. After euthanasia, brain tissue was confirmed as 301V positive or negative by detection of brain tissue vacuolation by light microscopy after Haematoxylin and Eosin staining20. This analysis was carried out blinded to the identity of the tissue in each case.\n\nVM brains from healthy animals were supplied frozen, before preparation of the 10% (w/v) homogenate substrate. Preparation of 10% brain homogenates as substrates for sPMCA has been previously described21. Here, we included the sPMCA additive digitonin22 (Sigma-Aldrich), which was added to reactions at 50 μg/ml. sPMCA reactions were assembled in 200 µl thin wall PCR tubes, and comprised 90 µl brain homogenate substrate with digitonin, three 2.4 mm Teflon beads (Precision plastic ball co. 
Ltd) and 10 µl of 301V sample to be amplified (10-4 to 10-13 dilution of 301V brain). Unspiked negative control samples were set up substituting the 301V seed with 10 µl VM brain substrate only. Reaction tubes were placed in a Misonix S3000 sonicating water bath set on a program of 10 seconds sonication every 30 mins, for 24 hours at a power setting of 190–200 W at 37°C. Every 24 hours, samples were diluted 1 in 10 into fresh VM brain substrate and subjected to a further 24 hour round of repeated sonication and incubation, retaining the same three Teflon beads throughout the 5 rounds of sPMCA. Amplifications were carried out for a total of 5 days. Dilutions of 301V brain homogenate are recorded as the dilution of brain spike before addition to the amplification reaction, i.e. 10-1 is 10 µl of a 10% (w/v) preparation of brain, 10-2 is 10 µl of a 1 in 10 dilution of the 10-1 preparation of brain, etc. All sPMCA was carried out at 37°C in a Misonix S3000 microplate horn.\n\nsPMCA reaction products (10 µl) were digested using a final concentration of 50 µg/ml proteinase K (Sigma-Aldrich) for 90 minutes at 40°C. Samples were then boiled for 5 minutes in 1X lithium dodecyl sulfate (LDS) buffer (Invitrogen) and electrophoresed through a NuPAGE SDS-PAGE gel system (Invitrogen) using 12% (w/v) acrylamide gels. Molecular mass markers (prestained Seeblue plus2, Invitrogen LC5925) were run alongside samples. As a blotting control, an aliquot of proteinase K digested (50 µg/ml proteinase K (Sigma-Aldrich), 60 minutes at 40°C) scrapie positive ovine brain (equivalent to 2 µl of a 10% w/v brain homogenate) was also loaded onto each SDS-PAGE gel. Proteins were transferred to PVDF (Roche) membrane by electroblotting, and the membranes were then blocked for 1 hour with 3% (w/v) skimmed milk. 
Western blots were probed with the anti-PrP mouse monoclonal antibody SHa31 (SpiBio A03213) diluted 1:80,000 and a polyclonal goat anti-mouse immunoglobulins horseradish peroxidase (HRP) conjugate (Dako P04477) diluted 1:20,000, as previously described21. Blots were imaged after the addition of EZ-ECL HRP substrate (Geneflow) using an ICCD225 photon counting camera system and IFS32 image software (Photek Ltd).\n\n\nData availability\n\nF1000Research: Dataset 1. Raw uncropped images of the Western blots shown in Figure 1. 10.5256/f1000research.9735.d13863823",
"appendix": "Author contributions\n\n\n\nRAS and NH conceived the study and provided both the 301V and VM biological material, BCM and KCG devised and directed the experiments, KB carried out the experiments. BCM wrote the manuscript, all authors proof read the manuscript before submission.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was funded by DEFRA under project SE1433 (Robert Somerville) and by BBSRC Institute Strategic Grant BB/J004332/1 (The Roslin Institute).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nValleron AJ, Boelle PY, Will R, et al.: Estimation of epidemic size and incubation time based on age characteristics of vCJD in the United Kingdom. Science. 2001; 294(5547): 1726–1728. PubMed Abstract | Publisher Full Text\n\nBruce M, Chree A, McConnell I, et al.: Transmission of bovine spongiform encephalopathy and scrapie to mice: strain variation and the species barrier. Philos Trans R Soc Lond B Biol Sci. 1994; 343(1306): 405–411. PubMed Abstract | Publisher Full Text\n\nWilliams A, Lucassen PJ, Ritchie D, et al.: PrP deposition, microglial activation, and neuronal apoptosis in murine scrapie. Exp Neurol. 1997; 144(2): 433–438. PubMed Abstract | Publisher Full Text\n\nTaylor DM, Brown JM, Fernie K, et al.: The effect of formic acid on BSE and scrapie infectivity in fixed and unfixed brain-tissue. Vet Microbiol. 1997; 58(2–4): 167–174. PubMed Abstract | Publisher Full Text\n\nTaylor DM, Fernie K, McConnell I, et al.: Solvent extraction as an adjunct to rendering: the effect on BSE and scrapie agents of hot solvents followed by dry heat and steam. Vet Rec. 1998; 143(1): 6–9. PubMed Abstract | Publisher Full Text\n\nReichl HE, Foster PR, Welch AG, et al.: Studies on the removal of a bovine spongiform encephalopathy-derived agent by processes used in the manufacture of human immunoglobulin. 
Vox Sang. 2002; 83(2): 137–145. PubMed Abstract | Publisher Full Text\n\nGrobben AH, Steele PJ, Somerville RA, et al.: Inactivation of the bovine-spongiform-encephalopathy (BSE) agent by the acid and alkaline processes used in the manufacture of bone gelatine. Biotechnol Appl Biochem. 2004; 39(Pt 3): 329–338. PubMed Abstract | Publisher Full Text\n\nMcleod AH, Murdoch H, Dickinson J, et al.: Proteolytic inactivation of the bovine spongiform encephalopathy agent. Biochem Biophys Res Commun. 2004; 317(4): 1165–1170. PubMed Abstract | Publisher Full Text\n\nGiles K, Glidden DV, Beckworth R, et al.: Resistance of bovine spongiform encephalopathy (BSE) prions to inactivation. PLoS Pathogens. 2008; 4(11): e1000206. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSaborio GP, Permanne B, Soto C: Sensitive detection of pathological prion protein by cyclic amplification of protein misfolding. Nature. 2001; 411(6839): 810–3. PubMed Abstract | Publisher Full Text\n\nWeber P, Giese A, Piening N, et al.: Generation of genuine prion infectivity by serial PMCA. Vet Microbiol. 2007; 123(4): 346–57. PubMed Abstract | Publisher Full Text\n\nCastilla J, Saá P, Soto C: Detection of prions in blood. Nat Med. 2005; 11(9): 982–5. PubMed Abstract | Publisher Full Text\n\nYoshioka M, Imamura M, Okada H, et al.: Sc237 hamster PrPSc and Sc237-derived mouse PrPSc generated by interspecies in vitro amplification exhibit distinct pathological and biochemical properties in tga20 transgenic mice. Microbiol Immunol. 2011; 55(5): 331–40. PubMed Abstract | Publisher Full Text\n\nThorne L, Terry LA: In vitro amplification of PrPSc derived from the brain and blood of sheep infected with scrapie. J Gen Virol. 2008; 89(Pt 12): 3177–84. PubMed Abstract | Publisher Full Text\n\nThorne L, Holder T, Ramsay A, et al.: In vitro amplification of ovine prions from scrapie-infected sheep from Great Britain reveals distinct patterns of propagation. BMC Vet Res. 2012; 8: 223. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nFranz M, Eiden M, Balkema-Buschmann A, et al.: Detection of PrPSc in peripheral tissues of clinically affected cattle after oral challenge with bovine spongiform encephalopathy. J Gen Virol. 2012; 93(Pt 12): 2740–8. PubMed Abstract | Publisher Full Text\n\nKurt TD, Perrott MR, Wilusz CJ, et al.: Efficient in vitro amplification of chronic wasting disease PrPRES. J Virol. 2007; 81(17): 9605–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSaá P, Castilla J, Soto C: Ultra-efficient replication of infectious prions by automated protein misfolding cyclic amplification. J Biol Chem. 2006; 281(46): 35245–52. PubMed Abstract | Publisher Full Text\n\nKärber G: Beitrag zur kollektiven Behandlung pharmakologischer Reihenversuche. Archives of Experimental Pathology and Pharmacology. 1931; 162(4): 480–483. Publisher Full Text\n\nGrobben AH, Steele PJ, Somerville RA, et al.: Inactivation of transmissible spongiform encephalopathy agents during the manufacture of dicalcium phosphate from bone. Vet Rec. 2006; 158(11): 361–366. PubMed Abstract | Publisher Full Text\n\nMaddison BC, Baker CA, Terry LA, et al.: Environmental sources of scrapie prions. J Virol. 2010; 84(21): 11560–2. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGonzalez-Montalban N, Makarava N, Ostapchenko VG, et al.: Highly efficient protein misfolding cyclic amplification. PLoS Pathogens. 2011; 7(2): e1001277. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGough K, Bishop K, Somerville R, et al.: Dataset 1 in: A sensitive 301V BSE serial PMCA assay. F1000Research. 2016. Data Source"
}
|
[
{
"id": "17050",
"date": "24 Nov 2016",
"name": "Bonto Faburay",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe authors describe research on a sensitive 301V BSE serial PMCA assay, comparing it to a bioassay (mouse) model. The study per se is not entirely new; and also the time to detection (5 days) is still comparatively long compared to fluorescence-based (QuIC) assays. However, the experiments were well executed, including the relevant controls. The method could also be a useful diagnostic and experimental tool for prions. I would encourage the authors to cite Faburay et al.1, which also performed serial PMCA and is highly relevant to the current work.\nI recommend the article for acceptance for indexing.\nThank you.",
"responses": []
},
{
"id": "18080",
"date": "06 Dec 2016",
"name": "Mohammed Moudjou",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe serial PMCA procedure reported in the present work shows a highly sensitive amplification of the mouse-adapted BSE prion strain 301V. The authors compared this method with the bioassay and underlined several advantages of the sPMCA procedure.\n\nThe data are convincing but I have one restriction on the authors' conclusions. In my opinion they have not clearly demonstrated that the sPMCA amplification products (amplicons) really correspond to the original strain. They could at least, in the present version of the article, show on the same gel a profile of the 301V mouse inoculum in parallel with the PMCA products, instead of showing the scrapie PrPres profile, which is not relevant.\nThey could also determine and compare the glycoform ratio between the unamplified PrPres of the inoculum and the PMCA amplicons. These parameters could rapidly give some information on the identity of the amplification products before having the bioassay data with the PMCA products (is it really the 301V BSE strain?).\nOn page 3, the authors discuss the advantages of the sPMCA method in bioethical terms. Other laboratories1 have already underlined this point and should be cited in this part of the present paper, as should the following paper2, which goes further on this point by replacing animal tissues with cell culture lysates for amplification of several prion strains.\nOne minor point: please spell out LDS on page 4, western blotting section.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2529
|
https://f1000research.com/articles/3-210/v1
|
04 Sep 14
|
{
"type": "Method Article",
"title": "Elucidating genomic gaps using phenotypic profiles",
"authors": [
"Daniel A. Cuevas",
"Daniel Garza",
"Savannah E. Sanchez",
"Jason Rostron",
"Chris S. Henry",
"Veronika Vonstein",
"Ross A. Overbeek",
"Anca Segall",
"Forest Rohwer",
"Elizabeth A. Dinsdale",
"Robert A. Edwards",
"Daniel Garza",
"Savannah E. Sanchez",
"Jason Rostron",
"Chris S. Henry",
"Veronika Vonstein",
"Ross A. Overbeek",
"Anca Segall",
"Forest Rohwer",
"Elizabeth A. Dinsdale"
],
"abstract": "Advances in genomic sequencing provide the ability to model the metabolism of organisms from their genome annotation. The bioinformatics tools developed to deduce gene function through homology-based methods are dependent on public databases; thus, novel discoveries are not readily extrapolated from current analysis tools with a homology dependence. Multi-phenotype Assay Plates (MAPs) provide a high-throughput method to profile bacterial phenotypes by growing bacteria in various growth conditions simultaneously. More robust and accurate computational models can be constructed by coupling MAPs with current genomic annotation methods. PMAnalyzer is an online tool that analyzes bacterial growth curves from the MAP system, which are then used to optimize metabolic models during in silico growth simulations. Using Citrobacter sedlakii as a prototype, the Rapid Annotation using Subsystem Technology (RAST) tool produced a model consisting of 1,367 enzymatic reactions. After the optimization, 44 reactions were added to, or modified within, the model. The model correctly predicted the outcome of 93% of growth experiments.",
"keywords": [],
"content": "Introduction\n\nRecent advancements in genomic sequencing provide high quality, deep-coverage DNA sequences for tens of thousands of bacterial genomes. To manage this breadth of data, online tools such as RAST (http://rast.nmpdr.org/)1 leverage the SEED database2 by using homology and genomic context to determine gene functions encoded in the DNA sequences. This automated annotation service additionally generates a raw metabolic reconstruction of the genome for use in in silico experiments. Genome-scale metabolism analyses use these reconstructions as input into data environments such as KBase, the Department of Energy Systems Knowledgebase (http://kbase.us). Several hypotheses can be tested simultaneously, e.g., protein function identification, biological behavior simulations, and metabolic network comparisons3.\n\nMetabolic models are defined by the chemical reactions that characterize the vast metabolic network of an organism. Flux-balance analysis (FBA) uses these chemical reactions to provide understanding of the physiological capacity of the cell4. Mathematically, the stoichiometry of metabolic networks is represented by a two-dimensional numerical matrix, in which the values are the stoichiometric coefficients of the reactants and products. Each row and column in the matrix is associated with a metabolite and a metabolic reaction, respectively. For one stoichiometric reaction, the products of the reaction are given positive coefficients, the reactants are given negative coefficients, and non-associated metabolites are given zeros. Through a constraint-based approach, the FBA algorithm uses linear programming techniques to solve this system of stoichiometric coefficients, optimizing for biomass production or another objective function4–6.\n\nThe number of published metabolic reconstructions for prokaryotic and eukaryotic organisms has increased over the past decade3,7. 
Through the increased use of next-generation sequencing and automated annotation software, metabolic models for new organisms are arising and older models are continuously being reconciled. However, a drawback to RAST and other gene annotation algorithms is the dependency on previous functional annotations. The breadth and quality of annotated functions vary among and across bacterial species, and this annotation knowledge is not accumulating as quickly as new sequences. Automatic generation of metabolic models is limited by our knowledge of cellular metabolism and biochemistry. In addition, an existing problem with gene databases is the inconsistent nomenclature used to name and define the function of a gene. Separate databases hold slightly different annotations for the same gene, which propagates into downstream tools, leading to a loss of information in analyses and mis-annotations in models built during the initial reconstruction. To bridge the gap between quality genome annotations and accurate metabolic models, novel methods are needed to supplement the reconciliation process.\n\nMulti-phenotype Assay Plates (MAPs) provide a system to quantitatively monitor microbial growth while qualitatively deducing the metabolic capabilities of a microbe across a range of conditions. The MAPs technology uses optical density to measure substrate utilization of a clonal microbial population. MAPs have been an advantageous tool in past phenotypic studies3,8–20. Using MAPs we can measure bacterial growth in defined conditions to aid in the validation of microbial genome annotation software.\n\nIn this study, a metabolic model of Citrobacter sedlakii is built using a workflow combining experimental data and computational analysis (Figure 1). The genome of C. sedlakii was sequenced, annotated, and a metabolic reconstruction was subsequently generated using RAST and the KBase platform. Growth of C. 
sedlakii was measured in 96 different growth conditions and the resulting data were introduced into a novel computational pipeline, PMAnalyzer. The PMAnalyzer automatically parameterizes raw growth data and fits a logistic model of bacterial growth for each experimental condition. Observed phenotypes from the MAP experiments were used to ground-truth the genome-scale metabolic model by running FBA simulations, which identified disparities in the metabolic reconstruction. Disparities were attributed either to RAST mis-annotations or to gaps in the sequencing. Here, we introduce a high-throughput workflow to obtain large-scale metabolic reconstructions and reconciliations with observed growth phenotypes.\n\nInitial metabolic models are built and reconciled in KBase from RAST genome annotations. Phenotypic profiles from the MAPs technology and the PMAnalyzer are incorporated into the KBase FBA-reconciliation loop to optimize the model.\n\n\nMethods\n\nThe C. sedlakii isolate (ATCC 51115, CDC 4696-86) was provided by Dr. Marlene DeMers in the Department of Biology at San Diego State University. A glycerol frozen stock of the sample was plated on trypticase soy agar (Becton, Dickinson and Company) and incubated at 37°C for 24 hrs. A single colony was inoculated into 3 mL of trypticase soy broth (Becton, Dickinson and Company) and incubated, with shaking, at 37°C for 24 hrs. 500 µL of the overnight culture was mixed with 500 µL of 30% (weight/volume) filter sterilized glycerol and transferred to a cryogenic vial (Fisher Scientific) for storage at -80°C.\n\nC. sedlakii was grown overnight at 37°C for 24 hrs on 50% Luria-Bertani (LB) agar (Fisher Scientific) from a frozen glycerol stock. 
Three independent colonies, biological triplicates, were inoculated into 3 mL of a modified 3-morpholinopropane-1-sulfonate (MOPS) broth21 (1X MOPS (40 mM MOPS + 10 mM Tricine), 0.4% glycerol, 9.5 mM NH4Cl, 0.25 mM NaSO4, 1.0 mM MgSO4, 1.32 mM K2HPO4, 10 mM KCl, 0.5 μM CaCl2, 5 mM NaCl, and 6 μM FeCl3) and incubated for 24 hrs at 37˚C with agitation. Overnight cultures were centrifuged using an Eppendorf Centrifuge 5418R at maximum speed (14,000 rpm) to pellet cells and washed twice with 500 μL of 10 mM Tris/10 mM MgSO4 buffer. Cells were re-suspended in 1 mL of 10 mM Tris/10 mM MgSO4 buffer and optical density at 600 nm (OD600) was measured using a Beckman Coulter DU 640 spectrophotometer. All suspensions were concentrated to achieve a final optical density of approximately OD600 = 0.1 after a fifteen-fold dilution.\n\n10 µL of concentrated cells was transferred into each well of a sterile 96-well microtiter plate (Grenier Biosciences), which contained 60 µL of sterile water, 50 µL of 3X MOPS basal media, and 30 µL of 5X substrate (Supplementary Figure 1). Each plate was sealed with PCR grade plate film (Sigma SealPlate® film) and incubated on a Molecular Devices Analyst GT multi-plate plate reader (Molecular Devices, LLC.). The plate reader was programmed to incubate MAPs at 37°C and measure OD600 every 30 min, for a total of 32 hrs. Absorbance data were saved, extracted as a text file, and uploaded to the project website for data storage (http://vdm.sdsu.edu/).\n\nMOPS basal media is derived from the culture media provided by Neidhardt et al.21, and contains 1X MOPS (40 mM MOPS + 10 mM Tricine), 0.4% glycerol*, 9.5 mM NH4Cl*, 0.25 mM NaSO4*, 1.0 mM MgSO4*, 1.32 mM K2HPO4*, 10 mM KCl, 0.5 μM CaCl2, 5 mM NaCl, 6 μM FeCl3. Media was prepared with sterile Milli-Q water (Milli-Q Integral Water Purification Systems, EMD Millipore) and subsequently filter sterilized using a 0.22 µm Sterivex filter unit (Millipore, Inc). 
(*These compounds are not included depending on the basal media. For example, 0.4% glycerol is not used in the carbon basal media, while 1.0 mM MgSO4 is replaced by 1.0 mM MgCl2 in the sulfur basal media).\n\nNutrient substrates were prepared by dissolving 1.25% (w/v) of the solid compound in sterile Milli-Q water and filter sterilized with a 0.22 µm Sterivex filter unit (Millipore, Inc). Substrate stocks were stored at 5X concentrations at room temperature in sterile conical tubes. Supplementary Figure 1 contains a detailed mapping of the substrates used in the MAPs.\n\nAs part of a DNA sequencing class at San Diego State University26, the C. sedlakii 119 genome was sequenced using 454 pyrosequencing with the GS Junior platform and assembled with Newbler version 2.7. RAST (http://rast.nmpdr.org/) was used for subsystem annotations and metabolic reconstructions1. Annotations were imported into the KBase environment where the metabolic model was viewed, manipulated, and used in flux-balance analysis (FBA) simulations. FBA was used to determine whether the model organism is capable of growth in specific conditions. The KBase command kbfba-importfbamodel was used to import the annotations into the Citrobacter_sedlakii_119 workspace and the model was named C.sedlakii_nogapfill. The Citrobacter_sedlakii_119 workspace and its objects are freely accessible to anyone. Using the KBase command kbfba-gapfill, initial gap-filling was performed on the model while specifying Luria-Bertani (LB) as the growth condition (the default ArgonneLBMedia formulation was used). The reconciled model was named C.sedlakii_ArgonneLB_gapfill and is provided in the workspace. 
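Each FBA growth call conceptually solves the linear program described in the Introduction: maximize a biomass objective subject to steady state (S·v = 0) and flux bounds. A minimal sketch on a hypothetical three-reaction toy network (not the C. sedlakii model), assuming SciPy is available:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: v1 (uptake -> A), v2 (A -> B), v3 (B -> biomass).
# Rows = metabolites (A, B); columns = reactions (v1, v2, v3).
S = np.array([[1, -1,  0],   # A: produced by v1, consumed by v2
              [0,  1, -1]])  # B: produced by v2, consumed by v3

c = [0, 0, -1]            # maximise v3 (linprog minimises, so negate)
bounds = [(0, 10),        # uptake flux capped at 10 units
          (0, 1000),
          (0, 1000)]

# Steady-state constraint S v = 0, then solve the LP
res = linprog(c, A_eq=S, b_eq=[0, 0], bounds=bounds)
print(res.x)     # optimal flux distribution, forced equal by steady state
print(-res.fun)  # maximal biomass flux: 10.0, limited by the uptake bound
```

A media condition corresponds to choosing which uptake reactions have non-zero bounds; "no growth" is predicted when the optimal biomass flux is (near) zero.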
The LB gap-filled model created a representative model that fulfills the general requirements needed to utilize a rich media source for growth.\n\nThe high-throughput analysis pipeline described below was executed in a Linux command-line environment and was developed using several programming languages, including bash, Perl version 5.16 (http://www.perl.org/), and Python version 3.4.1 (http://www.python.org/). Perl scripts were written to parse and format the MAP raw data files into tab-delimited intermediate files. The primary analysis script (Python) used these intermediate files for modeling the growth curves. For ease of execution, a single bash program was created as a wrapper script that executes the parsing and analysis scripts as a cohesive, automated pipeline. Command-line arguments or a configuration file were used for user input and settings. All scripts are freely accessible from a Git repository at https://github.com/dacuevas/PMAnalyzer.\n\nPhenotypic responses were recorded by measuring the optical density at 600 nm (OD600) over time, which quantitatively represents the bacterial biomass concentration at each time point. The OD600 values are plotted to form the sigmoidal shape characteristic of bacterial growth curves. This characteristic curve, highlighted by Monod22 and modeled by Zwietering et al.23, consists of three phases: lag, exponential, and stationary. Zwietering et al.23 interpret these phases as parameters required to model growth, using their logistic equation\n\ny = y0 + A/(1 + exp[(4μ/A)(λ − t) + 2])     (1)\n\nwhere y0 (OD600) is the starting optical density, λ (hr) is the lag phase, μ (OD600•hr-1) is the maximum growth rate during the exponential phase, A (OD600) is the asymptote of the growth curve representing the carrying capacity of the population, and t (hr) is time. Supplementary Figure 2 provides a visual representation of a classical growth curve.\n\nTo parameterize the growth curves, median values of the replicates were used. 
Python’s NumPy module version 1.8.1 and SciPy module version 0.14.024 provide several functions for optimizing nonlinear, multivariate functions. In this case, the minimize function was used so that bounds and constraints could be placed on each parameter. The default Broyden, Fletcher, Goldfarb, and Shanno (BFGS) algorithm25 was used to minimize the sum of squared error between the logistic model from (1) and the raw data. As input, the algorithm requires estimations for each growth curve phase. The estimation for the asymptote was defined as the largest OD600 reading from three consecutive time points (2) and the maximum growth rate was defined as the largest change in OD600 over a 1.5 hr window (3). The estimated lag time was set at 0.5 hr.\n\n\n\n\n\nThe result from (2) was also used as the upper bound for the minimization function. Lag time and maximum growth rate were not given an upper bound. Lower bounds for the asymptote, maximum growth rate, and lag time were 0.01, 0, and 0, respectively.\n\nEach well in the MAPs has a varying level of growth, including different lag times, maximum growth rates, and asymptotes. A single value that represents the overall level of bacterial growth per well was generated by adapting the logistic model with the asymptote:\n\n\n\nHere, ylogistic is the value from (1) at time i, and n is the number of data values used, which is the number of OD600 measurements recorded during the experimental run. For the C. sedlakii MAP, n equals 64. The asymptote factor A, rather than the maximum growth rate, contributes to defining growth (i.e., growth levels from wells that achieve a higher biomass yield separate from those of wells that exhibit less growth, Supplementary Figure 3). In certain instances, growth curve models were fitted with a high maximum growth rate but did not display growth (e.g., potassium sorbate, L-valine, L-lysine, L-leucine, D-aspartic acid, and L-isoleucine). 
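The initial-estimate heuristics described above can be sketched in pure Python. This is an illustration of the stated rules, not the PMAnalyzer source: the logistic form used here is the standard Zwietering-style curve and may differ in detail from equation (1), and the "three consecutive readings" rule is read here as the largest running minimum over three points (a spike-robust maximum):

```python
import math

def logistic(t, y0, lam, mu, A):
    """Zwietering-style logistic growth curve (standard form; a sketch)."""
    return y0 + A / (1.0 + math.exp(4.0 * mu * (lam - t) / A + 2.0))

def estimate_initial_params(times, ods, window_hr=1.5):
    """Heuristic starting estimates following the rules in the text:
    asymptote from three consecutive readings, growth rate as the largest
    OD600 change over a 1.5 hr window, lag time fixed at 0.5 hr."""
    # asymptote: max over the minimum of any three consecutive readings
    A_est = max(min(ods[i:i + 3]) for i in range(len(ods) - 2))
    # growth rate: steepest OD change per hour across the window
    step = max(1, round(window_hr / (times[1] - times[0])))
    mu_est = max((ods[i + step] - ods[i]) / (times[i + step] - times[i])
                 for i in range(len(ods) - step))
    return A_est, mu_est, 0.5  # asymptote, max rate, lag estimate

# synthetic readings every 0.5 hr over 32 hrs from a known curve
ts = [0.5 * i for i in range(65)]
ys = [logistic(t, 0.05, 2.0, 0.3, 1.0) for t in ts]
A_est, mu_est, lag_est = estimate_initial_params(ts, ys)
print(A_est, mu_est, lag_est)
```

These estimates would then seed `scipy.optimize.minimize` with the bounds given in the text (lower bounds 0.01, 0, 0; the asymptote estimate as the only upper bound).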
Ultimately, (4) was implemented to distill each growth curve into a single boolean variable of growth (≥ 0.5) or no growth (< 0.5).\n\nThe kbfba-simpheno function in KBase executes multiple flux-balance analysis (FBA) processes in parallel. This requires a text file listing which media condition to use as the input media in each process. Information regarding the MAPs result, i.e., growth or no growth, for each media condition is also listed in the text file. Digital representations of rich LB media and 90 different media compositions used in the MAPs were generated as media data objects in KBase. Each media object represents a specific condition used in the MAPs. kbfba-simpheno performs a separate FBA on each media object and compares the result to the MAPs result listed in the information text file. FBA results are labeled as: Correct Positive assertions (FBA and MAPs both display growth), Correct Negative assertions (FBA and MAPs both display no growth), False Positive assertions (FBA asserts growth, MAPs display no growth), and False Negative assertions (FBA asserts no growth, MAPs display growth). Gap-filling was attempted for conditions associated with false negative assertions and gap-generation was attempted for instances of false positive assertions. FBA and reconciliation were performed on the LB condition first in order to identify and integrate missing reactions required for growth on general, rich media. Thereafter, using the base model capable of growing on LB, FBA and reconciliation were performed on the minimal media conditions to target missing reactions in specific metabolic pathways. The minimal set of reactions determined by gap-filling was integrated into the model using the KBase function kbfba-integratesolution. 
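The comparison and reconciliation logic above can be sketched as a small classifier. The condition names below are illustrative stand-ins, not the actual simulation input:

```python
def classify(fba_growth, map_growth):
    """Compare one FBA prediction against the observed MAP phenotype."""
    if fba_growth and map_growth:
        return "CP"  # correct positive: both display growth
    if not fba_growth and not map_growth:
        return "CN"  # correct negative: both display no growth
    return "FP" if fba_growth else "FN"

def reconciliation_plan(labels):
    """FN conditions become gap-filling targets; FP conditions gap-generation."""
    actions = {"FN": "gap-fill", "FP": "gap-generate"}
    return {cond: actions[lab] for cond, lab in labels.items() if lab in actions}

# Hypothetical (FBA, MAP) outcomes for three conditions
observations = {
    "D-glucose (C)": (True, True),
    "putrescine (C)": (True, False),
    "allantoin (N)": (False, True),
}
labels = {cond: classify(f, m) for cond, (f, m) in observations.items()}
plan = reconciliation_plan(labels)
```

Running the classifier over all 90 media objects yields the CP/CN/FP/FN tallies that drive gap-filling and gap-generation.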
To verify that the integration of new reactions produces additional correct positives and correct negatives, the multi-FBA simulation was re-executed.\n\nReaction names for gene functions may vary between databases, causing apparent mis-annotations when RAST is used alongside other tools. When identifying functions for metabolic reconstructions, this inconsistent nomenclature prevents reactions from appearing in the metabolic model. To correct for this, following gap-filling, all missing reactions (excluding transporters and newly-modified bidirectional reactions) were cross-checked against the SEED to find similarly named reactions. The new list consisted of a mapping of gap-filled reaction names to possible alternative names. This list of reactions was then referenced back to the C. sedlakii RAST annotations to determine whether the reaction was identified by RAST under the alternative name but not included in the metabolic model. When the search for similar nomenclature did not resolve a reaction, a search for the gap-filled reaction in closely related organisms was performed, i.e., Citrobacter koseri and E. coli K12. This consisted of Protein BLAST (blastp) searches of the reactions’ sequences from the SEED database against C. koseri and E. coli. Gap-filled reactions that existed in these genomes were included in the C. sedlakii model with high confidence since closely related taxonomic groups share common genetic material and function.\n\nThe genes encoding the missing reactions may be present in low quality DNA sequences or low coverage genomic regions. Following sequence assembly, these sequences are not present within the contigs, preventing RAST from annotating the proposed function. However, neighboring genes or protein complexes may be present and annotated, suggesting that the gene in question is there but was poorly sequenced. The protein sequences of these complexes were identified from related organisms and searched against the C. sedlakii genome. 
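The three evidence sources just described (alternative naming, homologs in close relatives, annotated neighbors) can be combined into a simple confidence call for each gap-filled reaction. The tiers and thresholds below are illustrative, not the paper's actual decision rule; the reaction names are taken from the text only as examples:

```python
def inclusion_confidence(annotated_alt_name, homolog_in_relatives, neighbor_hits):
    """Qualitative confidence for adding a gap-filled reaction to the model.
    Evidence order and thresholds are illustrative assumptions."""
    if annotated_alt_name:
        # RAST annotated the function under an alternative SEED name
        return "high: naming mismatch, function already annotated"
    if homolog_in_relatives:
        # blastp hit for the reaction's sequence in C. koseri or E. coli K12
        return "high: homolog in a close relative"
    if neighbor_hits >= 2:
        # flanking genes or complex members found despite a missing sequence
        return "medium: likely poorly sequenced or between contigs"
    return "low"

# Examples mirroring cases discussed in the text
evidence = {
    "quinolinate synthetase [EC 2.5.1.72]": (True, False, 0),
    "ATP:dTMP phosphotransferase [EC 2.7.4.9]": (False, True, 0),
    "1-deoxy-D-xylulose-5-phosphate pyruvate-lyase (carboxylating) [EC 2.2.1.7]":
        (False, False, 2),
    "myo-inositol:oxygen oxidoreductase [EC 1.13.99.1]": (False, False, 0),
}
calls = {rxn: inclusion_confidence(*ev) for rxn, ev in evidence.items()}
```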
Finding matches for neighboring genes increases the confidence of including the reaction in the model. This method is also applicable to genes that were not sequenced or that fall between assembled contigs.\n\n\nResults\n\nC. sedlakii 119 was assembled into 320 contigs containing 4,604,104 nucleotides with an N50 of 28,039 bp. RAST annotated the genome as containing 4,035 protein encoding genes and 76 tRNA genes over 537 different subsystems (Figure 2). Hypothetical proteins made up 817 (~20%) of the protein coding sequences. Membrane transport features constituted 150 out of 3,031 subsystem features. RAST listed E. coli as a close neighbor and Citrobacter koseri as the closest Citrobacter neighbor. A BLASTn alignment (expected value of 10-4)27 against the C. koseri ATCC BAA-895 genome (NC_009792.1) resulted in a whole genome coverage of 69.8% (69.6% when including plasmids NC_009793.1 and NC_009794.1).\n\nNumber of subsystems annotated by RAST for each subsystem group. Membrane transport subsystems are highlighted in a separate plot.\n\nGrowth on MAPs for each of the three biological replicates was followed for 32 hrs on separate days. After completion, the OD600 readings were processed through the PMAnalyzer pipeline (including data parsing, curve-fitting, and growth level analysis) in 8 seconds. Using a 0.5 growth level (4) cutoff and manual inspection of curves falling under the cutoff, 48 out of the 90 growth conditions – 35 carbon-based media and 13 nitrogen-based media – exhibited growth (Figure 3 and Table 1).\n\nGrowth curves generated from the multi-plate reader over 32 hr. The y-axis is displayed in a log2 scale and substrate groups are distinguished by color: blue for carbon sources and red for nitrogen sources. (A) Standard error from the technical replicates is plotted as gray regions. (B) Logistic models of each growth condition.\n\nCitrobacter sedlakii growth (G) and no growth (NG) phenotypes. 
Carbon substrates are denoted with (C); nitrogen substrates are denoted with (N). FBA results before gap-filling are shown in the Initial Model column. MAP-FBA comparison results are: correct positive (CP), correct negative (CN), false positive (FP), and false negative (FN).\n\nThe initial metabolic model generated in KBase (C.sedlakii_nogapfill) from RAST annotations contained 1,367 reactions and 1,277 substrates. An FBA simulation using this model and specifying LB as the media source resulted in no growth; however, only eight reactions needed to be added to enable growth. These eight were back-referenced to the RAST annotations to check for mis-annotations or functional annotations not included in the model. Two gap-filled reactions (dimethylallyl-diphosphate:isopentenyl-diphosphate [EC 2.5.1.1] and geranyl-diphosphate synthase [EC 2.5.1.1]) were mis-annotated by RAST as they both shared a function with another annotated reaction (geranyltranstransferase (farnesyldiphosphate synthase) [EC 2.5.1.10]). Undecaprenyl pyrophosphate synthetase [EC 2.5.1.31] and quinolinate synthase [EC 2.5.1.72] were also annotated and shared a function with the gap-filled reactions undecaprenyl diphosphate synthase [EC 2.5.1.31] and quinolinate synthetase [EC 2.5.1.72], respectively. The fifth reaction (1-deoxy-D-xylulose-5-phosphate pyruvate-lyase (carboxylating) [EC 2.2.1.7]) was not annotated, but a comparison between the C. koseri and E. coli K12 genomes revealed neighboring homologs in the C. sedlakii genome flanking a gap where the gene should be. Therefore, it is likely that RAST missed this gene during gene calling due to a frameshift sequencing error. Homologs of the proteins for three reactions (ATP:dTMP phosphotransferase [EC 2.7.4.9]; glutamine amidotransferase [EC 2.4.2.-]; riboflavin transport in/out via proton symport) were not found in C. sedlakii but were identified in the C. koseri genome. 
In addition to the previous eight reactions, three additional reactions (meso-2,6-diaminoheptanedioate carboxy-lyase [EC 4.1.1.20]; NADH:guanosine-5'-phosphate oxidoreductase (deaminating) [EC 1.7.1.7]; prephenate:NADP+ oxidoreductase (decarboxylating) [EC 1.3.1.13]) were altered from being uni-directional to bi-directional. The base model (C.sedlakii_ArgonneLB_gapfill), with the ability to grow on rich media, contained 1,279 (+2) substrates and 1,375 (+8) reactions.\n\nUsing the base model, the 90-well simulation resulted in no growth for all 90 growth conditions (53.3% accuracy: 48 false negatives (FN)) (Table 1). Subsequently, gap-filling was performed, resulting in the reconciled model (C.sedlakii_MOPS_simpheno). The total number of substrates increased to 1,301 (+24), the total number of reactions increased to 1,399 (+32), and nine reactions were modified to be bi-directional. When performing gap-filling on multiple conditions, KBase produced a separate solution for each condition. This results in reactions appearing in multiple solutions. To find the missing set of essential reactions, the gap-fill results were parsed to locate the minimum set of reactions present in the majority of the solutions. All new and modified reactions added to the model are listed in Table 2. Note: a number of reactions were present among the FN solutions; thus, including the reactions for the 13 FN conditions in Table 2 fulfilled the reaction set for the other 35 FN conditions not shown in the table.\n\nEach gap-filled reaction is listed under its respective media condition, along with the primary source of the compound in parentheses, i.e. (C) denotes carbon and (N) denotes nitrogen. E.C. numbers were supplied by KBase when viewing gap-filling results. Reactions listed with an asterisk denote the 13 transport reactions added to the model. 
Reactions listed with a yes in the Reversible column denote those that were already present in the model but made bi-directional through gap-filling.\n\nMis-annotation checks and cross-referencing with the C. koseri genome were performed for the 12 gap-filled, non-transport, newly added reactions. In four cases (phosphoribosyl-ATP pyrophosphohydrolase [EC 3.6.1.31]; 1-(5-phospho-D-ribosyl)-AMP 1,6-hydrolase [EC 3.5.4.19]; D-mannose-6-phosphate ketol-isomerase [EC 5.3.1.8]; D-lactate dehydrogenase [EC 1.1.2.5]) the DNA sequences were missing but their surrounding genes were sequenced and annotated, indicative of poorly sequenced genes or genes located between contigs. Two reactions (chorismate pyruvatemutase [EC 5.4.99.5]; pyrimidine phosphatase [EC 3.1.3.-]) were mis-annotated by RAST. Four reactions (ureidoglycolate amidohydrolase (decarboxylating) [EC 3.5.3.19]; allantoate amidinohydrolase [EC 3.5.3.4]; allantoin amidohydrolase [EC 3.5.2.5]; 5-oxoproline amidohydrolase (ATP-hydrolysing) [EC 3.5.2.9]) could not be identified in C. koseri and two reactions (D-Arabinose ketol-isomerase [EC 5.3.1.3]; myo-inositol:oxygen oxidoreductase [EC 1.13.99.1]) could not be identified in C. koseri nor in E. coli. Therefore, these six reactions were added with low confidence. From the six reactions, three reactions (including the transporter) were required for growth on allantoin. In fact, Citrobacter species have not been shown to be capable of utilizing allantoin as a sole source of nitrogen28. More in-depth experiments are needed to investigate C. sedlakii’s ability to grow in the allantoin-based growth condition.\n\nThe 90 simulations were executed again and resulted in growth on 54 conditions (6 FP) and no growth on 36 (0 FN), a final accuracy of 93.3%. 
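The reported accuracy follows directly from the confusion counts; as a check, computing it from the tallies above:

```python
def model_accuracy(cp, cn, fp, fn):
    """Fraction of growth conditions where the FBA prediction matches the MAP
    observation (correct positives plus correct negatives over all conditions)."""
    return (cp + cn) / (cp + cn + fp + fn)

# Reconciled model over the 90 conditions: growth on 54 (6 of them FP),
# no growth on 36 (0 FN), i.e., 48 CP and 36 CN
final = model_accuracy(cp=48, cn=36, fp=6, fn=0)
print(round(final * 100, 1))  # 93.3
```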
The six false positive growth conditions were: 2-deoxy-D-ribose (carbon), glycine (carbon), L-pyro-glutamic acid (carbon), L-threonine (carbon), malate (carbon), and putrescine (carbon) (Table 1).\n\n\nDiscussion\n\nBack-referencing the gap-filled reactions to the prior RAST annotations provides insight into KBase’s ability to build the initial reconstruction from subsystem annotations. Several reactions identified by RAST were not included in the metabolic model. This is either because RAST could not determine the function of the gene using homology or because the model could not correctly incorporate the function from the annotation. Gap-fill on LB media identified four reactions that were later categorized as RAST mis-annotations, whereas gap-fill on the MAPs media identified only two reactions later categorized as RAST mis-annotations. By surveying neighboring homologs of gap-filled reactions in closely related genomes, one reaction on LB gap-fill and four reactions on MAPs gap-fill were categorized as missing due to poor sequence quality.\n\nAs more bacterial genomes are processed through this pipeline, mis-annotations will occur at a lower rate. In the case of C. sedlakii, six of the 19 non-transport, newly added reactions were missed due to mis-annotations, five were possibly located between contigs, and eight bore no homology with related organisms. Feedback from mis-annotations corrects the ambiguities for future bacterial metabolic reconstructions as administrators of KBase, RAST, and the SEED database are informed of these findings. Reactions that are gap-filled and not identified as mis-annotations will be recorded for future studies. Patterns of missing reactions provide insight into why RAST is not able to identify the functions from an organism’s sequences. 
Furthermore, closer investigation can reveal whether the gene encoding the protein in a particular species (or genus) simply lacks strong homology to its closest evolutionary neighbor.\n\nThirteen out of the 32 essential reactions added during gap-filling were transporters. For specific conditions, the only missing reactions that prevented growth during the simulation were transport proteins specific for the growth condition (see Table 2). Transporters are readily identified using homology-based searches; however, it is difficult to accurately identify which substrate(s) are actively transported using these techniques. A drawback of RAST is its dependence on sequence homology. Henry et al.30 indicated that poorly annotated transporters are typically missing from preliminary reconstructions using the SEED database. During metabolic reconstruction in RAST, a minimal set of reactions is uniquely chosen through an optimization equation. The equation contains a penalty parameter that favors intracellular reactions over transporters during the auto-completion step of the model reconstruction, thus further preventing transporters from being included in the draft model.\n\nFalse positive (FP) results were introduced during the reconciliation process. A false positive was defined as an FBA resulting in biomass production in nutrient conditions where C. sedlakii is not able to produce biomass, i.e. the MAPs assert no growth. An FP results from reactions added to the model that enable biomass production. An FP could also arise from bi-directional reactions that should actually be uni-directional. Regardless, both cases reflect an under-constrained model containing reactions with incorrect directionality or reactions that do not belong. The KBase function fba-gapgen attempts to correct these issues. 
To perform this function, parameters pertaining to the growth condition where the FP occurs and a growth condition where a correct positive occurs are required, allowing the algorithm to correct, or remove, reactions from the model without altering the outcome of the correct positive growth condition. In this case, the KBase software was unable to determine any reactions in our model to remove with the fba-gapgen function, suggesting that the model requires manual curation for the six FP growth conditions listed in Table 1.\n\nModels created using the discussed pipeline should be considered draft metabolic models. While the physiological experiments provide insight into an organism’s metabolic capabilities for substrate utilization and the subsequent biomass formation, the biochemical properties involved are not directly assessed. The gap-filling algorithm attempts to determine the minimal set of reactions required for growth in a specific condition, but these are not final decisions. Gene knockout experiments, extensive literature mining, and manual curation are required to enhance the models31. These alternatives are neither high-throughput nor part of this pipeline, but they are encouraged for further studies. Cross-referencing to closely related organisms can give insight into the validity of adding a reaction to the model28.\n\nThe high-throughput pipeline presented combines physiological data with genomic information to produce more accurate metabolic models. Using experimentation to validate model predictions and improve model capabilities has been shown previously18,29; however, those processes were not streamlined to be fast or robust. Our methodology implements speed and robustness at every level of the workflow. Using a multi-plate spectrophotometer, hundreds of different growth conditions targeting several metabolic pathways can be assayed, resulting in an expansive phenotypic profile. 
The PMAnalyzer pipeline executes rapidly and is integrated into an automated web server where users obtain phenotypic results quickly. DNA sequencing, annotation, and draft model construction can be completed within a day using RAST and KBase. Finally, model reconciliation is performed in KBase with flux-balance analysis, gap-filling, and gap-generation commands.\n\n\nConclusion\n\nThe prevalence of complete and near-complete draft genomes is increasing as DNA sequencing becomes cheaper and more robust. Unique bacterial species are now being studied and their data are becoming more readily available for interpretation. We describe a process that combines DNA sequence data and phenotypic experiments to produce a metabolic model programmatically. Metabolic reconstructions are built from RAST genomic annotations, with phenotypic profiles supplied by PMAnalyzer, a high-throughput pipeline. Biochemical reactions not captured by homology-based algorithms are highlighted using flux-balance analysis and gap identification techniques hosted openly on the KBase platform. Citrobacter sedlakii was used as a model organism to describe, test, and critique the pipeline. FBA results using the reconciled model achieved a 93% prediction accuracy, an improvement over the initial model’s 53% prediction accuracy.\n\n\nData and software availability\n\nfigshare: Phenotypic profiling data for elucidating genomic gaps. DOI: 10.6084/m9.figshare.115016932\n\nLatest software source code: https://github.com/dacuevas/PMAnalyzer\n\nSource code as at the time of publication: https://github.com/F1000Research/PMAnalyzer/releases/tag/V1.0\n\nArchived source code as at the time of publication: http://dx.doi.org/10.5281/zenodo.1141333\n\nSoftware License: GNU GPL v3.0",
"appendix": "Author contributions\n\n\n\nDAC and RAE conceived and designed the study. DAC wrote the paper, prepared figures and tables, developed and enhanced the PMAnalyzer code and pipeline, designed the experiments, performed experiments, and analyzed the data. DAC, DG, SES, and JR jointly discussed the design of the PMAnalyzer. SES designed the MAPs and performed the MAP experiments. ED provided DNA sequencer instrument, reagents, and sequences. JR, AS, and FR included helpful discussions with the MAPs. CSH contributed code to KBase. VV and RAO provided helpful discussions and suggestions with genome annotations.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work is partially supported by NSF grants CNS-1305112 and MCB-1330800 to Edwards, DUE-132809 to Dinsdale, DEB-1046413 to Rohwer, and by a STEM scholarship award funded by NSF grant DUE-1259951 to Cuevas.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nA special thank you to Barbara Bailey, Ben Felts, Jim Nulton, and Peter Salamon from the San Diego State University Bio Math group for their discussions and opinions with bacterial growth curve modeling.\n\n\nReferences\n\nAziz RK, Bartels D, Best AA, et al.: The RAST Server: rapid annotations using subsystems technology. BMC Genomics. 2008; 9: 75. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOverbeek R, Begley T, Butler RM, et al.: The subsystems approach to genome annotation and its use in the project to annotate 1000 genomes. Nucleic Acids Res. 2005; 33(17): 5691–5702. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOberhardt MA, Puchalka J, Fryer KE, et al.: Genome-scale metabolic network analysis of the opportunistic pathogen Pseudomonas aeruginosa PAO1. J Bacteriol. 2008; 190(8): 2790–2803. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nOrth JD, Thiele I, Palsson B: What is flux balance analysis? Nat Biotechnol. 2010; 28(3): 245–248. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchilling CH, Letscher D, Palsson BO: Theory for the systemic definition of metabolic pathways and their use in interpreting metabolic function from a pathway-oriented perspective. J Theor Biol. 2000; 203(3): 229–248. PubMed Abstract | Publisher Full Text\n\nRaman K, Chandra N: Flux balance analysis of biological systems: applications and challenges. Brief Bioinform. 2009; 10(4): 435–49. PubMed Abstract | Publisher Full Text\n\nKeseler IM, Bonavides-Martinez C, Collado-Vides J, et al.: EcoCyc: a comprehensive view of Escherichia coli biology. Nucleic Acids Res. 2009; 37(Database issue): D464–D470. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBochner BR: New technologies to assess genotype-phenotype relationships. Nat Rev Genet. 2003; 4(4): 309–14. PubMed Abstract | Publisher Full Text\n\nJohnson DA, Tetu SG, Phillippy K, et al.: High-throughput phenotypic characterization of Pseudomonas aeruginosa membrane transport genes. PLoS Genet. 2008; 4(10): e1000211. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPerkins AE, Nicholson WL: Uncovering new metabolic capabilities of Bacillus subtilis using phenotype profiling of rifampin-resistant rpoB mutants. J Bacteriol. 2008; 190(3): 807–814. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPrüss BM, Campbell JW, Van Dyk TK, et al.: FlhD/FlhC Is a regulator of anaerobic respiration and the Entner-Doudoroff pathway through Induction of the methyl-accepting chemotaxis protein Aer. J Bacteriol. 2003; 185(2): 534–543. PubMed Abstract | Publisher Full Text | Free Full Text\n\nViti C, Decorosi F, Mini A, et al.: Involvement of the oscA gene in the sulphur starvation response and in Cr(VI) resistance in Pseudomonas corrugata 28. Microbiology. 2009; 155(Pt 1): 95–105. 
PubMed Abstract | Publisher Full Text\n\nZhou L, Lei XH, Bochner BR, et al.: Phenotype microarray analysis of Escherichia coli K-12 mutants with deletions of all two-component systems. J Bacteriol. 2003; 185(16): 4956–4972. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMols M, de Been M, Zwietering MH, et al.: Metabolic capacity of Bacillus cereus strains ATCC 14579 and ATCC 10987 interlinked with comparative genomics. Environ Microbiol. 2007; 9(12): 2933–2944. PubMed Abstract | Publisher Full Text\n\nBorglin S, Joyner D, DeAngelis KM, et al.: Application of phenotypic microarrays to environmental microbiology. Curr Opin Biotechnol. 2012; 23(1): 41–48. PubMed Abstract | Publisher Full Text\n\nCovert MW, Knight EM, Reed JL, et al.: Integrating high-throughput and computational data elucidates bacterial networks. Nature. 2004; 429(6987): 92–96. PubMed Abstract | Publisher Full Text\n\nFeist AM, Henry CS, Reed JL, et al.: A genome-scale metabolic reconstruction for Escherichia coli K-12 MG1655 that accounts for 1260 ORFs and thermodynamic information. Mol Syst Biol. 2007; 3: 121. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOh YK, Palsson BO, Park SM, et al.: Genome-scale reconstruction of metabolic network in Bacillus subtilis based on high-throughput phenotyping and gene essentiality data. J Biol Chem. 2007; 282(39): 28791–28799. PubMed Abstract | Publisher Full Text\n\nBochner B, Gomez V, Ziman M, et al.: Phenotype microArray profiling of Zymomonas mobilis ZM4. Appl Biochem Biotechnol. 2010; 161(1–8): 116–123. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLine JE, Hiett KL, Guard-Bouldin J, et al.: Differential carbon source utilization by Campylobacter jejuni 11168 in response to growth temperature variation. J Microbiol Methods. 2010; 80(2): 198–202. PubMed Abstract | Publisher Full Text\n\nNeidhardt FC, Bloch PL, Smith DF: Culture medium for enterobacteria. J Bacteriol. 1974; 119(3): 736–747. 
PubMed Abstract | Free Full Text\n\nMonod J: The Growth of Bacterial Cultures. Annu Rev Microbiol. 1949; 3: 371–394. Publisher Full Text\n\nZwietering MH, Jongenburger I, Rombouts FM, et al.: Modeling of the bacterial growth curve. Appl Environ Microbiol. 1990; 56(6): 1875–1881. PubMed Abstract | Free Full Text\n\nJones E, Oliphant T, Peterson P: SciPy: Open source scientific tools for Python. 2001. Reference Source\n\nConjugate Gradient Methods. Numerical Optimization. Springer Series in Operations Research and Financial Engineering. Springer New York. 2006; 101–134. Publisher Full Text\n\nEdwards RA, Haggerty JM, Cassman N, et al.: Microbes, metagenomes and marine mammals: enabling the next generation of scientist to enter the genomic era. BMC Genomics. 2013; 14: 600. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAltschul SF, Madden TL, Schäffer AA, et al.: Gapped BLAST and PSI-BLAST: a new generation of protein database search programs. Nucleic Acids Res. 1997; 25(17): 3389–3402. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMitchell NB, Levine M: Nitrogen Availability as an Aid in the Differentiation of Bacteria in the Coli-Aerogenes Group. J Bacteriol. 1938; 36(6): 587–598. PubMed Abstract | Free Full Text\n\nKim J, Reed JL: Refining metabolic models and accounting for regulatory effects. Curr Opin Biotechnol. 2014; 29: 34–38. PubMed Abstract | Publisher Full Text\n\nHenry CS, DeJongh M, Best AA, et al.: High-throughput generation, optimization and analysis of genome-scale metabolic models. Nat Biotechnol. 2010; 28(9): 977–982. PubMed Abstract | Publisher Full Text\n\nOrth JD, Conrad TM, Na J, et al.: A comprehensive genome-scale reconstruction of Escherichia coli metabolism--2011. Mol Syst Biol. 2011; 7: 535. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCuevas DA, Garza D, Sanchez SE, et al.: Phenotypic profiling data for elucidating genomic gaps. figshare. 2014. 
Data Source\n\nCuevas DA, Garza D, Sanchez SE, et al.: PMAnalyzer. Zenodo. 2014. Data Source\n\n\nSupplementary material\n\nEach well contains 60 µL of sterile water, 50 µL of 3X MOPS basal media, and 30 µL of 5X substrate. Unless noted, a concentration of 0.25% is used for each substrate. Carbon substrates are denoted by blue squares in rows A-F; nitrogen substrates are denoted by red squares in rows G and H. Water denotes wells using water instead of a carbon (or nitrogen) substrate.\n\nA bacterial growth curve can be parameterized into three phases: lag phase, exponential phase, and stationary phase. Parameters correspond to the logistic equation described in the text (Equation 1).\n\nClassification of growth conditions are based on the asymptote adjusted logistic model calculation in equation (4). Red triangles and blue circles indicate conditions asserting growth (≥ 0.5) and no growth (< 0.5), respectively. Classification is weighted by the final biomass yield (asymptote) rather than growth rate. Maximum growth rate can be misleading; e.g., growth curves highlighted in the blue box were modeled using high growth rates but do not assert growth. Growth curves highlighted in the red box were modeled using slightly lower growth rates but assert growth."
}
|
[
{
"id": "6026",
"date": "30 Sep 2014",
"name": "Matthew A. Oberhardt",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nGenome-scale metabolic modeling (GSM) has gained interest over the last 2 decades as an increasingly powerful methodology for interrogating cell physiology and function. One of the holy grails in this field is the development of high quality models fully automatically, which would enable analysis of organisms from sequencing data alone, and, for example, integration of modeling with the interpretation of metagenomic data. A large contributor towards this end is the group at Argonne National Labs, which has produced RAST (a genome sequencing and annotation database), SEED (an automated platform to build genome-scale metabolic models), and now Kbase (an integrated platform that allows users to utilize the power of RAST, SEED, and many other functionalities). This article describes a new, automated pipeline for integrating phenotyping data (i.e., growth of an organism on many individual substrates in a microwell plate, such as in Biolog analysis) with genome-scale metabolic modeling in order to improve an automatically built genome-scale metabolic reconstruction produced through the automated RAST/Kbase system. This is an important problem and a technical issue that can greatly improve model reconstruction, automated and manual (the early stages of manual reconstruction are typically automated nowadays as well). The approach seems to provide satisfactory results. However, I have a few concerns about the approach, relating to its originality and some of its features. 
In general, it is not totally clear from the paper what its main contribution is, and how it is differentiated from previous work. This should be explicitly explained in the introduction, etc.My specific criticisms are as follows:Major criticisms:As far as I can tell, the problem that PMAnalyzer solves has already been solved previously. For example, in the original paper describing SEED (Henry et al., 2010), 22 models are automatically optimized against existing Biolog data. The paper reads: “A modified version of the Growmatch algorithm was included in the Model SEED pipeline to identify and correct the possible errors in the models that cause the incorrect predictions [in Biolog and essentiality data]…” In order to claim that PMAnalyzer is novel, it must be compared to such previous methods and shown to be superior or different in some way (I might simply not understand the difference; this should be explained clearly in the text, or some comparison shown). In general (and as stated before), please compare this work to previous works and explain what is the main scientific novelty of the paper (i.e. how does PMAnalyzer differ from previous works? Also if the sequencing / analysis of this organism is novel, that should also be explicitly stated as well). It is not clear from the text what part of the model building pipeline PMAnalyzer actually does (i.e., does it only analyze the growth curves? Does it do that and also run the gap filling? Etc.). Please explain this explicitly in the paper. I suggest also providing a schematic similar to Figure 1, but that specifically shows what the inputs and outputs to PMAnalyzer are (and optionally shows some of the internal mechanics of PMAnalyzer). Please remark on and justify whether there is an optimization step to reduce false positives (which is mentioned in the discussion, but not the results). 
The final paragraph of the results lists false positives, but doesn't go into detail or attempt to explain why these occurred; I suggest that the authors give some explanations here if possible on why these are tricky biological cases. Figure 3 would be greatly improved if the authors listed on each panel whether it was called ‘growth’ or ‘non-growth’ by PMAnalyzer. This is, after all, a way for the reader to visually validate the method.Minor criticisms:Please explain the genesis of equation 4, as it is unclear how/why it was formulated this way and it forms the critical cut-off criterion for the calls made in PMAnalyzer. In the section 'RAST annotations', the authors state that ‘following gap-filling, all missing reactions … were cross-checked with the SEED to find similarly named reactions’ -- similar to what? Is this a comparison between databases held in SEED vs. in RAST? Please clarify this. It would be interesting/informative to see Figure 2 in context of other models, especially those of close neighbors (e.g., e. coli and C. koseri). I suggest that the authors provide histograms of subsystems in one/both of those organism as well for comparison. The authors mention in the Discussion that there is an automated web server for executing PMAnalyzer. However, I could not find the link. Can they please link to this or remove this sentence?",
"responses": [
{
"c_id": "2190",
"date": "19 Sep 2016",
"name": "Daniel Cuevas",
"role": "Author Response",
"response": "Major criticisms: As far as I can tell, the problem that PMAnalyzer solves has already been solved previously. For example, in the original paper describing SEED (Henry et al., 2010), 22 models are automatically optimized against existing Biolog data. The paper reads: “A modified version of the Growmatch algorithm was included in the Model SEED pipeline to identify and correct the possible errors in the models that cause the incorrect predictions [in Biolog and essentiality data]…” In order to claim that PMAnalyzer is novel, it must be compared to such previous methods and shown to be superior or different in some way (I might simply not understand the difference; this should be explained clearly in the text, or some comparison shown). Response - Biolog data have been more commonly used by others to describe phenotypic response in various media sources. These data can help enhance the accuracy of genome-scale metabolic reconstructions, as described by Henry et al., 2010. It is important to note that the Multi-phenotype Assay Plates (MAPs) described here are different from Biolog plates: MAPs are non-proprietary assays used to measure biomass accumulation whereas Biolog technology measures substrate utilization. However, the MAPs serve a similar purpose for model reconciliation as Biolog plates do as described by the GrowMatch paper. The novelty claimed here is the combination of the high-throughput MAPs technology, the PMAnalyzer pipeline, and the KBase modeling pipeline. PMAnalyzer quickly and automatically calculates growth profiles for a wide spectrum of sugars, amino acids, and other compounds which the KBase modeling environment can also simulate with FBA. In addition to the FBA, KBase performs the gap-fill and gap-gen algorithms to reconcile the model. This process uses the modified version of the GrowMatch algorithm to identify those changes (e.g., adding reactions, making reactions reversible). 
In general (and as stated before), please compare this work to previous works and explain what is the main scientific novelty of the paper (i.e. how does PMAnalyzer differ from previous works? Also if the sequencing / analysis of this organism is novel, that should also be explicitly stated as well). Response - This issue has been addressed in the recent changes to the manuscript where we clarify the difference between the Multi-phenotype Assay Plates (MAPs) technology and the Biolog Phenotype MicroArray system, and have also been explained in the response to the previous comment. It is not clear from the text what part of the model building pipeline PMAnalyzer actually does (i.e., does it only analyze the growth curves? Does it do that and also run the gap filling? Etc.). Please explain this explicitly in the paper. I suggest also providing a schematic similar to Figure 1, but that specifically shows what the inputs and outputs to PMAnalyzer are (and optionally shows some of the internal mechanics of PMAnalyzer). Response - This issue has been addressed in the recent changes. An additional flowchart has been added to Figure 1 describing in further detail the workflow of PMAnalyzer. Please remark on and justify whether there is an optimization step to reduce false positives (which is mentioned in the discussion, but not the results). The final paragraph of the results lists false positives, but doesn't go into detail or attempt to explain why these occurred; I suggest that the authors give some explanations here if possible on why these are tricky biological cases. Response - The issue has been addressed in the recent changes. Clarifications and explanations have been included in the Results and Discussion sections. Figure 3 would be greatly improved if the authors listed on each panel whether it was called ‘growth’ or ‘non-growth’ by PMAnalyzer. This is, after all, a way for the reader to visually validate the method. 
Response - Figure 3 has been updated to show Growth and No Growth curves. Minor criticisms: Please explain the genesis of equation 4, as it is unclear how/why it was formulated this way and it forms the critical cut-off criterion for the calls made in PMAnalyzer. Response - Equation 4 is a type of arithmetic mean that is least prone to noise in the data. Originally, the data input into this equation was the raw OD 600nm measurements; however, after fitting the logistic model, empirical data showed that using the fitted values and introducing the asymptotic value into the equation was able to further separate those growth curves displaying growth. In the section 'RAST annotations', the authors state that ‘following gap-filling, all missing reactions … were cross-checked with the SEED to find similarly named reactions’ -- similar to what? Is this a comparison between databases held in SEED vs. in RAST? Please clarify this. Response - This issue has been addressed in the recent changes with clarifications and explanations. It would be interesting/informative to see Figure 2 in context of other models, especially those of close neighbors (e.g., e. coli and C. koseri). I suggest that the authors provide histograms of subsystems in one/both of those organism as well for comparison. Response - Figure 2 has been updated to provide this information. The authors mention in the Discussion that there is an automated web server for executing PMAnalyzer. However, I could not find the link. Can they please link to this or remove this sentence? Response - The link (https://vdm.sdsu.edu/pmanalyzer) has been added to the manuscript."
}
]
},
{
"id": "7178",
"date": "27 Mar 2015",
"name": "Aaron Best",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe article by Cuevas and colleagues represents an important attempt to streamline the assessment of phenotypic growth data in the context of genome-scale metabolic models. The approach incorporates a non-proprietary phenotyping assay that should be accessible to a wide variety of research groups. The data are then coupled to a developing genome analysis and modeling environment, KBase and associated systems (the SEED and RAST), to enable evaluation and iterative refinement of metabolic models. The primary contribution of the work is to provide a methodological path for acquisition and reasonably quick integration of these data with genomic information.\n\nBelow, I provide a series of questions, concerns and suggestions that I have for each section of the manuscript as written.\n\nIntroduction\n\nThis section is generally well written, though I think the context for this type of analysis could be better described... that is, beyond mentioning simply KBase and RAST. Perhaps providing rationale for potential advantages of using this combination of systems rather than other options would be useful for the reader.\n\nIn the final paragraph, the term \"ground truth\" is used to describe the phenotype data with respect to the metabolic modeling that will be performed. Foreshadowing some comments later, what is the evidence that the phenotype data are actually correct for all conditions tested?\n\nMethods\n\nFirst, thank you for the detailed description of the methods. 
These are clearly written overall. Some detailed comments/suggestions.\n\nDescribing growth of the initial cells and cultures from glycerol stocks, please define the shaking parameter (rpm) and define \"agitation\". These parameters can be critical factors for reproduction of experiments and are often organism dependent.\n\nSecond full paragraph, you have defined the contents of 1X MOPS, but you also define it two paragraphs later in the context of the recipe for the basal media. This first instance could be removed and simply refer to the recipe later described.\n\nQuestion: In the plate format for growing the cells, you indicate that plates are sealed with a PCR grade plate film. What does this do to the aerobic/anaerobic state of each well? Is there any opportunity for gas exchange during incubation? Also, is there any shaking going on during incubation on the plate reader? It might be worth mentioning caveats about usage of carbon/nitrogen sources being limited to these conditions, which aren't exactly known. Also, the modeling could be impacted by the aerobic/anaerobic status of the environment. Was modeling performed under both conditions? Would this impact the accuracy of the modeling results?\n\nI note the storage condition was at room temperature. Are all of the substrates stable at room temperature? How long would each stock be stored prior to use for replicates?\n\nIn the \"Sequencing and metabolic reconstruction...\" section: The sentence that starts with \"FBA was used to determine...\". This is a confusing sentence. What is meant by this? The KBase workspace, Citrobacter_sedlakii_119, does not appear to exist in the current public release of KBase (as of March 25, 2015). I also am unable to find any FBA model objects searching for various forms of Citrobacter and sedlakii. There are not any public narratives that would match the series of commands that you describe as being run and as freely accessible. 
This needs to be corrected, likely by building a public narrative in the current system. Are the named commands for KBase still valid in the current production version of the system? It would be useful to include what apps and methods correspond to these commands. A public narrative in the current version of KBase would make this study replicable and easily transferred to other model systems.The github page for the PMAnalyzer software is good... to the point, clear.The explanation of the logistic model and absorbance data is also clear. In the description of the growth value, you end by stating that this is boiled down to a boolean growth/no growth status for each condition. I understand why this is done, given that the model reconciliation with growth phenotypes is occurring on a boolean level, but how much information is being lost by making this experimental design decision? The nature of the growth can be very important for understanding how the organism is behaving in an environment. The more immediate consequence of this decision is in the interpretation of False Negatives by the model (where the phenotype assay says \"growth\" and the model says \"no growth\"). How many of the false negatives had growth values near the 0.5 cutoff? The allantoin example could be a case like this (growth = 0.529, from curve_logistic_parameters.csv). The growth curve asymptote appears to be near 0.25 (Fig. 2a). This is very similar to values that are considered \"no growth\" phenotypes. Does it make sense to have the model gap fill three reactions in this case? Table 1 might be made more complete by adding a column for the growth value for each condition rather than that sitting in the supplemental data files (alternatively, highlight in the text that these values are given in that file). Related to this, in Supplemental Figure 3, you could highlight the point that represents allantoin. It would also be useful to highlight the water, negative control in the Supp. Fig. 
3. This brings me to questions about how confidence in the growth value is determined (if at all). I see that the median value of the biological replicates is used to determine the y logistic and thus the growth value. I also see that standard error is indicated in Fig. 2a graphs. However, this does not allow for statistical evaluation of the growth. Would it be better to calculate growth for each replicate independently and then determine an average growth value with error around these? Perhaps there is a better statistical approach. In any case, this comes back to being able to state some confidence in these values to aid interpretation of potentially borderline cases. Please define \"sse\" in the curve_logistic_parameters.csv file.\n\nIn the \"RAST annotations\" section, last paragraph. How does this fit in with the gap filling process for the model? Is the context information in close genomes actually used in the gap fill process, or is it a post hoc attribution of higher confidence to the gap fills that are included in the model?\n\nResults\n\nThe statistics on the genome assembly are worse than I would expect to see. In particular, is the coverage based on alignment by blastn to C. koseri a reasonable number? I can't quickly evaluate if this is typical of different Citrobacter genomes. How does a low coverage (~70%) affect the outcome of presence/absence of genes in the annotation and subsequent modeling process? In reading the results, it appears that the majority of reactions in the network are identified, but it may be worth addressing this explicitly.\n\nI note that you used manual inspection of growth curves just under the 0.5 cutoff... this is another area in which a statistical confidence in that value might help. If this is to become truly high throughput, manual inspection becomes untenable except in a few cases.\n\nIn the description of gap filling reactions for complex media, you mention that EC 2.2.1.7 was not found, but is likely due to a frameshift error. 
Was any follow-up sequencing or PCR performed to confirm the error or presence/absence of the gene in C. sedlakii? Or even just a blastx analysis of the region? In general, when you are discussing the evaluation of annotations in RAST, figures (supplemental) of key genomic regions would help the reader to evaluate the statements being made.\n\nIn the paragraph beginning with, \"Using the base model, the 90 well simulation resulted in...\", you have a sentence that starts with, \"Note:\". This is a confusing sentence and structure. What are you trying to point out here? What reactions are being referred to? What percentage of gap filled reactions are transport reactions? Stating this clearly would improve clarity.\n\nThe last statement in the Results section focuses on false positive conditions. Do you have any thoughts as to why these are coming up as FP? There is no follow-up in the discussion about this. Are they central in a network of reactions, are they dual use reactions, etc.?\n\nDiscussion\n\nRAST annotations and gap-filled reactions section: This section would also benefit from a supplemental figure that serves as an example of what is being discussed (also mentioned above). What is the connection between KBase, RAST and SEED? How does updating in one affect the others? This question gets at an assumption in the text that the relationships among systems are known to the reader. The text could be clarified, or key references added.\n\nFBA false positives section: Please expand to include more specific discussion of the 6 FP reactions identified at the end of the Results section. What are the growth values for these? Are any of them borderline?\n\nLast paragraph: It would be good to quantify what “several” means with respect to the number of metabolic pathways being targeted. “…available in a day of using RAST and KBase.” This sentence implies that sequencing, annotation, and model reconstruction can happen in a single day. This should refer only to the use of sequence data. 
Also, there is no mention of the phenotype data here in this context. I think it would be better to highlight that the system allows the user to produce a reasonably robust metabolic model quickly, giving more opportunity for in-depth analysis of discrepancies and manual curation of the model given the phenotypic data. What is the link to the web service for the PMAnalyzer?\n\nPoints to Address\n\nI’ve bolded several items in the above format for this review that I would consider to be major points to address and would make the manuscript stronger. Given that this is a methods paper, it is imperative that others can reproduce the work and/or employ the approach in other organism systems. Please update methods as requested above, paying particular attention to the KBase functionality and workflow.",
"responses": [
{
"c_id": "2189",
"date": "04 Oct 2016",
"name": "Daniel Cuevas",
"role": "Author Response",
"response": "In the final paragraph, the term \"ground truth\" is used to describe the phenotype data with respect to the metabolic modeling that will be performed. Foreshadowing some comments later, what is the evidence that the phenotype data are actually correct for all conditions tested? Response - The 96-well microtiter plate technology has been established and used in many research studies to date, some of which have been cited in this manuscript; thus, the efficacy of the data has been verified. The phenotype response taken from these growth curves is identifying growth or no growth of the bacteria in specific minimal media conditions. Classifying growth for a given sample can be difficult because there is not an established rule that translates quantitative optical density measurements into a qualitative growth/ no growth response. When a growth curve displays an ambiguous phenotype, such as described in the manuscript with the allantoin-based condition, other laboratory techniques can be used to offer a more precise answer. Methods Describing growth of the initial cells and cultures from glycerol stocks, please define the shaking parameter (rpm) and define \"agitation\". These parameters can be critical factors for reproduction of experiments and are often organism dependent. Response - The 250 rpm shaking speed has been added to the manuscript. In the plate format for growing the cells, you indicate that plates are sealed with a PCR grade plate film. What does this do to the aerobic/anaerobic state of each well? Is there any opportunity for gas exchange during incubation? Also, is there any shaking going on during incubation on the plate reader? It might be worth mentioning about caveats of usage of carbon/nitrogen sources being limited to these conditions, which aren't exactly known. Response - The PCR grade plate film still allows gas exchange to occur, and shaking does occur on the plate reader. 
This has been clarified in the methods section of the manuscript. Also, the modeling could be impacted by the aerobic/anaerobic status of the environment. Was modeling performed under both conditions? Would this impact the accuracy of the modeling results? Response - Flux-balance analysis was performed with oxygen exchange occurring in the metabolic model. I note the storage condition was at room temperature. Are all of the substrates stable at room temperature? How long would each stock be stored prior to use for replicates? Response - Yes, all substrates are stable at room temperature. Substrate stocks were prepared on a weekly basis. The sentence that starts with \"FBA was used to determine...\". This is a confusing sentence. What is meant by this? Response - Flux-balance analysis answers the question: with the given genome-scale metabolic model and the nutrients present in the environment, does the genome-scale metabolic model contain biochemical reactions that will intake the nutrients and create the necessary biomass components for cellular growth? Thus, FBA was used here to determine if the model is capable of growth in the same conditions as the MAPs. The KBase workspace, Citrobacter_sedlakii_119, does not appear to exist in the current public release of KBase (as of March 25, 2015). I also am unable to find any FBA model objects searching for various forms of Citrobacter and sedlakii. There are not any public narratives that would match the series of commands that you describe as being run and as freely accessible. This needs to be corrected, likely by building a public narrative in the current system. Response - The SBML files for the draft model and gap-filled models have been provided as supplementary material. Are the named commands for KBase still valid in the current production version of the system? It would be useful to include what apps and methods correspond to these commands. 
A public narrative in the current version of KBase would make this study replicable and easily transferred to other model systems. Response - At this time KBase does not use the IRIS system to perform genome-scale metabolic modelling. KBase now uses the Narrative graphical workflow to perform the same functions using the same data types. KBase has released publicly available narratives that describe these workflows (e.g., https://narrative.kbase.us/#appcatalog/app/fba_tools/build_metabolic_model/release). In the description of the growth value, you end by stating that this is boiled down to a boolean growth/no growth status for each condition. I understand why this is done, given that the model reconciliation with growth phenotypes is occurring on a boolean level, but how much information is being lost by making this experimental design decision? The nature of the growth can be very important for understanding how the organism is behaving in an environment. The more immediate consequence of this decision is in the interpretation of False Negatives by the model (where the phenotype assay says \"growth\" and the model says \"no growth\"). How many of the false negatives had growth values near the 0.5 cutoff? The allantoin example could be a case like this (growth = 0.529, from curve_logistic_parameters.csv). The growth curve asymptote appears to be near 0.25 (Fig. 2a). This is very similar to values that are considered \"no growth\" phenotypes. Does it make sense to have the model gap fill three reactions in this case? Table 1 might be made more complete by adding a column for the growth value for each condition rather than that sitting in the supplemental data files (alternatively, highlight in the text that these values are given in that file). Related to this, in Supplemental Figure 3, you could highlight the point that represents allantoin. It would also be useful to highlight the water, negative control in the Supp. Fig. 3. 
Response - Supplementary Figure 4 has been generated to show the different growth levels in terms of the 0.5 growth level cutoff. Would it be better to calculate growth for each replicate independently and then determine an average growth value with error around these? Perhaps there is a better statistical approach. In any case, this comes back to being able to state some confidence in these values to aid interpretation of potentially borderline cases. Response - Yes, this does indeed provide some statistical evidence of the growth level and each of the other growth parameters. Although the updated PMAnalyzer pipeline now does this order of analysis, it did not affect the results to this experiment; thus, the results were not altered in terms of identifying growth and no growth conditions. Please define \"sse\" in the curve_logistic_parameters.csv file. Response - “SSE” refers to the sum-squared error calculated between the logistic fitted growth model and the OD measurements. In the \"RAST annotations\" section, last paragraph. How does this fit in with the gap filling process for the model? Is the context information in close genomes actually used in the gap fill process, or is it a post hoc attribution of higher confidence to the gap fills that are included in the model? Response - This refers to post hoc, manual efforts made after the gap-filling process. Results The statistics on the genome assembly are worse than I would expect to see. In particular, is the coverage based on alignment by blastn to C. koserii a reasonable number? I can't quickly evaluate if this is typical of different Citrobacter genomes. How does a low coverage (~70%) affect the outcome of presence/absence of genes in the annotation and subsequent modeling process. In reading the results, it appears that the majority of reactions in the network are identified, but it may be worth addressing this explicitly. Response - The genome alignment to C. 
koseri was meant to paint a picture of the similarity of its DNA sequence to that of C. sedlakii, which might lend insight into why some of the genes were not identified with a functional role. This affects the presence of functional roles as many of those putative genes are not assigned any role, thus leaving gaps in our metabolic model. In the paragraph beginning with, \"Using the base model, the 90 well simulation resulted in...\", you have a sentence that starts with, \"Note:\". This is a confusing sentence and structure. What are you trying to point out here? What reactions are being referred to? Response - Here I am pointing out that through gap-filling for only the 13 false negative conditions listed in Table 2, the other 35 false negative conditions were corrected, i.e., all 48 conditions where FBA asserted false negative results now assert true positive results. This has been clarified in the updated manuscript. What percentage of gap filled reactions are transport reactions? Stating this clearly would improve clarity. Response - This 46% has been added to the manuscript. Discussion: What is the connection between KBase, RAST and SEED? How does updating in one affect the others? This question gets at an assumption in the text that the relationships among systems are known to the reader. The text could be clarified, or key references added. Response - The references for RAST and the SEED database explain their relationships. RAST uses the SEED subsystems information to annotate genomic sequences. Please expand to include more specific discussion of the 6 FP reactions identified at the end of the Results section. What are the growth values for these? Are any of them borderline? Response - The issue has been addressed in the recent changes. Clarifications and explanations have been included in the Results and Discussion sections. 
“…available in a day of using RAST and KBase.” This sentence implies that sequencing, annotation, and model reconstruction can happen in a single day. This should refer only to the use of sequence data. Also, there is no mention of the phenotype data here in this context. I think it would be better to highlight that the system allows the user to produce a reasonably robust metabolic model quickly, giving more opportunity for in-depth analysis of discrepancies and manual curation of the model given the phenotypic data. Response - This clarification has been made in the manuscript. What is the link to the web service for the PMAnalyzer? Response - The link (https://vdm.sdsu.edu/pmanalyzer) has been added to the manuscript."
}
]
}
] | 1
|
https://f1000research.com/articles/3-210
|
https://f1000research.com/articles/5-597/v1
|
07 Apr 16
|
{
"type": "Research Note",
"title": "Understanding covariate shift in model performance",
"authors": [
"Georgia McGaughey",
"W. Patrick Walters",
"Brian Goldman",
"Georgia McGaughey",
"W. Patrick Walters"
],
"abstract": "Three (3) different methods (logistic regression, covariate shift and k-NN) were applied to five (5) internal datasets and one (1) external, publicly available dataset where covariate shift existed. In all cases, k-NN’s performance was inferior to both logistic regression and covariate shift. Surprisingly, there was no obvious advantage for using covariate shift to reweight the training data in the examined datasets.",
"keywords": [
"covariate shift",
"model building",
"ChEMBL",
"logistic regression",
"k-NN"
],
"content": "Introduction\n\nA common prerequisite in supervised learning algorithms is that the training and prediction data arise from the same distribution and are independently and identically distributed (iid)1. Intuitively this is justified, as one should not expect to learn a classifier on one distribution of examples and apply it to accurately predict labels of examples drawn from a different distribution. Covariate shift is a machine learning technique that can be utilized in supervised learning when the training and prediction distributions are known to differ, but the concept being learned remains stationary. A recent book provides an excellent overview of the current state of the art in covariate shift methods2.\n\nCovariate shift frequently occurs during the drug discovery process where learning systems are built to predict physicochemical properties of interest. Initially a chemistry team may focus on a particular chemical series, and information from this series is used to train a learning system. As the project progresses, the chemistry team may refocus their efforts on a new, structurally distinct series. The accuracy of prospective computational predictions on the new series may be compromised as these molecules originate from a distribution that is distinct from the molecular set used to train the learning tool.\n\nFor example, one may wish to build a learning system to predict hERG activity (unwanted cardiovascular toxicity). Initially the computational tool is trained using series A but must now predict on series B. The concept “binding to hERG” is fixed; however, the area of interest has transitioned from chemical series A to chemical series B. The feature vectors describing these two sets are likely related but potentially different and, as such, their covariates have shifted. 
Put more mathematically, the probability of observing a feature vector from the prediction set is different from the probability of observing a feature vector from the training set. That is, the training and prediction sets are non-iid. A well-constructed learning system will recognize that predictions on series B are outside the “domain of applicability” of the model and predict with low confidence. The covariate-shift method attempts to adjust the domain of applicability so that it is more aligned with the prediction set.\n\nCovariate shift methods typically reweight instances in the training data so that the distribution of training instances is more closely aligned with the distribution of instances in the prediction set. This is accomplished by providing more weighting during model building to instances in the training set that are similar to instances in the prediction set. It has been shown3 that the appropriate importance weighting factor w(x) for each instance “x” in the training set is:\n\nw(x) = pp(x) / pt(x)\n\nwhere pt(x) is the probability of seeing instance x in the training set and pp(x) is the probability of seeing x in the prediction set. It is important to note that only the feature vector values (not their labels) are used in reweighting. The importance weighting scheme is intuitively understandable. If the probability of seeing a particular instance from the training set in the prediction set is very small, then this instance should carry little weight during the training process and consequently have little effect on the decision function.\n\nFigure 1 plots two Gaussian distributions and w(x). If instances from the red distribution are used for training a classifier to predict on an instance from the green distribution, then the blue curve gives the importance of each instance. 
Note the increased importance for instances from the training distribution overlapping with high-density regions of the prediction distribution.\n\n\nMethods\n\nFor our experiments, we use a logistic regression classifier where each training instance is weighted by its importance w(x). For the calculation of w(x) we use the Kullback-Leibler Importance Estimation Procedure (KLIEP) method developed by Sugiyama4. The KLIEP method is based on the Kullback-Leibler divergence and attempts to find weights that minimize the divergence between the prediction distribution pp(x) and its estimate p̂p(x). Briefly, the importance is modeled as a linear function:\n\nŵ(x) = ∑l=1..b αl φl(x)\n\nThe αl are the weights to be learned and φl the basis functions. The importance weight from Equation 1 can be rearranged and used to estimate the probability of observing a feature vector in the prediction set:\n\np̂p(x) = ŵ(x) pt(x)\n\nThe KL divergence from pp(x) to its estimate p̂p(x) can then be expressed as:\n\nKL[pp(x) || p̂p(x)] = ∫ pp(x) log( pp(x) / (pt(x) ŵ(x)) ) dx\n\nAfter algebraic manipulation, removing terms independent of ŵ(x) and adding constraints to ensure proper normalization, a final objective function to be maximized can be derived as (see 4 for details):\n\nmaximize over {αl}, l = 1..b: ∑j=1..np log( ∑l=1..b αl φl(xj) )\nsubject to: (1/nt) ∑j=1..nt ∑l=1..b αl φl(xj) = 1 and α1, α2, …, αb ≥ 0\n\nThe resulting problem is convex and can be solved using standard optimization techniques. The result is an expression for ŵ(x) that allows calculating weights for a training instance x. These weights can then be incorporated when training a classifier to obtain a covariate-shifted version of the classifier.\n\n\nToy example\n\nTo demonstrate the use of covariate shift methods, we repeated a simple toy experiment as detailed in 3. Figure 2 graphically displays the results we obtained.\n\nThe red training points are drawn from two (2) two-dimensional Gaussian distributions representing a class 1 and a class 2. 
The green prediction points are drawn from a slightly rotated version of the training distributions. The red line plots the classifier obtained when training on only the training points; the green line plots the classifier trained on both the training and prediction points (the optimal classifier in this case). The blue line plots the classifier trained on the training data that was weighted by the importance factor as estimated by the KLIEP method. Note how the blue line is shifted towards the optimal classifier, demonstrating the effect of the KLIEP algorithm and covariate shift.\n\n\nExperiments\n\nUsing the Python programming language, we implemented the KLIEP method combined with logistic regression and applied it to five different in-house ADME (absorption, distribution, metabolism and excretion) datasets. We compare KLIEP+Logistic Regression (KL+LR) to Logistic Regression (LR) and a k-NN classifier (k=5)5.\n\nFor each dataset the molecules were sorted by compound registration date. The first 75% of the data comprised the master training set while the remainder formed the master prediction set. Temporal ordering of the data represents the evolving coverage of chemical space by drug discovery projects and consequently captures the natural “shifting” of the covariates. Classifier performance statistics are generated by performing twenty different runs, each on a random 80% of the master files. Performance statistics for each classification task are then obtained by averaging the results of the twenty individual folds. In all cases, OpenEye6 path fingerprints are used as feature vectors. We experimented with different fingerprints provided by OpenEye (MACCS 166 bit structural keys and circular fingerprints) and found that they had no significant effect on the outcome.\n\nTo ensure the data was amenable to covariate shift we generated classifiers separating “training” from “prediction” data. Figure 3 shows performance of LR on this separation task. 
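The KL+LR combination described above can be sketched, under stated assumptions, as ordinary logistic regression whose per-instance gradient is scaled by the importance weight; this minimal single-feature version (uniform weights recover plain LR) is an illustration, not the production code used for the ADME experiments:

```python
import math

def train_weighted_lr(X, y, w, lr=0.5, epochs=2000):
    # Gradient descent on the importance-weighted logistic loss for a single
    # feature: instance i contributes w[i] times its usual gradient, so
    # training compounds resembling the prediction set dominate the fit.
    b0, b1 = 0.0, 0.0
    n = len(X)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for xi, yi, wi in zip(X, y, w):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            g0 += wi * (p - yi)
            g1 += wi * (p - yi) * xi
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

def predict_proba(x, b0, b1):
    # Probability of class 1 under the fitted model.
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
```

In practice a library implementation accepting per-instance sample weights would be used instead; the loop form simply makes the reweighting explicit.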
For each dataset we are able to compute highly accurate classifiers. This indicates that the training and prediction data are drawn from different distributions and hence are appropriate for covariate shift methods. This is a necessary condition for covariate shift but does not imply model improvement over unweighted data.\n\nFigure 4 compares the performance of KL+LR, LR and k-NN on the five (5) datasets. One can see from the graph that KL+LR failed to provide any statistical improvement over standard LR.\n\nWe extended the study to include an external dataset provided by ChEMBL7,8 such that others could use their own fingerprints and independently support or refute our claims. We chose the beta secretase IC50 data as it is a well-established biochemical screen, highly accurate and contains > 7000 publicly available data points crossing multiple orders of magnitude.\n\nAnalogous to the internal datasets, we were able to demonstrate that the data could be separated and was therefore appropriate for application of the covariate shift algorithm. Unfortunately, the outcome was the same: that is, as measured by overall classifier accuracy, there was no statistical advantage to reweighting the training set.\n\nA possible explanation for the failure of the covariate shift method to provide a boost in predictive performance could be that the calculated importance weights are all similar. This would cause each training example to exert the same influence on the decision function and thus the importance weighting would have no effect. This was not the case. Figure 5 plots the cumulative distribution function of the importance weights for the training set compounds. The plot demonstrates that the weights are distributed across a broad range of values.\n\n\nConclusions\n\nWe have applied the KLIEP method to five (5) internal data sets and one (1) external data set where covariate shift was evident. 
Although KL+LR showed an advantage over k-NN, there was no statistical advantage to reweighting the training dataset. We are surprised by this outcome and are currently exploring other datasets where application of covariate shift may improve the predictions.\n\n\nData availability\n\nF1000Research: Dataset 1. The BACE IC50 data derived from the ChEMBL database, 10.5256/f1000research.8317.d1178829",
"appendix": "Author contributions\n\n\n\nBG conceived the study. BG designed the experiments and carried out the research. GM wrote the manuscript, provided the beta-secretase data set and contributed to the experimental design. PW provided oversight.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nZadrozny B: Learning and evaluating classifiers under sample selection bias. In: ICML '04: Proceedings of the twenty-first international conference on Machine learning. ACM, New York, 2004.\n\nQuiñonero-Candela J, Sugiyama M, Schwaighofer A, et al.: Dataset Shift In Machine Learning. MIT Press, Cambridge, Massachusetts, 2009. Reference Source\n\nShimodaira H: Improving predictive inference under covariate shift by weighting the log-likelihood function. J Stat Plan Inference. 2000; 90(2): 227–244. Publisher Full Text\n\nSugiyama M, Suzuki T, Nakajima S, et al.: Direct importance estimation for covariate shift adaptation. Ann Inst Stat Math. 2008; 60(4): 699–746. Publisher Full Text\n\nA Matlab implementation of the KLIEP algorithm is freely available: http://www.ms.k.u-tokyo.ac.jp/software.html#KLIEP Reference Source\n\nOpenEye Scientific Software. (version 2014.Feb), 9 Bisbee Ct, Suite D, Santa Fe NM, USA. Reference Source\n\nGaulton A, Bellis LJ, Bento AP, et al.: ChEMBL: a large-scale bioactivity database for drug discovery. Nucleic Acids Res. 2012; 40(Database Issue): D1100–1107. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBento AP, Gaulton A, Hersey A, et al.: The ChEMBL bioactivity database: an update. Nucleic Acids Res. 2014; 42(Database Issue): D1083–1090. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcGaughey G, Walters WP, Goldman B: Dataset 1 in: Understanding covariate shift in model performance. F1000Research. 2016. Data Source"
}
|
[
{
"id": "13266",
"date": "08 Apr 2016",
"name": "Robert Sheridan",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThis is potentially an important negative result in QSAR, however I think some revision is necessary because some aspects are unclear.\n\nThe title “Understanding covariate shift…” is a little weak. One could say “Failure of covariant shift to improve model performance…”\n\nIt needs to be explicitly pointed out in the introduction that in most QSAR one builds a model then is able to predict arbitrary compounds. On the other hand, to use covariant shift, one must know which molecules one is predicting before one can generate the model. One can regard “lazy learning” as an extreme version of covariant shift: neighbors of the test set molecules are given weights of 1.0 and all other molecules are given weights of 0.\n\nI need a little more explanation in words of how the weighting is done for training set compounds. Since we are using substructure descriptors here, I am finding it hard to visualize. For example, are we just using distance to the nearest test set example, or are we looking at overlap of the training set descriptors with the distribution of test set descriptors?\n\nPractically no explanation is given as to what QSAR methods are being used. I know what K-NN is and I presume LR is linear regression. Why weren’t popular methods like random forest, SVM, or PLS tried?\n\nThe color key in Figure 1 does not seem to match what is in the text. In any case, perhaps a better way of looking at it would be the enclosed figure.",
"responses": []
},
{
"id": "13393",
"date": "25 Apr 2016",
"name": "Martin Vogt",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe study investigates the influence of accounting for covariate shift in classification performance using logistic regression models. Overall, this short paper is very well and clearly written, however the method section should be expanded (see below). Although no increase in performance could be established by accounting for covariate shift, it provides an excellent basis for further investigations.\n\nSuggestions/Corrections:\n\nThe method section should be expanded: I assume all models were trained as binary classifiers. This is potentially confusing as the chosen ADME properties in the experimental data could also have been modelled using regression models. This should be stated clearly and explained how labels (good/bad) are assigned to the training instances for the different ADME properties (and how labels are assigned to the ChEMBL data given the potencies). Which basis functions (kernels?) were used in equation (2)? What distance measure was used for k-NN (e.g., Soergel/Tanimoto, Hamming)?\n\nIn Figure 3 (and 4), given the imbalance in data size between training and test set, consider reporting the balanced accuracy. E.g. a trivial classifier classifying each compound as \"training\" compound would have an accuracy of 75% based on the imbalance of the data set, which needs to be taken into account when interpreting Figure 3.\n\nThe authors provide a data set for download although they do not explicitly report the results for that data set. The results should be reported.\n\nTypos:\n\nIn the formula for KL on page 3 the two vertical bars should have the same size.\n\nIn Figure 1, the labels for the red and blue line are mixed up.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-597
|
https://f1000research.com/articles/5-1056/v1
|
01 Jun 16
|
{
"type": "Research Article",
"title": "Results of an attempt to reproduce the STAP phenomenon",
"authors": [
"Shinichi Aizawa"
],
"abstract": "In 2014, Obokata and colleagues reported their observation of a novel cell reprogramming phenomenon they named ‘stimulus-triggered acquisition of pluripotency’ (STAP). The most conclusive evidence for the pluripotency of so-called STAP cells was the purported ability of such cells to contribute to chimera formation. Here, I report the results of an attempt by Haruko Obokata to replicate the phenomenon under the supervision of the Scientific Validity Examination Team of RIKEN. In this follow-up study, putative STAP cells prepared by Haruko Obokata were injected into 1051 embryos, of which 591 were recovered. However, the injected cells made no significant contribution in any tissue in any of the embryos that developed.",
"keywords": [
"STAP",
"iPSCs",
"ES",
"stem cells",
"chimera",
"Oct-GFP",
"pluripotency"
],
"content": "Introduction\n\nInduced pluripotent stem cells (iPSCs), first reported by Takahashi and Yamanaka using a combination of exogenous genetic factors, have transformed our understanding of the gene regulatory mechanisms underlying cellular pluripotency and differentiation (Takahashi & Yamanaka, 2006). This discovery raised the possibility that cellular reprogramming may also be induced by activating endogenous pluripotency genes under certain conditions. In two reports published in Nature by Obokata et al. (2014a); Obokata et al. (2014b), the authors claimed to have observed how “external stimuli such as a transient low-pH stressor reprogram somatic cells into pluripotent cells,” which they referred to as the STAP phenomenon; subsequently, however, after multiple problems were found with the handling and presentation of the data in a pattern indicative of research misconduct, both papers were retracted.\n\nThe present article reports the results of a study conducted by Haruko Obokata in the RIKEN Center for Developmental Biology (CDB), which was designed to determine whether the STAP phenomenon was in fact reproducible. Obokata was permitted to perform this closely monitored study from July 14 to November 30, 2014 under my supervision as head of the Scientific Validity Examination Team, at the direction of the Head Office for Internal Reform organized by the RIKEN President. Unfortunately, I have been unable to contact her since the completion of the study, or to obtain her agreement to be listed as an author on this article. Nonetheless, given the extraordinary degree of attention and controversy the original STAP publications and research misconduct generated, I feel it is important to report the results of this investigation in the interests of clarifying the scientific record. 
In the Scientific Validity Examination Team, Hitoshi Niwa, one of the coauthors of the Nature papers, also conducted an independent examination of whether the STAP phenomenon was reproducible; the results of his examination have been reported previously (Niwa, 2016).\n\nThe investigation reported here consisted of two types of experiments; preliminary ones conducted without supervision, and formal ones conducted in the presence of expert witnesses. There were no significant differences in the data generated in the preliminary and formal experiments, and all are included together in this report. The experiments were conducted in a new setting, not in the laboratory that Obokata had used for the previous studies described in the retracted Nature publications (Obokata et al., 2014a; Obokata et al., 2014b). All reagents, materials, instruments, and experimental spaces were freshly furnished. Obokata was permitted to conduct experiments only in designated rooms. She prepared cell aggregates, but did not perform any of the subsequent analyses herself, other than observations of the cell aggregates by phase and fluorescence microscopy. Other members of the team conducted chimeric, FACS, RT-PCR and immunohistochemical analyses of the cell aggregates. In this report, I refer to the studies reported in the papers retracted (Obokata et al., 2014a; Obokata et al., 2014b) as \"the previous studies\" for the sake of brevity. I also refer to the technical tips published by several authors of the original articles for details of the experimental procedure (Obokata et al., 2014c).\n\n\nResults\n\nExperiments were performed using a transgenic mouse line harboring GFP under an Oct4 promoter (Ohbo et al., 2003); the line is the same as that used in the previous studies (Obokata et al., 2014a; Obokata et al., 2014b). The mouse line has been maintained in C57BL/6 background in a homozygous state. 
Spleens were dissected from homozygous newborn mice (6–8 days old) obtained by crossing a homozygous transgenic female with a homozygous transgenic male, or from hemizygous newborn mice (6–8 days old) obtained by crossing a homozygous transgenic female with a wild type 129 male. Spleen cells were prepared as described previously (Obokata et al., 2014a; Obokata et al., 2014c), but enrichment of CD45-positive cells by FACS sorting was omitted. The source of the cells used in these experiments was lymphocytes collected with Lympholyte following the manufacturer’s instructions (Cedarlane Laboratories, Ontario, Canada).\n\nThe stress treatment evaluated was the low-pH condition; no other conditions, such as trituration, were examined. The low-pH conditions included not only the previously reported induction by HCl (Obokata et al., 2014a; Obokata et al., 2014b; Obokata et al., 2014c), but also that by ATP. Although not described in the previous reports, the ATP treatment had been used most frequently by Obokata et al., and is described in their patent application regarding the STAP process (US Patent Application no.: 14/397,080). In brief, the low-pH condition was generated by suspending the 1×106 cells in 494 μl HBSS (Hank’s Balanced Salt Solution), adding 6 μl 200 mM ATP, and incubating for 15 min at 37°C in a CO2 incubator. The low-pH-treated cells were cultured for 6–8 days, and cell aggregates of 50–100 μm showing green fluorescence were identified (see Materials and methods). Table 1 gives the frequency of the cell aggregates identified by Haruko Obokata (see Materials and methods). No apparent difference was found in the frequency of green fluorescent cell aggregates under either of the low-pH conditions (HCl or ATP) or genetic background of mice (C57BL/6 or F1 between C57BL/6 and 129). The observed frequency was approximately 10 green fluorescent cell aggregates per 106 cells seeded; this was approximately 10-fold lower than that in the previous studies. 
Most green fluorescent cell aggregates also exhibited higher or lower degrees of red fluorescence (Figure 1). No quantitative determination was made, but about one in three cell aggregates exhibited green fluorescence more intense than red fluorescence. Green fluorescent cell aggregates that exhibited no significant red fluorescence were rare.\n\n(A) Phase contrast views of typical five cell aggregates, (B) their green fluorescence and (C) their red fluorescence.\n\nChimera production was performed with spleens of a transgenic mouse line harboring GFP under a CAG promoter (Okabe et al., 1997) which were also maintained homozygously in C57BL/6 background; this line is different from the one previously used (Obokata et al., 2014a; Obokata et al., 2014b). Cell aggregates of 50–100 μm were selected by their cluster morphology by Obokata and subjected to the chimeric assay. Chimeras were made by members of the Laboratory for Animal Resources and Genetic Engineering, CDB, with expertise in chimera production with ES cells (http://www2.clst.riken.jp/arg/APDBN.html, http://www2.clst.riken.jp/arg/mutant_mice_generated_in_CDB.html)(present affiliation: Animal Resource Development Unit, Biosystem Dynamics Group, Division of Bio-Function Dynamics Imaging, Center for Life Science Technologies (CLST)). The previous report indicated that the generation of chimeras using STAP cells involved a distinct technical approach (Obokata et al., 2014a): “Single cell dispersion by trypsinization, as it is done in the chimera production with ES cells, caused low chimaerism. STAP spherical colonies were cut into small pieces using a microknife under the microscope. 
Small clusters of the cells are then injected into blastocysts.” In the present study, cell aggregates were first cut into small pieces by either glass capillary, laser beam (XY Clone: Nikko Hansen & Co., Osaka, Japan) or microsurgical knife (K-5310: FEATHER Safety Razor Co., Osaka, Japan) and were injected into host embryos, either E2.5 8-cell stage or E3.5 blastocyst stage embryos of random-bred ICR (Charles River, Tokyo, Japan). Injected embryos were transplanted into the uterus of pseudopregnant females of the ICR strain, and recovered at E9.5 or E8.5 to judge the contribution of injected cells in each tissue by GFP-green fluorescence (Table 2). Notably, the previous study describes that small clusters of 'STAP' cells were injected into ‘E4.5 blastocysts’, and the next day, the chimeric blastocysts were transferred into pseudopregnant females (Obokata et al., 2014a). Five hundred and sixty-four embryos (210 morula and 354 blastocyst) were injected with cell aggregates cut into pieces by glass capillaries, and 294 embryos were recovered at E9.5. Ninety-two embryos (48 morula and 44 blastocyst) were injected with cell aggregates cut into pieces by laser beam, and 58 embryos were recovered at E9.5. Three hundred and ninety-five embryos (193 morula and 202 blastocyst) were injected with cell aggregates cut into pieces by microknife, and 239 embryos were recovered. Seven hundred and sixty-seven embryos were injected with cell aggregates derived from C57BL/6 spleen, and 284 embryos with aggregates from F1 spleen between C57BL/6 and 129. Cell aggregates cut into pieces were injected into 451 morula- and 600 blastocyst-stage embryos. In total, 1,051 embryos injected with cell aggregates cut into pieces were transplanted into a foster uterus, and 591 embryos were recovered. The contribution of injected cells was judged by GFP green fluorescence in embryos (see Materials and methods). 
No significant contribution of the injected cells was observed in any of the 591 embryos examined. Pluripotency was not examined by injecting putative STAP cells into tetraploid embryos.\n\n1) Genetic background of CAG-GFP mice from which spleen was isolated.\n\n2) How cell aggregates were cut into small pieces.\n\n3) No. embryos injected with cell aggregates and transplanted into uterus of foster mothers.\n\n4) Embryos were transplanted into foster mothers on the next day after the injection of cell aggregates.\n\n\nDiscussion\n\nOne of the central claims in the original reports was that the purported STAP cells had the ability to differentiate into multiple lineages, including germ cells, when placed in a normal developmental environment. The present study focused on assessing pluripotency by chimera production using cell aggregates prepared by Obokata; however, no evidence of pluripotency was observed using this assay. In the original reports, the STAP cells were prepared by Haruko Obokata, while the chimera production and the establishment of ES (embryonic stem)-like STAP-SCs and TS (trophectoderm stem)-like FI-SCs were made by Teruhiko Wakayama.\n\nI encourage readers to recognize a number of limitations in the studies, which were conducted under strict time constraints and in the face of considerable, often adversarial, media scrutiny. Unfortunately, it was not possible to receive technical advice from Teruhiko Wakayama in the chimera production reported here, and it is unclear whether or to what extent the techniques for chimera production in the present study correspond to those used in the previous studies. Previous studies also examined the pluripotency of purported STAP cells by their potency to generate teratomas in immune-deficient mice. However, more than 105 cells are required to form teratoma subcutaneously in the flank of an immune-deficient mouse using ES or EC (embryo carcinoma) cells, and the process takes about one month. 
No teratoma formation was examined in the present study, since the frequency of green fluorescent cell aggregates was low and time was limited. Teratoma formation under the kidney capsule, which also takes about two months using blastocyst embryos, was also not examined.\n\nThe more critical question is whether and to what extent the STAP cell aggregates prepared by Obokata in this trial under new experimental conditions recapitulated the STAP cell aggregates reported in the previous study. The frequency of green fluorescent cell aggregates from low pH-treated, Oct-GFP transgenic spleen cells was 10-fold less than that in previous studies. Moreover, green fluorescence due to GFP expression cannot be distinguished from that due to autofluorescence, nor can GFP expression by reprogramming be distinguished from that due to non-specific gene expression in dying cells. The cell aggregates were not characterized in vitro in detail, but the following features were observed:\n\n(1) Preliminary FACS analysis of low pH-treated, Oct-GFP transgenic spleen cells suggested that the frequency of green fluorescent cells was very low and that the majority of surviving cells were CD45-positive after one week in culture under the conditions used in the present study. In the previous study, CD45+ cells were rare and a significant number of green fluorescent cells were observed (Figure 1c in Obokata et al., 2014a).\n\n(2) Preliminary RT-PCR analysis suggested that the majority of the cell aggregates generated in the present study did not express pluripotency markers, in contrast to the report of pluripotency marker expression in the previous study (Figure 2b in Obokata et al., 2014a), although there were cell aggregates at a low frequency that expressed one or multiple pluripotent markers.\n\n(3) Preliminary immunochemical analysis suggested that most of the cell aggregates in the present study did not express pluripotency markers. 
In contrast to the data shown in Figure 2a of the previous study, they did not express OCT4, SSEA1, NANOG and E-CADHERIN (Obokata et al., 2014a).\n\nThe possibility cannot be excluded that the experimental conditions used in the present study in some way differed from the previously established optimum conditions for STAP induction. It is my view that it was beyond the scope of this examination to reassess each condition; a definitive answer to the question of whether the previously used conditions for inducing the STAP phenomenon can indeed be established or not must await further study. Nevertheless, I consider it important to report that Haruko Obokata herself failed to reproduce the reported phenomenon, in that the putative STAP cells described here were unable to contribute to any tissues in a normal developmental environment.\n\nAnother reported feature of the STAP phenomenon was that while STAP cells themselves do not proliferate, two types of stem cells could be established from them: ES (embryonic stem)-like STAP-SCs and TS (trophectoderm stem)-like FI-SCs. However, as Obokata had no experience with these stem cell cultures, she did not undertake the establishment of the secondary stem cell types in the present study.\n\n\nMaterials and methods\n\nC57BL/6NJcl and 129X1/SvJJmsSlc mice were purchased from CLEA Japan and Japan SLC, respectively. A transgenic mouse line harboring GFP under an Oct4 promoter (GOF-Tg; Ohbo et al., 2003) was provided by RIKEN BioResource Center (BRC) to CDB, and has been maintained in a homozygous state under C57BL/6 background in the CDB animal facility. A transgenic mouse line harboring GFP under a CAG promoter (CAG-GFP Tg; Okabe et al., 1997) was provided to CDB by Masaru Okabe at Osaka University, and has been maintained in a homozygous state under C57BL/6 background in the CDB animal facility. 
Animals were housed in environmentally controlled rooms, and animal experiments were conducted under the institutional guidelines for Animal and Recombinant DNA Experiments that are consistent with ARRIVE guidelines. The experiments were approved by the Institutional Animal Care and Use Committee of RIKEN Kobe Branch (Permit No. AH26-01).\n\nNewborn male mice 6–8 days old were euthanized using carbon dioxide and then sterilized with 70% ethanol. Two spleens were placed in a 15 ml conical tube, minced with scissors into a paste, combined with 5.5 ml HBSS (GIBCO 14170), mechanically dissociated using a Pasteur pipette and strained through a cell strainer (mesh size 40 μm, FALCON 352340) into another conical tube. Five ml of Lympholyte-M (Cedarlane CL5031) was added to the bottom of the tube beneath the cell suspension, and the tube was centrifuged at 1,500 g for 20 min. The middle lymphocyte layer was transferred into another tube and centrifuged at 800 g for 10 min. The pelleted cells were suspended in 500 μl HBSS, of which 6 μl was subjected to the counting of cell number; in exchange, 6 μl 200 mM ATP (SIGMA 3377) or diluted HCl (10 μl 35% HCl to 590 μl HBSS) was added to the cell suspension. The cell suspension was incubated at 37°C for 15 min in a 5% CO2 incubator, and then centrifuged at 1,500 rpm for 15 min at room temperature. After the supernatant was removed, B27 medium (DMEM/F-12 (GIBCO 11330) supplemented with 1,000 U LIF (ESGRO 1107), 2% B-27 (GIBCO 17504) and 1 μg/ml bFGF (WAKO 060-04543)) was added to the cell pellets to obtain a 1×106 cells/ml suspension; one ml of the suspension was plated in each well of a 24-well plate (FALCON 353047) and cultured at 37°C in a 5% CO2 incubator for seven days to develop cell aggregates. 
Cell aggregates of 50–100 μm were examined for green and red fluorescence with an Olympus Fluorescent Microscope IX51 (mirror units: Olympus U-MNIBA2 to detect green fluorescence and Olympus U-MWIG2 to detect red fluorescence), and the number of candidate aggregates was counted by Haruko Obokata. Images were taken with an Olympus DP70 camera coupled with Olympus DP Controller software (version 1.2.1.108).\n\nCell aggregates prepared by Haruko Obokata were subjected to chimera production. Cell aggregates were cut into small pieces by either glass capillary, laser beam (XY Clone: Nikko Hansen & Co., Osaka, Japan) or microsurgical knife (K-5310: FEATHER Safety Razor Co., Osaka, Japan). The pieces were injected into host embryos of either E2.5 8-cell stage or E3.5 blastocyst stage embryos of random-bred ICR (Charles River, Tokyo, Japan). Injected embryos were transplanted into the uterus of pseudopregnant females of the ICR strain. Injection of cell aggregates and transplantation of the embryos into pseudopregnant females were performed as routinely done with ES cells (http://www2.clst.riken.jp/arg/Methods.html). Embryos were recovered at E9.5 or E8.5 and examined for the contribution of injected cells in each organ by detecting the presence of GFP-green fluorescence with a LEICA fluorescence stereomicroscope M165FC (filter sets 10447407 and 10447408). E9.5 or E8.5 embryos of the CAG-GFP transgenic line used for the preparation of cell aggregates served as positive controls and wild type ICR embryos as negative controls for the green fluorescence detection.\n\nSee Niwa (2016) for QPCR, immunostaining and FACS analysis.\n\n\nData availability\n\nOpen Science Framework: Dataset: Results of an attempt to reproduce the STAP phenomenon, doi 10.17605/OSF.IO/48f2z (Aizawa, 2016).",
"appendix": "Author contributions\n\n\n\nShinichi Aizawa is responsible for all animal experiments. He supervised preparation of cell aggregate by Haruko Obokata and organized the chimera, FACS, RT-PCR and immunohistological analyses.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis examination was supported by a grant for Scientific Validity Examination by RIKEN President’s office.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nI would like to acknowledge Ms. Haruko Obokata’s participation and efforts in this study. I am indebted to Dr. Hiroshi Kiyonari and Mr. Kenichi Inoue for chimera production and animal breeding, Laboratory of Animal Resources and Genetic Engineering for animal housing, Dr. Mariko Yamane for RT-PCR analysis and a member of Scientific Validity Examination Team for immunochemical and FACS analyses. I am also grateful to Ms. Kana Bando-Kadowaki, Dr. Go Shioi, Dr. Takaya Abe, Mr. Atsushi Katayama, Mr. Shigekazu Saitou, Mr. Akira Kimura, Mr. Naohiko Oba and Mr. Masahito Hatanaka, for their support to this examination. I deeply thank two senior witnesses outside of RIKEN and seven witnesses from BioResource Center, Center for Integrative Medical Sciences and Brain Science Institute, RIKEN. I thank Mr. Douglas Sipp for critical comments on and copyediting of this report.\n\n\nReferences\n\nAizawa S: Dataset: Results of an attempt to reproduce the STAP phenomenon. Open Science Framework. 2016. Data Source\n\nNiwa H: Investigation of the cellular reprogramming phenomenon referred to as stimulus-triggered acquisition of pluripotency (STAP). Scientific Reports in press. bioRxiv. 2016. Publisher Full Text\n\nObokata H, Wakayama T, Sasai Y, et al.: Stimulus-triggered fate conversion of somatic cells into pluripotency. Nature. 2014a; 505(7485): 641–647. Retracted. 
PubMed Abstract | Publisher Full Text\n\nObokata H, Sasai Y, Niwa H, et al.: Bidirectional developmental potential in reprogrammed cells with acquired pluripotency. Nature. 2014b; 505(7485): 676–680. Retracted. PubMed Abstract | Publisher Full Text\n\nObokata H, Sasai Y, Niwa H: Essential technical tips for STAP cell conversion culture from somatic cells. Protocol Exchange. 2014c. Publisher Full Text\n\nOhbo K, Yoshida S, Ohmura M, et al.: Identification and characterization of stem cells in prepubertal spermatogenesis in mice. Dev Biol. 2003; 258(1): 209–225. PubMed Abstract | Publisher Full Text\n\nOkabe M, Ikawa M, Kominami K, et al.: 'Green mice' as a source of ubiquitous green cells. FEBS Lett. 1997; 407(3): 313–9. PubMed Abstract | Publisher Full Text\n\nTakahashi K, Yamanaka S: Induction of pluripotent stem cells from mouse embryonic and adult fibroblast cultures by defined factors. Cell. 2006; 126(4): 663–676. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "14102",
"date": "13 Jun 2016",
"name": "Austin Smith",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn this paper Dr Aizawa reports the outcome of attempts to reproduce the claim that exposure to low pH can convert splenocytes into pluripotent cells, so-called STAP cells, that are capable of colonising the mouse embryo. Although the two STAP papers have now been retracted acknowledging multiple errors and misconduct, the retraction notice does not state that the results are irreproducible but only says “we are unable to say without doubt whether the STAP-SC phenomenon is real”. This study is therefore a valuable service to the community. It is unfortunate that Ms Obokata cannot be contacted. It would be desirable if she confirmed her agreement with the findings. However, I do not think there is any requirement for her to be a co-author because she carried out the work under the explicit direction and supervision of Dr Aizawa.\nThe study design, results and interpretation are clearly presented. Putative STAP cell aggregates, as defined by Ms Obokata on the basis of fluorescence and/or morphology, were obtained. In a comprehensive series of micro-injections these cells were introduced into morulae or early blastocysts then transferred to recipient mice. No contribution was detected in 591 recovered embryos inspected for expression of a constitutive GFP reporter. Therefore the findings reported in the STAP papers cannot be reproduced using “STAP” cells generated by Ms Obokata in supervised conditions. 
This is a helpful clarification for the field.\nI have a few minor suggestions and questions that could improve the clarity of the manuscript:\nIn Table 1 the heading Exp No should be changed to No. of Expts and the heading No. Cell Aggregates should be No. Fluorescent Cell Aggregates.\n\nIt is not clear from the Table or text what proportion of aggregates showed fluorescence or whether they all did. This should be stated.\n\nIt is stated that the source of fluorescence could not be confirmed. Were no aggregates generated from wildtype splenocytes without a reporter? How intense is the green fluorescence in aggregates compared with the Oct4-GFP level in embryos or ES cells? The text should explain that red fluorescence is autofluorescence.\n\nFor the chimaera experiments it is stated that “cell aggregates of 50-100mM were selected by their cluster morphology by Obokata”. Can “cluster morphology” be described more precisely?\n\nTypographical error: “cell aggregates were one cut into small pieces”.\n\nWere any injected embryos examined for donor cell survival/integration prior to uterine transfer?\n\nCould the author comment on the limit of detection (number of cells) for chimaera contribution at the stages examined using this reporter? The method “Embryos were ….. examined for the contribution of injected cells in each organ”. I assume this was in whole mount rather than dissected organs, but this should be declared.",
"responses": [
{
"c_id": "2201",
"date": "27 Sep 2016",
"name": "Shinichi Aizawa",
"role": "Author Response",
"response": "Dear Dr. Austin Smith, I thank you for your comments. The manuscript was revised incorporating your suggestions. My responses are as follows: The headings in Table 1 have been changed as suggested. All oct-gfp cell aggregates exhibited fluorescence to some degree. No cell aggregates were generated from wild-type splenocytes. No direct comparison was made of the intensities of green fluorescence of cell aggregates with those in oct-gfp embryos or ES cells. I cannot state with certainty whether the green and red fluorescence was autofluorescence. RT-PCR analysis for GFP expression showed significant expression in several aggregates, but not in others that had green fluorescence; these data were very preliminary and thus are not shown. This examination focused on the multipotency of cell aggregates generated by Obokata using a chimeric assay, since this was the central feature of the STAP phenomena. Other data were only preliminary given the time constraints under which these experiments were performed, as described in the Discussion. To make chimeras, cell aggregates were prepared with cag-gfp splenocytes, thus GFP expression or green fluorescence cannot be used as a measure for the selection of cell aggregates. For this reason, they could only be selected by cell cluster morphology. In the present study, the selection was dependent entirely on Obokata’s judgment. If she had succeeded, our plan was next to ask her to describe “cell cluster morphology” precisely. Many embryos injected with cag-gfp cell aggregates at the 8-cell stage and cultured for one day to the blastocyst stage were examined for the presence of green-fluorescent cells, and such cells were found to be present. Chimeric extent was examined at E9.5 or E8.5 in whole mount. The retracted Nature papers show extensive colonization of the cells (Fig. 4 in the Article and Fig. 1 and Extended Data Fig. 1 in the Letter). 
The article reported eight chimeric embryos showing more than 50% coat color contribution, out of 48 chimeras obtained; these animals yielded “STAP”-derived offspring. This was the central finding in the now-retracted STAP reports. However, in the present study, no chimera equivalent to those in Fig. 4 in the article and Fig. 1 and Extended Data Fig. 1 in the Letter was obtained, nor were any chimeras obtained showing more than 50% contribution to coat pigmentation. Indeed, no chimera showing more than 1% contribution was obtained. I have now revised the text accordingly. We have not examined the limit of detection (minimum number of cells) with the cag-gfp mouse line used, since this was not the point of the present study. However, I believe it to be the case that if dozens of cells had been present together in any tissue, they would have been detectable in whole mount at E9.5 or E8.5. Best regards, Shin Aizawa"
}
]
},
{
"id": "15460",
"date": "22 Aug 2016",
"name": "Irene de Lazaro",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nTwo 2014 Nature papers authored by Obokata et al. described a novel protocol for the generation of pluripotent cells from somatic cells, based on their exposure to stress-triggering conditions such as mechanical disruption or acid treatment. The process was named stimulus-triggered acquisition of pluripotency (STAP), but the studies were soon retracted on the grounds of several confirmed experimental errors and indications of research misconduct. The present work by Dr. Aizawa describes a systematic study aiming to clarify whether STAP putative pluripotent cells – generated by Obokata under the supervision of RIKEN’s Scientific Validity Examination Team – contribute to the development of the mouse embryo and hence can be considered bona fide pluripotent cells.\n\nThe conclusions reached in this study – that the claims for STAP cells’ functional pluripotency are not reproducible – are clearly supported by the data provided, since STAP cell contribution was not observed in any of the recovered embryos. The number of embryos injected with STAP cells to investigate their contribution to different tissues is satisfactorily high. In addition, the design of the study is very systematic and accounts for several potential sources of variability in the chimerism outcome (i.e. source of stress stimulus, technique utilised to cut the cell clusters before microinjection, stage of the embryos at injection and recovery time). 
Nevertheless, please find below a few suggestions and clarifications that in my opinion could enrich the study by Dr Aizawa.\n\nExperimental design:\nIn the retracted studies by Obokata et al., CD45+ spleen cells were selected by FACS as the source for STAP cell generation. In the present study, sorting for CD45 was omitted, and the commercial product Lympholyte, which reportedly allows specific isolation of lymphocytes, was used instead. As this is an alteration of the original protocol under investigation, one which may introduce differences in the nature of the starting cell population, it would be beneficial if the author could explain the reason behind this change.\n\nIt is mentioned that the CAG-GFP transgenic mouse line utilized for chimerism experiments was different from that used by Obokata et al. in the previous, now retracted, studies. Was there a reason to select a different line?\n\nIn the original Obokata et al. studies, clusters of STAP cells were injected into E4.5 embryos. However, in the present study the injection was performed into embryos at the E2.5 or E3.5 stage. Was this parameter changed to attempt a higher degree of chimerism? An explanation for this change in the experimental setup would also be desirable for clarity.\n\nPresentation, treatment and discussion of data:\n\nFrequency of cell aggregates from Oct-GFP transgenic spleen after low pH treatment (Table 1): although it is stated in the text that “no apparent difference was found in the frequency of green fluorescence signal under either of the low-pH conditions (HCl or ATP) or genetic background of mice” this statement would be stronger if it were supported by a statistical test. Did the author conduct statistical analysis on these data?\n\nGreen and red fluorescence of cell aggregates: the author seems to imply that the detected signal was a result of autofluorescence, which has in fact been pointed out by other researchers attempting to reproduce the work (Tang et al. 
2014; De los Angeles et al. 2015; see last comment below). However, this is not clearly stated in the text. In addition, doubts concerning green autofluorescence could be easily dispelled by using an anti-GFP antibody or by measuring the levels of GFP mRNA by qPCR or GFP protein by Western Blot. I strongly encourage the author to perform such studies if samples are still available since they would definitely clarify the issue. The inclusion in the study of cell aggregates derived from a wild-type mouse line would have also avoided this ambiguity.\n\nGenetic background of mice to generate STAP cells for chimerism studies: It is first stated in the text that the CAG-GFP mice involved in chimerism studies were bred on a C57BL/6 homozygous background. However, it is later highlighted in the text, as well as in Table 2, that both C57BL/6 and F1(C57BL/6x129) were included. Since those were also the backgrounds selected to maintain Oct-GFP transgenics for the cell aggregates assay, this is confusing and needs clarification from the author.\n\nResults included in Niwa, 2016: the author refers several times to the results reported by Niwa (Niwa, 2016), who also investigated the reproducibility of the STAP phenomenon for the Scientific Validity Examination Team in RIKEN. It is however not clearly specified whether Niwa’s studies were conducted on the same STAP cells produced by Obokata under strict supervision. Such clarification is important, since qPCR, immunostaining and FACS data are discussed, but not shown, and the reader is directed to Niwa’s work.\n\nAt least two other independent studies, carried out at institutions outside RIKEN, have aimed to clarify the STAP controversy (Tang et al. 2014, De los Angeles et al. 2015) and reached similar conclusions to those presented in this article. In particular, the issues on autofluorescence have been extensively scrutinized in De los Angeles et al. 
I believe that a brief discussion of the observations compiled in such studies would strengthen the present manuscript.\n\nTypographical error: “in the intestests of clarifying the scientific record” (page 2 of 8).",
"responses": [
{
"c_id": "2202",
"date": "27 Sep 2016",
"name": "Shinichi Aizawa",
"role": "Author Response",
"response": "Dear Dr. Irene de Lazaro, I thank you for your comments. The manuscript was revised incorporating your suggestions. My responses are as follows: There was no FACS cell sorter in the laboratory in which Obokata performed the set of supervised experiments reported here. She had previously obtained “STAP” cells using splenocytes prepared using Lympholyte-M, so we sought to determine whether she was able to repeat this in the present study. If she had succeeded, our plan was next to generate STAP cells using CD45+ cells sorted by FACS. The origin of the cag-gfp transgenic mouse line used in the retracted Nature papers is unclear, and was not reported in the papers. Dr. Wakayama informed us that he generated the cag-gfp mouse line himself while at the University of Hawaii, but we did not make a formal investigation into this. The mouse line was no longer maintained in the animal facility of CDB and was not available to us. Alternatively, the cag-gfp mouse line may actually have been an Acr/cag-gfp mouse line (Nakanishi et al., Genomics 80, 564-574 (2002)) as suggested in the report by Konno et al. (Konno et al., Nature 525, E4-5 (2015)). However, we only became aware of this possibility at the time of that report, which was after the start of Obokata’s replication attempt. In any case, the cag-gfp mouse line reportedly used in the original STAP reports is different from the cag-gfp mouse line (Okabe et al., 1997) we used in the present study. It is nonetheless difficult to conceive how the difference in cag-gfp transgene might affect the efficiency of “STAP cell” production and chimera generation. In Fig. 4a of the retracted Nature article, the embryo being injected with “STAP” cells clearly has a zona pellucida. However, E4.5 embryos typically no longer have this structure. In the absence of zona pellucida, injection is practically impossible. 
We note that E0 is generally defined as 0:00 am of the day when the plug is identified, and suggest that E4.5 may be a typographic error for E3.5. Alternatively, Dr. Wakayama may have artificially delayed the development of the embryo; however, this was not reported in the retracted Nature paper. We have now included a statistical analysis (t-test), which indicates that the efficiency of cell aggregate formation is significantly different between ATP treatment and HCl treatment in the C57BL/6 background. However, the difference is slight. We have revised the manuscript accordingly (Table 1 and page 5 in the text). This study focused on the multipotency of cell aggregates generated by Obokata using a chimeric assay as this was the central feature of the reported “STAP” phenomena. Given the time constraints of this study, other data were necessarily limited, as noted in the Discussion. As it was not the focus of the present study, I cannot state definitively that the red fluorescence observed was autofluorescence, although I feel that this is highly likely. RT-PCR analysis for GFP expression showed significant expression in several aggregates, but not in others that showed green fluorescence; however, these data were preliminary at best and are not presented. The effects of the spleen donors’ genetic background on both cell aggregate formation and chimeric potency were examined in the C57BL/6 and F1(C57BL/6 x 129) backgrounds. It is well known that ES culture is strongly influenced by genetic background. Both of these backgrounds were used in the retracted Nature papers. I have now revised the manuscript (page 4 and page 6) to clarify this point. The cell aggregates in Niwa’s report were prepared by Niwa, not by Obokata. The two reports are now cited and briefly discussed (page 8–9). 
These works did not examine multipotency by chimeric assay, and the most important issue of the present report is that cell aggregates prepared by Obokata herself did not exhibit multipotency in chimeric assays. Best regards, Shin Aizawa"
}
]
}
] | 1
|
https://f1000research.com/articles/5-1056
|
https://f1000research.com/articles/5-2523/v1
|
14 Oct 16
|
{
"type": "Research Article",
"title": "Zika antiviral chemotherapy: identification of drugs and promising starting points for drug discovery from an FDA-approved library",
"authors": [
"Bruno S. Pascoalino",
"Gilles Courtemanche",
"Marli T. Cordeiro",
"Laura H. V. G. Gil",
"Lucio H. Freitas-Junior",
"Bruno S. Pascoalino",
"Gilles Courtemanche",
"Marli T. Cordeiro",
"Laura H. V. G. Gil"
],
"abstract": "Background The recent epidemics of Zika virus (ZIKV) implicated it as the cause of serious and potentially lethal congenital conditions such as microcephaly and other central nervous system defects, as well as the development of the Guillain-Barré syndrome in otherwise healthy patients. Recent findings showed that anti-Dengue antibodies are capable of amplifying ZIKV infection by a mechanism similar to antibody-dependent enhancement, increasing the severity of the disease. This scenario becomes potentially catastrophic when the global burden of Dengue and the advent of the newly approved anti-Dengue vaccines in the near future are taken into account. Thus, antiviral chemotherapy should be pursued as a priority strategy to control the spread of the virus and prevent the complications associated with Zika. Methods Here we describe a fast and reliable cell-based, high-content screening assay for discovery of anti-ZIKV compounds. This methodology has been used to screen the National Institutes of Health Clinical Collection compound library, a small collection of FDA-approved drugs. Results and conclusion From 725 FDA-approved compounds triaged, 29 (4%) were found to have anti-Zika virus activity, of which 22 were confirmed (76% confirmation rate) by dose-response curves. Five candidates presented selective activity against ZIKV infection and replication in a human cell line. These hits have a broad spectrum of chemotypes and therapeutic uses, offering valuable opportunities for selection of leads for antiviral drug discovery.",
"keywords": [
"Zika",
"High content screening drug discovery",
"FDA-approved drugs"
],
"content": "Introduction\n\nZika virus (ZIKV) is a mosquito-borne virus transmitted by Aedes sp. mosquitoes across tropical and subtropical regions around the world. It is a positive-sense single-stranded RNA flavivirus responsible, in most cases, for asymptomatic infections. The most common symptoms of Zika are very similar to those of Dengue fever, including headache, muscle and joint pain, mild fever, rash, and inflammation of the underside of the eyelid1 and, given the commonality of such symptoms, the diagnosis is usually imprecise. The virus was first reported in Uganda in 1947, and in the 60 years after its discovery only 15 cases were documented before the start of the current epidemics in the Americas, mainly in Brazil2. Although ZIKV was first isolated nearly 70 years ago, very little is known about the virus's biology, as most of the cases likely remained unreported and the transmission had been sporadic and silent for most of that time3.\n\nThe Latin America Zika epidemic drew attention especially due to the related cases of microcephaly. Since 2014, the number of microcephaly cases in Brazil has increased 20 times and its incidence has overlapped with Zika epidemic areas2. Moreover, recent work has demonstrated that Zika infection impairs the growth of neurospheres4 and causes birth defects in mice5, indicating the virus's influence on fetal development. Although the more recent Zika outbreaks suggest that Aedes aegypti is the main vector, it has been shown that other mosquito species are capable of carrying and thus possibly transmitting the virus6 – for example, ZIKV was isolated from the ubiquitous Culex species, which is also present in countries with milder climates. But the importance of this mosquito as a potential disease vector is still not understood3. 
Furthermore, sexual transmission of Zika virus has already been reported in temperate countries without mosquito vectors6, suggesting that Zika transmission could eventually become established outside tropical areas.\n\nRecent studies call attention to the risk of pre-immunity to Dengue leading to complications during Zika. Anti-Dengue antibodies could enhance Zika infection7, most likely by a mechanism known as antibody-dependent enhancement (ADE), which is also the pathophysiological mechanism that causes severe Dengue. The Dengue epidemic persists in tropical and sub-tropical areas around the globe, and combined with the upcoming introduction of newly developed anti-Dengue vaccines8,9, it could lead to a potentially catastrophic scenario when Zika complications due to ADE are considered. For this reason, vaccines should not be the only control strategy pursued against epidemic flaviviruses. Efforts must be focused on the development of novel approaches to control the pathogens, instead of depending solely on vector control and palliative care to ease the disease symptoms.\n\nHigh content screening (HCS) was recently used for the discovery of inhibitors of Dengue virus (DENV) and Chikungunya virus (CHIKV) infection10,11. This is an innovative cell- and image-based assay that uses libraries of small molecules to identify compounds that possess antiviral activity during infection of a human host cell. The advantage of HCS over other high-throughput screening (HTS) assays (such as target-based assays) is that the amount of information that can be generated from images of a single treatment is not limited to a single value. Aside from the degree of viral infection and cell viability, other relevant information can be extracted from the images, such as morphological changes in the host cell and protein localization, among others12. 
Another advantage is that HCS precludes the need for a validated target, as compounds can be screened against all putative molecular targets in a single experiment, under physiologically relevant conditions. This becomes a considerable advantage in the case of Zika, in which both viral and host targets remain to be discovered. Thus, cell-based screening is a viable strategy to rapidly advance drug discovery for Zika.\n\nDrug repurposing is a well-known strategy in the pharmaceutical industry that speeds up the drug discovery process. Also known as drug repositioning, it is basically the use of known drugs or compounds to treat new indications. The obvious advantage of drug repurposing over traditional drug development is the gain in time and the lower costs, since the repurposed drug has already been approved for clinical use. For this reason, in addition to quickly enabling the start of clinical trials for a different therapeutic use, the risk of failure due to adverse toxicology is greatly reduced. Besides drug repurposing, these compounds can also serve as starting material for the development of leads for new therapeutic purposes.\n\nHere we describe a high content screening methodology for the discovery of inhibitors of ZIKV infection applied in a drug repurposing context. This assay was used to screen a library of FDA-approved drugs, resulting in the identification of five compounds with selective activity against ZIKV in human cells.\n\n\nMethods\n\nThe Zika virus (KX197192.1) used in this project was isolated from a patient in Pernambuco, Brazil in 2015.\n\nThe human hepatoma cell line Huh7 (JCRB0403), obtained from the Japanese Cell Bank, was cultivated in DMEM F-12 media (Sigma-Aldrich) supplemented with 10% fetal bovine serum (FBS) (Sigma-Aldrich), 100 units/mL of Penicillin and 100 μg/mL of Streptomycin (Sigma-Aldrich), at 37°C, 5% CO2.\n\nThe Aedes albopictus cell line C636, kindly provided by Dr. 
Amílcar Tanuri from Universidade Federal do Rio de Janeiro, was cultivated in Leibovitz L-15 media (Sigma-Aldrich) supplemented with 10% FBS (Sigma-Aldrich), 0.26% tryptose phosphate (Sigma-Aldrich), 100 units/mL of Penicillin and 100 μg/mL of Streptomycin (Sigma-Aldrich), at 28°C13.\n\nThe mouse hybridoma cell line D1-4G2-4-15 (HB-112), obtained from the Rio de Janeiro Cell Bank, was cultivated in DMEM F-12 media (Sigma-Aldrich), supplemented with 10% FBS (Sigma-Aldrich), 100 units/mL of Penicillin and 100 μg/mL of Streptomycin (Sigma-Aldrich), at 37°C, 5% CO2. Exponentially growing hybridoma cells were used to produce ascitic fluid as described by Yokoyama et al.14.\n\nZika virus was used to infect C636 cells at 80% confluency at a multiplicity of infection (MOI) of 0.01 for 96 h. The supernatant was harvested, aliquoted in sterile conical tubes and frozen at -80°C. The obtained viruses were quantified by plaque assay using Huh7 cells, as described for Dengue virus by Medina et al.15.\n\nThe NIH Clinical Collection compound library (Evotec) was used. The human recombinant Interferon α 2A (Sigma-Aldrich) was used as the reference compound. The compounds were diluted in 100% dimethylsulfoxide (DMSO) (Sigma-Aldrich), with the exception of IFNα2A (Thermo Scientific), which was prepared in Dulbecco's Phosphate-buffered saline (DPBS) (Sigma-Aldrich) containing 0.5% (W/V) bovine albumin (Sigma-Aldrich).\n\nThe NIH Clinical Collection compound library (Evotec) was screened against ZIKV at 20 μM in 1% DMSO. MOCK-infected Huh7 and IFNα2A (1.55 nM) were used as positive controls, and the 1% DMSO (vehicle)-treated cells were used as the negative control. In each run, a 10-point dose-response curve of the reference compound IFNα2A, starting at 1.55 nM and diluted by a factor of 2, was also used for assay quality control. The compounds were diluted 16.6× in DPBS 1× in the µClear Black 384-well plates (Greiner Bio-One) for a final volume of 10 µL of compound at 6% DMSO. 
After that, 50 μL of a mixture of Huh7 cells at 6 × 10^4 cells/mL and ZIKV at an MOI of 0.5 were added to each well of the plate, resulting in a final concentration of 1% DMSO and a final volume of 60 μl/well. After 72 h of incubation at 37°C and 5% CO2, the cells were submitted to an indirect immunofluorescence (IF) protocol as described below. The primary screening was performed in two independent experiments and the confirmation ratio was calculated as the number of common hits in both assays divided by the total number of hits of the first assay, using the Pearson test in GraphPad Prism software, version 6. The scatter-plot distribution of the entire screening was generated using Spotfire 7.0 (TIBCO).\n\nThe Huh7 cells were fixed with 4% (w/v) paraformaldehyde (PFA) (Sigma-Aldrich) for 30 min at room temperature, treated with 0.25% (v/v) Triton-X for 15 min and incubated with the primary monoclonal antibody D1-4G2-4-15 (HB-112) prepared in DPBS containing 2.5% FBS at 37°C for 2 h. After two wash steps with DPBS, plates were incubated with AlexaFluor594-conjugated goat anti-mouse IgG (Thermo Scientific) and 5 μg/mL of DAPI (4′,6-diamidino-2-phenylindole) (Sigma-Aldrich) in DPBS at 37°C for 1 h, and then washed again twice with DPBS. After the final washing, digital images were acquired using a high content imaging system, the Operetta (Perkin Elmer). The digital images were taken from four different fields of each well at 20× magnification.\n\nThe acquired images were analyzed with the High Content Analysis (HCA) software Columbus (Perkin Elmer) for identification, segmentation and quantification of the host cell nucleus, cytoplasm and intracellular virus labeling with the specific antibody (Figure 1). For each well, the HCA provides as output the total number of cells and the total number of infected cells across all images. 
For the purpose of this study, the infection ratio (IR) was defined as the ratio between the total number of infected cells in all images from the well and the total number of cells in all images from the same well. The raw IR values were normalized to the negative (infected cells, DMSO-treated) and positive controls (infected cells treated with Interferon α 2A at a concentration of 1.55 nM) to determine the normalized antiviral activity, according to the equation below:\n\n(i) Normalized Activity (NA) = [1 – (Av. IRT – Av. IRP)/(Av. IRN – Av. IRP)] × 100\n\nwhere:\n\nAv. IRN: average infection ratio of negative control wells\n\nAv. IRP: average infection ratio of positive control wells\n\nAv. IRT: average infection ratio of test compound wells (at a given concentration)\n\nWith this normalization, wells matching the negative control score 0% activity and wells matching the positive control score 100%. The acquired images (input image) have the individual cells selected (find nuclei and find cytoplasm), the intensity of virus labeling is calculated (calculate fluorescence intensity) and the cells with signal higher than the defined threshold are selected as virus-infected (select infected cells). Blue: cell nuclei labeled with DAPI; Orange: D1-4G2-4-15 monoclonal antibody labeling.\n\nNA values of the reference compound dose-response curve were processed with the GraphPad Prism software, version 6, for generation of a sigmoidal dose-response (variable slope) non-linear curve fit and determination of the EC50 values, defined as the effective concentration resulting in a 50% inhibition of ZIKV infection. The statistical validity of the Zika virus high content screening was determined by calculating the Z'-factor18 using the infected Huh7 cells treated with 1% DMSO or IFNα2A as negative and positive controls, respectively. As quality control of the screenings, a 1% DMSO plate and two IFNα2A dose-response curve plates were run in each screening (Figure 2).\n\nUpper panel represents the 1% DMSO plate, a control for the variability among the wells of the plate. 
Lower panel represents the dose-response of the reference compound Interferon α 2A, which serves as a control for the entire run, where: 1% DMSO vehicle-treated ZIKV-infected cells (blue), representing the samples; 1.5 nM IFNα2A-treated ZIKV-infected cells (positive control) (green); mock-infected cells (red); 1% DMSO-treated ZIKV-infected cells (negative control) (grey).\n\nCompounds that presented both a normalized activity (see formula above) and a cell ratio (number of cells in the tested-compound well divided by the mean of the 1% DMSO-treated cells) equal to or greater than 50% were considered hits.\n\nTo confirm compound activity against Zika virus, the selected hits from both primary screenings were tested in a 9-point dose-response curve (DRC), with 2-fold serial dilutions starting at 50 µM, using the same assay and data analysis described for the primary screening. The EC50 value was used to evaluate compound activity. The CC50 value, defined as the compound concentration resulting in a 50% reduction in cell viability compared with the infected IFNα2A-treated cells, was used to evaluate cell toxicity. Compounds with a Selectivity Index (SI), calculated as SI = CC50/EC50, equal to or higher than 1 that also reached at least 50% of maximum activity were considered confirmed hits.\n\n\nResults\n\nThe first step in developing the high content Zika virus screening assay was to adapt ZIKV to infect a suitable cell line, in this case the human cell line Huh7, in 384-well plates. The optimal cell density, virus MOI and period of time necessary for efficient viral infection of the host cell were determined. 
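The hit-selection and confirmation thresholds described in the Methods above reduce to a few comparisons; the following Python sketch illustrates them (function names and example values are mine, not code from the paper):

```python
# Sketch of the hit criteria described in the Methods (hypothetical
# helper names). Primary hits need both normalized activity and cell
# ratio >= 50%; confirmed hits need SI = CC50/EC50 >= 1 and at least
# 50% of maximum activity in the dose-response curve.

def is_primary_hit(normalized_activity: float, cell_ratio: float) -> bool:
    """Both values are percentages relative to the plate controls."""
    return normalized_activity >= 50.0 and cell_ratio >= 50.0

def selectivity_index(cc50: float, ec50: float) -> float:
    """SI = CC50 / EC50 (both concentrations in the same units)."""
    return cc50 / ec50

def is_confirmed_hit(cc50: float, ec50: float, max_activity: float) -> bool:
    return selectivity_index(cc50, ec50) >= 1.0 and max_activity >= 50.0

print(is_primary_hit(72.0, 95.0))         # True: active and non-toxic
print(selectivity_index(40.0, 8.0))       # 5.0
print(is_confirmed_hit(40.0, 8.0, 85.0))  # True
```

The cell-ratio cutoff screens out compounds whose apparent antiviral activity is merely cytotoxicity, since a toxic compound depletes cells rather than blocking infection.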
For this purpose, the Huh7 cells were seeded at four different densities (2×10⁴, 4×10⁴, 6×10⁴ and 8×10⁴ cells/mL), combined with three different MOI (0.25, 0.5 and 1) for 2, 3 or 4 days, using mock-infected Huh7 cells as controls. At the assay endpoint, all conditions were submitted to indirect immunofluorescence to detect the infected cells, using as primary antibody the monoclonal antibody 4G2, which recognizes the E protein of flaviviruses. Images were randomly acquired from all conditions and submitted to High Content Analysis for the determination of the infected and non-infected cell populations, followed by the determination of the infection ratio (ratio of infected cells to the total number of cells) and the cytotoxicity. Figure 1 shows a representation of the methodology employed to detect the viral infection in host cells. After analyzing the data, a cell density of 6×10⁴ cells/mL, MOI = 0.5 and 72 h of infection were selected as the best conditions for virus infection (Table S1), presenting the highest infection ratio (varying from 60–90%) and cell ratio, combined with the lowest variation of infection in 384-well plates (coefficient of variation below 10%).\n\nInterferon α 2A was previously reported to have anti-flaviviral activity16,17 and, for this reason, it was chosen as the reference compound in this assay. The antiviral activity of IFNα2A in ZIKV-infected Huh7 cells was verified in dose-response curves. After 72 h, the infection level was determined by indirect immunofluorescence (IF) and the extracted data were analyzed and used to plot a sigmoidal dose-response curve (Figure 3). An EC50 of 2.07 pM and a minimal effective concentration (capable of eradicating the infection) of 1.5 nM were determined for IFNα2A against ZIKV. 
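The EC50 itself was obtained by fitting a variable-slope sigmoid in GraphPad Prism; as a rough, illustrative stand-in (not the authors' procedure), the dose at which normalized activity crosses 50% can be estimated from a measured curve by log-linear interpolation:

```python
import math

def ec50_log_interp(doses, activities):
    """Estimate the EC50 as the dose at which normalized activity crosses
    50%, interpolating linearly in log10(dose) between the two bracketing
    points. Assumes doses are ascending and activity rises with dose."""
    points = list(zip(doses, activities))
    for (d0, a0), (d1, a1) in zip(points, points[1:]):
        if a0 < 50 <= a1:
            frac = (50 - a0) / (a1 - a0)
            return 10 ** (math.log10(d0) + frac * (math.log10(d1) - math.log10(d0)))
    return None  # curve never reaches 50% activity
```

On a symmetric curve this recovers the midpoint dose; for instance, activities of 0%, 50% and 100% at 1, 10 and 100 pM give an estimated EC50 of 10 pM.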
As can be observed in Figure 3, IFNα2A can also protect against the ZIKV cytopathic effect, which leads to cell lysis; thus, the ratio between the number of cells in treated wells and the number of cells in non-treated wells (defined as the cell ratio) increases in a dose-dependent manner with the concentration of IFNα2A, indicating the capacity of the interferon to protect the host cells from lysis due to ZIKV infection.\n\nZIKV infected cells were treated with different doses of IFNα2A in a dose-response curve. After 72 h of incubation, the cells were submitted to an indirect immunofluorescence assay and the infection ratio was determined. The data were normalized with the controls and the resulting normalized activity was used to plot a sigmoidal dose-response curve (variable slope). The effective concentration resulting in a 50% inhibition of ZIKV infection (EC50) was 2.07 pM, and 1.5 nM was defined as the minimal effective concentration (capable of eradicating the infection). The cell ratio (number of cells of the tested compound divided by the mean of 1% DMSO-treated cells) is represented in red and the normalized activity in black.\n\nThe final step of the assay validation was the evaluation of the Z’-factor18 for ZIKV infection in Huh7 using IFNα2A as the reference compound. Figure 4A shows a representation of the assay performed, where cells, viruses and the reference compound were dispensed following the designed 384-well plate template. The resulting assay data were used to generate a scatter plot and a Z'-factor18 of 0.63 was obtained (Figure 4B).\n\nA) Layout of the validation plate. B) Scatter plot of the infection ratio (number of infected cells divided by the total number of cells) across the validation plate. 
Dots represent each single tested well and colors represent different treatments, where: 1% DMSO vehicle treated ZIKV infected cells (blue), representing the samples; 1.5 nM IFNα2A treated ZIKV infected cells (positive control) (green); Mock infected cells (red); 1% DMSO treated ZIKV infected cells (negative control) (grey).\n\nThe screened library consisted of 725 compounds from a collection of chemically diverse FDA-approved drugs with known and unknown mechanisms of action. The entire library was screened at 20 µM against ZIKV infecting Huh7 cells, using IFNα2A at 1.5 nM as the reference drug and 1% DMSO (vehicle)-treated infected cells as negative controls. As quality control of the assay, two dose-response curves of the reference compound and a 1% DMSO plate were performed (Figure 2). The library was screened in two independent experiments, and a correlation coefficient (R) of 0.81 was obtained, determined for the normalized activity of each compound between the first (R1) and the second (R2) screens, including compounds and controls (Figure S1). The mean Z'-factors of the screenings were 0.74±0.06 for R1, and 0.56±0.09 for R2 (Figure 5). Out of 725 triaged compounds, 12 and 25 compounds were selected as hits for run 1 and run 2, respectively (Figure 5), resulting in a hit rate of 4%. As the total number of hits was low, all the hits from both primary screenings were selected for confirmation and further testing by dose-response curves.\n\nIn two independent runs, the compounds selected as hits (normalized activity ≥ 50% and cell ratio ≥ 0.5) are located in the upper right quadrant. Dots represent each single tested well and colors represent different treatments, where: ZIKV infected cells treated with different compound samples (blue); 1.5 nM IFNα2A treated ZIKV infected cells (positive control) (green); 1% DMSO vehicle treated mock infected cells (red); 1% DMSO treated ZIKV infected cells (negative control) (grey). 
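The Z'-factor values quoted above (0.63 for the validation plate; 0.74 and 0.56 for the two screening runs) follow the standard Zhang et al. definition; a minimal sketch (our own helper, for illustration only):

```python
import statistics

def z_prime(pos_controls, neg_controls):
    """Z'-factor = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|,
    computed here from the infection ratios of positive (IFN-alpha 2A)
    and negative (1% DMSO) control wells. Values above 0.5 indicate an
    excellent separation between the control distributions."""
    mu_p, sd_p = statistics.mean(pos_controls), statistics.stdev(pos_controls)
    mu_n, sd_n = statistics.mean(neg_controls), statistics.stdev(neg_controls)
    return 1 - 3 * (sd_p + sd_n) / abs(mu_p - mu_n)
```

Tight, well-separated controls drive the value toward 1, while noisy or overlapping controls push it to zero or below, which is why per-run Z'-factors serve as a plate-level quality gate.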
The table on the right summarizes the results obtained in both runs.\n\nThe compounds selected as hits in the primary screenings were tested in dose-response curves to confirm their activity and obtain data regarding the maximum activity, selectivity and EC50. From the 29 samples selected in the primary screening, 22 (76%) presented an SI ≥ 1 and maximum activity ≥ 50%, and were considered confirmed (Table 2 and Table S2).\n\nFrom the 22 confirmed compounds, five were selected for further analysis, based on the SI, maximum activity and EC50 (Table 2). These five hits are Lovastatin (Pubchem CID: 53232), a hypolipidemic agent; 5-Fluorouracil (Pubchem CID: 3385), a drug used in the treatment of cancer that acts through irreversible inhibition of thymidylate synthase; 6-Azauridine (Pubchem CID: 5901), an antineoplastic and antipsoriatic agent and a broad-spectrum antimetabolite that is known to inhibit both DNA and RNA virus multiplication (orotic acid pathway); Palonosetron (identified as the hydrochloride salt) (Pubchem CID: 6337614), a 5-HT3 antagonist with antiemetic properties, used in the treatment of chemotherapy-induced nausea and vomiting; and Kitasamycin (Pubchem CID: 44634697), a macrolide antibiotic (Josamycin family) with antimicrobial activity against a wide spectrum of pathogens.\n\nThere are numerous studies showing antiviral activity of anticancer drugs19 and of macrolides20. A few papers suggest that statins exhibit anti-inflammatory and antiviral effects in vitro21. 
Recently, Palonosetron anti-ZIKV activity was also verified in an assay similar to the one described in this work22.\n\n\nDiscussion\n\nThe advent of Zika virus infections and their fast spread across the globe, together with the reported association of ZIKV with severe birth defects, including microcephaly and Guillain-Barré syndrome, has drawn attention to the importance of searching for ways to control the disease that go beyond vector surveillance and palliative supportive treatment to ease symptoms. Although the exact causes of microcephaly are still unknown, new data suggest that it may be caused by intrauterine infection during the development of the brain23–26. Additionally, other studies have shown in animal models that ZIKV is able to infect the placenta and cross it to infect the fetal brain5,27,28.\n\nRecent data from the World Health Organization reported that in recent years 61 countries and territories presented mosquito-borne transmission of Zika. Of these, 13 countries or territories described cases of microcephaly and other central nervous system malformations potentially associated with Zika virus infection. In addition, studies from 10 different countries have reported evidence of person-to-person transmission of Zika virus, probably via a sexual route29, indicating that Zika may not be restricted to the tropical and sub-tropical areas where mosquitoes of the genus Aedes are found.\n\nNew studies demonstrated that plasma immune to Dengue viruses showed substantial cross-reaction to ZIKV, including being capable of initiating antibody-dependent enhancement (ADE) of ZIKV infection7, which could, at least partially, explain the large increase in the number of reported Zika virus infections after Dengue outbreaks and in areas where Dengue virus is prevalent. 
Moreover, this cross-reactivity of anti-Dengue sera with Zika viruses could pose a risk for the newly developed anti-Dengue vaccines8,9.\n\nIn the present work, we developed a fast, robust and reliable high content screening assay for Zika virus. This novel methodology identified five promising compounds (Table 2) among 725 FDA-approved compounds from the NIH Clinical Collection compound library. Two of these compounds were previously described as having anti-ZIKV activity: 6-Azauridine, which has been reported to have anti-flaviviral activity against 11 members of the flavivirus family, including Zika virus17, and Palonosetron, identified in an assay similar to the one described in this work22. In fact, the detection of hit compounds with previously described anti-ZIKV activity in the screened library validates this approach and demonstrates that the assay is useful for the discovery of novel compounds capable of inhibiting ZIKV infection. It also reinforces that these compounds have promising activity against ZIKV and were able to withstand the scrutiny of two different screening assays. Conversely, among compounds that were recently reported with anti-ZIKV activity30, Azathioprine, Dactinomycin, Digoxin, Mebendazole and Mefloquine presented toxicity higher than 50% in our assay, while Clofazimine, Mercaptopurine, Methoxsalen and Sertraline-HCl presented activity lower than 50% in our assay. This suggests that these compounds might have a narrow spectrum of activity against some but not all ZIKV isolates. 
Furthermore, the assay described here is also capable of identifying slowly acting drugs, which demand extended exposure to manifest their effect.\n\nThe clear advantage of this screening is the fact that the assay covers viral entry, RNA synthesis and viral egress from the host cell, since the Huh7 cells are exposed to Zika virus for 72 h, respecting the viral biology during the infection of the host.\n\nAll five active compounds identified here are currently marketed drugs for distinct treatments. The molecular structures and pharmacokinetic data of the compounds are summarized in Table 1 and Table 2. Lovastatin belongs to the family of statins, which are widely used for lowering cholesterol in patients with hypercholesterolemia, to reduce the risk of cardiovascular disease. A clinical trial tested the efficacy of treating Dengue-infected patients with Lovastatin31, since the endothelial stabilizing effects of statins could decrease Dengue-related vasculopathy. Although anti-flaviviral activity of Lovastatin had already been reported against hepatitis C virus32 and Dengue virus33,34, no evidence of a beneficial effect on any of the clinical manifestations or on Dengue viremia was found. In addition, Lovastatin was reported to attenuate nerve injury in an animal model of Guillain-Barré syndrome35.\n\n5-Fluorouracil is a product of the metabolism of floxuridine, a drug long used in the treatment of diverse types of cancer36. It belongs to a drug class known as antimetabolites, and is a pyrimidine analog that irreversibly inhibits thymidylate synthase, impairing DNA synthesis. The anti-flaviviral activity of Floxuridine against Dengue and West Nile virus was already reported37,38, and here we demonstrate that it also has activity against Zika virus.\n\n6-Azauridine is generally administered as its triacetylated prodrug, Azaribine. 6-Azauridine is an antimetabolite capable of inhibiting both DNA and RNA virus multiplication. 
6-Azauridine was withdrawn from clinical use because of the occurrence of arterial and venous thromboembolic episodes in some psoriatic patients39. Early work demonstrated that viruses sensitive to 6-Azauridine induced increased levels of uridine kinase, which converts uridine to uridine monophosphate, a nucleotide used in RNA synthesis, which could explain the activity of 6-Azauridine on such viruses40. More recently, 6-Azauridine was reported to have broad activity against 11 flaviviruses, including Zika17.\n\nKitasamycin is a natural product from Streptomyces narbonensis that belongs to the macrolide antibiotic class. The compound is a broad-spectrum antimicrobial drug active against several pathogens, such as Gram-positive bacteria, mycoplasma and leptospira. This macrolide binds to bacterial ribosomal RNA and inhibits protein biosynthesis41. Although Kitasamycin is clinically used, this is the first time, to our knowledge, that it has been reported to have antiviral activity.\n\nPalonosetron is a 5-HT3 serotonin receptor antagonist used for preventing nausea and vomiting induced by chemotherapeutic agents. Palonosetron anti-ZIKV activity was also reported in a recent work22.\n\nTaking a closer look at the selected compounds, they clearly do not belong to the same class of molecules, since their structures are quite different (Table 1). Moreover, the calculated properties of the molecules also vary widely (Table 2). Regarding molecule size, 5-Fluorouracil and 6-Azauridine can be considered small, Palonosetron and Lovastatin are medium-sized, while Kitasamycin is large by drug standards. 5-Fluorouracil and 6-Azauridine are hydrophilic, while Palonosetron, Lovastatin and Kitasamycin are more lipophilic. 5-Fluorouracil, Lovastatin and Palonosetron have few hydrogen-bond acceptors and donors, while 6-Azauridine and Kitasamycin have several. 
Finally, 6-Azauridine and Kitasamycin have a high topological polar surface area (TPSA), while Lovastatin, 5-Fluorouracil and Palonosetron have a low TPSA, compatible with potential brain penetration, which could be a very important feature since the viral infection causes severe damage to the developing nervous system. These calculated descriptors point to very different physico-chemical behavior among these hits, likely resulting in very different absorption, distribution, metabolism and excretion profiles for these molecules. These profiles could be considered advantages or drawbacks in a potential antiviral treatment, depending on the target product profile of a Zika treatment, and should be used to prioritize these chemotypes for further screening campaigns.\n\nThese compounds showed specific activity against a ZIKV isolate originating from Pernambuco, Brazil, one of the states with the highest number of reported cases of microcephaly and other newborn nervous system malformations42.\n\nThe drugs described here can serve as important starting points for the development of analogs or new molecules for the treatment of Zika. Searching for structural analogs of the five molecules, 4,449 similar structures were identified in Pubchem (Table S3). Screening these analogs could help gain knowledge of the structure-activity relationship (SAR), an important step in the medicinal chemistry optimization of a lead compound. Moreover, 10 of these analogs are already marketed drugs (Table S3). We can also consider their historical therapeutic class or mechanism of action as clues to select known chemical entities with similar mechanisms of action to screen against ZIKV. For example, statins or macrolides, widely represented in the pharmacopeia, could be screened in order to identify more potent anti-ZIKV hits. 
The hits can also be used in target deconvolution studies to identify host molecules involved in ZIKV infection, as was already described for other viruses like Chikungunya virus43.\n\nCombining the information generated in this study and the pharmaceutical properties available for the best compounds identified here, we considered Palonosetron the most promising compound. This drug can be dosed either by the oral or the intravenous route; in humans its bioavailability is very high (97%) and its half-life is very long (40 h), making it a good candidate for in vivo confirmation. However, its metabolism, albeit low, mainly involves cytochrome P450 2D6. As there is high interindividual variability in the efficiency and amount of CYP2D6 enzyme produced, it can be anticipated that this drug may be subject to substantial variation when metabolized in humans. This problem could be addressed in a medicinal chemistry lead optimization project, provided that SAR is observed. However, this drawback did not prevent Palonosetron (commercialized by Eisai as Aloxi®) from reaching the market. Interestingly, other 5-HT3 antagonists like Dolasetron, Ondansetron, Granisetron, Tropisetron and Alosetron, discovered by different pharmaceutical companies, with similar or different chemotypes, also reached the market. These compounds, generally developed for the treatment of chemotherapy-induced nausea, have been widely prescribed (off-label) for morning sickness during pregnancy. The possibility to treat pregnant women with this class of compounds is another advantage in Zika infection, although their safety profile for newborns is currently controversial44,45.\n\nIn summary, the study developed here describes a high content screening assay which successfully identified five compounds active against a Zika virus isolated in an area with a high number of reported cases of newborn neural complications. 
Further investigation is needed to understand the mechanism of action responsible for the inhibition of the Zika virus infection. However, the molecules identified in this study are important starting points, since they can be further optimized to increase their efficiency in inhibiting ZIKV infection. Moreover, based on structure comparison, more than 4000 molecules were identified in the PubChem databank as analogs and structural variants which could also be tested, and more specific and potent compounds may yet be identified or even designed.\n\n\nData availability\n\nF1000Research: Dataset 1. Raw data of 'identification of drugs and promising starting points for drug discovery from an FDA-approved library', 10.5256/f1000research.9648.d13764246",
"appendix": "Author contributions\n\n\n\nBSP and LFJ were involved in project design. BSP performed the experiments and data evaluation. GC was responsible for the hit analysis and medicinal chemistry insights. MTC was responsible for isolating and propagating the ZIKV strain used in this study. All the authors contributed to the writing of the article and agreed with its content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work has been funded by the Sao Paulo State Research Foundation - FAPESP (Process no. 2014/001162-7) and by the National Center for Research on Energy and Materials (CNPEM).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe would like to thank Dr. Amílcar Tanuri for supplying the Aedes albopictus C636 cell line and Dr. Carolina B. Moraes for critically reviewing this manuscript. We are also grateful to the National Institutes of Health for providing the NIH Clinical Collection compound library used in this work.\n\n\nSupplementary data\n\nNormalized activity of the samples and controls was plotted in the Spotfire software (Tibco) and the correlation coefficient (R) of the two runs was calculated using the Pearson test in GraphPad Prism software. Dots represent each single tested well and colors represent different treatments, where: ZIKV infected cells treated with different compound samples (blue); 1.5 nM IFNα2A treated ZIKV infected cells (positive control) (green); Vehicle 1% DMSO treated mock infected cells (red); 1% DMSO treated ZIKV infected cells (negative control) (grey).\n\n\nReferences\n\nWHO: Zika virus, Fact Sheets. 2016. Reference Source\n\nFauci AS, Morens DM: Zika Virus in the Americas--Yet Another Arbovirus Threat. N Engl J Med. 2016; 374(7): 601–4. PubMed Abstract | Publisher Full Text\n\nMusso D, Gubler DJ: Zika Virus. Clin Microbiol Rev. 2016; 29(3): 487–524. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nGarcez PP, Loiola EC, Madeiro da Costa R, et al.: Zika virus impairs growth in human neurospheres and brain organoids. Science. 2016; 352(6287): 816–8. PubMed Abstract | Publisher Full Text\n\nCugola FR, Fernandes IR, Russo FB, et al.: The Brazilian Zika virus strain causes birth defects in experimental models. Nature. 2016; 534(7606): 267–71. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHills SL, Russell K, Hennessey M, et al.: Transmission of Zika Virus Through Sexual Contact with Travelers to Areas of Ongoing Transmission - Continental United States, 2016. MMWR Morb Mortal Wkly Rep. 2016; 65(8): 215–6. PubMed Abstract | Publisher Full Text\n\nDejnirattisai W, Supasa P, Wongwiwat W, et al.: Dengue virus sero-cross-reactivity drives antibody-dependent enhancement of infection with zika virus. Nat Immunol. 2016; 17(9): 1102–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCapeding MR, Tran NH, Hadinegoro SR, et al.: Clinical efficacy and safety of a novel tetravalent dengue vaccine in healthy children in Asia: a phase 3, randomised, observer-masked, placebo-controlled trial. Lancet. 2014; 384(9951): 1358–65. PubMed Abstract | Publisher Full Text\n\nVillar L, Dayan GH, Arredondo-García JL, et al.: Efficacy of a tetravalent dengue vaccine in children in Latin America. N Engl J Med. 2015; 372(2): 113–23. PubMed Abstract | Publisher Full Text\n\nCruz DJ, Koishi AC, Taniguchi JB, et al.: High content screening of a kinase-focused library reveals compounds broadly-active against dengue viruses. PLoS Negl Trop Dis. 2013; 7(2): e2073. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCruz DJ, Bonotto RM, Gomes RG, et al.: Identification of novel compounds inhibiting chikungunya virus-induced cell death by high throughput screening of a kinase inhibitor library. PLoS Negl Trop Dis. 2013; 7(10): e2471. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nGötte M, Gabriel D: Image-Based High-Content Screening in Drug Discovery. In Drug Discovery and Development - Present and Future. DI Kapetanovic, Editor. InTech: Switzerland; 2011. Publisher Full Text\n\nIgarashi A: Isolation of a Singh's Aedes albopictus cell clone sensitive to Dengue and Chikungunya viruses. J Gen Virol. 1978; 40(3): 531–44. PubMed Abstract | Publisher Full Text\n\nYokoyama WM: Production of monoclonal antibody supernatant and ascites fluid. Curr Protoc Mol Biol. 2008; Chapter 11: Unit 11.10. PubMed Abstract | Publisher Full Text\n\nMedina F, Medina JF, Colón C, et al.: Dengue virus: isolation, propagation, quantification, and storage. Curr Protoc Microbiol. 2012; Chapter 15: Unit 15D.2. PubMed Abstract | Publisher Full Text\n\nAjariyakhajorn C, Mammen MP Jr, Endy TP, et al.: Randomized, placebo-controlled trial of nonpegylated and pegylated forms of recombinant human alpha interferon 2a for suppression of dengue virus viremia in rhesus monkeys. Antimicrob Agents Chemother. 2005; 49(11): 4508–14. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCrance JM, Scaramozzino N, Jouan A, et al.: Interferon, ribavirin, 6-azauridine and glycyrrhizin: antiviral compounds active against pathogenic flaviviruses. Antiviral Res. 2003; 58(1): 73–9. PubMed Abstract | Publisher Full Text\n\nZhang JH, Chung TD, Oldenburg KR: A Simple Statistical Parameter for Use in Evaluation and Validation of High Throughput Screening Assays. J Biomol Screen. 1999; 4(2): 67–73. PubMed Abstract | Publisher Full Text\n\nJordheim LP, Durantel D, Zoulim F, et al.: Advances in the development of nucleoside and nucleotide analogues for cancer and viral diseases. Nat Rev Drug Discov. 2013; 12(6): 447–64. PubMed Abstract | Publisher Full Text\n\nRetallack H, Di Lullo E, Arias C, et al.: Zika Virus in the Human Placenta and Developing Brain: Cell Tropism and Drug Inhibition. bioRxiv. 2016. 
Publisher Full Text\n\nGiguère JF, Tremblay MJ: Statin compounds reduce human immunodeficiency virus type 1 replication by preventing the interaction between virion-associated host intercellular adhesion molecule 1 and its natural cell surface ligand LFA-1. J Virol. 2004; 78(21): 12062–5. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBarrows NJ, Campos RK, Powell ST, et al.: A Screen of FDA-Approved Drugs for Inhibitors of Zika Virus Infection. Cell Host Microbe. 2016; 20(2): 259–70. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMlakar J, Korva M, Tul N, et al.: Zika Virus Associated with Microcephaly. N Engl J Med. 2016; 374(10): 951–8. PubMed Abstract | Publisher Full Text\n\nCalvet G, Aguiar RS, Melo AS, et al.: Detection and sequencing of Zika virus from amniotic fluid of fetuses with microcephaly in Brazil: a case study. Lancet Infect Dis. 2016; 16(6): 653–60. PubMed Abstract | Publisher Full Text\n\nMartines RB, Bhatnagar J, Keating MK, et al.: Notes from the Field: Evidence of Zika Virus Infection in Brain and Placental Tissues from Two Congenitally Infected Newborns and Two Fetal Losses--Brazil, 2015. MMWR Morb Mortal Wkly Rep. 2016; 65(6): 159–60. PubMed Abstract | Publisher Full Text\n\nSarno M, Sacramento GA, Khouri R, et al.: Zika Virus Infection and Stillbirths: A Case of Hydrops Fetalis, Hydranencephaly and Fetal Demise. PLoS Negl Trop Dis. 2016; 10(2): e0004517. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMiner JJ, Cao B, Govero J, et al.: Zika Virus Infection during Pregnancy in Mice Causes Placental Damage and Fetal Demise. Cell. 2016; 165(5): 1081–91. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLi C, Xu D, Ye Q, et al.: Zika Virus Disrupts Neural Progenitor Development and Leads to Microcephaly in Mice. Cell Stem Cell. 2016; 19(1): 120–6. PubMed Abstract | Publisher Full Text\n\nWHO: Zika situation report. 2016. 
Reference Source\n\nXu M, Lee EM, Wen Z, et al.: Identification of small-molecule inhibitors of Zika virus infection and induced neural cell death via a drug repurposing screen. Nat Med. 2016. PubMed Abstract | Publisher Full Text\n\nWhitehorn J, Nguyen CV, Khanh LP, et al.: Lovastatin for the Treatment of Adult Patients With Dengue: A Randomized, Double-Blind, Placebo-Controlled Trial. Clin Infect Dis. 2016; 62(4): 468–76. PubMed Abstract | Publisher Full Text | Free Full Text\n\nIkeda M, Abe K, Yamada M, et al.: Different anti-HCV profiles of statins and their potential for combination therapy with interferon. Hepatology. 2006; 44(1): 117–25. PubMed Abstract | Publisher Full Text\n\nRothwell C, Lebreton A, Young Ng C, et al.: Cholesterol biosynthesis modulation regulates dengue viral replication. Virology. 2009; 389(1–2): 8–19. PubMed Abstract | Publisher Full Text\n\nMartinez-Gutierrez M, Correa-Londoño LA, Castellanos JE, et al.: Lovastatin delays infection and increases survival rates in AG129 mice infected with dengue virus serotype 2. PLoS One. 2014; 9(2): e87412. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSarkey JP, Richards MP, Stubbs EB Jr: Lovastatin attenuates nerve injury in an animal model of Guillain-Barré syndrome. J Neurochem. 2007; 100(5): 1265–77. PubMed Abstract | Publisher Full Text\n\nLongley DB, Harkin DP, Johnston PG: 5-fluorouracil: mechanisms of action and clinical strategies. Nat Rev Cancer. 2003; 3(5): 330–8. PubMed Abstract | Publisher Full Text\n\nFischer MA, Smith JL, Shum D, et al.: Flaviviruses are sensitive to inhibition of thymidine synthesis pathways. J Virol. 2013; 87(17): 9411–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nShum D, Smith JL, Hirsch AJ, et al.: High-content assay to identify inhibitors of dengue virus infection. Assay Drug Dev Technol. 2010; 8(5): 553–70. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nSlavík M, Elis J, Rašková H, et al.: Therapeutic effects of 6-azauridine-triacetate in psoriasis. Pharmacol Clin. 1970; 2(2): 120–125. Publisher Full Text\n\nRada B, Dragún M: Antiviral action and selectivity of 6-azauridine. Ann N Y Acad Sci. 1977; 284: 410–7. PubMed Abstract | Publisher Full Text\n\nFuruuchi T, Miura T, Kurihara K, et al.: Design and synthesis of novel leucomycin analogues modified at the C-3 position. Part II: 3-O-(3-Aryl-2-propenyl)leucomycin analogues. Bioorg Med Chem. 2008; 16(8): 4401–18. PubMed Abstract | Publisher Full Text\n\nKleber de Oliveira W, Cortez-Escalante J, De Oliveira WT, et al.: Increase in Reported Prevalence of Microcephaly in Infants Born to Women Living in Areas with Confirmed Zika Virus Transmission During the First Trimester of Pregnancy - Brazil, 2015. MMWR Morb Mortal Wkly Rep. 2016; 65(9): 242–7. PubMed Abstract | Publisher Full Text\n\nKarlas A, Berre S, Couderc T, et al.: A human genome-wide loss-of-function screen identifies effective chikungunya antiviral drugs. Nat Commun. 2016; 7: 11320. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEinarson A, Maltepe C, Navioz Y, et al.: The safety of ondansetron for nausea and vomiting of pregnancy: a prospective comparative study. BJOG. 2004; 111(9): 940–3. PubMed Abstract | Publisher Full Text\n\nDanielsson B, Wikner BN, Kallen B: Use of ondansetron during pregnancy and congenital malformations in the infant. Reprod Toxicol. 2014; 50: 134–7. PubMed Abstract | Publisher Full Text\n\nPascoalino B, Courtemance G, Cordeiro M, et al.: Dataset 1 in: Zika antiviral chemotherapy: identification of drugs and promising starting points for drug discovery from a FDA-approved library. F1000Research. 2016. Data Source"
}
|
[
{
"id": "17012",
"date": "26 Oct 2016",
"name": "Paul S Anderson",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe authors describe a cell based assay suitable for screening compounds for activity against Zika virus. The assay is compatible with high content screening methodology and was used in this mode to screen a library of FDA-approved drugs. Palonosetron, 6-Azauridine, 5-Fluorouracil, Lovastatin and Kitasamycin were identified as low micromolar hits. The mechanism(s) for inhibition of the Zika virus infection was not identified.",
"responses": []
},
{
"id": "17530",
"date": "15 Nov 2016",
"name": "Tom von Geldern",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nInfection by Zika virus (ZIKV) is an emerging global health crisis, and as of today no approved therapeutic interventions are available for patients. In a search for novel anti-ZIKV agents which could be moved quickly into clinical use, the authors have applied their well-established high-throughput, high-content assay platform to develop a ZIKV assay, and have applied this to evaluate a library of FDA-approved drugs. Building a high-quality high-content assay is a complex exercise, but this team is quite experienced, and has had particular success in the past with other tropical infectious agents.\nStarting from a library of 725 compounds, a series of staged triage steps leads to the identification of 5 validated hits. While the established mechanisms for the majority of these hits (anti-cancer, antibiotic) are recognized sources of anti-parasitic leads, two are quite unexpected. Because they have been sourced from a collection of approved drugs, all should be able to move rapidly into in vivo proof-of-concept studies. Additionally, they might serve as starting points for further optimization by drug discovery teams; the sharing of these lead structures is a particularly altruistic decision on the part of the team.\nThe work reported here is very similar to that reported recently by another multi-national, multi-disciplinary consortium1, who screened a nearly-identical collection in a similar manner. 
Interestingly, there was very little overlap in the hit-sets identified by these two teams. The origin of this difference is unclear, though the two groups used ZIKV from different sources; this may suggest that anti-viral activity is strain-dependent, complicating the development of agents with broad-based activity. It would be very interesting to have each team cross-test the other's set of leads in their own assay format.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2523
|
https://f1000research.com/articles/5-1519/v1
|
27 Jun 16
|
{
"type": "Software Tool Article",
"title": "CoNet app: inference of biological association networks using Cytoscape",
"authors": [
"Karoline Faust",
"Jeroen Raes",
"Jeroen Raes"
],
"abstract": "Here we present the Cytoscape app version of our association network inference tool CoNet. Though CoNet was developed with microbial community data from sequencing experiments in mind, it is designed to be generic and can detect associations in any data set where biological entities (such as genes, metabolites or species) have been observed repeatedly. The CoNet app supports Cytoscape 2.x and 3.x and offers a variety of network inference approaches, which can also be combined. Here we briefly describe its main features and illustrate its use on microbial count data obtained by 16S rDNA sequencing of arctic soil samples. The CoNet app is available at: http://apps.cytoscape.org/apps/conet.",
"keywords": [
"network generation",
"network construction",
"network inference",
"association networks",
"microbial networks",
"CoNet",
"Cytoscape"
],
"content": "Introduction\n\nModern sequencing technology in combination with dedicated analysis pipelines allows determining the relative abundances of microbial community members, thereby obtaining microbial count data. Such community profiling experiments have been carried out for thousands of samples from a variety of ecosystems, ranging from the world’s oceans (Bork et al., 2015) to the human gut (Falony et al., 2016; The Human Microbiome Project Consortium, 2012).\n\nThe analysis of species abundance patterns has a long tradition in ecology (Connor & Simberloff, 1979; Diamond, 1975; Gotelli & McCabe, 2002). More specifically, co-occurrence analysis detects significant co-occurrences or mutual exclusions across samples, which are interpreted as representing ecological relationships such as mutualism or competition or being due to similar responses to environmental factors. Co-occurrence analysis is an instance of network inference, an exploratory data analysis technique that attempts to unravel relationships between objects from repeated observations. The large number of microbial count tables resulting from the multitude of recent sequencing projects (e.g. Bork et al., 2015; Falony et al., 2016; Gilbert et al., 2014; The Human Microbiome Project Consortium, 2012) opens the way to unraveling the complex relationships between microorganisms from their abundances across samples. CoNet was developed to carry out microbial network inference, but its generic design makes it applicable to any data set where objects have been observed repeatedly.\n\n\nMethods/Implementation\n\nThe CoNet app wraps the CoNet command line tool. The command line and Cytoscape 2.× app version are implemented in Java 1.6, whereas the Cytoscape 3.× app version requires Java 1.7.\n\nIn general, the CoNet app is designed with minimum contact to Cytoscape, to ensure consistent behavior across different Cytoscape versions and to ease porting to future Cytoscape versions. 
The CoNet app is linked to Cytoscape only via its main menu and graph visualization classes. The Cytoscape-version-specific implementation of the graph visualization class is loaded via reflection at run time and is entirely separated from graph generation.\n\nA major challenge for the implementation of the CoNet app is the inclusion of the large number of options available in CoNet, which allows users to customize each network inference step, from data preprocessing through threshold setting and network construction to assessment of significance. This problem was solved by implementing a single user input handling class, which collects and checks user input from the various menus and submits it to CoNet once the GO button is pushed. This design makes it possible to export and read in user settings files, which makes experiments carried out with the CoNet app more reproducible.\n\nAnother challenge is command line support. Network inference from large data sets is not feasible within Cytoscape, and in these cases CoNet is best run on the command line. To facilitate this step for the inexperienced user, the current settings of the CoNet app can be exported as a command line call by clicking the \"Generate command line call\" button. This call can then be executed on the command line by including the CoNet jar file in the class path. Networks generated on the command line can be loaded either via Cytoscape network import functions (if saved in gml format (Himsolt)) or, more conveniently, via the CoNet app (if saved in the custom gdl format). The CoNet app's manual includes a step-by-step tutorial for command line usage.\n\nThe CoNet app also integrates the popular network inference R Bioconductor package minet (Meyer et al., 2008). We decided to integrate it loosely via Rserve, a Java-R bridge capable of transferring R objects to Java and vice versa (http://rforge.net/Rserve/). Thus, advanced users can install and launch the Rserve server in R and configure the Rserve client settings (i.e. 
host and port) in the CoNet app's configuration menu. The CoNet app's manual explains Rserve installation and usage.\n\nFinally, we also implemented solutions for error and help display. The CoNet app displays help pages in html format, which allows the user to follow links within these pages. The CoNet app's pdf manual is compiled from the help pages using prince (http://www.princexml.com/). Each menu is linked to its specific help page, easing navigation.\n\nWhen an error has been captured, an error report is generated that includes the error message as well as the CoNet app's current settings.\n\nCoNet takes a presence/absence, count or abundance matrix as input, where rows represent the objects of interest and columns their observations across locations or time points. Optionally, a second input matrix can be provided. This is of interest when two different measurements have been made for the same samples, for instance counts of microorganisms and concentrations of metabolites. CoNet's output consists of a network where significantly associated objects are connected by edges.\n\nDepending on the data type, a number of filters need to be applied. For instance, for 16S rDNA count data, taxa with too few non-zero observations need to be removed and the data needs to be normalized or rarefied to account for sequencing depth differences. In the next step, the user can select from a number of different correlations (Pearson, Spearman, Kendall), similarities (mutual information, Steinhaus, distance correlation etc.) or dissimilarities (Kullback Leibler, Euclidean, Bray Curtis, Jensen-Shannon etc.) to score the association strength between the objects. For presence/absence (also termed incidence) data, the hypergeometric distribution or Jaccard distance can be chosen for the same purpose. CoNet's special strength is its capability to combine multiple such measures and/or to combine these measures with other network inference algorithms, e.g. those implemented in minet. 
The idea behind such an ensemble approach to network inference is to exploit the fact that different methods make different mistakes. If erroneous edges predicted by one method are not supported by the others, they can be filtered out, thereby reducing the number of false positives. The thresholds for the measures can be either set manually (using sliding windows for bounded measures) or automatically, by specifying the desired number of edges in the output network. The network can then be displayed either as a multigraph (with as many edges between two objects as selected measures) or as a graph (where scores of individual measures are combined). Optionally, the significance of the associations can be computed, e.g. with a permutation test. Multiple testing correction can be performed with either Bonferroni or Benjamini-Hochberg procedures. Figure 1 summarizes this workflow.\n\nCoNet offers a series of features that distinguish it from other network inference tools, such as its support for object groups. This feature allows a user to assign objects to different groups (e.g. metabolites and enzymes). Relationships can then be inferred only between different object types (resulting in a bipartite network) or only within the same object type. CoNet's treatment of two input matrices is built upon this feature.\n\nFurthermore, CoNet can handle row metadata, which, for instance, allows links to be inferred between objects at different hierarchical levels (e.g. between order Lactobacillales and genus Ureaplasma) while preventing links between different levels of the same hierarchy (e.g. Lactobacillales and Lactobacillaceae). CoNet can also parse sample metadata such as temperature or oxygen concentration, which are then correlated with the objects in the input matrix while being excluded from normalization. 
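As a toy illustration of the ensemble idea described above (not CoNet's actual code; it assumes the count matrix is a plain Python list of taxon rows and uses a tie-free Spearman rank transform), an edge can be kept only when two measures agree:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equally long sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rank(v):
    """Rank transform without tie handling (fine for this sketch)."""
    order = sorted(range(len(v)), key=v.__getitem__)
    r = [0.0] * len(v)
    for i, idx in enumerate(order):
        r[idx] = i + 1.0
    return r

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the ranks."""
    return pearson(rank(x), rank(y))

def ensemble_edges(matrix, threshold=0.6):
    """Keep an edge between two taxon rows only if BOTH Pearson and
    Spearman support it, filtering out method-specific errors."""
    edges = []
    for i in range(len(matrix)):
        for j in range(i + 1, len(matrix)):
            p = pearson(matrix[i], matrix[j])
            s = spearman(matrix[i], matrix[j])
            if abs(p) >= threshold and abs(s) >= threshold:
                edges.append((i, j, round(p, 3), round(s, 3)))
    return edges
```

With real data, CoNet additionally supports dissimilarities such as Bray Curtis and Kullback Leibler, and can derive the thresholds from a desired edge count rather than a fixed cutoff.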
In addition, CoNet recognizes abundance tables generated from biom files (McDonald et al., 2012) and, in its Cytoscape 3.× version, reads biom files in HDF5 format directly, using the BiomIO Java library (Ladau). Phylogenetic lineages in these tables are automatically parsed and displayed as node attributes of the resulting network. CoNet also computes a few node properties, such as a node's total edge number as well as the number of positive and negative edges, the total row sum and the number of samples in which the object was observed (e.g. was different from zero or a missing value).\n\nTo ease the selection of suitable preprocessing steps, CoNet can display input matrix properties and recommendations based on them. Importantly, CoNet can also handle missing values, by omitting sample pairs with missing values from the association strength calculation. Finally, CoNet supports a few input and output network formats absent in Cytoscape, including adjacency matrices (import), dot (the format of GraphViz (http://www.graphviz.org/)) and VisML (VisANT's format (Hu et al., 2013)) (both for export).\n\n\nResults\n\nWe demonstrate the abilities of the CoNet app on a real-world example taken from the Qiita database (The Qiita Development Team, 2015). The Qiita database, which merges the previously separated QIIME and EMP databases, is a rich resource for processed 16S rDNA sequence data: each study is accompanied by a microbial count file in biom format computed from the raw sequence data with the QIIME pipeline (Caporaso et al., 2010).\n\nIn our example, we will demonstrate how to build an association network from microbial count data obtained from arctic soil samples (Chu et al., 2010). This data set was chosen for its sample number (sufficient to compute associations but short run times) as well as for the biological insights that are gained from the network analysis. 
The example showcases the CoNet app's ability to compute associations between higher taxonomic levels and to take environmental metadata into account, which is important for the interpretation of predicted microbial relationships.\n\nIn the Qiita database, the arctic soil study can be found under the title \"Soil bacterial diversity in the Arctic is not fundamentally different from that found in other biomes\" (study identifier: 104, see Supplementary material). This data set consists of 52 soil samples from the arctic tundra, which were sequenced with Roche FLX using primers targeting the V1V2 region of the 16S rDNA. The processed data can be downloaded from the Qiita study page (in Data Types, click on 16S, then click on the URL appearing below, expand the Files network, click on the file object containing BIOM in its name and then download the file with suffix .biom). The study also provides a mapping file with sample metadata (on the Qiita study page, click Sample Information and then the Sample Info button). We extract the pH of each sample by loading the sample information file into Excel, selecting the sample_name and ph columns and saving them to a separate, tab-delimited file.\n\nThe CoNet app is composed of the main window and several menus, including a \"Data\" menu with input and output options, a \"Preprocessing and filter\" menu, a \"Methods\" menu to select network construction methods, a \"Merge\" menu where the user can specify how results from different network construction methods should be merged, a \"Randomization\" menu for the assessment of edge significance and finally a \"Config\" menu for configuration.\n\nIn the following, we will build a network from the arctic tundra biom file. First, in the \"Data\" menu, the arctic tundra biom file is selected and the option \"Biom file in HDF5\" is enabled (direct biom file parsing is only supported in the Cytoscape 3.× version of the CoNet app). 
In the sub-menu \"Metadata and Features\", the option \"explore links between higher-level taxa\" is enabled together with the option \"Parent-child exclusion\" to compute correlations between higher-level taxa while preventing edges between taxa within the same lineage (e.g. Lactobacillales and Lactobacillaceae). Sample metadata (pH in this case) are passed to the CoNet app via the \"Select file\" button in the \"Features\" corner of the \"Metadata and Features\" sub-menu. Both \"Transpose\" and \"Match samples\" need to be enabled to convert sample metadata into rows and to match sample metadata identifiers to biom file identifiers.\n\nIn the \"Preprocessing and filtering\" menu, the parameter \"row_minocc\" is set to 20 to discard taxa with less than 20 non-zero values across samples. The sum of the discarded rows can be kept by enabling \"Keep sum of filtered rows\". In addition, \"col_norm\" is activated to divide each matrix entry by the sum of its corresponding column, thus avoiding the inference of spurious links due to sequencing depth differences.\n\nIn the \"Methods\" menu, Pearson, Spearman, Bray Curtis, Kullback Leibler and mutual information are selected. Their thresholds can be automatically set such that 1,000 top-scoring and 1,000 bottom-scoring edges (for anti-correlations) are included for each measure in the initial network, by typing \"1000\" as the value of the edge selection parameter and enabling \"Top and bottom\" in the \"Threshold setting\" sub-menu. 
At this stage, pushing \"GO\" will result in a multigraph, where microbial taxa are connected by up to five different measure-specific edges.\n\nThe significance of edges, that is their p-values, is computed in two CoNet launches, the first of which generates the permutation distributions and an intermediate network and the second the bootstrap distributions and the final network.\n\nFor the first launch, the user selects the \"edgeScores\" routine in the \"Randomization\" menu, with \"shuffle_rows\" as resampling parameter, and enables \"Renormalize\" (for details on renormalization, see Faust et al., 2012). The user then specifies a folder and a file name to export permutation scores and enables \"Save randomizations\" in the \"Save\" corner of the \"Randomization\" menu. Pushing \"GO\" will then launch the computation of edge- and measure-specific permutation distributions. Permutation alone is sufficient to set p-values on the edges, but we found that a combination of permutation and bootstrap is more stringent (Faust et al., 2012). Thus, the network generated in this first step should be considered as an intermediate result.\n\nIn order to compute bootstrap distributions and the final network, the user prepares a second CoNet launch, by selecting the \"bootstrap\" resampling method and a p-value merging method, for instance \"brown\" (Brown 1975), in the \"Randomization\" menu. P-value merging will unite measure-specific p-values for the same edge into a single edge-specific p-value. \"Renormalize\" is disabled and \"benjaminihochberg\" is selected as the multiple testing correction method. In the \"Save\" corner of the \"Randomization\" menu, another file name should be specified to store bootstrap distributions in a separate file. P-values of the final network are computed from both permutation and bootstrap distributions, thus previously generated permutation distributions have to be loaded into the CoNet app. 
This is done by selecting the permutation file generated in the previous step with the \"Load null distributions\" button. Pushing \"GO\" will then result in the final network, shown in Figure 2A.\n\nA: Result network obtained for bacterial counts from the arctic soil 16S rDNA example data set, downloaded from the Qiita database. B: Same as A, but with negative edges discarded. The remaining edges form clusters with different microbial composition. C: Neighbors of the pH node form two clusters: one correlated and the other anti-correlated to pH, which reflects the opposite pH preferences of the cluster members.\n\nThe CoNet app does not layout resulting networks, to leave the choice of the (potentially time-consuming) layout algorithm to the user. Here, the \"Organic\" layout from yFiles was applied and nodes were colored according to their class using Cytoscape's node coloring functionality.\n\nOnce permutation and bootstrap distributions have been computed, network generation can be quickly repeated by loading both distributions via the \"Load null distributions\" and \"Load randomization file\" buttons, respectively. Figure 2B shows the same network re-generated from pre-computed distributions, but with \"positive edges only\" enabled in the \"Preprocessing and filter\" menu. 
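The "benjaminihochberg" correction selected above can be sketched in a few lines (an illustrative stdlib reimplementation of the standard procedure; CoNet applies its own):

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Return a keep/discard flag per p-value under Benjamini-Hochberg
    false discovery rate control: sort the p-values, find the largest
    rank k with p_(k) <= k * alpha / m, and keep the k smallest."""
    m = len(pvals)
    order = sorted(range(m), key=pvals.__getitem__)
    max_k = 0
    for k, idx in enumerate(order, start=1):
        if pvals[idx] <= k * alpha / m:
            max_k = k
    keep = [False] * m
    for k, idx in enumerate(order, start=1):
        if k <= max_k:
            keep[idx] = True
    return keep
```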
Figure 2C displays the neighbors of the pH node, which were selected and instantiated as a separate network using Cytoscape's node selection function \"First neighbors of selected nodes\" for undirected networks.\n\nThe computation of permutation and bootstrap distributions took ~5 minutes each for 100 iterations on a standard laptop.\n\nInput and settings files for the use case can be found in the Supplementary material.\n\n\nDiscussion\n\nAfter removal of negative edges, the arctic soil network forms two prominent clusters (Figure 2B), which are enriched with representatives of different classes, such that one cluster features mostly members of the Solibacteres and Acidobacteria, whereas the other consists mostly of Alphaproteobacteria and Chloracidobacteria. When examining the neighbors of the pH node (Figure 2C), members of the former cluster are found to be anti-correlated to pH, whereas members of the latter are correlated to it. Thus, network analysis helps to identify pH as a major driving factor for microbial soil communities, as has been found previously (Fierer & Jackson, 2006). The correlations with pH have also been described by the authors of the soil study (Chu et al., 2010). However, network analysis adds more details (correlations are computed on lower taxonomic levels) and discovers additional taxonomic groups impacted by pH, e.g. Chloracidobacteria. Furthermore, network inference suggests candidates for cross-feeding. For instance, the neighboring nodes of Bradyrhizobium, a nitrogen fixer that produces ammonium, may represent taxa that depend on ammonium as main nitrogen source.\n\nThe CoNet app offers mostly similarity-based network inference. Complementary apps that implement various Bayesian network inference algorithms are Cyni Toolbox (http://www.proteomics.fr/Sysbio/CyniProject), bayelviraApp (http://apps.cytoscape.org/apps/bayelviraapp) and MONET (Lee & Lee, 2005). 
ARACNE (http://apps.cytoscape.org/apps/aracne) exploits mutual information to build networks (Margolin et al., 2006). ExpressionCorrelation (http://www.baderlab.org/Software/ExpressionCorrelation) and MetaNetter (http://apps.cytoscape.org/apps/metanetter) also offer similarity-based network inference techniques, the former specialized to gene expression data and the latter to metabolomics data. Results from these different network inference approaches could be combined with Cytoscape tools such as Merge Networks.\n\n\nConclusion\n\nIn this article, we have demonstrated the CoNet app on a typical 16S data set. Alternative use cases are for instance the inference of function networks (i.e. co-occurrence of orthologous gene groups) from metagenomics or metatranscriptomics data or of taxon-metabolite networks from 16S and metabolomics data.\n\nWe hope that CoNet's integration into Cytoscape will lower the barrier to its use for users less familiar with the command line version. Due to its flexibility and comprehensiveness, CoNet can be useful in a variety of applications and we thus hope it will find a broad user base.\n\n\nSoftware availability\n\nCoNet app page: http://apps.cytoscape.org/apps/conet\n\nCoNet tool web page: http://systemsbiology.vub.ac.be/conet\n\nLatest source code: http://sourceforge.net/projects/conet/\n\nArchived source code as at the time of publication: Zenodo, Biological network inference in Cytoscape, doi: 10.5281/zenodo.55715 (Faust & Raes, 2016)\n\nLicense: GNU General Public License version 2.0",
"appendix": "Author contributions\n\n\n\nK.F. developed the Cytoscape plugin, J.R. initiated and supervised the work. K.F. wrote the paper. Both authors agreed to the final content of the article.\n\n\nCompeting interests\n\n\n\nThe authors declare that they have no competing interests.\n\n\nGrant information\n\nK. F. and J.R. are supported by the Research Foundation Flanders (FWO) and the Flemish agency for Innovation by Science and Technology (IWT).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe would like to thank Gipsi Lima-Mendez and other members of the Raes lab, as well as all users of the CoNet app that have sent us constructive feedback or error reports that helped to improve this app. We further are indebted to Fah Sathirapongsasuti, Curtis Huttenhower and Jean-Sébastien Lerat, who significantly contributed to the command line version of CoNet.\n\n\nSupplementary material\n\nUse case data in CoNet app: inference of biological association networks using Cytoscape.\n\nThis file contains microbial count data, sample metadata, permutation settings and bootstrap settings associated with this submission. Description of each dataset is provided in the text file.\n\nClick here to access the data.\n\n\nReferences\n\nBork P, Bowler C, de Vargas C, et al.: Tara Oceans. Tara Oceans studies plankton at planetary scale. Introduction. Science. 2015; 348(6237): 873. PubMed Abstract | Publisher Full Text\n\nCaporaso JG, Kuczynski J, Stombaugh J, et al.: QIIME allows analysis of high-throughput community sequencing data. Nat Methods. 2010; 7(5): 335–336. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChu H, Fierer N, Lauber CL, et al.: Soil bacterial diversity in the Arctic is not fundamentally different from that found in other biomes. Environ Microbiol. 2010; 12(11): 2998–3006. 
PubMed Abstract | Publisher Full Text\n\nConnor EF, Simberloff D: The Assembly of Species Communities: Chance or Competition? Ecology. 1979; 60(6): 1132–1140. Publisher Full Text\n\nDiamond JM: Assembly of species communities. In Ecology and evolution of communities. Cody M, Diamond JM eds., Harvard University Press, 1975; 342–444. Reference Source\n\nFalony G, Joossens M, Vieira-Silva S, et al.: Population-level analysis of gut microbiome variation. Science. 2016; 352(6285): 560–564. PubMed Abstract | Publisher Full Text\n\nFaust K, Sathirapongsasuti JF, Izard J, et al.: Microbial co-occurrence relationships in the human microbiome. PLoS Comput Biol. 2012; 8(7): e1002606. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFaust K, Raes J: Biological network inference in Cytoscape. Zenodo. 2016. Data Source\n\nFierer N, Jackson RB: The diversity and biogeography of soil bacterial communities. Proc Natl Acad Sci U S A. 2006; 103(3): 626–631. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGilbert JA, Jansson JK, Knight R: The Earth Microbiome project: successes and aspirations. BMC Biol. 2014; 12: 69. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGotelli NJ, McCabe DJ: Species Co-Occurrence: A Meta-Analysis of J. M. Diamond's Assembly Rules Model. Ecology. 2002; 83(8): 2091–2096. Publisher Full Text\n\nHimsolt M: GML: A portable Graph File Format [Online]. Reference Source\n\nHu Z, Chang YC, Wang Y, et al.: VisANT 4.0: Integrative network platform to connect genes, drugs, diseases and therapies. Nucleic Acids Res. 2013; 41(Web Server issue): W225–W231. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLadau J: Lightweight, portable library for working with HDF5 BIOM files using Java [Online]. Reference Source\n\nLee PH, Lee D: Modularized learning of genetic interaction networks from biological annotations and mRNA expression data. Bioinformatics. 2005; 21(11): 2739–2747. 
PubMed Abstract | Publisher Full Text\n\nMargolin AA, Nemenman I, Basso K, et al.: ARACNE: An algorithm for the reconstruction of gene regulatory networks in a mammalian cellular context. BMC Bioinformatics. 2006; 7(Suppl 1): S7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcDonald D, Clemente JC, Kuczynski J, et al.: The Biological Observation Matrix (BIOM) format or: how I learned to stop worrying and love the ome-ome. GigaScience. 2012; 1(1): 7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMeyer PE, Lafitte F, Bontempi G: minet: A R/Bioconductor package for inferring large transcriptional networks using mutual information. BMC Bioinformatics. 2008; 9: 461. PubMed Abstract | Publisher Full Text | Free Full Text\n\nThe Human Microbiome Project Consortium: A framework for human microbiome research. Nature. 2012; 486(7402): 215–221. PubMed Abstract | Publisher Full Text | Free Full Text\n\nThe Qiita Development Team: Qiita: report of progress towards an open access microbiome data analysis and visualization platform. In: 14th Python in Science Conference (SCIPY 2015), 2015. Reference Source"
}
|
[
{
"id": "14620",
"date": "13 Jul 2016",
"name": "Paul Wilmes",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe article describes a Cytoscape plugin “CoNet app” designed for the inference of networks from microbial abundance or incidence matrices. The effort combining a versatile network inference tool with a user-friendly and widely used network visualization and analysis framework, such as Cytoscape is very valuable to the community.\nI would suggest certain improvements to the article to make it in itself more valuable for potential users to judge the applicability of the plugin to their datasets.\n\nIntroduction: As the authors of the plugin are well aware (being co-authors of “Correlation detection strategies in microbial data sets vary widely in sensitivity and precision” 1), co-abundance or co-occurrence analysis is an approach to ecological data interpretation that is not without caveats and as such, the article is lacking both mention of limitations of the approach and references to the successful use cases of earlier versions of CoNet. I suggest including both in the introduction.\nThe introduction also does little to explain the approach to potential users who are not familiar with the concept. E.g. the sentence “More specifically, co-occurrence analysis detects significant co-occurrences or mutual exclusions across samples, which are interpreted as representing ecological relationships such as mutualism or competition or being due to similar responses to environmental factors.” mixes up observations and analyses with interpretation. 
Similarly, relating to the first sentence of the introduction, microbial count data are not obtained from relative abundances, but microbial counts taken to infer relative abundances (the sentence is also ambiguous as to what these abundances are relative to). Furthermore, the second but last sentence of the introduction “The large number of microbial count tables resulting from the multitude of recent sequencing projects…” can be interpreted to advise for the co-analysis of results from different studies, which is most often not possible. These parts should be revised for clarity.\nMethods/Implementation: More details on the algorithms would be useful, or alternatively references to other publications which describe CoNet, as relates to the following points:\n“its capability to combine multiple such measures and/or to combine these measures with other network inference algorithms”,\n\n“CoNet can also parse sample metadata such as temperature or oxygen concentration, which are then correlated with the objects in the input matrix while being excluded from normalization.” and\n\n“Phylogenetic lineages in these tables are automatically parsed”. Also, what are positive and negative edges? How is mutual information integrated with measures which can be positive or negative?\nUse case: It would be helpful to shortly describe the size of the dataset (number of OTUs and number of samples) as part of the sentence “This data set was chosen for its sample number (sufficient to compute associations but short run times) as well as for the biological insights that are gained from the network analysis.” A general advice on the required sample number and or relationship between numbers of analyzed features and sample numbers would also be helpful. In addition, are the 100 iterations performed in this example a realistic number of iterations to be used in such an analysis?\nThe formulation “The significance of edges, that is their p-values” is a bit unfortunate. 
On a similar note, next to the permutations, is there a way in CoNet or the CoNet app to assess association strengths? An example of how the assessment of edge significance affects network size and structure would be informative. Figures: The large heading in Figure 1 should be removed. Figure 2 would benefit from a heading. The labels of figure 2 are not legible. It is unclear from the text and not mentioned in the legend, how the “classes” used for coloring nodes are defined. Are these classes in the taxonomic sense or different kinds of data? The color scheme for positive and negative edges should be explained. In panel C, the pH node should be more clearly pointed out.\n\nSmall comments: The referenced “Brown 1975” does not appear in the references.\nThe capitalization of “P-value” is inconsistent.\nAs the buttons in the app are actually called that, refer to “Data menu”, “Preprocessing and filter menu” etc.",
"responses": [
{
"c_id": "2220",
"date": "14 Oct 2016",
"name": "Karoline Faust",
"role": "Author Response",
"response": "\"The effort combining a versatile network inference tool with a user-friendly and widely used network visualization and analysis framework, such as Cytoscape is very valuable to the community.\" We would like to thank the reviewer for this appreciation of our work. Introduction In response to the reviewers' comments, we have rewritten the introduction, thereby rephrasing problematic sentences, pointing out limitations of microbial network inference and citing the evaluation. We also added a paragraph in the discussion to mention applications of CoNet. Methods/Implementation “its capability to combine multiple such measures and/or to combine these measures with other network inference algorithms”, We included an overview table comparing selected measures of association. We also added a paragraph on how measures can be combined in CoNet. “CoNet can also parse sample metadata such as temperature or oxygen concentration, which are then correlated with the objects in the input matrix while being excluded from normalization.” We improved this explanation of CoNet's treatment of sample metadata. “Phylogenetic lineages in these tables are automatically parsed”. We provided an example to better explain what we mean. Also, what are positive and negative edges? How is mutual information integrated with measures which can be positive or negative? We added an explanation. Use case The OTU number was added to the following sentence (which already listed the sample number): This data set consists of 4,022 operating taxonomic units and 52 soil samples from the arctic tundra, which were sequenced with Roche FLX using primers targeting the V1V2 region of the 16S rDNA. \"A general advice on the required sample number and or relationship between numbers of analyzed features and sample numbers would also be helpful.\" In general, the number of false positives increases with decreasing sample number. 
While assessment of significance counter-balances this effect, it is unreasonable to compute a correlation from a few observations only, even if it is strongly significant. However, we cannot provide a formula to compute where exactly to put the cut-off. \"In addition, are the 100 iterations performed in this example a realistic number of iterations to be used in such an analysis?\" We saw previously that there is no big difference between networks computed with 100 or 1000 iterations. The reason is that we are not computing p-values from a pure permutation test, where small p-values can only be reached by performing a sufficient number of iterations. Instead, we compute the p-value parametrically as the mean of the permutation distribution under the bootstrap distribution. Estimating the mean and standard deviation of normal distributions is less sensitive to iteration number than computing parameter-free p-values. We added this explanation to the text. \"The formulation “The significance of edges, that is their p-values” is a bit unfortunate. On a similar note, next to the permutations, is there a way in CoNet or the CoNet app to assess association strengths? An example of how the assessment of edge significance affects network size and structure would be informative.\" The p-value is an assessment of association strength. So are the scores of the measures themselves, e.g. Pearson's r and Spearman's rho, which are correlated with the p-value. We have added a remark explaining this to the text. Assessing the significance usually discards edges from the initial network, in some cases even removing all initial edges. The number of edges removed depends on the initially selected thresholds. In the use case, the initial network consists of 10000 edges, 1546 of which remain after assessment of significance and merging of measure-specific p-values into a single p-value. 
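The parametric p-value computation described above (the permutation-distribution mean evaluated under the bootstrap distribution) can be sketched in a few lines. This is an illustrative reading of that sentence, assuming a normal fit to the bootstrap scores; it is not CoNet's actual implementation, and the function and variable names are ours:

```python
from math import erf, sqrt
from statistics import mean, stdev

def parametric_p_value(permutation_scores, bootstrap_scores):
    """Two-tailed p-value of the permutation-distribution mean,
    evaluated under a normal fit to the bootstrap distribution.
    Illustrative sketch only, not CoNet's code."""
    boot_mean = mean(bootstrap_scores)
    boot_sd = stdev(bootstrap_scores)
    # z-score of the permutation mean under the bootstrap normal fit
    z = (mean(permutation_scores) - boot_mean) / boot_sd
    cdf = 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF
    return 2.0 * min(cdf, 1.0 - cdf)
```

Because only the means and standard deviations of the two score distributions enter the formula, the estimate stabilizes with far fewer iterations than a parameter-free permutation test would need, which is the point made above.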
The exact edge number in the final network may vary slightly from run to run, due to variations in the permutation and bootstrap distributions. \"Figures: The large heading in Figure 1 should be removed.\" This heading was not intended as a Figure heading but as a heading to divide the text. We improved the layout and added headers to Figures 1 and 2. \"Figure 2 would benefit from a heading. The labels of figure 2 are not legible.\" Our aim here was to show the networks as obtained with the CoNet app when executing the use case, but we understand the point of the reviewer. As a compromise, we have now removed the labels and added a class-level color code. \"It is unclear from the text and not mentioned in the legend, how the “classes” used for coloring nodes are defined. Are these classes in the taxonomic sense or different kinds of data?\" These are taxonomic classes. We clarified this in the caption of Figure 2. \"The color scheme for positive and negative edges should be explained.\" We added an explanation of the color scheme in the main text and to the caption of Figure 2. \"In panel C, the pH node should be more clearly pointed out.\" The pH node stands out by differing in shape from the taxon nodes. We have clarified this by adding a legend to Figure 2. Small comments: \"The referenced “Brown 1975” does not appear in the references.\" We apologize for this oversight. We have added the reference. \"The capitalization of “P-value” is inconsistent.\" We now use p-value with a lower case p, unless it is the first word of a new sentence, where we use the upper case P. \"As the buttons in the app are actually called that, refer to “Data menu”, “Preprocessing and filter menu” etc. \" Done"
}
]
},
{
"id": "15254",
"date": "12 Sep 2016",
"name": "Alexander Eiler",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nBe more precious, here. \"The idea behind such an ensemble approach to network inference is to exploit the fact that different methods make different mistakes.\"\n\nThese are different statistical inferences so based on the underlaying algorithms results will be different. Same may be better suited for parametric or none-parametric data, some perform better with larger or smaller sample numbers. The different methods have also different statistical power do identify significances. Some may produce more false positives or false negatives than others. Some guidance and references to statistical literature could be provided in the article.\nI really liked to see an implementation that calculates false discovery rate (after Benjamin Hochberg) over all statistical comparisons.",
"responses": [
{
"c_id": "2219",
"date": "14 Oct 2016",
"name": "Karoline Faust",
"role": "Author Response",
"response": "\"These are different statistical inferences so based on the underlaying algorithms results will be different.\" Thanks for pointing this out. We have now added an overview table of the strengths and weaknesses of selected measures available in CoNet. We also added a paragraph that discusses the different ways in which these measures can be combined in CoNet. \"I really liked to see an implementation that calculates false discovery rate (after Benjamin Hochberg) over all statistical comparisons.\" CoNet does allow computing false discovery rate over all statistical comparisons either by setting the number of initial edges sufficiently high or by setting the thresholds on the individual measures sufficiently low. We have added this remark to the article. Despite the fact that multiple testing correction is in most cases only applied after edges are discarded through initial filtering, CoNet is among the microbial network inference tools with the lowest false positive rates tested in Weiss et al., The ISME Journal 2016 (https://www.ncbi.nlm.nih.gov/pubmed/26905627, supplementary Figure 10) ."
}
]
}
] | 1
|
https://f1000research.com/articles/5-1519
|
https://f1000research.com/articles/5-2490/v1
|
10 Oct 16
|
{
"type": "Software Tool Article",
"title": "CELL5M: A geospatial database of agricultural indicators for Africa South of the Sahara",
"authors": [
"Jawoo Koo",
"Cindy M. Cox",
"Melanie Bacou",
"Carlo Azzarri",
"Zhe Guo",
"Ulrike Wood-Sichra",
"Queenie Gong",
"Liangzhi You",
"Cindy M. Cox",
"Melanie Bacou",
"Carlo Azzarri",
"Zhe Guo",
"Ulrike Wood-Sichra",
"Queenie Gong",
"Liangzhi You"
],
"abstract": "Recent progress in large-scale georeferenced data collection is widening opportunities for combining multi-disciplinary datasets from biophysical to socioeconomic domains, advancing our analytical and modeling capacity. Granular spatial datasets provide critical information necessary for decision makers to identify target areas, assess baseline conditions, prioritize investment options, set goals and targets and monitor impacts. However, key challenges in reconciling data across themes, scales and borders restrict our capacity to produce global and regional maps and time series. This paper provides overview, structure and coverage of CELL5M—an open-access database of geospatial indicators at 5 arc-minute grid resolution—and introduces a range of analytical applications and case-uses. CELL5M covers a wide set of agriculture-relevant domains for all countries in Africa South of the Sahara and supports our understanding of multi-dimensional spatial variability inherent in farming landscapes throughout the region.",
"keywords": [
"spatial database",
"Africa South of Sahara",
"agricultural development",
"geographical information systems",
"data analysis"
],
"content": "Highlights\n\nSpatial datasets for development are often disciplinary and not interoperable\n\nDeveloped CELL5M as a spatial database for agricultural research and development.\n\nHarmonized +750 multi-discipline data layers at 5 arc-minute resolution\n\nKey themes include food production, agroecology, demographics, and market access\n\nSince 2010, CELL5M has been used in more than 100 published studies\n\n\nIntroduction\n\nOver 70 percent of the population in Africa South of the Sahara (SSA) live in rural areas, their livelihood and food security often depending on smallholdings and rainfed agriculture (Livingstone et al., 2011). Many are also farming some of the most degraded soils in the world (Cox & Koo, 2014), a challenge exacerbated by over-reliance on low-yielding crop varieties (Mueller et al., 2012) and inadequate market infrastructure (Guo & Cox, 2014). Erratic shifts in weather and climate-related shocks are particularly hard felt in the region (Challinor et al., 2007). Development practitioners recognize that Africa’s economic development largely hinges on smallholder investment through improved agricultural yields, nutrition, ecosystem services and marketing opportunities (Dixon et al., 2001). Historically, however, there has been a lack of reliable, granular data to inform and monitor food and agricultural policies at appropriate scales. With the launch of the Sustainable Development Goals (SDGs) (http://unstats.un.org/sdgs) —including zero global poverty and hunger by 2030—more granular, global and regional-level data need to reach decision makers for monitoring countries’ progress toward the goals.\n\nRecent progress in georeferenced data collection and dissemination has widened access to multi-disciplinary datasets and created opportunities to advance data analytics (Azzarri et al., 2016). As data capacity improves, however, the potential of georeferenced socioeconomic datasets has not been fully utilized (Azzarri et al., 2016). 
A key challenge is reconciling and harmonizing multi-disciplinary indicators that can inform agricultural investments across scales and borders. To this end, HarvestChoice (http://harvestchoice.org), a joint project between the International Food Policy Research Institute (IFPRI) and the University of Minnesota, developed the CELL5M database (http://dx.doi.org/10.7910/DVN/G4TBLF), an open access catalog of georeferenced baseline indicators covering a broad range of agriculture-relevant domains. In this paper, we provide an overview of CELL5M and present a range of tools and applications for spatial targeting and strategic decision-making.\n\n\nCELL5M Overview\n\nCELL5M is a geospatial database of biophysical and socioeconomic indicators for SSA covering four broad research domains: agriculture, agroecology, demographics and markets (Table 1). All indicators are referenced to a uniform geographical information systems (GIS) grid: a flat table populated by over 300,000 grid cells overlaying SSA at 5 arc-minute spatial resolution. Each grid cell (or pixel) is approximately 10 kilometer × 10 kilometer and holds a stack of georeferenced data layers. CELL5M currently consists of over 750 data layers, providing a unique platform for multi-faceted analysis and fine-grain visualization at the nexus of agriculture and economic development. The database serves as the core to a decision-support system enabling development practitioners and analysts to explore complex relationships between major agroecological challenges (e.g., soil and land degradation) and socioeconomic trends (e.g., poverty, health, and nutrition) (Azzarri et al., 2016). The structure of CELL5M allows for simplified numerical aggregations of gridded data along specific geographic domains, either sub-nationally (e.g., across administrative boundaries, agroecological zones or watersheds) or across country borders for regional analyses (e.g., Omamo et al., 2006)—all readily possible without GIS software. 
Users can visualize CELL5M indicators through HarvestChoice Mappr (http://harvestchoice.org/mappr) or download from HarvestChoice Dataverse at http://dataverse.harvard.edu/dataverse/harvestchoice (HarvestChoice, 2016a).\n\nTo refer to a cell’s boundary at any given spatial resolution, we created a universal identification system based on a basic unit of spatial analysis: the global grid cell (HarvestChoice, 2016b). In GIS, one typically uses coordinates (latitude and longitude) of the upper-left and lower-right corners of the grid cell’s bounding box, or coordinates of the centroid, along with information on the projection system. To simplify identification, we universally label each cell as a sequential integer number, or grid cell ID. The grid cell ID can facilitate raster-based data analyses, aggregations and data sharing. The upper-left corner of the grid (longitude: -180.0, latitude: 90.0) starts at zero and ends at 9,331,199 in the lower-right corner (longitude: 180.0, latitude: -90.0). This system also allows for the grid cell ID to be mathematically computed at specific locations and converted to different resolutions and projection systems. This grid cell ID is used internally as the primary key of CELL5M database tables.\n\nCELL5M indicators originate from a variety of sources and partnerships, including CGIAR, World Bank, FAO, International Institute for Applied Systems Analysis (IIASA; http://iiasa.ac.at), Center for International Earth Science Information Network (CIESIN; http://ciesin.org), WorldClim (http://worldclim.org), University of East Anglia (http://cru.uea.ac.uk) and Africa Soil Information Service (AfSIS; http://africasoils.net). Raw datasets are provided in multiple spatio-temporal resolutions, geographical extents, and formats (e.g., tabular, vector and raster). They undergo harmonization routines that aim to generate standardized, cross-regional comparable statistics at uniform scale (Figure 1). 
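The arithmetic behind the grid cell ID can be sketched directly from the corner values given above. The sketch below assumes row-major numbering from the upper-left corner, which the stated corner IDs (0 and 9,331,199) imply at 5 arc-minute resolution; it is an illustration rather than the official HCID routine:

```python
def grid_cell_id(lon, lat, res_arcmin=5):
    """Sequential ID of the grid cell containing (lon, lat).
    Assumes row-major numbering from the upper-left corner
    (-180, 90), consistent with the stated corner IDs 0 and
    9,331,199; the official HCID routine may differ in details.
    Inputs must lie inside the grid (lon < 180, lat > -90)."""
    cells_per_degree = 60 // res_arcmin        # 12 cells per degree at 5'
    n_cols = 360 * cells_per_degree            # 4,320 columns worldwide
    col = int((lon + 180.0) * cells_per_degree)
    row = int((90.0 - lat) * cells_per_degree)
    return row * n_cols + col
```

With 4,320 columns and 2,160 rows the grid holds 9,331,200 cells, matching the stated ID range; changing `res_arcmin` converts the same scheme to other resolutions.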
Raster and vector layers are typically re-projected to World Geodetic System (WGS) 84, a standard coordinate system for the Earth. Raster datasets of finer resolution (e.g., 30 arc-second) are aggregated using weights (e.g., land or population weights) or summarized (e.g., population headcounts) to 5 arc-minute resolution. Conversely, we apply a disaggregation process when the source data is coarser, which is generally the case with socioeconomic datasets that are geo-referenced to administrative units. Where applicable, care is taken to ensure that country totals of disaggregated data are consistent with official national statistics. To maximize coverage across SSA, missing data are imputed using coarser statistics and prior information. The result is a stack of harmonized, interoperable datasets based on a standardized grid system. CELL5M complies with open-data standards (Open Knowledge Foundation, 2016).\n\nUsing a variety of data sources and methods, CELL5M covers four broad research domains: biophysical, agricultural production, socio-economics and infrastructure (1). Using a combination of data resampling and harmonization routines (2), raw datasets are converted to a standard raster grid with a resulting set of uniform indicators across space and time (3). Indicators are distributed across platforms via application program interface and web mapping services (4). 
These services are freely and openly accessible through end-user tools (e.g., Mappr and Tablr, available at http://harvestchoice.org/) and decision-support systems (5); Africa RISING, FAOSTAT, the World Bank’s Living Standards Measurement Study-Integrated Surveys on Agriculture (LSMS-ISA) and the Bill and Melinda Gates Foundation (BMGF) already incorporate CELL5M into their own analytical platforms.\n\n\nKey data layers\n\nThis section provides additional methodological details on example key datasets included in CELL5M.\n\nBeyond national-level assessments, spatially-disaggregated crop production statistics are the cornerstone of any analysis that explores the social, economic and environmental consequences of agricultural change and policies. The Spatial Production Allocation Model (SPAM) developed by the International Food Policy Research Institute (IFPRI) generates highly disaggregated, global distribution of area, production and yield for 42 commodities—accounting for 90 percent of the world’s crop production (You et al., 2014). To generate these data layers, geospatial information on crops—including subnational crop production statistics, satellite-derived land cover imagery, maps of irrigated areas, biophysical crop suitability assessments, population densities, cropping intensities and prices—is integrated to generate a set of prior estimates. These priors are then fed into an optimization model that applies cross-entropy principles, and area and production accounting constraints to allocate crops into individual pixels of a global grid at 5 arc-minute resolution (You & Wood, 2006; You et al., 2009) (Figure 2). The result for each grid cell is the area, production, value of production, and yield of each crop, split by the shares grown under irrigated, high-input rainfed, low-input rainfed and subsistence rainfed conditions.
CELL5M includes the SSA extent of SPAM; global coverage of SPAM data layers is available at http://mapspam.info.\n\nSPAM integrates information on crops (e.g., subnational crop production statistics, satellite-derived land cover data, maps of irrigated areas, biophysical crop suitability assessments, population densities, cropping intensities and prices) and cross-entropy principles to allocate crops into individual pixels of a GIS database. The result for each pixel is the area (shown above), production, value of production and yield of each crop.\n\nFarm households need access to markets to support agricultural and rural development, particularly in poorer regions. Challenging road conditions and inadequate infrastructure add to travel time and transportation cost, limiting farmers’ opportunity to purchase inputs and sell produce from remote crop production areas. The conventional method of measuring the Euclidean distance between two points in space (i.e., farm-gate and market) ignores the terrain, road conditions and infrastructure status, and hence does not accurately capture travel time. Estimates of the travel time to markets provide a better proxy for market accessibility since they combine distance with other information including road quality, slope, land cover, and mode of transportation (Guo & Cox, 2014). To estimate market accessibility, we first identify the locations of different market centers and their sizes using population estimates from the Global Rural Urban Mapping Project (CIESIN et al., 2011). Then the travel times from farm-gate to the nearest cities of different population sizes are calculated using a spatial cost-distance algorithm and a combination of global spatial data layers including road network and type, elevation, slope, country boundaries, and land cover.
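The cost-distance step can be illustrated with a minimal Dijkstra-style traversal over a per-cell travel-cost raster. This sketch uses 4-neighbour moves and a uniform cell size for brevity; it is a generic illustration of the technique, not the authors' implementation:

```python
import heapq

def travel_time_grid(cost, targets):
    """Accumulate least travel cost from every cell to the nearest
    target (market) cell, moving between 4-neighbours over a
    per-cell cost raster. Generic cost-distance sketch only."""
    rows, cols = len(cost), len(cost[0])
    dist = [[float("inf")] * cols for _ in range(rows)]
    heap = []
    for r, c in targets:                  # markets have zero travel time
        dist[r][c] = 0.0
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r][c]:                # stale heap entry, skip
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]     # cost of entering the neighbour
                if nd < dist[nr][nc]:
                    dist[nr][nc] = nd
                    heapq.heappush(heap, (nd, nr, nc))
    return dist
```

In practice the cost raster would encode road type, slope and land cover, and 8-neighbour moves with geodesic cell sizes would replace the simplifications here.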
CELL5M includes travel times to markets where populations are 20K (Figure 3), 50K, 100K, 250K, and at least 500K.\n\nWe estimate travel time to nearest market centers (cities) of different population sizes using a spatial cost-distance algorithm and a combination of global spatial data layers including road network and type, elevation, slope, country boundaries, water and land cover. Source: Authors (available from CELL5M).\n\nPoverty data layers in CELL5M are based on the comparison between household per-capita consumption expenditure and the $1.90 or $3.10/per-capita/day poverty lines (Figure 4), expressed in international equivalent purchasing power parity (PPP) dollars, circa 2011 (World Bank, 2014). By basing indicators on nationally- and regionally-representative household survey data, such as Household Income and Consumption Expenditure Survey (HICE), Integrated Household Survey (IHS), and Living Standards Measurement Study (LSMS), we avoid challenges with methods that combine national accounts and microdata (Chen & Ravallion, 2008; Deaton, 2005; Ravallion, 2003). Using microdata with expansion factors and national PPP adjustments guarantees the validity of national and subnational estimates and, along with data harmonization, allows cross-country and time comparisons based on the purchasing power of the local currency in each survey year. Results are further validated by comparing the statistics calculated from microdata with official national indicators reported by World Bank’s PovcalNet (http://iresearch.worldbank.org/PovcalNet). CELL5M includes 36 individual poverty and income data layers disaggregated across rural and urban domains.\n\nRatios are derived from a series of 41 nationally representative household surveys conducted around 2008 for the majority of countries. Monthly per capita expenditure is converted to 2011 PPP dollars and a series of derived poverty statistics are estimated and mapped across all representative administrative units. 
Each survey map is rasterized to a uniform 5-arc-minute grid. Urban and rural estimates are applied to urban and rural grid cells, respectively. Source: Authors (available from CELL5M).\n\nHarvestChoice’s grid-based crop modeling platform uses the Decision Support System for Agrotechnology Transfer (DSSAT) (Hoogenboom et al., 2009; Jones et al., 2003) to simulate crop growth and yield. The platform integrates biophysical data layers from CELL5M (e.g., crop geography, crop performance baseline, soil properties and climate characteristics) and estimates crop productivity response under various ‘what-if’ scenarios of change in agroecological conditions and farm management practices (e.g., maize profitability in response to doubling fertilizer application rates). The modeling platform has been used, for example, in ex-ante impact assessments of climate change (Nelson et al., 2009), agricultural technologies (Rosegrant et al., 2014), and climate variability associated with regional drought (Cervigni & Morris, 2016). CELL5M includes model-derived indicators on maize yield variability in low and high-input rainfed production systems (Koo & Cox, 2014).\n\n\nCELL5M use-cases\n\nWell over 100 published manuscripts have used CELL5M datasets since 2010, from various institutions around the globe (e.g., see Table 2). For example, CELL5M has been utilized to define and characterize study areas (e.g., van Wart et al., 2013); estimate market travel times (e.g., Damania et al., 2016); explore changes in the geography of crop production (e.g., Beddow & Pardey, 2015); calculate local agricultural commodity prices (e.g., Fjelde, 2015); map the threat of potential plant diseases (e.g., Kriticos et al., 2015); model climate change adaptations in agriculture (e.g., Robinson et al., 2015); and serve as a general data framework (e.g., Kwon et al., 2016).
CELL5M datasets have also been widely used in GIS training courses at academic institutions (e.g., Deshazor, 2014), research grant proposals (e.g., Ousmane Badiane, personal communication, January 28, 2016), and agricultural development investment strategies (e.g., Stanley R. Wood, personal communication, February 12, 2016). The following sections describe examples in which partner organizations consume CELL5M to support food policy-relevant analyses.\n\nA bibliography of these publications can be found in the Supplementary Information.\n\nThe Association for Strengthening Agricultural Research in Eastern and Central Africa (ASARECA) overlaid three key geospatial data layers from CELL5M—population density, market accessibility and agricultural potential—to construct ‘Agricultural Development Domains’ for investment targeting. Each domain is a distinct geographic area, where agricultural conditions (e.g., demographics, infrastructure and agroecology) are relatively homogeneous and distinguishable from others. CELL5M helps ASARECA to prescribe domain-specific interventions and evaluate their impacts accordingly (Johnson & Flaherty, 2010). While grid cell-level information is necessary to generate flexible aggregations across space, CELL5M is particularly useful for such analysis because of the mixed nature of the datasets. This domain approach guides ASARECA’s upscaling of agricultural technologies across their target region in East and Central African countries (Omamo et al., 2006). For example, a spatially-explicit understanding of market accessibility underlies the development of interventions linking value-chain actors with producers. A similar domain-based approach was used to analyze the biophysical suitability of agricultural innovations to local contexts (e.g., Cox et al., 2015).\n\nThe last decade has witnessed a surge of interest in leveraging agricultural development for better nutrition.
However, there is a dearth of rigorous evidence and policy-relevant research on agriculture-nutrition linkages (Pinstrup-Andersen, 2013). As part of the Advancing Research on Nutrition and Agriculture (AReNA) initiative, HarvestChoice overlaid CELL5M indicators to an extensive series of georeferenced Demographic and Health Surveys (DHS; http://www.dhsprogram.com). Figure 5 shows the location of 28,866 clusters in SSA. Combining such datasets allows for more advanced econometric analyses to explore, for example, the spatial relationships between farming systems, biophysical characteristics, agricultural performance, market access and rural diets. For example, by overlaying agroecological indicators from CELL5M with childhood stunting data from DHS, Azzarri et al. (2016) showed that early childhood wasting is significantly more prevalent in the arid and semi-arid zones of SSA.\n\nThere are 28,866 clusters across 32 countries. IFPRI’s AReNA (Advancing Research on Nutrition and Agriculture) initiative used datasets extracted from CELL5M for each cluster location in a series of econometric analyses to investigate the relationship between agriculture and nutrition outcomes. Source: Authors.\n\nAfrica has a rich landscape of farming systems and agricultural biodiversity. This diversity presents a challenge for quantitative analyses at regional scale. In Benin et al. (2011), data layers from CELL5M were used to construct a typology of food production systems across SSA. Agricultural productivity zones (APZs) were developed by first intersecting farming systems (Dixon et al., 2001) with other indicators related to natural endowment and socioeconomic development, calculated from data retrieved from CELL5M and then applying spatial clustering techniques (Guo & Yu, 2015). 
The resulting APZs (Figure 6) provide a more refined set of spatially-explicit typologies, compared to conventional country-level typologies, and allow policy makers to refine agricultural investment strategies.\n\nCompared to maps of farming systems (Dixon et al., 2001), the APZs provide a finer distinction across the continent by further disaggregating farming systems according to the data retrieved from CELL5M and the intensity of vegetation and non-vegetation observed from satellite-based remote sensing data. The map highlights considerable variations of biophysical conditions within countries and agroecological zones, representing over 300 different classifications of APZs (see Guo & Yu, 2015, for more details on the legend).\n\nCELL5M serves as the core database powering a growing number of open-access tools (see the list at http://harvestchoice.org/products/tool) and third-party applications reaching out to multiple audiences from research analysts to decision makers (Figure 1). Gridded datasets are particularly easy to store in numerical matrices, making them relatively manageable and simple to query. This allows us to serve CELL5M indicators through a RESTful Application Programming Interface (API), which allows computer programs to access and query CELL5M data using HTTP requests. CELL5M’s centroid coordinates (i.e., latitude and longitude) may be used to graph and summarize indicators using simple visualization tools (e.g., Tableau® or Microsoft Excel). Web-based interactive tools developed by HarvestChoice, for example Mappr (http://harvestchoice.org/mappr) and Tablr (http://harvestchoice.org/tablr), use the API to return tabular, graphical and spatial representations of CELL5M indicators. CELL5M raster layers are also served through a series of map services and may be queried via any GIS software compatible with the OGC Web Map Service standard (Open Geospatial Consortium, 2016) (e.g., ArcMap, QGIS, Leaflet or GDAL).
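Because each table is a flat file keyed by grid cell ID, country-level aggregation needs nothing more than a CSV reader. In the sketch below the column names, cell IDs and values are hypothetical placeholders, not taken from an actual CELL5M table:

```python
import csv
import io
from collections import defaultdict

# Hypothetical three-column excerpt of a CELL5M table; the actual
# column names and indicator codes in the Dataverse files may differ.
raw = io.StringIO(
    "CELL5M,ISO3,maize_harvested_area_ha\n"
    "4667760,NGA,120.5\n"
    "4667761,NGA,98.0\n"
    "5012345,KEN,45.2\n"
)

# Aggregate the gridded indicator to the country level -- no GIS
# software involved, just the flat table and a CSV reader.
totals = defaultdict(float)
for record in csv.DictReader(raw):
    totals[record["ISO3"]] += float(record["maize_harvested_area_ha"])
```

The same pattern extends to any geographic domain carried as a column (administrative unit, agroecological zone or watershed), which is the "simplified numerical aggregation" described earlier.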
For GIS users, the gridded data is also available in common raster formats (GeoTIFF and Esri ASCII). The World Bank’s Living Standards Measurement Study-Integrated Surveys on Agriculture (LSMS-ISA) program uses CELL5M services to retrieve data for each survey site, including agroecological and market accessibility characteristics, to enrich its own micro-level data products (communications with the LSMS-ISA team, March 19, 2015).\n\n\nConclusions\n\nThrough open and transparent sharing of high-resolution, harmonized multi-disciplinary datasets, CELL5M supports our understanding of multi-dimensional spatial variability in farming landscapes throughout SSA and helps better target potential interventions. A growing list of use-cases shows that CELL5M’s reach has moved well beyond its initial scope and is now used by a larger pool of scientists and decision makers. With the double challenge of climate change mitigation and global food security, we anticipate an ever-growing demand for easy-to-access and easy-to-use, harmonized open datasets for agricultural research and economic development.\n\nIt is worth noting that many methodological shortcomings in harmonizing and imputing raw data from various sources still prevail. More research is required to develop reliable statistical methods to interpolate point- and administrative-level data and especially to generate reliable confidence intervals. This will also require more open datasets becoming available. Many institutions are already committed to freely opening their agriculture and nutrition datasets, yet a broad community-wide effort is still needed to improve data interoperability and utilization (GODAN, 2015).\n\nWith advances in earth monitoring systems and image frequency and resolution, data products such as CELL5M necessitate further, continued investments to ensure that new data sources are incorporated, updated, modeled, and thoroughly validated.
In that context, increased engagement with the broader community of data scientists and users is necessary for future success. We anticipate further collaboration with other emerging global data initiatives and partnerships (e.g., Global Partnership for Sustainable Development Data), especially those aimed at monitoring mechanisms towards achieving global development goals.\n\n\nData availability\n\nData tables in CSV format, grouped by theme in 18 zip-archived files, are available to download from the IFPRI HarvestChoice Dataverse at http://dx.doi.org/10.7910/DVN/G4TBLF (HarvestChoice, 2016a). Any software capable of reading comma-separated values (CSV) files can be used to analyze the data, whether a spreadsheet (e.g., Microsoft Excel), a WMS-enabled GIS desktop tool (e.g., QGIS, ArcMap), or a programming environment (e.g., MATLAB, Python, R, GDAL).",
"appendix": "Author contributions\n\n\n\nJK, MB, and ZG conceived the work. JK, MB, CA, ZG, UW, QG, and LY carried out the research. JK worked on agroecology data. CA and MB worked on demographics data. ZG led work on market access data. LY, UW, and QG worked on agricultural production statistics. MB worked on overall development of platform and applications. CC prepared the first draft of the manuscript. All authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nFinancial support of this work to the authors was provided by the Bill and Melinda Gates Foundation (Grant No. OPPGD1450), the CGIAR Research Program on Policies, Institutions, and Markets (PIM) (Activity No. 1.3.159), and the Bureau of Food Security at the United States Agency for International Development (USAID).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThe authors would like to acknowledge members of the CGIAR Consortium for Spatial Information (CGIAR-CSI) for contributing data and feedback.\n\n\nSupplementary material\n\nSupplementary Information for Table 2.\n\nSelected publications (from 2010 through August 2016) that used CELL5M for underlying data.\n\nClick here to access the data\n\n\nReferences\n\nAzzarri C, Bacou M, Cox CM, et al.: Subnational socio-economic dataset availability. Nat Clim Chang. 2016; 6: 115–116. Publisher Full Text\n\nBeddow JM, Pardey PG: Moving Matters: The Effect of Location on Crop Production. J Econ Hist. 2015; 75(1): 219–249. Publisher Full Text\n\nBenin S, Pratt AN, Wood S, et al.: Trends and spatial patterns in agricultural productivity in Africa, 1961–2010. ReSAKSS Annual Trends and Outlook Report 2011, 2011. 
Publisher Full Text\n\nCervigni R, Morris M: Confronting Drought in Africa’s Drylands: Opportunities for Enhancing Resilience. openknowledge. worldbank.org. Washington DC. 2016. Publisher Full Text\n\nChallinor A, Wheeler T, Garforth C, et al.: Assessing the vulnerability of food crop systems in Africa to climate change. Clim Change. 2007; 83(3): 381–399. Publisher Full Text\n\nChen S, Ravallion M: The developing world is poorer than we thought, but no less successful in the fight against poverty. World Bank Policy Res. Work Pap. 2008. Publisher Full Text\n\nCIESIN, IFPRI, WB, CIAT: Global Rural-Urban Mapping Project, Version 1 (GRUMPv1): Population Count Grid. Palisades, NY NASA Socioecon. Data Appl Cent. 2011. Publisher Full Text\n\nCox C, Koo J: Soil Fertility. In: Sebastian K. (Ed.), Atlas of African Agriculture Research and Development: Revealing Agriculture’s Place in Africa. International Food Policy Research Institute (IFPRI). Washington DC. 2014. Publisher Full Text\n\nCox CM, Kwon HY, Koo J: The biophysical potential for urea deep placement technology in lowland rice production systems of Ghana and Senegal. IFPRI Discussion Paper 1448. Washington DC. 2015. Reference Source\n\nDamania R, Berg C, Russ J, et al.: Agricultural Technology Choice and Transport. Am J Agric Econ. 2016. Publisher Full Text\n\nDeaton A: Measuring poverty in a growing world (or measuring growth in a poor world). Rev Econ Stat. 2005; 87(1): 1–19. Publisher Full Text\n\nDeshazor J: A Foray into GIS Instruction. 2014; (accessed 4.25.16). Reference Source\n\nDixon JA, Gibbon DP, Gulliver A: Farming systems and poverty: improving farmers’ livelihoods in a changing world. Rome: Food and Agriculture Association of the United Nations; Washington, DC: World Bank. 2001. Reference Source\n\nFjelde H: Farming or Fighting? Agricultural Price Shocks and Civil War in Africa. World Dev. 2015; 67: 525–534. 
Publisher Full Text\n\nGODAN: How can we improve agriculture, food and nutrition with open data? London. 2015. Reference Source\n\nGuo Z, Cox CM: Market access. In: Sebastian, K. (Ed.), Atlas of African Agriculture Research and Development: Revealing Agriculture’s Place in Africa. International Food Policy Research Institute (IFPRI), Washington, DC. 2014. Publisher Full Text\n\nGuo Z, Yu B: A spatial typology of agricultural productivity in Sub-Saharan Africa. In: Annabelle, H. (Ed.), Food Security: Challenges, Role of Biotechnologies and Implications for Developing Countries. Nova Science Publishers. 2015. Reference Source\n\nHarvestChoice, International Food Policy Research Institute (IFPRI); University of Minnesota: CELL5M: A Multidisciplinary Geospatial Database for Africa South of the Sahara. Harvard Dataverse, V3. 2016a. Publisher Full Text\n\nHarvestChoice, International Food Policy Research Institute (IFPRI); University of California Davis: HCID: Global Grid Cell Identification System at Multiple Spatial Resolutions. Harvard Dataverse, V1. 2016b. Publisher Full Text\n\nHoogenboom G, Jones JW, Wilkens PW, et al.: Decision Support System for Agrotechnology Transfer (DSSAT) Version 4.5 [CD-ROM].2009.\n\nJohnson E, Flaherty K: Strategic Analysis and Knowledge Support Systems for Agriculture and Rural Development in Africa: Translating Evidence into Action. International Food Policy Research Institute (IFPRI), Washington, D.C. 2010; 6. Reference Source\n\nJones JW, Hoogenboom G, Porter CH, et al.: The DSSAT cropping system model. Eur J Agron. 2003; 18(3–4): 235–265. Publisher Full Text\n\nKoo J, Cox CM: Effects Of Rainfall Variability On Maize Yields. In: Sebastian, K. (Ed.), Atlas of African Agriculture Research & Development. International Food Policy Research Institute, Washington D.C. 2014; 44–45. 
Publisher Full Text\n\nKriticos DJ, Ota N, Hutchison WD, et al.: The potential distribution of invading Helicoverpa armigera in North America: Is It just a matter of time? PLoS One. 2015; 10(3): e0119618. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKwon HY, Nkonya E, Johnson T, et al.: Global Estimates of the Impacts of Grassland Degradation on Livestock Productivity from 2001 to 2011. In: Economics of Land Degradation and Improvement – A Global Assessment for Sustainable Development. Springer International Publishing, Cham. 2016; 197–214. Publisher Full Text\n\nLivingstone G, Schonberger S, Delaney S: Sub Saharan Africa: The state of smallholders in agriculture. In: Conference on New Directions for Smallholder Agriculture, 24 – 25 January. Rome: IFAD. 2011; 20122. Reference Source\n\nMueller ND, Gerber JS, Johnston M, et al.: Closing yield gaps through nutrient and water management. Nature. 2012; 490(7419): 254–257. PubMed Abstract | Publisher Full Text\n\nNelson GC, Rosegrant MW, Koo J, et al.: Climate change: Impact on agriculture and costs of adaptation. International Food Policy Research Institute (IFPRI), Washington, DC. 2009. Publisher Full Text\n\nOmamo SW, Diao X, Wood S, et al.: Strategic Priorities for Agricultural Development in Eastern and Central Africa. International Food Policy Research Institute (IFPRI), Washington, DC. 2006; 150. Reference Source\n\nOpen Geospatial Consortium; Web Map Service [WWW Document]. (accessed 4.25.16). 2016. Reference Source\n\nOpen Knowledge Foundation; Tabular Data Package [WWW Document]. Data Protoc. (accessed 4.25.16). 2016.. Reference Source\n\nPinstrup-Andersen P: Nutrition-sensitive food systems: from rhetoric to action. Lancet. 2013; 382(9890): 375–376. PubMed Abstract | Publisher Full Text\n\nRavallion M: Measuring aggregate welfare in developing countries: How well do national accounts and surveys agree? Rev Econ Stat. 2003; 85(3): 645–652. 
Publisher Full Text\n\nRobinson S, Mason-D ’croz D, Islam S, et al.: Climate Change Adaptation in Agriculture: Ex Ante Analysis of Promising and Alternative Crop Technologies Using DSSAT and IMPACT. 2015. Reference Source\n\nRosegrant MW, Koo J, Cenacchi N, et al.: Food security in a world of natural resource scarcity: The role of agricultural technologies. International Food Policy Research Institute (IFPRI), Washington DC. 2014. Publisher Full Text\n\nvan Wart J, van Bussel LG, Wolf J, et al.: Use of agro-climatic zones to upscale simulated crop yield potential. F Crop Res. 2013; 143: 44–55. Publisher Full Text\n\nWorld Bank: Purchasing Power Parities and the Real Size of World Economies: A Comprehensive Report of the 2011 International Comparison Program. The World Bank, Washington DC. 2014. Publisher Full Text\n\nYou L, Wood S: An entropy approach to spatial disaggregation of agricultural production. Agric Syst. 2006; 90(1–3): 329–347. Publisher Full Text\n\nYou L, Wood S, Wood-Sichra U: Generating plausible crop distribution maps for Sub-Saharan Africa using a spatially disaggregated data fusion and optimization approach. Agric Syst. 2009; 99(2–3): 126–140. Publisher Full Text\n\nYou L, Wood S, Wood-Sichra U, et al.: Generating global crop distribution maps: From census to grid. Agric Syst. 2014; 127: 53–60. Publisher Full Text"
}
|
[
{
"id": "17421",
"date": "04 Nov 2016",
"name": "Paul West",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nCELL5M works toward solving a major problem in the area of food policy decision making; working with the glut of disparate data with differing spatial and temporal resolutions to identify accurate insights relevant to policy makers. Koo and colleagues provide an excellent overview of the CELL5M database. The paper also provides the basic information on the methods used to harmonize the multiple data sets.\n\nIt appears that the CELL5M team has thought through most of the pitfalls of decision making at this scale. Our main concern in presenting this to policy makers is in abstracting some of the possible inconsistencies in data scale. Here are a few specific comments related to this concern:\n\nIs there some acknowledgement where data has been disaggregated from the national scale, yet presented at a smaller scale? While there is no choice but to use spatially mismatched data in this type of work, it should be made very transparent when the data is not presented at its true scale.\n\nIt should also be transparent where indicators are created from source data at multiple scales. Combining national and sub-national data to create a fine-scale indicator can create a false sense of precision.\n\nMany of the indicators are likely derived from data sets that share similar features, creating an uber-metric. 
How do you avoid double counting / weighting some features more than others?\n\nWas the harmonization mainly spatial or did you also standardize feature names and units?\n\nMore general comments:\nThe abstract could be strengthened. It does not explicitly address this paper until after three long sentences. It’s then very general. A few points to make the abstract more concrete include:\n\nharmonized 750+ data sets for feature names, units, and spatial resolution.\n\nmain themes are: w,x,y,z.\nProvide the type of analysis that is possible and how it can be used (generalize one of the nice examples in the section “Agricultural development domains”). Be explicit that it can integrate social, economic, and biophysical data\n\nHow did you choose among the many data sets that provide similar information? For example, there are a few sources of data on crop production, yield gaps, and market access. Since different primary data (and methods) were used to create the various data sets, you will get different results when they are integrated here. For your audience, it’s probably better to only have a single data source for each feature, but it would be helpful to be clear on your general criteria for which data are included.\n\nThe unique cell ID is a great feature for integrating multiple data sets. This also allows for faster, more stable queries and spatial operations using the web mapper or offline.\n\nAlthough we have not yet used it, the CELL5M data set is a great source of harmonized data for accessing, exploring, and analyzing data for the many uses the authors reference (baseline, setting goals, targeting actions, assessing scenarios, etc.).",
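The reviewer's point about the unique cell ID enabling integration can be illustrated with a toy join: because every themed table keys its rows by the same cell ID, combining layers is a plain key lookup rather than a spatial operation. Table contents and field names here are hypothetical, not actual CELL5M data:

```python
# Two hypothetical themed tables keyed by a shared grid cell ID.
soil = {4711: {"soil_class": "Ferralsol"}, 4712: {"soil_class": "Acrisol"}}
pop  = {4711: {"pop_density": 85.0},       4713: {"pop_density": 12.5}}

# Inner join on the cell ID: only cells present in both tables survive,
# and no geometry or projection handling is needed.
merged = {
    cid: {**soil[cid], **pop[cid]}
    for cid in soil.keys() & pop.keys()
}
print(merged)  # only cell 4711 appears in both tables
```

The same pattern scales to the full database with a dataframe library and a merge on the cell-ID column.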
"responses": []
},
{
"id": "17508",
"date": "10 Nov 2016",
"name": "Katharina Waha",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nI have been following the HarvestChoice webpage for a while now and used data provided a few years ago, so it’s good to see the publication of their spatial database.\n\nThe title and abstract are appropriate for the content of the article. The basic methods for generating CELL5M are explained; however, I would like to ask the authors to not only describe the sources of key layers, but all data sources they have used (maybe in the SI). This description should include at least the input data for each data set, the original resolution or spatial units, the base year(s) and a reference to a full documentation.\n\nThis is important in my opinion because at the moment you reconcile data sets only spatially, but they might diverge in their methods and assumptions, which prevents particular applications. A simple example is the livestock densities you use from the Gridded Livestock of the World 2007, which are modelled based on (among other things) climate as in WorldClim, while the climate variables you present in CELL5M use CRU. 
What happens often as well when developing a global data set is that areas are masked out, or typologies of certain areas are created with distinct thresholds; if this information is not available to a user, he or she might falsely interpret a spatial overlay of two data sets.\n\nFigure 6 needs a legend, and Guo & Yu 2015 is a book chapter that is not freely available.\n\nYour description of how to assign a grid cell ID is a bit over the top in my view and not worth mentioning; maybe shorten it to one sentence and add it to the introduction. Geographic Information Systems have been around for 40 years now, and any GIS works with grid cell IDs; they might just not be in the order of your IDs.",
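For readers unfamiliar with the grid cell IDs the reviewer mentions, here is a sketch of one common row-major numbering scheme at CELL5M's 5 arc-minute resolution. This is illustrative only; the actual HCID scheme (HarvestChoice, 2016b) may use a different origin or ordering:

```python
import math

CELL_DEG = 5 / 60                 # 5 arc-minutes expressed in degrees
NCOLS = int(360 / CELL_DEG)       # 4320 columns in a global grid
NROWS = int(180 / CELL_DEG)       # 2160 rows

def cell_id(lat, lon):
    """Row-major ID of the 5 arc-minute cell containing (lat, lon).

    Rows count down from the north pole, columns east from -180 degrees.
    Hypothetical scheme for illustration, not necessarily HCID's.
    """
    row = int(math.floor((90.0 - lat) / CELL_DEG))
    col = int(math.floor((lon + 180.0) / CELL_DEG))
    return row * NCOLS + col

# A point near Accra, Ghana (~5.56 N, 0.20 W) maps to a single cell:
print(cell_id(5.56, -0.20))  # → 4378317
```

Any raster GIS performs an equivalent computation internally, which is the reviewer's point that such IDs need little explanation.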
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2490
|
https://f1000research.com/articles/5-2471/v1
|
07 Oct 16
|
{
"type": "Research Article",
"title": "Metagenomic analysis of medicinal Cannabis samples; pathogenic bacteria, toxigenic fungi, and beneficial microbes grow in culture-based yeast and mold tests",
"authors": [
"Kevin McKernan",
"Jessica Spangler",
"Yvonne Helbert",
"Ryan C. Lynch",
"Adrian Devitt-Lee",
"Lei Zhang",
"Wendell Orphe",
"Jason Warner",
"Theodore Foss",
"Christopher J. Hudalla",
"Matthew Silva",
"Douglas R. Smith",
"Jessica Spangler",
"Yvonne Helbert",
"Ryan C. Lynch",
"Adrian Devitt-Lee",
"Lei Zhang",
"Wendell Orphe",
"Jason Warner",
"Theodore Foss",
"Christopher J. Hudalla",
"Matthew Silva"
],
"abstract": "Background: The presence of bacteria and fungi in medicinal or recreational Cannabis poses a potential threat to consumers if those microbes include pathogenic or toxigenic species. This study evaluated two widely used culture-based platforms for total yeast and mold (TYM) testing marketed by 3M Corporation and Biomérieux, in comparison with a quantitative PCR (qPCR) approach marketed by Medicinal Genomics Corporation. Methods: A set of 15 medicinal Cannabis samples were analyzed using 3M and Biomérieux culture-based platforms and by qPCR to quantify microbial DNA. All samples were then subjected to next-generation sequencing and metagenomics analysis to enumerate the bacteria and fungi present before and after growth on culture-based media. Results: Several pathogenic or toxigenic bacterial and fungal species were identified in proportions of >5% of classified reads on the samples, including Acinetobacter baumannii, Escherichia coli, Pseudomonas aeruginosa, Ralstonia pickettii, Salmonella enterica, Stenotrophomonas maltophilia, Aspergillus ostianus, Aspergillus sydowii, Penicillium citrinum and Penicillium steckii. Samples subjected to culture showed substantial shifts in the number and diversity of species present, including the failure of Aspergillus species to grow well on either platform. Substantial growth of Clostridium botulinum and other bacteria were frequently observed on one or both of the culture-based TYM platforms. The presence of plant growth promoting (beneficial) fungal species further influenced the differential growth of species in the microbiome of each sample. Conclusions: These findings have important implications for the Cannabis and food safety testing industries.",
"keywords": [
"Cannabis",
"safety",
"PathogINDICAtor-qPCR",
"3M-Petrifilm",
"Biomérieux-TEMPO",
"Illumina",
"metagenomics",
"microbiome"
],
"content": "Introduction\n\nPlant-associated microbes may present risks of infectious illness for human end consumers. However, many plant-associated microbes may provide benefits for plant cultivation in terms of growth stimulation, insect or microbial resistance, or may simply be neutral passengers1–3. The microbiome of Cannabis leaves and flowers includes bacteria and fungi residing on the exterior surface of these tissues (epiphytes) as well as those residing within the plant tissues (endophytes). While epiphytic microbes may originate from many sources like aerosols, dusts and liquids, or via human contact, endophytes typically gain entry from the rhizosphere via root junctions, and subsequent translocation through the xylem4,5. Considering this and the known impact that the soil and root microbiome has on plant growth and development6,7, all sources of microbial inputs, including below-ground compartments, should be considered important for optimal Cannabis growth and consumer safety8.\n\nStudies on the natural Cannabis microbiome have identified several species of culturable endophytic fungi, including Penicillium citrinum, Penicillium copticola (a member of the citrinum section9) and several Aspergillus species10,11. Similar studies looking at culturable bacterial endophytes identified nearly a dozen isolates from the Bacillus clade and two mycobacteria1. Of those Bacillus species, B. subtilis, B. licheniformis and B. pumilus have been isolated as endophytes and have been shown to be beneficial to growth in other plant species12–14. Finally, a recent investigation of the fungal microbiome in a number of dispensary-derived Cannabis samples identified numerous species including some toxigenic Penicillia and Aspergilli15. 
While there have not been any reported cases of Cannabis-related mycotoxin poisoning resulting from Penicillium infections, there have been numerous reported cases of serious or fatal pulmonary Aspergillosis associated with marijuana smoking in immunocompromised patients16–18. A multistate outbreak of Salmonellosis has also been reported19,20. Denver’s Department of Environmental Health has also issued warnings related to Cannabis extracts and Clostridium botulinum21.\n\nState Cannabis markets rely on a patchwork of testing regulations to protect patients and consumers. In terms of microbial testing, these vary widely from state to state. States such as Maine, Michigan, and Arizona currently do not impose testing regulations, while several states such as Connecticut, Massachusetts and New Mexico have adopted regulations based on the United States Pharmacopeia (USP) and American Herbal Pharmacopeia (AHP) recommended guidelines22. Specifically, the AHP recommends appropriate methods for testing microbial loads be adopted from the FDA Bacteriological Analytical Manual (http://www.fda.gov/Food/FoodScienceResearch/LaboratoryMethods/ucm2006949.htm). State regulators frequently use AHP guidelines to set limits of 10⁵ CFU/g for Total Aerobic Bacteria (TAC), 10⁴ CFU/g for Total Yeast and Mold (TYM), 10³ CFU/g for Total Coliform and Enterobacteriaceae and < 1 CFU/g for pathogenic E. coli and Salmonella species. The AHP states, “It is important to note that microbial and fungal values do not typically represent pass or fail criteria and recommended limits may require adjustment over time.” New York and Hawaii specify some additional genera for testing such as Aspergillus, Klebsiella, Pseudomonas, Streptococcus, Mucor, and Penicillium. A few states require that testing laboratories follow the procedures outlined in the USP for microbiological examination of non-sterile products. 
Others allow testing laboratories to choose from a wide variety of technologies designed for the food testing industry. However, there is no peer-reviewed research supporting the effectiveness and validity of any of these protocols for Cannabis microbial testing. Furthermore, no studies to date have examined the impact of beneficial endophytes on the Cannabis microbiome and on microbial testing results.\n\nHere we present a next generation sequencing survey of DNA sampled directly from cured cannabis flowers before and after culturing using 3M Rapid Yeast and Mold PetrifilmTM, the Biomérieux Tempo® Total Yeast and Mold platform, and qPCR analysis using Medicinal Genomics ITS2-based TYM and 16S-based TAC assays. Sequencing and analysis of the fungal ribosomal operon internal transcribed spacer23,24 (ITS2) and the bacterial 16S ribosomal RNA gene V3 and V4 hypervariable regions25 (16S) allowed us to identify bacterial and fungal genera and species present in each case. The results highlight some organisms of concern and demonstrate that major fungal and bacterial compositional changes occur during culture-based TYM testing.\n\n\nMethods\n\nCannabis samples were derived from seven recently-established indoor growth facilities in Massachusetts, Maine and Rhode Island. Samples were prepared and placed into culture on 3M PetrifilmTM Rapid Yeast and Mold Count Plates (40–72 h) and Biomérieux TEMPO® YM cards (70–76 h) at 25 ± 1.0°C, according to the manufacturer’s instructions. All samples but two were also analyzed using Biomérieux TEMPO® AC cards to enumerate aerobic bacterial counts. For qPCR, Cannabis samples (250 ± 30 mg) were placed in Whirl-Pak® bags and massaged in 3.55 ml Trypticase Soy Broth (TSB; American Bioanalytical) for 1 minute. DNA was then extracted using SenSATIVAx reagents (Medicinal Genomics part #420001), as described previously15 and eluted with 50 μL ddH20. 
DNA was similarly extracted after growth on the two culture based platforms as described above. Colonies grown on 3M plates were scraped off into 285 μL of ddH2O, and 190 μL of those samples, or samples grown in TEMPO cartridges (liquid culture), were extracted using SenSATIVAx as above. Fungal species stocks from the American Type Culture Collection (ATCC) were reconstituted and incubated at the appropriate temperature, as recommended by ATCC product documentation. Cultures of ATCC strains were then grown in 5ml TSB for 5 days at room temperature and checked visually for turbidity. Serial dilutions were plated on 3M PetrifilmTM Rapid Yeast and Mold Count Plates, incubated at room temperature, and counted after 3–5 days. Colonies were scraped off the plates and DNA was then extracted as described above.\n\nThe cannabis samples used for this study were collected within the regulatory framework for the individual State Medical Marijuana programs by ProVerde Laboratories; an accredited ISO/IEC 17025:2005 cannabis safety testing laboratory. The purified DNA, which is not a schedule I substance, was tested to verify that the hydrophilic DNA purification does not contain hydrophobic cannabinoids and is therefore in accordance with the Hemp Associates vs DEA regarding hemp fiber shipment within the United States. Since all activities that involved handling of material containing cannabinoids was within the individual state requirements, no federal (FDA or DEA) registration or permission was required.\n\nDNA samples extracted directly from Cannabis samples, or after growth on the two culture-based platforms, were subjected to qPCR analysis. 
Quantitative PCR was performed using a commercially available TYM assay (TYM-PathogINDICAtor, Medicinal Genomics, Woburn MA), or TAC assay (TAC-PathogINDICAtor, Medicinal Genomics, Woburn, MA) in a Bio-Rad CFX 96 Touch qPCR instrument, according to the manufacturer’s instructions.\n\nPCR was performed using 5μL of DNA (3ng/μL) 12.5μL 2X LongAmp (NEB) with 1.25 μL of each 10 μM MGC-ITS3F and MGC-ITS3R primer or MGC-TAC_F and MGC-TAC_R primer (MGC-ITS3F: TACACGACGTTGTAAAACGACGCATCGATGAAGAACGCAGC), (MGC-ITS3R: AGGATAACAATTTCACACAGGATTTGAGCTCTTGCCGCTTCA), (MGC-TAC_F: TACACGACGTTGTAAAACGATCCTACGGGAGGCAGCAGT) and (MGC-TAC_R: AGGATAACAATTTCACACAGGGGACTACCAGGGTATCTAATCCTGTT) with 10μL ddH20 for a 25 μL total reaction. An initial 95°C 5-minute denaturation step was performed followed by 25 cycles of 95°C for 15s and 65°C for 90s. Samples were purified with 75 μL SenSATIVAx, washed twice with 100 μL 70% EtOH and bench dried for 5 minutes at room temperature. Samples were eluted in 25 μL ddH20.\n\nThe 16S amplicon targeted by the MGC primers (spanning the V3 and V4 hypervariable regions) is approximately 460 bp in size, and ITS2 amplicons from different fungal species are known to vary in size from ~0.5–1 kilobases. To enable representative coverage across the entire amplicon for sequencing and analysis of each sample, we enzymatically fragmented the amplicons to ~300 bp average size. Fragmentation was accomplished and DNA libraries were constructed using the commercially available Nextera Library Prep Kit (Illumina). 6ng of purified PCR product, 5 μL of TD buffer, 0.1 μL of TD enzyme and 3.9 μL ddH20 was combined for a total of 10 μL. The reaction was incubated at 55°C for 30 minutes followed by a 10°C hold. The reaction plate was immediately removed from the thermal cycler and purified with 15 μL of Agencourt Ampure XP (Beckman Coulter), washed twice with 200 μL 70% EtOH and bench dried for 10 minutes at room temperature. 
Samples were eluted in 25μL 10mM Tris-HCl.\n\n17.5 μL of 2X Q5 polymerase (NEB) was added to 10μL of purified DNA with 2.5 μL of i7 Nextera index primer, 2.5 μL of i5 Nextera index primer, 0.5 μL of ILMN1 primer (50 μM), 0.5 μL of ILMN2 primer (50 μM), 1 μL 5-methyl-dCTP (10 μM) and 0.5 μL H2O. After an initial 72°C for 3 minutes and 98°C for 30 s, the library was amplified for 12 cycles of 98°C for 10 s, 63°C for 30 s, 72°C for 1 minute and a 10°C hold. Use of methylated nucleotides for PCR decontamination is described previously26,27. PCR samples were purified by mixing 52.5 μL of Agencourt Ampure XP into the PCR reaction. The samples were placed on a magnet for 15 minutes until the beads cleared and the supernatant could be removed. Beads were washed twice with 200 μL of 70% EtOH. Beads were left for 10 minutes to air dry and then eluted in 25 μL of 10 mM Tris-HCl. 5 μL of each PCR product was pooled and quantified with a Qubit (Thermo) for proper dilution onto MiSeq version 2 chemistry according to the manufacturer’s instructions. 2×150 bp reads were selected to obtain maximal ITS2 sequence information.\n\n2×150 bp reads were de-multiplexed with Illumina software bcl2fastq v2.17.1.14. Sequences were classified at the Family, Genus and Species level by discriminative k-mer analysis using CLARK-S28 with the NCBI/RefSeq bacterial database and taxonomy, or UNITE29 fungal database and taxonomy. Cannabis chloroplast and mitochondrial sequences were included in the bacterial and fungal databases since they amplify with the 16S rRNA primers used, and the Nextera fragmentation process used in our library prep may incorporate high copy number sequences even without amplification. Cannabis mitochondrial sequences generally comprised a large fraction of the classified reads (up to 97%) in DNA derived from plant material. 
The Cannabis reads were subtracted out to enable enumeration of the bacterial species down to 1% of classified non-Cannabis reads.\n\nSequences were alternatively classified by BLAST analysis of operational taxonomic units (OTUs) generated by clustering at the ≥ 97% sequence similarity level using USEARCH830. Each set of paired-end reads was merged using fastq_mergepairs31. We used cutadapt to trim primer and adaptor regions from both ends (http://cutadapt.readthedocs.io/en/stable/guide.html). Sequences were quality trimmed to have a maximum expected number of errors per 100 bases of less than 0.1 (Q30). OTUs with membership of at least 200 sequences were included in downstream analyses, and BLAST hits with less than 97% query coverage and 97% identity were discarded. Analyses of the USEARCH OTUs were performed in R (https://www.r-project.org). Each library was normalized by the total number of OTUs found. OTUs were associated with microbes based on the name and description provided by NCBI. R2 values were calculated by adjusted linear regression in R or by embedded formulas in Excel. In order to mitigate the large effect of noise in samples with low OTU counts, specificity analysis was done after pooling the un-normalized data.\n\n\nResults\n\nSummary results from the different testing platforms evaluated in this study for 15 samples with complete data are presented in Table 1. The samples were evaluated with Medicinal Genomics’ PathogINDICAtor ITS2-based TYM-qPCR and 16S-based TAC-qPCR assays directly from extracted plant material (Before), and from recovered medium after culture on the Biomérieux Tempo instrument using YM sample cards (After BMX). Samples were also evaluated directly using the Biomérieux instrument with Tempo YM and AC cards, or on 3M Rapid Total Yeast and Mold Count Plates (3M TYM). 
Results in bold type and shaded boxes indicate failed tests following the limits set for Massachusetts medicinal Cannabis.\n\nColumn 1: sample number; Column 2: results TYM-qPCR signals in terms of quantification cycle (Cq); Column 3: colony counts for 3M TYM plates; Column 4, inferred colony counts from BMX YM cards, Column 5: TYM-qPCR Cq signals after culture in the BMX YM system; Column 6: TAC-qPCR Cq signals from extracted plant material; Column 7: inferred colony counts from BMX AC cards, and Column 8: TAC-qPCR Cq signals after culture in BMX YM cards. Results in bold type and shaded boxes indicate failed tests. Abbreviations: BMX: Biomerieux, TYM: total yeast and mold, YM: yeast and mold, TNTC: too many to count, TAC: total aerobic count, AC: aerobic count, n.d.: not done. The AC and TYM failure thresholds for colony counts on the 3M and Biomerieux platforms are 100,000 CFU/g and 10,000 CFU/g respectively. The TAC and TYM qPCR failure thresholds are Cq ≤ 21 and Cq ≤ 26, respectively.\n\nOverall, the BMX TYM platform failed the highest number at 67% (10/15); the 3M TYM platform failed 60% (9/15), and the qPCR TYM failed 20% (3/15). The failure rates for the BMX AC and qPCR TAC assays were 13% (2/15) and 7% (1/15), respectively. An additional set of TYM qPCR tests were performed after growth on the BMX platform, resulting in 12/15 failures and confirming the presence of live, culturable fungi in 80% of the samples. The 3M TYM and BMX YM systems performed similarly in terms of pass/fail, with only one discrepancy, which had a value close to the failure threshold. The TYM-qPCR assay passed seven samples that failed on at least one of the two culture-based platforms. 
One of those (sample 4) had an elevated quantitation cycle (Cq) value approaching the failure threshold; the rest (samples 11–16) gave high Cq values, indicating very low fungal DNA levels (Table 1).\n\nThe sequencing data generated for this project are available at the NCBI short read archive; see Dataset 1 (Table I) for accession numbers and URLs. A summary of the CLARK-S classification results for each of the 15 samples, directly from plant material (before), or after culture on the 3M or BMX platforms, is provided in Dataset 1 (Table II: CLARK-S output for bacterial species analysis with read counts, Table III: matrix file with % classified reads at the species level for all TAC samples, Table IV: matrix file with % classified reads down to 1% at the species level from selected TAC samples used to generate charts, Table V: CLARK-S output for TYM analysis with read counts, Table VI: matrix file with % classified reads for all TYM samples, Table VII: matrix file with % classified reads down to 1% from selected TYM samples used to generate charts, Table VIII: matrix file with % classified reads down to 1% at the genus level from the same selected TAC samples as in Table IV).\n\nWhile the sequencing assay provides approximate intra-sample quantitation, it does not support inter-sample quantitation32. The sequencing procedure utilizes two PCR steps instead of the single PCR step used in qPCR (and does not utilize an internal probe for signal generation). Sample quantities are normalized prior to the Nextera reaction to ensure consistent shearing. These procedures are optimized to yield 1 million reads or more per sample for high sensitivity, but the read numbers are not proportional to microbial counts in the starting samples. 
Instead, the classified read counts and percentages simply indicate the genera or species present at detectable levels and their approximate proportions (with the caveat that the target amplicons from some species may amplify with lower efficiency owing to primer mismatches or extremes of G+C content). The qPCR Cq measured directly from extracted plant material provides the best inter-sample comparative metric. BLAST results from clustered OTUs were used to confirm prevalent species assignments on a case-by-case basis, but the results are not presented here owing to the very large number of OTUs generated by the USEARCH software (>12,000 across the full sample set).\n\nSequencing reproducibility: 14 frozen samples were amplified with ITS2 primers and sequenced 30–60 days apart; 13 of the comparative R2 values for classified fungal species were greater than 0.999 and the remaining one was 0.966. Similarly, 20 frozen samples were amplified with 16S primers and sequenced 30–60 days apart; 18 of the comparative R2 values for classified bacterial species were greater than 0.999 and the remaining two averaged 0.998. These data imply highly reproducible genomic surveys of the amplified DNA present. No Template Controls (NTC) were also tested, producing very high Cq readings (>35) and very few classified reads (251 with TAC primers and 61 with TYM primers), controlling for the possibility of labware contamination contributing to the observed signals.\n\nSpecificity: To verify the specificity of the analysis for accurate discrimination between bacterial and fungal genera, we ran CLARK-S against the bacterial and fungal databases separately at the genus level using either 16S or ITS2 reads. 
There were 13,913,520 16S reads classified as bacterial, 2,293 16S reads classified as fungal, 6,220,745 ITS2 reads classified as fungal, and 241,351 ITS2 reads classified as bacterial (Dataset 1, Tables V and IX–XI; Table IX: genus level CLARK-S read counts for 16S reads against the fungal database, Table X: genus level CLARK-S read counts for ITS2 reads against the bacterial database, Table XI: genus level CLARK-S read counts for 16S reads against the bacterial database). From this we calculate the specificity (true neg/(false pos + true neg)) of the 16S analysis as 0.963 [=ITS2 reads classified as fungal/(ITS2 reads classified as fungal+ITS2 reads classified as bacterial)] and that of the ITS2 analysis as 0.9997 [=16S reads classified as bacterial/(16S reads classified as bacterial+16S reads classified as fungal)].\n\nPairs of samples from three of the seven growers were highly similar in their combined bacterial and fungal species prevalence, as indicated by high correlation coefficients (CC): CC=0.92 for samples 1 and 2, CC=0.94 for samples 11 and 12, and CC=0.97 for samples 6 and 14. There was also moderate correlation between samples 6, 14 and 9, a third sample from the same grower: CC=0.66 for samples 6 and 9, CC=0.64 for samples 9 and 14. These samples represent different strains from the same grow and likely share similar soil environments.\n\nSix samples (numbers 11–16) failed the BMX TYM test, but passed the MGC qPCR TYM test with low signals (Cq >40). Five of those (numbers 11, 12, 14–16) had elevated qPCR TAC signals, suggesting that the growth of bacteria could be contributing to colony counts and failures in the culture-based TYM tests. 
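The specificity values and sample-pair correlation coefficients reported here reduce to simple arithmetic over the classified-read tables. A minimal sketch, assuming a plain Pearson correlation over per-species percentages (the read counts are taken from the text; the two species-percentage profiles are hypothetical, for illustration only):

```python
import math

# Specificity = true neg / (false pos + true neg), using the classified-read
# counts reported in the text.
its2_fungal = 6_220_745   # ITS2 reads classified as fungal
its2_bact = 241_351       # ITS2 reads classified as bacterial
r16s_bact = 13_913_520    # 16S reads classified as bacterial
r16s_fungal = 2_293       # 16S reads classified as fungal

spec_16s = its2_fungal / (its2_fungal + its2_bact)   # ~0.963, as reported
spec_its2 = r16s_bact / (r16s_bact + r16s_fungal)    # >0.999

# Pearson correlation between two per-species percentage profiles, one way to
# obtain a sample-pair CC (the profiles below are made up for illustration).
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

sample_a = [42.0, 31.0, 12.0, 9.0, 6.0]   # % classified reads, species 1..5
sample_b = [40.0, 33.0, 10.0, 11.0, 6.0]
cc = pearson(sample_a, sample_b)          # high CC -> similar composition
```

Profiles that share the same dominant species yield a CC near 1, while composition shifts after culture drive the CC toward zero or below, matching the contrast reported between concordant and discordant platform results.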
Sequencing results for each of those samples, before and after culture in BMX medium, confirm the presence of actively growing bacteria, and reveal the bacterial genera that are primarily responsible for the TAC-qPCR signals: Bacillus and Clostridium in sample 11 (~73% of classified reads, collectively), and Bacillus, Clostridium and Ralstonia in samples 14–16 (78–83% of classified reads, collectively in each of the three samples). A different set of genera was observed after culture on 3M media: Ralstonia and Leifsonia in sample 11 (86% of classified reads, collectively), and Xanthomonas, Ralstonia and Streptococcus in samples 14–16 (61–75%, collectively in each sample).\n\nAll of the samples underwent a change in species composition after growth on the BMX or 3M yeast and mold platforms. Three of the 15 samples (numbers 5, 15 and 16) produced a similar distribution of species on the BMX and 3M platforms, with correlation coefficients (CC) of 0.41–0.82. The results from the remaining 80% of the samples, however, were strikingly different on the two platforms (CC: −0.03 to 0.21). Representative results from two of those samples, numbers 2 and 14, are shown in Figure 1.\n\nComparison of classified read percentages for bacterial 16S DNA in samples 2 and 14, before and after culturing on 3M and BMX media. The results represent all species observed down to 1% of classified reads. Large shifts in species prevalence are seen after growth on the two culture-based platforms.\n\nSignificant levels of Bacillus coagulans and Clostridium botulinum (a toxigenic pathogen) were observed together in two-thirds of the samples (numbers 6–9 and 11–16) after incubation in the hermetically sealed cards of the BMX TYM platform. These organisms were detected before growth at very low levels (0.5% or less), indicating the presence of viable cells or spores in the samples. 
They were not detected at significant levels after growth on the 3M platform.\n\nOther potentially pathogenic bacterial species that were detected at proportions of >1% of classified bacterial reads on plant material before growth include: Acinetobacter baumannii, Acinetobacter pittii, Corynebacterium diphtheriae, Coxiella burnetii, Escherichia coli, Propionibacterium acnes, Pseudomonas aeruginosa, Ralstonia pickettii, Salmonella enterica, Staphylococcus aureus, Stenotrophomonas maltophilia, and Streptococcus pneumoniae. Some of these species, and others, were observed to grow differentially on the BMX and 3M platforms. Species that grew well on 3M but not BMX included S. maltophilia and Leifsonia xyli; those that grew well on BMX but not 3M included C. botulinum, B. coagulans, Pseudomonas fluorescens and Clostridium tetani. Factors that may contribute to this are the presence of chloramphenicol (Cm) and possibly low oxygen levels in the BMX platform. S. maltophilia is Cm sensitive and P. fluorescens is Cm resistant. C. botulinum and C. tetani are obligate anaerobes and B. coagulans is a facultative anaerobe.\n\nThe concordance between the two culture-based platforms was much higher overall for fungi than for bacteria. The distribution of fungal species observed after growth on the BMX and 3M platforms was highly similar for nine of the 15 samples (CC: 0.98–1.0), and showed low to moderate similarity for another three samples (CC: −0.02 to 0.49). The remaining three samples did not include any fungi that could be classified at the species level. The following toxigenic fungi were detected at levels of >1% of classified reads in at least one sample: Aspergillus fumigatus, Aspergillus ostianus, Aspergillus sydowii, Penicillium citrinum, Penicillium commune, and Penicillium steckii.\n\nWe expected that all fungal species would grow effectively on the 3M and BMX TYM platforms, but there were some notable exceptions. 
First, we observed that although Aspergillus species were present in 15 plant samples (average proportion: 25% of classified reads), they were only detected at low levels in three samples after culturing on either 3M or BMX media (average proportion: 1.1% or 0.4% of classified reads, respectively). Representative results from two such samples are shown in Figure 2. Second, Penicillium was the most prevalent genus observed before and after growth on both platforms, with the most prevalent species classifications being P. citrinum and P. olsonii. However, although Penicillium species were present at significant levels in sample 16 (76% of classified reads; Figure 2C), they did not grow well on either platform in this sample (2.7–5.6% of classified reads). Instead, substantial growth of Trichoderma species, primarily T. hamatum, was observed (80–90% of classified reads). T. hamatum is one of several Trichoderma species that have been shown to inhibit the growth of Penicillium and other toxigenic fungi33,34. Apparent competitive growth inhibition of Penicillium species was also observed in sample 4 where there was substantial growth of Fusarium species (23–72% of classified reads; Figure 2A), and in samples 1, 2 and 7 where there was substantial growth of Saccharomyces species (57–82% of classified reads).\n\nA) TYM platform discordance before and after growth. Results from sample 4 showing the percentage of reads classified into fungal genera based on sequencing of TYM ITS2 amplicons directly from the plant (Before), or after growth on the 3M or BMX platforms. The lower part of the figure shows the colonies observed on 3M media (left) and appearance of the BMX YM card (right) after growth. B) Poor growth of Aspergillus species. In 12/15 cases where Aspergillus species are detected by ITS2 sequencing, they do not grow on 3M or BMX media (results from sample 6). 
The lower part of the figure shows the colonies observed on 3M media (left) and appearance of the BMX YM card (right) after growth. C) Trichoderma antagonism. Penicillium species are present in material extracted directly from the plant in sample 16, but are displaced by Trichoderma after growth on 3M or BMX media.\n\nWhile the qPCR and sequencing assays are capable of detecting free DNA, all of the samples tested in this study appear to contain live spores or microbes. Even in the one sample (number 6) where the TYM-qPCR Cq did not decrease after growth in BMX media, the proportions of fungal species changed and TAC qPCR demonstrated growing bacteria with a 10 Cq decrease (from over 40 to 30.4) after culture.\n\nTo further evaluate the ability of Aspergillus species to grow on 3M Rapid TYM Petri-Films, we plated 11 fungal monocultures from ATCC stocks and measured the concordance between qPCR Cq and 3M CFU (Figure 3). The Aspergillus species CFU counts are approximately three orders of magnitude lower than expected based on Cq estimates that were developed and optimized by plating cultured cells of other species. Excluding the two Aspergillus species, the correlation between CFU/g and Cq is 0.71. The one other outlier in these data is Candida glabrata. The correlation between CFU per gram of plant material and Cq is 0.99 across the remaining eight fungal species.\n\nCandida catenulata: ATCC 10565, Candida sphaerica: ATCC 8565, Candida krusei: ATCC 28870, Candida albicans: ATCC 10231, Candida glabrata: ATCC 15545, Yarrowia lipolytica: ATCC 18944, Rhodotorula mucilaginosa: ATCC 4557, Debaryomyces hansenii: ATCC 10623, Trichothecium roseum: ATCC 90473, Aspergillus japonicus: ATCC 16873, Aspergillus flavus: ATCC 16870. Aspergillus demonstrates growth orders of magnitude lower at room temperature than most of the other yeasts and molds tested. 
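The "Expected" counts in Figure 3 follow a log-linear qPCR standard curve with intercept 42.185 and slope 3.6916 cycles per decade. A minimal sketch of that conversion (constants taken from the figure legend; not an official assay formula):

```python
# Inferred CFU/g from a qPCR quantitation cycle (Cq), using the standard-curve
# constants quoted in the Figure 3 legend: CFU/g = 10 ** ((42.185 - Cq) / 3.6916).
def expected_cfu_per_g(cq: float) -> float:
    return 10 ** ((42.185 - cq) / 3.6916)

# A Cq equal to the intercept corresponds to 1 CFU/g; each ~3.69-cycle
# decrease in Cq corresponds to a ten-fold increase in inferred CFU/g.
print(expected_cfu_per_g(42.185))   # 1.0
```

On this scale, the ~1000-fold shortfall in observed Aspergillus CFU relative to expectation corresponds to roughly an 11-cycle offset (3 × 3.6916 Cq).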
“Expected” is the inferred CFU count from the Cq measurement using the formula CFU/g = 10^[(42.185 − Cq)/3.6916].\n\n\nDiscussion\n\nThe samples selected for this study were derived from seven newly established indoor Cannabis growth facilities located in a humid coastal environment (Eastern Massachusetts, Maine and Rhode Island). They were enriched for samples that failed on either or both the 3M and BMX platforms, which are commonly used to test for bacteria, yeast and mold in the industry. Quantitative PCR was evaluated as a third approach in the hope of resolving the discrepancies. The high failure rate observed in this study should not be taken as representative of industry-wide averages, which have been reported elsewhere15,35. The sample set provided an opportunity to investigate the diversity of species that grow in different culture-based platforms as well as to characterize the microorganisms that were responsible for the sample failures.\n\nMetagenomic sequencing data were collected on 15 samples, directly from plant material and after culture on both the 3M and BMX platforms. The sequencing results demonstrate substantial shifts in presence and abundance of bacterial and fungal species after growth on the two platforms. Thus both of the culture-based platforms are detecting and enumerating only a subset of the species present, and the final composition of microbes after growth is markedly different from the starting sample. Most concerning is the frequent identification of bacterial species in systems designed for the exclusive quantification of yeast and mold, as quantified by elevated TAC qPCR signals (decreased Cq values) after culture in the BMX TYM medium. These observations call into question the specificity claims of these culture-based testing platforms. 
The presence of bacterial colonies on TYM growth plates or cards may falsely increase the rejection rate of Cannabis samples for fungal contamination, and induce growers to increase the use of fungicides unnecessarily.\n\nClassified reads corresponding to many pathogenic and/or toxigenic bacteria and fungi were detected on plant material, including the following at proportions of over 5%: Acinetobacter baumannii, Acinetobacter pittii, Escherichia coli, Propionibacterium acnes, Pseudomonas aeruginosa, Ralstonia pickettii, Salmonella enterica, Stenotrophomonas maltophilia, Penicillium citrinum, Aspergillus ostianus, Penicillium steckii and Aspergillus sydowii. While the proportions of classified reads corresponding to these organisms were generally low, there were several striking exceptions: >10–35% R. pickettii in 9/15 samples, 97% S. maltophilia in one sample, 41% E. coli in one sample, 16–35% A. baumannii in two samples, 10–85% P. olsonii in five samples, 10–72% P. citrinum in 13 samples, and 21% A. ostianus in one sample. The CLARK-S classification software has been reported to have very high sensitivity and precision for sequence assignments28,36,37. Nevertheless, further work is required to confirm these species assignments and to check for the presence of toxins that may be produced by these microbes. The observations certainly call into question the wisdom of species-agnostic microbial quantitation for a product like medicinal Cannabis, which is used by many seriously ill or immunocompromised patients.\n\nCross-platform comparisons demonstrate that certain bacteria and fungi grow well on 3M plates, but not on BMX, or vice versa. There are certainly differences in the media. For example, BMX medium includes chloramphenicol to suppress bacterial growth, and uses sealed growth chambers that may limit oxygen availability. The observation of anaerobic Clostridium species such as C. 
botulinum in proportions up to 35% of bacterial reads at the genus level on the BMX platform, along with B. coagulans, a facultative anaerobe, suggests that the sealed BMX YM cards generate anaerobic conditions. B. coagulans is a rhizobacterium that has been reported to promote growth in Solanum seedlings in concert with mycorrhizal fungi38.\n\nClostridium botulinum was only detected at very low levels before growth on BMX medium, and was not detected on 3M plates. Previous white papers have suggested that C. botulinum is not a threat in Cannabis due to its anaerobic nature (http://cannabissafetyinstitute.org/wp-content/uploads/2015/06/Microbiological-Safety-Testing-of-Cannabis.pdf). However, C. botulinum should not be considered an irrelevant threat in Cannabis because it is known to colonize plants systemically as an endophyte and to produce pasteurization-resistant spores39. Additionally, proximity between cultivation and processing may lead to contamination of finished products such as emulsified oils or concentrated extracts containing water. Media such as these provide anaerobic conditions and nutrients sufficient for C. botulinum and other anaerobes to thrive. This is most threatening to indoor cultivation facilities that also process, store, and package finished products on site, often in sub-optimal storage conditions. The fact that the organism was observed to proliferate in the BMX system suggests that its presence, even at low levels, could be a potential concern in emulsified Cannabis oil formulations or edible products that are stored in closed containers.\n\nOf greater potential concern than the bacterial growth is the failure of both culture-based TYM platforms to support efficient growth and detection of Aspergillus species, which were present in proportions of 18–58% of classified ITS2 reads at the genus level in 10/15 samples. 
Initially, it was suspected that the significant TYM qPCR and read counts might derive from dead cells, perhaps as a result of growers attempting to sterilize the plant material. Quantitative PCR data using active cultures grown in TSB, however, indicate that CFU counts from two Aspergillus species inoculated onto 3M TYM petri film were ~1000× lower than expected based on qPCR Cq values that accurately predict CFUs in other species (Figure 3). Inflated qPCR estimates due to elevated ribosomal DNA copy numbers do not seem a likely explanation, because the estimated copy numbers of several Aspergillus species are similar to those of other fungi40,41. While the presence of spores with a slow germination rate42 could explain the results on plant material, it does not explain the qPCR result using active cultures. Another factor could be the obligate hyphal growth of Aspergillus species43, wherein each colony forming unit may contain hundreds of interconnected hyphal cells.\n\nThese findings are surprising, and therefore a third culture-based system, manufactured by Biolumix, was tested for its ability to detect A. fumigatus after 48 hours of growth at 26°C following inoculation from a saturated TSB culture. The result was negative. The failure of three different culture-based platforms to detect Aspergillus species suggests the need for caution in the use of such platforms. Validation data for the detection of Aspergillus on 3M rapid TYM Petri-film presented in 3M’s marketing material44 are for culture at 25°C, whereas the instructions for use specify culture at room temperature (~4°C below 25°C). McClenny45 recommends longer times and higher temperatures to accurately detect Aspergilli with culture-based methods. The 3M films used in this study were incubated at 25 ± 1.0°C for 72 hours and still showed low efficacy in detecting Aspergilli.\n\nAspergillus is arguably the most significant fungal threat in Cannabis cultivation. 
Aspergillosis has been reported in numerous immunocompromised patients and, to date, accounts for the only clinical reports of fatalities associated with an infectious organism linked to Cannabis consumption16–18,46–48. Vonberg et al. demonstrated a 57% fatality rate for aspergillosis in hospital-bound immunocompromised patients, while also demonstrating airborne infectivity at or below 1 CFU per cubic meter49. Growers may pasteurize Cannabis samples to avoid failing culture-based microbial testing, but Aspergillus spores are pasteurization resistant50, as are the toxins they produce51, so pasteurization does not eliminate the potential risk from these organisms.\n\nAnother interesting observation is the apparent growth inhibition of Penicillium species (P. citrinum, P. brevicompactum, P. olsonii and P. quercetorum) in several samples with high proportions of Trichoderma, Fusarium, Rhodotorula or Saccharomyces reads after culture (samples 1, 2, 4, 7 and 16). Other classified species that failed to grow in some of those samples include Furcaspora eucalypti and Tilletiopsis pallescens. Organic growth practices often utilize beneficial bacterial or fungal endophytes52 to promote crop growth and to enable lower chemical fungicide use. For example, Trichoderma species are known to synthesize β-1,3-glucanases and a chitinase which work synergistically to break down the cell walls of other fungi53,54. The State of Nevada has issued guidelines for allowable pesticides for use in Cannabis cultivation that include various Trichoderma and Bacillus species55. However, in most states, the use of such beneficial microbes may be precluded by the requirement for stringent yeast and mold testing that does not discriminate between beneficial and harmful microorganisms. More specific nucleic acid-based testing techniques can resolve this. 
The FDA is moving in this direction for food safety testing with the GenomeTrakr Network56.\n\nFinally, as observed in a previous study on the Cannabis fungal microbiome in a different sample set15, P. citrinum is highly prevalent in the samples tested here. This species has been isolated as a growth-promoting endophyte in Cannabis and several other plant species10,11,57–59. P. citrinum produces the nephrotoxin citrinin, although it is not clear whether the presence of citrinin in Cannabis flowers or extracts represents an actual health threat. However, the high prevalence of P. citrinum in Cannabis samples suggests that it is an area worthy of further investigation.\n\nThese data have several limitations. Quantitative inter-sample comparisons cannot be performed with the sequencing data at present due to the lack of internal controls to help calibrate any pooling or sampling issues throughout the workflow. The qPCR data can be used to estimate inter-sample bacterial or fungal burden, but these data do not always resolve to the genus or species level. Intra-sample comparisons can nonetheless provide information on the relative proportions of bacterial or fungal species. Sampling from BMX cards was straightforward, since that system uses a liquid culture medium, but 3M sampling was subject to bias in scraping off colonies from culture plates. Additionally, the use of Nextera shearing and primer amplification may introduce some biases due to transposon integration preferences. The fragmentation approach is necessary to avoid ITS2 amplicon size bias in Illumina MiSeq clustering60,61.\n\n\nConclusions\n\nCulture-based techniques used to measure the microbial burden and establish the safety of Cannabis have several shortcomings. States adopt and implement regulations at different tolerance thresholds for bacteria and fungi without specifically detailing standardized methods or coordinating inter-laboratory ring testing. 
Yeast and mold counts from the culture-based platforms tested here are confounded by the growth of bacteria, even when antibiotics like chloramphenicol are included. The microbiome in the plant material tested changes radically after culturing, such that the microbes and counts that are finally observed bear little or no resemblance to those of the starting sample. This represents a classic observer effect, in which the act of measuring the microbial composition using these culture-based methods fundamentally changes that composition, a well-studied phenomenon known as the “great plate-count anomaly”62. This is a serious issue, which clearly has implications beyond Cannabis safety testing. The 3M and BMX platforms tested here are also used widely in the food testing industry.\n\nPerhaps the most concerning observation is that one of the most heavily regulated fungal pathogens, Aspergillus (the only microbe ever associated with clinical harm from Cannabis consumption), grows poorly, and is therefore severely under-reported by current culture-based platforms. The differential growth of other toxigenic fungi, depending on the companion species present, further influences the results. Bacterial pathogens are not uncommon, and beneficial bacteria are also capable of influencing the growth or inhibition of other flora.\n\nWe have demonstrated that molecular testing is capable of accurately quantifying and identifying a wide spectrum of microorganisms present on Cannabis samples, while avoiding false positives in fungal testing due to the presence of bacteria. Molecular testing is rapid and is capable of distinguishing between harmful and beneficial microbes – permitting the use of the latter in organic cultivation practices and reducing reliance on chemical fungicides.\n\n\nData availability\n\nF1000Research: Dataset 1. Raw data of metagenomic analysis of medicinal Cannabis samples, 10.5256/f1000research.9662.d13712363",
"appendix": "Author contributions\n\n\n\nInitial 3M, BMX, DNA extraction and qPCR analysis was performed by CJH and MS at ProVerde Laboratories. Confirmatory qPCR testing and DNA sequencing using purified DNA were performed at Medicinal Genomics by KM, JS, YH and LZ. WO performed the comparative Aspergillus growth confirmation experiments. Software configuration, testing and analysis was performed by RCL, TF, JW, AD-L and DRS. KM and DRS prepared the manuscript, and all authors were involved in revision and approval of the final manuscript.\n\n\nCompeting interests\n\n\n\nKM, JS, YH, RCL, AD-L, LZ, WO, JW, TF and DRS are employees of Courtagen Life Sciences, the parent company of Medicinal Genomics, which manufactures the commercial qPCR test used in this study.\n\n\nGrant information\n\nThis project was privately funded by the participating laboratories.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nKusari P, Kusari S, Lamshoft M, et al.: Quorum quenching is an antivirulence strategy employed by endophytic bacteria. Appl Microbiol Biotechnol. 2014; 98(16): 7173–83. PubMed Abstract | Publisher Full Text\n\nKusari P, Kusari S, Spiteller M, et al.: Implications of endophyte-plant crosstalk in light of quorum responses for plant biotechnology. Appl Microbiol Biotechnol. 2015; 99(13): 5383–90. PubMed Abstract | Publisher Full Text\n\nTurner TR, James EK, Poole PS: The plant microbiome. Genome Biol. 2013; 14(6): 209. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCompant S, Clément C, Sessitsch A: Plant growth-promoting bacteria in the rhizo- and endosphere of plants: Their role, colonization, mechanisms involved and prospects for utilization. Soil Biol Biochem. 2010; 42(5): 669–78. Publisher Full Text\n\nReinhold-Hurek B, Hurek T: Living inside plants: bacterial endophytes. Curr Opin Plant Biol. 2011; 14(4): 435–43. 
PubMed Abstract | Publisher Full Text\n\nBonfante P, Genre A: Mechanisms underlying beneficial plant-fungus interactions in mycorrhizal symbiosis. Nat Commun. 2010; 1: 48. PubMed Abstract | Publisher Full Text\n\nBerendsen RL, Pieterse CM, Bakker PA: The rhizosphere microbiome and plant health. Trends Plant Sci. 2012; 17(8): 478–86. PubMed Abstract | Publisher Full Text\n\nWinston ME, Hampton-Marcell J, Zarraonaindia I, et al.: Understanding cultivar-specificity and soil determinants of the cannabis microbiome. PLoS One. 2014; 9(6): e99641. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHoubraken J, Frisvad JC, Samson RA: Taxonomy of Penicillium section Citrina. Stud Mycol. 2011; 70(1): 53–138. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGautam A, Kant M, Thakur Y: Isolation of endophytic fungi from Cannabis sativa and study their antifungal potential. Archives Of Phytopathology And Plant Protection. 2013; 46(6): 627–35. Publisher Full Text\n\nKusari P, Kusari S, Spiteller M, et al.: Endophytic fungi harbored in Cannabis sativa L.: diversity and potential as biocontrol agents against host plant-specific phytopathogens. Fungal Divers. 2013; 60(1): 137–51. Publisher Full Text\n\nChung EJ, Hossain MT, Khan A, et al.: Bacillus oryzicola sp. nov., an Endophytic Bacterium Isolated from the Roots of Rice with Antimicrobial, Plant Growth Promoting, and Systemic Resistance Inducing Activities in Rice. Plant Pathol J. 2015; 31(2): 152–64. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPaz IC, Santin RC, Guimarães AM, et al.: Eucalyptus growth promotion by endophytic Bacillus spp. Genet Mol Res. 2012; 11(4): 3711–20. PubMed Abstract | Publisher Full Text\n\nShi Y, Lou K, Li C: Growth and photosynthetic efficiency promotion of sugar beet (Beta vulgaris L.) by endophytic bacteria. Photosynth Res. 2010; 105(1): 5–13. 
PubMed Abstract | Publisher Full Text\n\nMcKernan K, Spangler J, Zhang L, et al.: Cannabis microbiome sequencing reveals several mycotoxic fungi native to dispensary grade Cannabis flowers [version 2; referees: 2 approved]. F1000Res. 2015; 4: 1422. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRuchlemer R, Amit-Kohn M, Raveh D, et al.: Inhaled medicinal cannabis and the immunocompromised patient. Support Care Cancer. 2015; 23(3): 819–22. PubMed Abstract | Publisher Full Text\n\nGargani Y, Bishop P, Denning DW: Too many mouldy joints - marijuana and chronic pulmonary aspergillosis. Mediterr J Hematol Infect Dis. 2011; 3(1): e2011005. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBal A, Agarwal AN, Das A, et al.: Chronic necrotising pulmonary aspergillosis in a marijuana addict: a new cause of amyloidosis. Pathology. 2010; 42(2): 197–200. PubMed Abstract | Publisher Full Text\n\nTaylor DN, Wachsmuth IK, Shangkuan YH, et al.: Salmonellosis associated with marijuana: a multistate outbreak traced by plasmid fingerprinting. N Engl J Med. 1982; 306(21): 1249–53. PubMed Abstract | Publisher Full Text\n\nCenters for Disease Control (CDC): Salmonellosis traced to marijuana--Ohio, Michigan. MMWR Morb Mortal Wkly Rep. 1981; 30(7): 77–9. PubMed Abstract\n\nHealth DDoE: Special Concerns Associated with Marijuana Extractions, Concentrations, Infusions, and Infused Foods. Public Health Inspections Division. 2016. Reference Source\n\nMarcu J: Cannabis Inflorescence Cannabis spp. Standards of Identity, Analysis, And Quality Control. American Herbal Pharmacopoeia. 2013. Reference Source\n\nSchoch CL, Seifert KA, Huhndorf S, et al.: Nuclear ribosomal internal transcribed spacer (ITS) region as a universal DNA barcode marker for Fungi. Proc Natl Acad Sci U S A. 2012; 109(16): 6241–6. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchoch CL, Robbertse B, Robert V, et al.: Finding needles in haystacks: linking scientific names, reference specimens and molecular data for Fungi. Database (Oxford). 2014; 2014: pii: bau061. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRam JL, Karim AS, Sendler ED, et al.: Strategy for microbiome analysis using 16S rRNA gene sequence analysis on the Illumina sequencing platform. Syst Biol Reprod Med. 2011; 57(3): 162–70. PubMed Abstract | Publisher Full Text\n\nMcKernan KJ, Spangler J, Helbert Y, et al.: DREAMing of a patent-free human genome for clinical sequencing. Nat Biotechnol. 2013; 31(10): 884–7. PubMed Abstract | Publisher Full Text\n\nMcKernan KJ, Spangler J, Zhang L, et al.: Expanded genetic codes in next generation sequencing enable decontamination and mitochondrial enrichment. PLoS One. 2014; 9(5): e96492. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOunit R, Lonardi S: Higher classification sensitivity of short metagenomic reads with CLARK-S. Bioinformatics. 2016; pii: btw542. PubMed Abstract | Publisher Full Text\n\nKõljalg U, Nilsson RH, Abarenkov K, et al.: Towards a unified paradigm for sequence-based identification of fungi. Mol Ecol. 2013; 22(21): 5271–7. PubMed Abstract | Publisher Full Text\n\nEdgar RC: UPARSE: highly accurate OTU sequences from microbial amplicon reads. Nature methods. 2013; 10(10): 996–8. PubMed Abstract | Publisher Full Text\n\nEdgar RC: Search and clustering orders of magnitude faster than BLAST. Bioinformatics. 2010; 26(19): 2460–1. PubMed Abstract | Publisher Full Text\n\nMcMurdie PJ, Holmes S: Waste not, want not: why rarefying microbiome data is inadmissible. PLoS Comput Biol. 2014; 10(4): e1003531. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHasan MM, Rahman SM, Kim GH, et al.: Antagonistic potentiality of Trichoderma harzianum towards seed-borne fungal pathogens of winter wheat cv. Protiva in vitro and in vivo. 
J Microbiol Biotechnol. 2012; 22(5): 585–91. PubMed Abstract | Publisher Full Text\n\nAbou-Zeid AM, Altalhi AD, Abd El-Fattah RI: Fungal control of pathogenic fungi isolated from wild plants in Taif Governorate, Saudi Arabia. Roum Arch Microbiol Immunol. 2007; 66(3–4): 90–6. PubMed Abstract\n\nWurzer J: The Science of Cannabis. CannMed. 2016. Reference Source\n\nOunit R, Wanamaker S, Close TJ, et al.: CLARK: fast and accurate classification of metagenomic and genomic sequences using discriminative k-mers. BMC Genomics. 2015; 16: 236. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKerepesi C, Grolmusz V: Evaluating the Quantitative Capabilities of Metagenomic Analysis Software. Curr Microbiol. 2016; 72(5): 612–6. PubMed Abstract | Publisher Full Text\n\nHemashenpagam N, Selvaraj T: Effect of arbuscular mycorrhizal (AM) fungus and plant growth promoting rhizomicroorganisms (PGPR's) on medicinal plant Solanum viarum seedlings. J Environ Biol. 2011; 32(5): 579–83. PubMed Abstract\n\nZeiller M, Rothballer M, Iwobi AN, et al.: Systemic colonization of clover (Trifolium repens) by Clostridium botulinum strain 2301. Front Microbiol. 2015; 6: 1207. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHerrera ML, Vallor AC, Gelfond JA, et al.: Strain-dependent variation in 18S ribosomal DNA copy numbers in Aspergillus fumigatus. J Clin Microbiol. 2009; 47(5): 1325–32. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBlack J, Dean T, Byfield G, et al.: Determining fungi rRNA copy number by PCR. J Biomol Tech. 2013; 24(1): 32–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMarín S, Sanchis V, Sáenz R, et al.: Ecological determinants for germination and growth of some Aspergillus and Penicillium spp. from maize grain. J Appl Microbiol. 1998; 84(1): 25–36. PubMed Abstract | Publisher Full Text\n\nBrand A: Hyphal growth in human fungal pathogens and its role in virulence. Int J Microbiol. 2012; 2012: 517529. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nNordby T: Rapid Quantitative Enumeration of Yeasts and Molds. 2013. Reference Source\n\nMcClenny N: Laboratory detection and identification of Aspergillus species by microscopic observation and culture: the traditional approach. Med Mycol. 2005; 43(Suppl 1): S125–8. PubMed Abstract | Publisher Full Text\n\nRemington TL, Fuller J, Chiu I: Chronic necrotizing pulmonary aspergillosis in a patient with diabetes and marijuana use. CMAJ. 2015; 187(17): 1305–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChusid MJ, Gelfand JA, Nutter C, et al.: Letter: Pulmonary aspergillosis, inhalation of contaminated marijuana smoke, chronic granulomatous disease. Ann Intern Med. 1975; 82(5): 682–3. PubMed Abstract | Publisher Full Text\n\nCescon DW, Page AV, Richardson S, et al.: Invasive pulmonary aspergillosis associated with marijuana use in a man with colorectal cancer. J Clin Oncol. 2008; 26(13): 2214–5. PubMed Abstract | Publisher Full Text\n\nVonberg RP, Gastmeier P: Nosocomial aspergillosis in outbreak settings. J Hosp Infect. 2006; 63(3): 246–54. PubMed Abstract | Publisher Full Text\n\nFujikawa H, Itoh T: Tailing of thermal inactivation curve of Aspergillus niger spores. Appl Environ Microbiol. 1996; 62(10): 3745–9. PubMed Abstract | Free Full Text\n\nCarvajal M, Bolaños A, Rojo F, et al.: Aflatoxin M1 in pasteurized and ultrapasteurized milk with different fat content in Mexico. J Food Prot. 2003; 66(10): 1885–92. PubMed Abstract\n\nAfzal R, Shinwari ZK, Iqrar I: Selective Isolation and Characterization of Agriculturally Beneficial Endophytic Bacteria from Wild Hemp using Canola. Pak J Bot. 2015; 47(5): 1999–2008. Reference Source\n\nUlhoa CJ, Peberdy JF: Regulation of chitinase synthesis in Trichoderma harzianum. J Gen Microbiol. 1991; 137(9): 2163–9. PubMed Abstract | Publisher Full Text\n\nHarman GE: Overview of Mechanisms and Uses of Trichoderma spp. Phytopathology. 2006; 96(2): 190–4. 
PubMed Abstract | Publisher Full Text\n\nBarbee J: Medical Marijuana Pesticide List (Revised 05/02/2016). State of Nevada Department of Agriculture. 2016. Reference Source\n\nFDA: Genome Trakr Network. Reference Source\n\nVega FE, Posada F, Peterson SW, et al.: Penicillium species endophytic in coffee plants and ochratoxin A production. Mycologia. 2006; 98(1): 31–42. PubMed Abstract | Publisher Full Text\n\nKhan AL, Al-Harrasi A, Al-Rawahi A, et al.: Endophytic Fungi from Frankincense Tree Improves Host Growth and Produces Extracellular Enzymes and Indole Acetic Acid. PLoS One. 2016; 11(6): e0158207. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKhan SA, Hamayun M, Yoon H, et al.: Plant growth promotion and Penicillium citrinum. BMC Microbiol. 2008; 8: 231. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFadrosh DW, Ma B, Gajer P, et al.: An improved dual-indexing approach for multiplexed 16S rRNA gene sequencing on the Illumina MiSeq platform. Microbiome. 2014; 2(1): 6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPinto AJ, Raskin L: PCR biases distort bacterial and archaeal community structure in pyrosequencing datasets. PLoS One. 2012; 7(8): e43093. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHugenholtz P: Exploring prokaryotic diversity in the genomic era. Genome Biol. 2002; 3(2): REVIEWS0003. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcKernan K, Spangler J, Helbert Y, et al.: Dataset 1 in: Metagenomic Analysis of Medicinal Cannabis. Samples; Pathogenic Bacteria, Toxigenic Fungi, and Beneficial Microbes Grow in Culture-Based Yeast and Mold Tests. F1000Research. 2016. Data Source"
}
|
[
{
"id": "16862",
"date": "10 Oct 2016",
"name": "Ethan B. Russo",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a very interesting, well written and designed account comparing the accuracy and utility of genetic microbial testing as compared to standard microbiological culture techniques. All aspects of study design, methods and conclusions are well explained and defended, and should easily allow replication if comparable techniques are applied.\nI would suggest expansion of the study's implications in the abstract if the word count will permit this.\nIn 2005, Vancouver Coastal Health in British Columbia reported transmission of meningococcal cases by sharing of joints, and perhaps this pathogen deserves scrutiny given its ubiquity in young adults very likely to be engaging in social cannabis usage.\nThe legal analysis permitting cross-border transmission of DNA from cannabis material has important implications for greater adoption of similar analytical techniques, which certainly seems warranted given the advantages in accuracy in distinguishing beneficial, commensal and symbiotic microbiota from pathogens, and the speed of this approach to the issue.",
"responses": []
},
{
"id": "16857",
"date": "17 Oct 2016",
"name": "Justin Fischedick",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe purpose of this study was to investigate the composition of microorganisms found on cannabis samples and compare the ability of different culture based testing platforms with a qPCR method. Although the study provides some valuable data into some of the short comings of culture based methods it has some experimental design weaknesses that make it difficult to draw strong conclusions from this data set.\n\nIntroduction: Since the purpose of the study is to discuss the difference between a qPCR based microbial testing platform with culture based methods the introduction should focus more on discussing this in other industries. For example it's becoming well known that only a small percentage of organisms that exist in nature are easily cultured on the most common forms of media used. Rapid advances in sequencing are allowing metagenomic analysis of soil and plant microbiomes which also demonstrates the limitations of culturing methods. Issues like specificity between qPCR and culture based methods should be highlighted.\n\nMethods: Plant material - Nowhere in the methods section is any information provided about the cannabis plant material. Was it cannabis flowers? Were they dried? Was the sample homogenized in anyway? Information about all the samples used in this study should be summarized in a table or in a section within the methods part of the manuscript.\n\nResults:\n\nTable 1- This table highlights one of the main criticisms I have with this study. 
There are no replicates. These results seem to be based off a single analysis of each sample. Therefore we can't conclude anything about the reproducibility of the qPCR platform compared to the other platforms.\n\nIt is also interesting to note that most of the culture based methods detected levels of fungi that would be considered failures while most of the qPCR samples detected only low levels of fungal DNA.\n\nIt is difficult to follow from reading through the text of the manuscript which samples were analyzed by metagenomic sequencing. Every sample analyzed by metagenomic sequencing and a summary of their results, in terms of what species were detected and their approximate amounts, should be summarized somewhere in the manuscript for ease of reference and completeness of data presentation. Why those samples were chosen should be discussed.\n\nThroughout the results section numbers of samples are discussed but we don't know if those are the same samples or which samples shown in Table 1.\nFor example page 7 paragraph 2: \"First, we observed that although Aspergillus species were present in 15 plant samples (average proportion: 25% of classified reads), they were only detected at low levels in three samples after culturing on either 3M or BMX media (average proportion: 1.1% or 0.4% of classified reads, respectively).\". Which 15 plant samples? This kind of vague reference to samples needs to be corrected and be made more clear.\n\nPage 6, sequence reproducibility: \"14 samples\". Which samples?\n\nPage 6, paragraph 4: \"Pairs of samples from three of the seven growers....\". Which detected fungi were commonly found and correlated?\n\nPage 7, Figure 1: Why just discuss 2 and 14? Why not 7-13 or 15-16?\n\nTo summarize I would like to see more error bars (or +-) from replicates and more complete summaries of data and samples information.",
"responses": []
},
{
"id": "16859",
"date": "18 Oct 2016",
"name": "Philippe Henry",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nWe would like to extend our sincere gratitude for the opportunity to provide an open peer review for the work of McKernan and colleagues on the Cannabis microbiome and uses of metagenomics to shed light on the microbial complement of the Cannabis phyllosphere. As strong proponents of open science, we engage to provide an objective assessment of the work presented here and to make suggestions aimed at improving the clarity and readability of the present work.\nThe microbiome (the collection of microbial genomes present on an organism or in an environment) has emerged as an additional dimension (in addition to genomic, epigenomic, metabolomic and phenotypic data…) from which one can harness cryptic information that may contribute to a particular biological phenomenon. Novel technical advances have enabled a metagenomic approach in which one can isolate and enumerate bacterial and fungal genus/species from environmental samples. In their paper, McKernan et al. 
compare the performance of traditional culture based techniques with commercial qPCR kits in terms of accuracy and ability to detect different types of microbes, with an emphasis on prokaryotic organisms, which was so far underrepresented in previous published work (including a recent F1000 research paper by the same lead author).\nWe support the authors’ work and understand that as pioneers in the Cannabis microbiome space, they are currently laying the foundation on which further development of assays will likely rely strongly. As such, much of our comments relate to improving the transparency of their results. We pose several questions/comments, which the author may choose to incorporate into updated versions of this paper. Overall, the paper is very well written and we do not have any editorial suggestions, except for the spelling of Biomerieux, which in one instance requires the accent aigu: “é”. Below, please find minor comments, which we would like to authors to consider:\nWe are in agreement with review 1 (Ethan Russo) that the abstract could be improved if word limit permits. We particularly think that the concluding statement could incorporate a stronger statement about the application of their approach in the Cannabis industry.\n\nA short statement on why the comparison between culture-based platforms and DNA-based detection is relevant (e.g. in “Background”) would be of general interest to the readership.\n\nIntroduction\nWe found the introduction to lack a common thread and rather abruptly “jumps” between the primary objectives of the paper: methodological comparison between different microorganism detection techniques, microbe-microbe interactions and health implications: e.g. in paragraph 1 the authors focus on microbiome-ecological considerations. 
Then again in paragraph 3, they bring up plant-microbe/microbe-microbe interaction although the rest of this paragraph is focus on methodologies.\n\nThe statement ”…no studies have examined the impact of beneficials…” does not make much sense there. Perhaps the considerations about Trichoderma could be saved for the discussion.\n\nSome brief background on the two culture platforms (3M, BMX) would help frame the need for other novel technologies in microbial detection.\n\nMethods\nAs a general comment to all sections from hereon, it would be helpful to have the same sub-headings (as much as possible) logically flow from methods to results and into the discussion points:\ne.g.\n\nDNA-extraction from plants Platform culturing qPCR Metagenomic / sequencing\n\n(Currently, the last paragraph of the Introduction gives an overview of the methods, the first technique that is mentioned is NGS, then qPCR; in the Methods and Results it is first qPCR and then NGS…)\n\nThe above-mentioned will help disentangle some of the concepts introduced here, and send a clearer message to the readership: why and when to use qPCR/metagenomics instead of conventional approaches. If there is sufficient evidence that the novel approach outperforms the old is another question that seems rather elusive in the current paper.\nIt would be valuable to share some information about their standard curve and how they derived their Cq values of 21 and 26 cycles for TYM and TAC assays respectively. This brings up the point that a more convincing comparison of culture based and qPCR assays need to be provided in order for the readership to assess if/when one should be used over another.\n\nIt would be useful to share the name of the R package used to undertake the USEARCH OTUs analyses.\n\nThe authors should consider perhaps using other multivariate statistics than bivariate correlation coefficients. 
Another angle to tackle this is to implement diversity analyses on the microbial community data using either Shannon’s diversity index or some other metric such as alpha- and beta- diversity in each sample. While sample size is likely limiting, are there other similarities between samples for common origins? Or grown in similar conditions?\n\nPlease expand on what you mean by “specificity analysis” and how this was undertaken.\n\nResults\nThe title of the first section is misleading: “qPCR and colony counts before and after”; i.e. there is no colony count before (refer to above comment about unifying sub-heading in paper sections).\n\nTable 1 could be presented in a clearer way: Table description and content (the column headers) are redundant; too much text in the table description; instead of sample nr., give some info in the sample identification (e.g. origin); visually separate TYM and TAC.\n\nThe presentation of the results (using excel bar plots), while understandable, is not that efficient at presenting the data at hand. Without overstepping, we suggest looking at multivariate plots that would be more suited to drive their points home.\n\nDiscussion\nIt seems counterintuitive that qPCR, being more sensitive than plating approach, would fail the lowest number of samples out of all approaches: Is the BMX positive bias toward C. botulinum a false positive or is the lack of C. botulinum detection by qPCR a false negative?\n\nIn that vein, it would be helpful to describe your strategy to assess false positives, i.e. how many negative controls were implemented? Any negative control with botulism?\n\nFailure thresholds are subjective in nature, please expand on how the Cq threshold is superior, what microbial load (e.g. 
ng of DNA) do each of the two Cq thresholds correspond to?\n\nWe found that while a large and varied bacterial assemblage was identified here, it would be important to note that modern Cannabis such as the 15 samples presented here have likely gone through several genetic and microbiotic bottlenecks. A recent paper on the Agave microbiome1 demonstrates the paucity of the microbiome in domesticated/farmed plants compared to wild relatives, likely due to the pervasive effects of monoculture. While the Cannabis domestication process is convoluted and masked by prohibition, it is likely that the same pattern is observed in Cannabis. Characterizing the genetic profiles of Cannabis, along with the microbiome of wild Cannabis accessions will likely yield enhanced inference in terms of the underlying mechanisms related to plant growth and disease tolerance.\n\nA larger part of the discussion should be dedicated to the community composition shift before and after culturing. Especially some considerations about the biological relevance of this shift: i.e. are the pathogens that grow on the plate relevant for human health/plant growth considerations? Or the other way round: are the pathogens detected before culturing relevant for human health/plant growth issues? It would be good to more explicitly separate what the authors think are artifacts caused by different methodologies (community shifts) with biologically relevant phenomena.\n\nIf the authors found polymorphisms in OTUs, they may want to suggest the application of the Cannabis microbiome to provide higher resolution to clustering exercises in highly related or poly-hybridized Cannabis accessions. This may also be used to trace the origin of particular dispensary samples to a cultivator or methodology of plant growth as using hydroponics, soil, aquaponics, etc. will likely influence the Cannabis rhizospheres and phyllospheres.\n\nWhile the authors discuss the presence of C. botulinum and Aspergillum spp. 
in emulsified oils, perhaps addressing the use of extraction technologies such as butane, propane, Nitrogen, CO2, Rosin tech and its putative impact on the presence of unwanted microorganisms may be a topic of general interest. In Canada until recently, only Cannabis flowers were prescribed as medicinal Cannabis. This study highlights some drawbacks of using this type of Cannabis for medical purposes, particularly when used in immune-compromised individuals, and indirectly supports the use of Cannabis extracts that can be dosed effectively with minimal risks of exposure to toxicogenic microbes.\n\nWe sincerely hope that the authors will find our review useful and we remain available for further discussion through the F1000 research platform.\n\nKind regards, Philippe Henry and Lukas Wille",
"responses": []
},
{
"id": "16858",
"date": "07 Nov 2016",
"name": "Jahan P Marcu",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis article represents an area of research that needs more attention. My only concerns are minor, and are regarding the figures in the article. The figures do not have any error bars/indication of replicability. It would be great if there were more reproducibility indicated within the figures, as this article will be highly read and potentially utilized in a growing industry.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2471
|
https://f1000research.com/articles/5-1518/v1
|
27 Jun 16
|
{
"type": "Method Article",
"title": "VISIT-TS version 2: A multimedia tool for population studies on tic disorders",
"authors": [
"M. Jonathan Vachon",
"Catherine W. Striley",
"Mollie R. Gordon",
"Miriam L. Schroeder",
"Emily C. Bihun",
"Jonathan M. Koller",
"Kevin J. Black",
"M. Jonathan Vachon",
"Catherine W. Striley",
"Mollie R. Gordon",
"Miriam L. Schroeder",
"Emily C. Bihun",
"Jonathan M. Koller"
],
"abstract": "Population-based assessment of Tourette syndrome (TS) and other tic disorders produces a paradox. On one hand, ideally diagnosis of tic disorders requires expert observation. In fact, diagnostic criteria for TS explicitly require expert assessment of tics for a definite diagnosis. On the other hand, large-scale population surveys with expert assessment of every subject are impracticable. True, several published studies have successfully used expert assessment to find tic prevalence in a representative population (e.g. all students in a school district). However, extending these studies to larger populations is daunting.\nWe created a multimedia tool to demonstrate tics to a lay audience, discuss their defining and common attributes, and address features that differentiate tics from other movements and vocalizations. A first version was modified to improve clarity and to include a more diverse group in terms of age and ethnicity. The result is a tool intended for epidemiological research. It may also provide additional benefits, such as more representative minority recruitment for other TS studies and increased community awareness of TS.",
"keywords": [
"Tourette syndrome",
"prevalence",
"epidemiology",
"method",
"video-audio media",
"video recording"
],
"content": "Introduction\n\nSome important questions in Tourette Syndrome (TS) require large-scale epidemiological studies. To give one example, studies have not yet had the power to definitively establish whether TS is equally common in people of African versus European descent. In the U.S., although diagnosis and treatment are about twice as common in European Americans (CDC, 2009), three prior studies in the U.S., though limited in various ways, all found tics to be more common in minorities (Costello et al., 1996; Lapouse & Monk, 1964; [Table 4]; personal communication Costello EJ to KJB, 1999; personal communication Peterson BS to KJB, 2008; Peterson et al., 2001). The results may differ so dramatically because of true genetic or epigenetic differences between racial groups (Robertson et al., 2009), or because social determinants of health care create barriers to diagnosis or treatment that create an artifactual difference in apparent prevalence (American Psychiatric Association, 2013, under Tic Disorders/Culture-Related Diagnostic Issues; CDC, 2009; Olfson et al., 2011). Settling this question will require large-scale prevalence studies that recruit an adequate, representative sample of minority populations.\n\nCubo (2012) reviews several factors that complicate epidemiological research on TS. One is that such studies generally must rely on assessments by lay interviewers. Although that approach has been very useful for psychiatric epidemiology in general, the validity may reasonably be questioned in the case of TS. 
There can be difficulties in conveying adequate descriptions of movements by words alone; probable miscategorization or failure to recognize some abnormal movements by both subjects and lay interviewers; the broad differential diagnosis of tics, including other movement disorders and normal movements; and misinterpretation of typical tics due to their intermittent nature, suppressibility and fluctuating severity over time or in response to the environment.\n\nWe were especially concerned that some respondents with tics, or whose children had tics, might not correctly interpret written descriptions of tics but would recognize the tics if they saw them. Supporting the potential importance of this concern, epidemiological studies that included expert examination (Comings et al., 1990; Cubo et al., 2011; Hornsey et al., 2001; Jin et al., 2005; Khalifa & von Knorring, 2003; Khalifa & von Knorring, 2005; Kurlan et al., 1994; Lanzi et al., 2004; Mason et al., 1998; Wang & Kuo, 2003) generally report a several-fold higher prevalence of tic disorders than do other epidemiological studies (CDC, 2009; Scahill et al., 2014).\n\nTo address these issues, we developed a multimedia screening interview to enhance population-based ascertainment of tic disorders by lay interviewers (“VISIT-TS”, Gordon et al., 2010). A video presented and discussed typical tics, and addressed a few difficulties in differential diagnosis, and then presented questions to gather the information required for diagnosis by DSM-IV-TR (American Psychiatric Association, 2000) and DSM-5 (American Psychiatric Association, 2013). After initial testing and application (unpublished report, Striley CW, Black KJ, Kelso N, and Vagelakos L), we revised the instrument. Here we describe the approach we took and the result: VISIT-TS v. 
2.\n\n\nMethods\n\nWe first reviewed previous methods including the Yale Child Study Center questionnaire (Findley et al., 1999; Jagger et al., 1982), the Kiddie SADS semi-structured interview (K-SADS-PL) (Kaufman et al., 1997) and the interviews used by Apter et al. (1993); Gillberg & Rasmussen (1982, Appendix); Hornsey et al. (2001) and Mason et al. (1998) who used the Apter questions and the National Hospital Interview Schedule for GTS (Rickards & Robertson, 2003; Robertson & Eapen, 1996); Appendix I in Khalifa & von Knorring (2003); and Table 1 in Linazasoro et al. (2006). We also reviewed the Diagnostic Confidence Index (Robertson et al., 1999), the YGTSS (Storch et al., 2005), and the parent and child self-report forms used by the Tourette Syndrome Association International Consortium for Genetics (1999). An expert in psychiatric epidemiology (CWS) developed the questions that would be posed, in consultation with a movement-disorders-trained neuropsychiatrist (KJB). The interview was designed to address both current (past month) and lifetime symptoms and included information needed for TSSG, DSM-IV-TR and DSM-5 criteria for TS.\n\nWe wrote a script addressing the following aims: demonstrate tics, discuss their defining and common attributes, and address features that differentiate tics from other movements and vocalizations. We then selected video clips from patients and research volunteers who gave written permission to re-use their video separately from patient care or the research study they had participated in. We also obtained permission from people with tics to re-use selected video clips that they had already made publicly available on YouTube. The final video was produced by Ty Travis (San Tan Valley, Arizona, USA). 
We dubbed the final product VISIT-TS, for “Video-Integrated Screening Instrument for Tics and Tourette Syndrome” (Gordon et al., 2010).\n\nThe first version of VISIT-TS was used in an initial reliability and validity study that provided experience and initial feedback from interviewees and staff (unpublished report, Striley CW, Black KJ, Kelso N, and Vagelakos L). It was also shared with about a dozen other movement disorders experts and we reviewed their feedback. In response to this initial experience and feedback, we made many changes, including new video clips, thereby reducing the amount of time the narrator is shown and showing more diversity in ethnicity and age. We added and improved graphics, including written text while examples of tics appear in the background. To avoid confusion, we removed videos showing examples of non-tic movements, and we eliminated medical terms unfamiliar to the general public, such as chorea. Finally, we restored an unintentionally omitted question on lower facial tics. We call the revised product VISIT-TS v. 2.\n\n\nResults\n\nThe revised VISIT-TS multimedia tool includes almost 100 video clips defining and demonstrating tics including simple and complex motor tics as well as simple and complex vocal tics, edited to a length of 5 minutes. Following the clips, 16 questions are presented in written and spoken form, one at a time, accompanied in most cases by brief video of the phenomenon being ascertained (see Appendix 1; question 4 of the video, at about 6:09, is a good brief demonstration). The questions take another 5 minutes. The video clips demonstrate adults and children in similar numbers, both sexes (male:female ≈ 5:4), and include some ethnic diversity (about 1 in 8 clips are Hispanic or non-white). VISIT-TS v. 
2 is freely available for noncommercial use at https://zenodo.org/collection/user-kjb or at http://dx.doi.org/10.5281/zenodo.55604.\n\n\nDiscussion\n\nThis approach is based on the premise that survey respondents will respond more accurately about tics in themselves (or their children) after the interviewer shows them a brief video about tics than they would if only asked about history by questionnaire or by cross-sectional lay observation. Because tics can come and go, can be suppressed, and often resemble intentional movement or vocalizations, diagnosis of tic disorders can be challenging (Black et al., 2016; Cubo, 2012).\n\nSome data are available to judge the sensitivity of lay diagnostic instruments for tic disorders. In two studies, about half of the children who had previously been diagnosed with TS were missed by research screening: 1 of 2 in Landgren et al., 1996 and 8 of 15 in Snider et al., 2002. Conversely, routine clinical assessment for tics is also insensitive; Kadesjo & Gillberg (2000) report that a tic diagnosis had been considered during child psychiatric treatment in only 1 of 18 children with TS. Khalifa & von Knorring (2003) examined the sensitivity of their questionnaire but only by comparing questionnaire responses from parents to those from teachers. It appears Wang & Kuo (2003) collected physician examination data on questionnaire-negative children, but those data were not reported. Linazasoro et al. (2006) do not specify whether any of the tics diagnosed by a physician observing a classroom of students for 20 minutes were missed by parent or teacher questionnaires. Stefanoff et al. (2008) diagnosed a tic disorder in 6% of children whose parents and teachers noticed no tics; this is more remarkable given that the diagnosis rate in screen-positive children was only 18%. Cubo et al. (2011) found sensitivities of 36%–73% for questionnaires completed by teachers, observers or parents. 
In a recent study, a semistandardized diagnostic interview (the DISC) captured only about half the cases of TS, and there was little agreement between DISC results and expert clinician diagnosis (Lewin et al., 2014).\n\nThe most detailed data on the sensitivity of questionnaires for tic diagnosis come from the study of Mason et al. (1998). They gave questionnaires containing the 4 tic screening questions of Apter et al. (1993) to students, parents and teachers. They also screened for tics with direct classroom observation by Dr. Mason, a psychologist trained in tic detection at the Queen Square, London, TS center; she watched each classroom for an hour, 2 minutes per student. To confirm the diagnosis, Mason then directly examined all 16 consenting screen-positive students in a traditional clinical setting. Importantly, Mason also examined 8 students randomly chosen from screen negatives, i.e. those who had no tics reported by themselves, parents, or teachers, and no tics observed in the classroom. Remarkably, 3 of the 8 had at least one tic when examined directly, counted only if it had been present for at least a year by history! This very high rate of missed chronic tic disorders (37.5%) suggests that traditional questionnaires and interviews are insufficiently sensitive. VISIT-TS was designed to improve sensitivity by making sure subjects and parents have seen typical tics on video before answering questions about them.\n\nLinazasoro & colleagues (2006) used a method somewhat similar to the VISIT-TS approach, i.e., they showed a videotape of tics as part of an initial lecture to parents and teachers, followed by a survey that included a short written description of tics. Independently, “all children were directly observed in the classroom by an expert clinician in the field of tics who diagnosed tics based exclusively on the characteristics of the movements”, with a limit of 20 minutes’ observation per classroom (p. 2107). 
However, the authors note limitations of their work including the fact that children were observed collectively, for a relatively brief period of time, and while engaged in school work, when tics may have been suppressed. The questionnaires actually identified more children (98) than the expert (57), suggesting either that parents and teachers overdiagnosed some movements as tics, or that they were describing tics present in the past but no longer present, or that the classroom observation was not an adequately sensitive clinical comparison. A videotape demonstrating tics was released (Tourette Syndrome Association, 1990), but it was intended for a professional audience rather than for epidemiological studies.\n\nThe VISIT-TS also has limitations. The DSM and TSSG criteria explicitly require application by properly trained experts, so VISIT-TS is primarily intended as a screening tool rather than as a substitute for clinical expertise. The sensitivity of VISIT-TS has not been reported. Nevertheless, a 5- to 10-minute video-illustrated questionnaire is probably a reasonable compromise for epidemiological or other tic studies that require screening large population samples for tic disorders.\n\n\nData availability\n\nThe video can be found here: http://dx.doi.org/10.5281/zenodo.55604 (Vachon et al., 2016).",
"appendix": "Author contributions\n\n\n\nMJV assembled video clips and wrote the first draft and KJB edited it. CWS and KJB drafted the questions. MJV, MRG and ECB identified video clips. CWS, MRG, JMK and KJB created the first version of the video. MJV, CWS, MRG, MLS and KJB contributed to the spoken script for the video. MJV and JMK edited video clips. All authors approved the final draft.\n\n\nCompeting interests\n\n\n\nVISIT-TS v. 2 is published under a CC BY-NC-ND license, i.e. freely available for noncommercial uses. Funds from any future commercial application of VISIT-TS are intended to benefit research on or clinical care for tic disorders.\n\n\nGrant information\n\nFunded by research grants from the Tourette Syndrome Association to CWS and by the National Institutes of Health (K24 MH087913, to KJB).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nThe authors gratefully acknowledge all those who allowed their or their children’s video clips to be included in VISIT-TS. Thanks to Ben Holt, Johanna Hartlein, Alison Stempel and Daniel Murray for work on this or the previous version, and to Ty Travis for video production.\n\n\nSupplementary material\n\n\n\n\nReferences\n\nAmerican Psychiatric Association: Diagnostic and statistical manual of mental disorders. (4th ed., Text Revision). Washington, DC: American Psychiatric Association. 2000. Reference Source\n\nAmerican Psychiatric Association: Diagnostic and statistical manual of mental disorders. (5th ed). Arlington, VA: American Psychiatric Association. 2013. Publisher Full Text\n\nApter A, Pauls DL, Bleich A, et al.: An epidemiologic study of Gilles de la Tourette's syndrome in Israel. Arch Gen Psychiatry. 1993; 50(9): 734–8. 
PubMed Abstract | Publisher Full Text\n\nBlack KJ, Black ER, Greene DJ, et al.: Provisional Tic Disorder: What to tell parents when their child first starts ticcing [version 1; referees: 3 approved]. F1000Res. 2016; 5: 696. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCenters for Disease Control and Prevention (CDC): Prevalence of diagnosed Tourette Syndrome in persons aged 6–17 Years - United States, 2007. MMWR Morb Mortal Wkly Rep. 2009; 58(21): 581–585. PubMed Abstract\n\nComings DE, Himes JA, Comings BG: An epidemiologic study of Tourette's syndrome in a single school district. J Clin Psychiatry. 1990; 51(11): 463–9. PubMed Abstract\n\nCostello EJ, Angold A, Burns BJ, et al.: The Great Smoky Mountains Study of Youth. Goals, design, methods, and the prevalence of DSM-III-R disorders. Arch Gen Psychiatry. 1996; 53(12): 1129–36. PubMed Abstract | Publisher Full Text\n\nCubo E: Review of prevalence studies of tic disorders: methodological caveats. Tremor Other Hyperkinet Mov (N Y). 2012; 2: pii: tre-02-61-349-1. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCubo E, Sáez Velasco S, Delgado Benito V, et al.: Validation of screening instruments for neuroepidemiological surveys of tic disorders. Mov Disord. 2011; 26(3): 520–526. PubMed Abstract | Publisher Full Text\n\nFindley DB, King RA, Leckman JF: Yale Child Study Center Tourette’s syndrome–obsessive-compulsive disorder specialty clinic symptom questionnaire. Appendix 1. in Leckman JF, Cohen DJ: Tourette’s Syndrome—Tics, Obsessions, and Compulsions: Developmental Psychopathology and Clinical Care. New York: Wiley, 1999; 415–454.\n\nGillberg C, Rasmussen P: Perceptual, motor and attentional deficits in seven-year-old children: background factors. Dev Med Child Neurol. 1982; 24(6): 752–70. PubMed Abstract | Publisher Full Text\n\nGordon M, Striley C, Koller JM, et al.: VISIT-TS: Video-Integrated Screening Instrument for Tics and Tourette Syndrome. 
Annual meeting, American Neuropsychiatric Association, Tampa, FL, 17–20 March 2010. J Neuropsychiatry Clin Neurosci.22(2): 246. Publisher Full Text\n\nHornsey H, Banerjee S, Zeitlin H, et al.: The prevalence of Tourette syndrome in 13–14-year-olds in mainstream schools. J Child Psychol Psychiatry. 2001; 42(8): 1035–9. PubMed Abstract | Publisher Full Text\n\nJagger J, Prusoff BA, Cohen DJ, et al.: The epidemiology of Tourette’s syndrome: a pilot study. Schizophr Bull. 1982; 8(2): 267–278. PubMed Abstract | Publisher Full Text\n\nJin R, Zheng RY, Huang WW, et al.: Epidemiological survey of Tourette syndrome in children and adolescents in Wenzhou of P.R. China. Eur J Epidemiol. 2005; 20(11): 925–7. PubMed Abstract | Publisher Full Text\n\nKadesjö B, Gillberg C: Tourette’s disorder: epidemiology and comorbidity in primary school children. J Am Acad Child Adolesc Psychiatry. 2000; 39(5): 548–55. PubMed Abstract | Publisher Full Text\n\nKano Y, Ohta M, Nagai Y, et al.: Association between Tourette syndrome and comorbidities in Japan. Brain Dev. 2010; 32(3): 201–7. PubMed Abstract | Publisher Full Text\n\nKaufman J, Birmaher B, Brent D, et al.: Schedule for Affective Disorders and Schizophrenia for School-Age Children-Present and Lifetime Version (K-SADS-PL): initial reliability and validity data. J Am Acad Child Adolesc Psychiatry. 1997; 36(7): 980–8. PubMed Abstract | Publisher Full Text\n\nKhalifa N, von Knorring AL: Prevalence of tic disorders and Tourette syndrome in a Swedish school population. Dev Med Child Neurol. 2003; 45(5): 315–9. PubMed Abstract | Publisher Full Text\n\nKhalifa N, von Knorring AL: Tourette syndrome and other tic disorders in a total population of children: clinical assessment and background. Acta Paediatr. 2005; 94(11): 1608–14. PubMed Abstract | Publisher Full Text\n\nKurlan R, Whitmore D, Irvine C, et al.: Tourette's syndrome in a special education population: a pilot study involving a single school district. Neurology. 
1994; 44(4): 699–702. PubMed Abstract | Publisher Full Text\n\nLanzi G, Zambrino CA, Termine C, et al.: Prevalence of tic disorders among primary school students in the city of Pavia, Italy. Arch Dis Child. 2004; 89(1): 45–7. PubMed Abstract | Free Full Text\n\nLandgren M, Pettersson R, Kjellman B, et al.: ADHD, DAMP and other neurodevelopmental/psychiatric disorders in 6-year-old children: epidemiology and co-morbidity. Dev Med Child Neurol. 1996; 38(10): 891–906. PubMed Abstract | Publisher Full Text\n\nLapouse R, Monk MA: Behavior deviations in a representative sample of children: variation by sex, age, race, social class and family size. Am J Orthopsychiatry. 1964; 34: 436–46. PubMed Abstract | Publisher Full Text\n\nLewin AB, Mink JW, Bitsko RH, et al.: Utility of the diagnostic interview schedule for children for assessing Tourette syndrome in children. J Child Adolesc Psychopharmacol. 2014; 24(5): 275–84. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLinazasoro G, Van Blercom N, de Zárate CO: Prevalence of tic disorder in two schools in the Basque country: Results and methodological caveats. Mov Disord. 2006; 21(12): 2106–2109. PubMed Abstract | Publisher Full Text\n\nMason A, Banerjee S, Eapen V, et al.: The prevalence of Tourette syndrome in a mainstream school population. Dev Med Child Neurol. 1998; 40(5): 292–6. PubMed Abstract | Publisher Full Text\n\nOlfson M, Crystal S, Gerhard T, et al.: Patterns and correlates of tic disorder diagnoses in privately and publicly insured youth. J Am Acad Child Adolesc Psychiatry. 2011; 50(2): 119–131. PubMed Abstract | Publisher Full Text\n\nPeterson BS, Pine DS, Cohen P, et al.: Prospective, longitudinal study of tic, obsessive-compulsive, and attention-deficit/hyperactivity disorders in an epidemiological sample. J Am Acad Child Adolesc Psychiatry. 2001; 40: 685–695. 
PubMed Abstract | Publisher Full Text\n\nRickards H, Robertson M: A controlled study of psychopathology and associated symptoms in Tourette syndrome. World J Biol Psychiatry. 2003; 4(2): 64–8. PubMed Abstract | Publisher Full Text\n\nRobertson MM, Banerjee S, Kurlan R, et al.: The Tourette syndrome diagnostic confidence index: development and clinical associations. Neurology. 1999; 53(9): 2108–12. PubMed Abstract | Publisher Full Text\n\nRobertson MM, Eapen V: The National Hospital Interview Schedule for the assessment of Gilles de la Tourette syndrome. Int J Methods Psychiatr Res. 1996; 6(4): 203–226. Reference Source\n\nRobertson MM, Eapen V, Cavanna AE: The international prevalence, epidemiology, and clinical phenomenology of Tourette syndrome: a cross-cultural perspective. J Psychosom Res. 2009; 67(6): 475–83. PubMed Abstract | Publisher Full Text\n\nScahill L, Specht M, Page C: The Prevalence of Tic Disorders and Clinical Characteristics in Children. J Obsessive Compuls Relat Disord. 2014; 3(4): 394–400. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSnider LA, Seligman LD, Ketchen BR, et al.: Tics and problem behaviors in schoolchildren: prevalence, characterization, and associations. Pediatrics. 2002; 110(2 Pt 1): 331–6. PubMed Abstract\n\nStefanoff P, Wolanczyk T, Gawrys A, et al.: Prevalence of tic disorders among schoolchildren in Warsaw, Poland. Eur Child Adolesc Psychiatry. 2008; 17(3): 171–8. PubMed Abstract | Publisher Full Text\n\nStorch EA, Murphy TK, Geffken GR, et al.: Reliability and validity of the Yale Global Tic Severity Scale. Psychol Assess. 2005; 17(4): 486–91. PubMed Abstract | Publisher Full Text\n\nThe Tourette Syndrome Association International Consortium for Genetics: A complete genome screen in sib pairs affected by Gilles de la Tourette syndrome. Am J Hum Genet. 1999; 65(5): 1428–36. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTourette Syndrome Association: Tourette Syndrome: A Guide to Diagnosis. 
VHS videotape, 28 minutes. Tourette Syndrome Association, Bayside, NY, © 1990.\n\nVachon MJ, Striley CW, Gordon MR, et al.: VISIT-TS version 2 video (release 20160516). Zenodo. 2016. Publisher Full Text\n\nWang HS, Kuo MF: Tourette’s syndrome in Taiwan: an epidemiological study of tic disorders in an elementary school at Taipei County. Brain Dev. 2003; 25(Suppl 1): S29–31. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "14613",
"date": "12 Jul 2016",
"name": "Kerstin J. Plessen",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn the article “VISIT-TS version 2: A multimedia tool for population studies on tic disorders” by Vachon, MJ, Striley, CW, Gordon, MR, Schroeder, ML., Bihun, EC, Koller, JM, and Black, KJ, a new method of classifying tics by lay persons is proposed as an aid in larger scale population studies.\n\nThe authors provide a clear account for the need for improving existing methods of detecting and differentiating tics and present an interesting solution in the form of their new multimedia tool. It is a positive aspect that participants with tics include different age groups, races, and both sexes. However, some of the video clips present one separate tic, whereas several tics are present in the video.\n\nThe rationale for developing the screening tool is its integration in large-scale epidemiological studies. Even though a few epidemiological studies focusing on tics and Tourette have been carried through, future methods will probably offer possibilities for screening cohorts for several co-occurring conditions. The authors may thus consider the possibility of further pruning the video (almost 10 minutes) and comparing two versions (e.g. the long one and a shorter version focusing on the different types of tics, which are more or less presented in the first few minutes of the video) for their validity against expert ratings. The shorter the time used for a screening, the more it will be used in big cohort studies. 
This could be integrated into future perspectives at the end of the manuscript.\n\nIt would also be interesting to carry out similar tests of the sensitivity as mentioned in the article by comparing the VISTS-TS to clinician ratings.\n\nStorch et al. (2005) is used as a reference for the YGTSS – the authors might consider adding the reference of Leckman’s (1989) original work.",
"responses": [
{
"c_id": "2074",
"date": "13 Jul 2016",
"name": "Kevin J Black",
"role": "Author Response F1000Research Advisory Board Member",
"response": "We appreciate the thoughtful suggestions from Drs. Plessen and Hagstrøm. Our primary intent was to help lay people decide whether they (or their child) had any tics, not to distinguish which tics they had. I see from your response that we did not make that point clear. I think there are benefits to presenting and asking about different tics separately, even for our more limited goal, but your perspective also suggests an opportunity to shorten or otherwise improve on the video. You wrote, \"some of the video clips present one separate tic, whereas several tics are present in the video.\" I believe you are saying that some video clips may show two tics while the narration is discussing only one of them. If I understood that correctly, it's a good point that we had not considered. We may be able to address that in a future version of the video."
}
]
}
] | 1
|
https://f1000research.com/articles/5-1518
|
https://f1000research.com/articles/5-390/v1
|
23 Mar 16
|
{
"type": "Data Note",
"title": "Flow cytometry analysis of epithelial cell populations from touch samples using the BD Influx flow cytometry platform",
"authors": [
"Ye Jin Kwon",
"Cristina E. Stanciu",
"M. Katherine Philpott",
"Christopher J. Ehrhardt",
"Ye Jin Kwon",
"Cristina E. Stanciu",
"M. Katherine Philpott"
],
"abstract": "‘Touch’ or trace cell mixtures submitted as evidence are a significant problem for forensic laboratories as they can render resulting genetic profiles difficult or even impossible to interpret. Optical signatures that distinguish epidermal cell populations from different contributors could facilitate the physical separation of mixture components prior to genetic analysis, and potentially the downstream production of single source profiles and/or simplified mixtures. For this dataset, optical properties including forwards scatter (FSC), side scatter (SSC), and fluorescence emissions in the Allophycocyanin (APC) channel were measured in epithelial cell populations from touch samples collected from several different contributors on multiple days to assess inter- and intra-contributor variability.",
"keywords": [
"forensic science",
"flow cytometry",
"epithelial cell",
"touch mixtures"
],
"content": "Introduction\n\nFlow cytometry has proven a viable approach for differentiating cell populations in many types of uncompromised (i.e. non-degraded) forensic mixture sample (Dean et al., 2015; Schoell et al., 1999; Verdon et al., 2015). However, application to ‘touch’ or trace epithelial cell mixtures remains a challenge since many cell surface features are lost or obscured during the process of keratinocyte differentiation, leaving few biochemical or structural features in shed corneocytes that vary between individual contributors. Recent research has suggested that optical properties such as autofluorescence at red wavelengths may be a potentially discriminating feature for epidermal cell populations in some touch mixture samples (Stanciu et al., 2016). In this study, we examined the consistency of such signatures using a different flow cytometry platform (BD Influx Cell Sorter) and set of contributors.\n\n\nMethods\n\nTouch samples were collected from six volunteers using the following protocol which was approved by the VCU-IRB (#HM20000454_CR). Volunteers rubbed a sterile polypropylene conical tube (P/N 229421; Celltreat Scientific) for five minutes using their entire hand (i.e., palm and fingers). Cells were collected from the surface with six sterile pre-wetted swabs (P/N 22037924; Fisher Scientific) followed by two dry swabs. To elute the cells into solution, the swabs were manually stirred then vortexed for 15 seconds in 10 mL of ultrapure water (18.2 MΩ∙cm). The entire solution was then passed through a 100 µm filter mesh prior to flow cytometry. Flow cytometry analysis of eluted cells was performed on the BD Influx Cell Sorter (Becton Dickinson) using the 488nm, 561nm, and 640nm lasers. Channel voltages were set as follows: Forward Scatter (FSC, 17.5V), Side Scatter (SSC, 16V) and Allophycocyanin (APC, 74.6V).\n\n\nDataset content\n\nFlow cytometry source data for all samples are provided in Flow Cytometry Standard (.fcs) format files. 
Source data files are organized into four different flow cytometry surveys, each involving a different set of donors, all of whom were sampled on the same day. File names are labeled with the anonymized sample ID number used for all experiments. Replicate measurements from the same cell solution are designated ‘rep1’, ‘rep2’, and so forth. A table of analyzed samples (labeled by Donor ID) across each of the four experiments is provided.\n\n\nData availability\n\nF1000Research: Dataset 1. Influx touch epithelial samples, 10.5256/f1000research.8338.d116907 (Kwon et al., 2016).",
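The sample-ID-plus-replicate naming scheme described for the source data files can be handled programmatically when batch-processing the dataset. The sketch below is a minimal, hedged illustration: the exact file names (`R12_rep1.fcs`, etc.) are hypothetical examples assuming names combine an anonymized sample ID with a `repN` tag, which is an assumption about the convention rather than a listing from the dataset itself.

```python
import re
from collections import defaultdict

# Hypothetical file names; the actual dataset's naming convention is assumed
# to join the anonymized sample ID and a 'repN' replicate tag.
fcs_files = ["R12_rep1.fcs", "R12_rep2.fcs", "R07_rep1.fcs"]

def group_replicates(filenames):
    """Group .fcs file names by anonymized sample ID."""
    pattern = re.compile(r"^(?P<sample>\w+)_(?P<rep>rep\d+)\.fcs$")
    groups = defaultdict(list)
    for name in sorted(filenames):
        match = pattern.match(name)
        if match:
            groups[match.group("sample")].append(match.group("rep"))
    return dict(groups)

print(group_replicates(fcs_files))
# → {'R07': ['rep1'], 'R12': ['rep1', 'rep2']}
```

Grouping replicates this way makes it straightforward to compare within-sample (replicate) variability against between-donor variability before any flow cytometry statistics are computed.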
"appendix": "Author contributions\n\n\n\nCE conceived the study. CE, CS, and YK designed the experiments. CS and YK carried out the research. CE and KP prepared the first draft of the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis project was funded by the National Institute of Justice Award number 2013-DN-BX-K033 (PI: Ehrhardt). Flow cytometry analyses were performed at the University of Virginia Flow Cytometry Facility which is supported through the University of Virginia Cancer Center National Cancer Institute P30-CA044579-23 Center Grant.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nDean L, Kwon YJ, Philpott MK, et al.: Separation of uncompromised whole blood mixtures for single source STR profiling using fluorescently-labeled human leukocyte antigen (HLA) probes and fluorescence activated cell sorting (FACS). Forensic Sci Int Genet. 2015; 17: 8–16. PubMed Abstract | Publisher Full Text\n\nKwon YJ, Stanciu CE, Philpott MK, et al.: Dataset 1 in: Flow cytometry analysis of epithelial cell populations from touch samples using the BD Influx flow cytometry platform. F1000Research. 2016. Data Source\n\nSchoell WM, Klintschar M, Mirhashemi R, et al.: Separation of sperm and vaginal cells based on ploidy, MHC class I-, CD45-, and cytokeratin expression for enhancement of DNA typing after sexual assault. Cytometry. 1999; 36(4): 319–323. PubMed Abstract | Publisher Full Text\n\nStanciu CE, Philpott MK, Bustamante EE, et al.: Analysis of red autofluorescence (650–670nm) in epidermal cell populations and its potential for distinguishing contributors to 'touch' biological samples [version 1; referees: awaiting peer review]. F1000Res. 2016; 5: 180. Publisher Full Text\n\nVerdon TJ, Mitchell RJ, Chen W, et al.: FACS separation of non-compromised forensically relevant biological mixtures. 
Forensic Sci Int Genet. 2015; 14: 194–200. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "13031",
"date": "19 Apr 2016",
"name": "Dieter Deforce",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe article suggests \"touch samples\" however the data set contains no data on forensic relevant touch samples. The six samples were from volunteers rubbing their entire hand. These are \"fresh\" cells and might not show the same flow characteristics as cells left behind on an object after a touch contact and having \"aged\" on the object. The dataset provided is as it is not really relevant to the forensic field. In addition I think that 6 samples is maybe a too limited number for this kind of studies.",
"responses": [
{
"c_id": "2208",
"date": "07 Oct 2016",
"name": "Christopher Ehrhardt",
"role": "Author Response",
"response": "We agree with the reviewer that there is an important distinction between ‘fresh’ biological samples and ones that are aged and/or degraded since the latter is more likely to be encountered in forensic caseworking samples. Uncompromised samples can nonetheless provide an important foundation for future studies that explicitly examine the effects of aging/degradation on optical signatures identified by the initial study. We also note that there are forensic scenarios where fresh touch samples may be collected and analyzed for an investigation. For example, a firearm may be discarded by a suspect and immediately collected during pursuit by law enforcement. However, to increase the forensic relevance of this dataset the new version of the manuscript includes samples that were aged between 12 hours and seven days before collection, as well as samples that were collected from forensically relevant substrates such as replica firearms and knife handles. For the updated version of the manuscript we have also expanded the dataset to include 33 different contributors."
}
]
},
{
"id": "13496",
"date": "26 Apr 2016",
"name": "Peter K Rogan",
"expertise": [],
"suggestion": "Not Approved",
"report": "Not Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors describe a dataset used for flow cytometric analysis of sloughed epithelial cells from a set of 6 individuals. It is not at all clear why these data are different from those reported in their copublished research note (http://f1000research.com/articles/5-180/v1) which provides an analysis of these data. The anonymized names of the individuals in the two papers are the same and they are both in .fcs format. The Data Note is larger than the data in the Research Note, presumably because it contains technical replicates. The differences are trivial. In fact, the Research Note would have greater validity if the Data Note dataset was incorporated into this document and discussed there.The second point that should be addressed is how the statistics of the fluorescence distributions (FS, SS) for different samples from the same individual or from different individuals can be compared given that the intensities vary, presumably as a result of the differences in yield from each sample. The reliability of using these histograms for making comparisons between replicates or individuals for forensic or any other applications could be suspect (eg. sample R12) due to noise, broad distributions or other factors. I suggest that the authors determine and provide minimum threshold criteria for analysis of a sample or comparison with other samples.",
"responses": [
{
"c_id": "2209",
"date": "07 Oct 2016",
"name": "Christopher Ehrhardt",
"role": "Author Response",
"response": "We agree that the differences between this manuscript and another publication currently in review at F1000 research could have been clearer. The cited manuscript (http://f1000research.com/articles/5-180/v1) was a preliminary survey of autofluorescence using one flow cytometry platform, the BD FACSCanto II Analyzer, whereas this study used two additional flow cytometery platforms, the BD FACSAria II cell sorter and the BD Influx Cell Sorter; different channel voltages were applied on each of the three flow cytometers, and the Influx is equipped with a different set of excitation lasers than the Aria and Canto. Additionally, for each manuscript a separate, non-overlapping set of touch samples was analyzed, although the reviewer is correct that some of the same donors contributed samples to both studies, albeit collected on different days. Given observed differences in fluorescence profiles for some contributors from one day to the next, sampling from the same contributor over multiple days adds to our understanding of this phenomenon. To further distinguish this Data Note from previous publication we have significantly revised the Data Note to include a more comprehensive set of samples and experimental conditions, investigating both optical properties and antibody binding capacities of ‘touch’ epithelial cell populations. This includes flow cytometry data for autofluorescence as well as antibody hybridizations with HLA and Cytokeratin probes. As the reviewer suggests, we are also in the process of incorporating aspects of this dataset into other manuscripts that discuss the biochemical and forensic applications of flow cytometry analysis of touch samples (e.g., pre-print available at http://biorxiv.org/content/early/2016/03/28/045948). We also agree with the reviewer's second point that there may be significant variability in the level of autofluorescence exhibited by touch samples. 
This in turn may be influenced by a variety of factors such as presence or absence of exogenous compounds, and/or intrinsic biological characteristics of the cells themselves. Understanding this variability, particularly for samples derived from the same contributor or for aged/degraded samples, is necessary to assess the potential utility of flow cytometry-based cell separation techniques such as FACS for downstream DNA profiling of separated cell populations derived from ‘touch’ biological mixtures. We would note that although differences in cell yield change the number of cells (Y axis) fluorescing in the red portion of the spectrum at a given RFU value (X axis), it is nonetheless possible to develop a sense of how the average intensity of red autofluorescence exhibited by cells collected from different individuals, or from the same individual on different days, varies by comparing histograms, regardless of cell yield. We agree that an individual’s total cellular contribution to a biological mixture will ultimately be an important factor in whether that individual’s DNA profile can be successfully generated from a sorted mixture. However, any kind of minimum cell count threshold would be inextricably linked to the sensitivity/efficiency of downstream DNA typing methodologies used on sorted cell populations, and are beyond the scope of this Data Note."
}
]
}
] | 1
|
https://f1000research.com/articles/5-390
|
https://f1000research.com/articles/5-2463/v1
|
06 Oct 16
|
{
"type": "Research Article",
"title": "Complete analysis of the H5 hemagglutinin and N8 neuraminidase phylogenetic trees reveals that the H5N8 subtype has been produced by multiple reassortment events",
"authors": [
"Andrew R. Dalby"
],
"abstract": "The analysis of the complete H5 hemagglutinin and H8 neuraminidase phylogenetic trees presented in this paper shows that the H5N8 avian influenza has been generated by multiple reassortment events. The H5N8 strain does not have a single origin and is produced when the H5 hemagglutinin and N8 neuraminidase re-assort from other H5 and N8 containing strains. While it was known that there had been a re-assortment to incorporate the Guangdong H5 hemagglutinin at the start of the Korean outbreak, the results show that there have also been multiple reassortment events amongst the non-Korean sequences.",
"keywords": [
"H5N8",
"Guangdong",
"hemagglutinin",
"neuraminidase",
"reassortment",
"phylogenetics"
],
"content": "Introduction\n\nThe H5N8 subtype of influenza A virus was first isolated in Ireland in 19831. Until the 2014 outbreak in Korea this was a relatively rare subtype that only occurred sporadically. This Korean outbreak was more sustained and over a much wider geographical region than the earlier outbreaks. The Korean outbreak was also important because it resulted from a viral reassortment which contains the highly pathogenic H5 segment originally isolated from a goose in Guangdong in 19962. The epidemiology of the Guangdong H5 containing H5N8 virus has been investigated extensively3–7 but there is much less investigation of the subtypes that contain the non-Guangdong H5 which has been the predominant hemagglutinin in North America. The most significant finding from these previous studies is that the H5 Guangdong hemagglutinin has now been introduced to North America via bird migratory pathways and that this H5 has not undergone subsequent reassortment into other avian influenza subtypes including H5N2.\n\nAn interesting question is why there are large gaps in the history of the sampling of the H5N8 subtype? This is particularly true in the case of the non-Guangdong H5 containing sequences. In order to create a complete history of the H5N8 subtype a phylogenetic analysis of all of the H5 hemagglutinin and N8 neuraminidase sequences was undertaken.\n\nThere are three possible explanations for the breaks in detection. 
They could result from inadequate sampling of the H5N8 avian influenza; the virus might have been present in wild birds but only cryptically expressed, because avian influenza is often asymptomatic; or H5N8 might occur sporadically because it has been created by reassortment events, but the new reassorted virus does not spread widely because it is not competitive with alternative reassortment subtypes and so does not form a continuous population.\n\nA systematic environmental study of bird diseases in the Delaware Bay, as part of the Southeastern Cooperative Wildlife Disease Study [8], has only reported the H5N8 subtype sporadically, providing evidence against inadequate sampling and cryptic expression. With a systematic collection of biological and environmental samples, if a sustained viral population had been present at this location then it is likely that it would have been detected, even if infection is asymptomatic.\n\nThis leaves the alternative hypothesis that the H5N8 virus occurs sporadically as a result of reassortment events, but that these events do not produce a sustainable H5N8 viral population. This hypothesis can be tested by constructing the complete phylogenetic trees of H5 and N8. The H5N8 subtype samples that fall within a single clade of the H5 hemagglutinin and N8 neuraminidase phylogenetic trees will most likely be the product of a single reassortment. 
If the H5N8 subtype sequences are scattered widely across the phylogenetic trees, then this would indicate multiple reassortment events that have generated the H5N8 subtype from other subtypes.\n\nThis paper shows that the H5N8 subtype is distributed widely across both the H5 and N8 phylogenetic trees and that the sporadic nature of H5N8 is a result of multiple reassortment events that have generated the subtype rather than cryptic expression of the virus.\n\n\nMaterials and methods\n\nAll of the available H5 hemagglutinin segments (4007 sequences) and N8 neuraminidase segments (1840 sequences) were downloaded from the NCBI Influenza Virus Resource on the 27th of June 20159. The search was restricted to full-length sequences from any host. Manual inspection and editing of the sequences was carried out using Mega6.0610. During manual editing the 5' end of each sequence was edited to remove the untranslated region. All sequences were trimmed to the start and stop codons. Sequences with missing nucleotides were removed.\n\nThe H5 clades for the H5N8 subset of hemagglutinin sequences were assigned using the Highly Pathogenic H5N1 Clade Classification tool available as part of the Influenza Research Database11–13. While this tool was created for the H5N1 subtype, the recent H5N8 outbreak has been identified as belonging to the new 2.3.4.4 subclade that is part of the classifier.\n\nThe H5 hemagglutinin and N8 neuraminidase sequences were aligned using Muscle v3.8.31. FastTree 2.1 was used to create a maximum likelihood tree for all of the sequences using the GTR + gamma evolutionary model14.\n\nFastTree -boot 10000 -nt -gtr -gamma -quote filename.fas > filename.tree\n\nGiven the large number of taxa it is computationally challenging to calculate non-parametric bootstrapped trees; instead FastTree calculates local support values for each of the splits within the tree using the Shimodaira-Hasegawa (SH) log likelihood test15. 
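The intuition behind such branch support values can be shown with a toy split-support calculation, in which the support for a branch is the fraction of resampled trees that contain the same tip bipartition. This is a simplified stand-in for illustration only: FastTree's local values come from the SH test rather than from counting bootstrap replicates, and the splits below are hypothetical.

```python
# Simplified illustration of branch support: the support for a split (a set of
# tips separated by a branch) is the fraction of replicate trees whose split
# sets contain that same split. Splits are modelled as frozensets of tip names.

def support(split, replicate_splits):
    """Fraction of replicate trees whose split sets contain `split`."""
    hits = sum(1 for splits in replicate_splits if split in splits)
    return hits / len(replicate_splits)

# Hypothetical splits recovered from four resampled trees.
replicates = [
    {frozenset({"a", "b"}), frozenset({"c", "d"})},
    {frozenset({"a", "b"}), frozenset({"b", "c"})},
    {frozenset({"a", "b"})},
    {frozenset({"a", "c"})},
]

print(support(frozenset({"a", "b"}), replicates))  # 0.75
print(support(frozenset({"c", "d"}), replicates))  # 0.25
```

As the author notes, this kind of resampling becomes expensive for thousands of taxa, which is why FastTree's cheaper SH-based local supports are used instead.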
This has been shown to have a high correlation to non-parametric bootstrap values14.\n\nThe resulting trees were edited, visualised and annotated with FigTree 1.4.216. The trees were displayed as phylograms in order to examine the effect of sampling. The full name, chronological and geographical information were included in the trees as these are essential for determining the homogeneity of the clades. Nodes were labelled with the support values calculated by FastTree, which are log likelihood ratios. Trees and sub-trees were all rooted to the earliest chronological sequences within the tree.\n\nSupplementary data files for the phylogenetic analysis of the H5 hemagglutinin are available from http://dx.doi.org/10.5281/zenodo.20653 and for the N8 neuraminidase from http://dx.doi.org/10.5281/zenodo.20655.\n\n\nResults and discussion\n\nTable 1 shows a summary of the H5N8 sequences that are not classified as being members of the 2.3.4.4 clade by the Influenza Research Database highly pathogenic H5N1 clade classification tool. The complete table of results is given in Supplementary Table 1. All of the sequences from the Korean outbreak are classified as part of the 2.3.4.4 clade, as are many of the 2014 North American sequences, but not the Californian quail sequence KP101004, which is part of the American non-Guangdong classification. There are two non-Guangdong clades that can be identified as sources of H5 hemagglutinin in H5N8: an American and a Eurasian clade.\n\nThe Quang Ninh sequence belongs to the Guangdong grouping but it is part of a different sub-clade, 2.3.2.1c. This subclade contains H5N1 sequences that were found in long range migratory birds such as geese, cranes and whooper swans in Mongolia and Japan between 2009 and 2011. 
This suggests that this sequence arose from a reassortment in migratory birds distinct from the one that produced the Korean outbreak.\n\nThese results show that the H5N8 hemagglutinins are widely distributed across the H5 clades and that almost all of the North American sequences fall outside of the clades within the current nomenclature system12. This demonstrates that there must have been multiple reassortment events between different H5 and N8 clades to generate the H5N8 subtype. These need to be explored through a more detailed analysis of the complete phylogenetic trees of the H5 hemagglutinin and N8 neuraminidase. Four distinct reassortment events are already clear: one involving the 2.3.4.4 Guangdong clade, another involving the 2.3.2.1c Guangdong clade, and two more involving the Eurasian and American non-Guangdong H5 hemagglutinins.\n\nThe full H5 tree contains 4007 sequences and is rooted on the 1959 Scottish H5N1 hemagglutinin sequence CY015081 (Figure 1). The tree has been collapsed into two main clades which correspond to American (clade 1) and Eurasian sequences (clade 2). There is then a small cluster of sequences ancestral to these groups that includes the Irish H5N8 sequences (shown in red), which form a subclade with an H5N2 sequence from Italy in 1980 and the German H5N6 and H5N2 sequences from 1984–1985. This Irish group represents the first recorded reassortment that produced the H5N8 subtype. Given that its sequence neighbours from both before 1983 and afterward are from the H5N2 subtype, it is plausible that the H5N8 hemagglutinin originated in the H5N2 subtype. Within the highly pathogenic avian influenza H5N1 classification the Irish H5 sequences are attributed to the American non-Guangdong clade, but this more detailed analysis shows that there is a European ancestral group that predates the American clade and that the Irish sequences belong there.\n\nNodes are labelled and coloured with the local bootstrap likelihood values. 
The H5N8 sequences are highlighted in red.\n\nFigure 2 shows clade 1, the American non-Guangdong clade, rooted on the 1966 H5N9 sequence from a turkey in Ontario (AB558456). This clade contains the H5N8 sequences from Colorado in 2006, a quail in California in 2014, a mallard in California in 2011 and a ruddy turnstone in New Jersey in 2001. Each of these appears as a single sequence in a subclade made up of other non-H5N8 viral subtypes. This suggests that each of these occurrences of H5N8 is the result of a different reassortment event.\n\nNodes are labelled and coloured with the local bootstrap likelihood values. The H5N8 sequences are highlighted in blue.\n\nThe 2001 New Jersey sequence is in a subclade with H5N2 and H5N7 virus sequences collected in the same location in the same year. However, it is not clear which subtype is the source of the H5 hemagglutinin. The 2006 Colorado sequence is part of a subclade with the H5N2 subtype, and a number of other closely related H5N2 viruses from Arkansas, Minnesota and Wisconsin were also detected in 2006. It is therefore most likely that a reassortment took place between H5N8 and H5N2. The 2011 California sequence forms a distinct subclade in which the other members are all H5N1 sequences, also from California in 2011. This suggests that the H5N8 is the result of a reassortment between H5N8 and H5N1. The 2011 Californian, 2006 Colorado and 2014 California sequences are the most similar to each other, but the clade is dominated by H5 hemagglutinins from the H5N5 subtype.\n\nThe quail sequence from California in 2014 is particularly important because it could easily be mistaken for part of the main outbreak of the Guangdong H5 containing virus, which is found in the other American H5N8 sequences from that year17. 
However, sequence analysis makes it clear that this sequence is not part of that group but belongs to the American non-Guangdong clade, and that it most likely originated in another reassortment of H5N5 or H5N6 with an N8 containing subtype.\n\nConfusion about the sources of outbreaks can affect the measures taken to prevent the spread of the disease. This is especially true in the case of highly pathogenic avian influenza, where there is likely to be a significant economic impact if the outbreak cannot be managed successfully. Both the Japanese and European outbreaks were contained, and while the highly pathogenic Guangdong H5 was not allowed to spread widely in domestic flocks, it was present in wild birds18–20.\n\nClade 2 is predominantly Eurasian, is rooted on the 1991 H5N3 Altai sequence, and can be divided into two subclades (Figure 3): one that contains non-Guangdong Eurasian sequences and a second that contains the Guangdong sequences. The non-Guangdong Eurasian sequences include a single H5N8 virus from a duck in Thailand in 2002, which is classified as Eurasian non-Guangdong under the existing nomenclature12 (Figure 4). There is a considerable distance between this sequence and any other H5N8 subtype sequences. This is a clear indication that there must have been a reassortment in Thailand to produce the H5N8, most likely between an H5N2 subtype hemagglutinin and an N8 containing subtype. This new tree shows that the existing nomenclature for H5 sequences outside of the Guangdong lineage does not adequately cover the diversity of this group of sequences.\n\nThis tree is rooted on the EU564116 sequence from a duck in Altai in 1991. Nodes are labelled and coloured with the local bootstrap likelihood values. There are both non-Guangdong and Guangdong subclades.\n\nNodes are labelled and coloured with the local bootstrap likelihood values. 
The H5N8 sequences are highlighted in blue.\n\nThe Guangdong containing subclade is the lineage that has been studied most extensively, and it is taken as the prevailing form of highly pathogenic H5 avian influenza19,21. Within the Guangdong sequences there are three distinct sets of H5N8 sequences. The bulk corresponds to the Buan H5N8 sequences (Figure 5) that have been previously described19. This grouping also includes the North American sequences of H5N8 and H5N2 spread by long distance bird migration22. However, the Gochang sequences form a distinct grouping along with Chinese sequences from Zhejiang, Shandong and Jiangsu (Figure 6). This strongly suggests that the Gochang and Buan groups represent two different reassortment events even though they are both within the 2.3.4.4 highly pathogenic H5 clade12. The other distinct group contains a single H5N8 sequence from Quang Ninh and is in clade 2.3.2.1c (Figure 7); again, this must have been the product of another reassortment event.\n\nThese results from the hemagglutinin trees suggest that there have been a minimum of five reassortment events within the American non-Guangdong sequences, another within the Eurasian non-Guangdong sequences and at least two more in the Guangdong clade. This makes a total of at least eight separate reassortment events that have produced H5N8 from other subtypes. Examining the N8 neuraminidase tree can confirm these reassortment events and show that absences of H5N8 in the chronological record do not result from poor sampling.\n\nNodes are labelled and coloured with the local bootstrap likelihood values. The H5N8 sequences are highlighted in blue.\n\nNodes are labelled and coloured with the local bootstrap likelihood values. The H5N8 sequences are highlighted in blue.\n\nNodes are labelled and coloured with the local bootstrap likelihood values. 
The H5N8 sequence is highlighted in blue.\n\nThe neuraminidase trees are harder to summarise even though they contain far fewer sequences, because they have a less clear clade structure. Figure 8 shows the N8 phylogenetic tree rooted on the 1963 Ukrainian duck sequences. Once again, the H5N8 subtype is close to the root of the tree in the case of the initial Irish outbreak in 1983. There is, however, only a single Irish H5N8 sequence amongst a group of H3N8 sequences. Unlike the H5 hemagglutinin, there is no existing clade nomenclature for the N8 neuraminidase sequences.\n\nThe neuraminidase tree contains four distinct clades, but these are much more heterogeneous than in the case of the H5 trees. The four clades correspond roughly to long range migratory birds, far eastern migratory birds, Gochang and Buan, and finally an American clade. Only the Gochang and Buan clade and the American clade contain the H5N8 subtype, and only these will be considered here.\n\nThe simplest of these clades to view is the Gochang and Buan clade, where the H5N8 subtypes are clustered tightly together in one subclade with only a few non-H5N8 sequences (Figure 9). This highly homogeneous clade is very different from the mixtures of subtypes found in the other clades. The structure of this clade suggests that the Gochang and Buan H5N8 viruses originate from a single source of the N8 neuraminidase but that they diverged from one another before the Korean outbreak. This is in agreement with the hemagglutinin trees, which show that the H5 hemagglutinins for the Buan and Gochang groups have two distinct origins. This shows that reassortment can occur on a local level between closely related sequences and produce multiple lineages in the same geographical location.\n\nNodes are labelled and coloured with the local bootstrap likelihood values. The H5N8 sequence is highlighted in blue.\n\nThe tree is rooted on the EU429700 sequence from a duck in Eastern China in 2004. 
The Buan sequences are highlighted in orange and the Gochang sequences are highlighted in blue. Nodes are labelled and coloured with the local bootstrap likelihood values.\n\nClade 4 (Figure 10) contains seven distinct H5N8 sequences that are distributed widely across the clade as singletons. None of the sequences are adjacent to each other in the tree and most are in distantly related subclades. Clade 4.1 (Figure 11) contains the 2001 New Jersey and 2011 Californian H5N8 sequences, along with the 2006 Colorado, 2014 California and 2013 Quang Ninh sequences within sub-sub-clade 4.1.1. This agrees with the results of the H5 hemagglutinin phylogenetic tree and is strong evidence that each of these H5N8 viruses corresponds to a reassortment event. What is more significant is the presence of the Quang Ninh sequence amongst the North American sequences, as this shows that the N8 neuraminidase circulates more widely than the different H5 hemagglutinin lineages. Clade 4.2 contains the 2002 New York sequence (Figure 13) and clade 4.3 contains the 2012 Thailand sequence (Figure 14). Again, this demonstrates the global dispersion of N8 neuraminidase clade 4, in contrast to the Guangdong H5, which until 2014 was not present in North America.\n\nThis is a collapsed view. Nodes are labelled and coloured with the local bootstrap likelihood values.\n\nNodes are labelled and coloured with the local bootstrap likelihood values. The H5N8 sequences are highlighted in blue.\n\nNodes are labelled and coloured with the local bootstrap likelihood values. The H5N8 sequences are highlighted in blue.\n\nNodes are labelled and coloured with the local bootstrap likelihood values. The H5N8 sequence is highlighted in blue.\n\nNodes are labelled and coloured with the local bootstrap likelihood values. The H5N8 sequence is highlighted in blue.\n\n\nConclusions\n\nThe results presented here show that there is a high degree of reassortment that generates new influenza subtypes. 
The high proportion of singleton sequences shows that H5N8 is often not the preferred subtype, as most of these reassortants did not produce a widespread outbreak. The Guangdong H5 containing reassortment has produced an H5N8 capable of wider circulation. However, it is still possible that H5N8 will once again die out only to return sporadically, given that the virus has already undergone further reassortment in North America to produce an H5N2 Guangdong H5 containing subtype23.\n\nIn a wider context, reassortment events that create new subtypes need to be accounted for before reliable phylogenetic analysis can be carried out. Sampling for tree construction based on viral subtype, without any consideration of reassortment, will be misleading. If these reassortment events are ignored, then sampling bias is introduced into the trees, because sequences within a clade that share the same subtype are selectively sampled while many of the neighbouring hemagglutinin and neuraminidase sequences may actually be from other subtypes.\n\nWhere phylogenetic analysis focuses on a viral segment of a specific subtype, a complete analysis of all of the sequences from all subtypes for that segment, as has been performed here, is rarely carried out. The criteria for including sequences in these analyses are usually based on chronological or geographical limits, but these limits reduce the generalizability of the hypothesis being considered. To make sure that sampling is effective, a complete phylogenetic analysis of that segment is required. Clades and subclades can then be selected for further analysis using geographical or chronological criteria. In this way the only bias introduced is that from sequence collection and availability.\n\nThis analysis only considered reassortment from the perspective of the glycoproteins, as the reassortment of these proteins produces a novel influenza subtype. 
Further analysis needs to also include the other viral segments in order to provide a more complete picture of reassortment in avian influenza.\n\n\nData availability\n\nZENODO: Phylogenetic Analysis of the Influenza H5 hemagglutinins, doi: 10.5281/zenodo.2065324\n\nZENODO: Phylogenetic analysis of the influenza N8 neuraminidases, doi: 10.5281/zenodo.2065525",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author declares that no supporting grants were involved in this work.\n\n\nAcknowledgments\n\nI would like to thank an anonymous referee from a preceding paper for pointing out the importance of identifying reassortment events in sampling for phylogenetic analysis.\n\n\nReferences\n\nAlexander DJ, Parsons G, Manvell RJ: Experimental assessment of the pathogenicity of eight avian influenza A viruses of H5 subtype for chickens, turkeys, ducks and quail. Avian Pathol. 1986; 15(4): 647–62. PubMed Abstract | Publisher Full Text\n\nXu X, Subbarao K, Cox NJ, et al.: Genetic characterization of the pathogenic influenza A/Goose/Guangdong/1/96 (H5N1) virus: similarity of its hemagglutinin gene to those of H5N1 viruses from the 1997 outbreaks in Hong Kong. Virology. 1999; 261(1): 15–9. PubMed Abstract | Publisher Full Text\n\nLee YJ, Kang HM, Lee EK, et al.: Novel reassortant influenza A(H5N8) viruses, South Korea, 2014. Emerg Infect Dis. 2014; 20(6): 1087–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWu H, Peng X, Xu L, et al.: Novel reassortant influenza A(H5N8) viruses in domestic ducks, eastern China. Emerg Infect Dis. 2014; 20(8): 1315–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJeong J, Kang HM, Lee EK, et al.: Highly pathogenic avian influenza virus (H5N8) in domestic poultry and its relationship with migratory birds in South Korea during 2014. Vet Microbiol. 2014; 173(3–4): 249–57. PubMed Abstract | Publisher Full Text\n\nJhung MA, Nelson DI; Centers for Disease Control and Prevention (CDC): Outbreaks of avian influenza A (H5N2), (H5N8), and (H5N1) among birds--United States, December 2014-January 2015. MMWR Morb Mortal Wkly Rep. 2015; 64(4): 111. 
PubMed Abstract\n\nDalby AR, Iqbal M: The European and Japanese outbreaks of H5N8 derive from a single source population providing evidence for the dispersal along the long distance bird migratory flyways. PeerJ. 2015; 3: e934. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBrown JD, Swayne DE, Cooper RJ, et al.: Persistence of H5 and H7 avian influenza viruses in water. Avian Dis. 2007; 51(1 Suppl): 285–9. PubMed Abstract | Publisher Full Text\n\nBao Y, Bolotov P, Dernovoy D, et al.: The influenza virus resource at the National Center for Biotechnology Information. J Virol. 2008; 82(2): 596–601. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTamura K, Stecher G, Peterson D, et al.: MEGA6: Molecular Evolutionary Genetics Analysis version 6.0. Mol Biol Evol. 2013; 30(12): 2725–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSquires RB, Noronha J, Hunt V, et al.: Influenza research database: an integrated bioinformatics resource for influenza research and surveillance. Influenza Other Respir Viruses. 2012; 6(6): 404–16. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSmith GJ, Donis RO; World Health Organization/World Organisation for Animal Health/Food and Agriculture Organization (WHO/OIE/FAO) H5 Evolution Working Group: Nomenclature updates resulting from the evolution of avian influenza A(H5) virus clades 2.1.3.2a, 2.2.1, and 2.3.4 during 2013–2014. Influenza Other Respir Viruses. 2015; 9(5): 271–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMatsen FA, Kodner RB, Armbrust EV: pplacer: linear time maximum-likelihood and Bayesian phylogenetic placement of sequences onto a fixed reference tree. BMC Bioinformatics. 2010; 11(1): 538. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPrice MN, Dehal PS, Arkin AP: FastTree 2--approximately maximum-likelihood trees for large alignments. PloS One. 2010; 5(3): e9490. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nShimodaira H, Hasegawa M: Multiple comparisons of log-likelihoods with applications to phylogenetic inference. Mol Biol Evol. 1999; 16(8): 1114–6. Reference Source\n\nRambaut A: FigTree, a graphical viewer of phylogenetic trees. 2007. Reference Source\n\nLee DH, Torchetti MK, Winker K, et al.: Intercontinental Spread of Asian-Origin H5N8 to North America through Beringia by Migratory Birds. J Virol. 2015; 89(12): 6521–4. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAdlhoch C, Gossner C, Koch G, et al.: Comparing introduction to Europe of highly pathogenic avian influenza viruses A(H5N8) in 2014 and A(H5N1) in 2005. Euro Surveill. 2014; 19(50): 20996. PubMed Abstract | Publisher Full Text\n\nKu KB, Park EH, Yum J, et al.: Highly pathogenic avian influenza A(H5N8) virus from waterfowl, South Korea, 2014. Emerg Infect Dis. 2014; 20(9): 1587–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVerhagen JH, Herfst S, Fouchier RA: Infectious disease. How a virus travels the world. Science. 2015; 347(6222): 616–7. PubMed Abstract | Publisher Full Text\n\nYoon H, Moon OK, Jeong W, et al.: H5N8 Highly Pathogenic Avian Influenza in the Republic of Korea: Epidemiology During the First wave, from January Through July 2014. Osong Public Health Res Perspect. 2015; 6(2): 106–11. PubMed Abstract | Publisher Full Text | Free Full Text\n\nIp HS, Dusek RJ, Bodenstein B, et al.: High Rates of Detection of Clade 2.3.4.4 Highly Pathogenic Avian Influenza H5 Viruses in Wild Birds in the Pacific Northwest During the Winter of 2014–15. Avian Dis. 2016; 60(1 Suppl): 354–8. PubMed Abstract | Publisher Full Text\n\nPasick J, Berhane Y, Joseph T, et al.: Reassortant highly pathogenic influenza A H5N2 virus containing gene segments related to Eurasian H5N8 in British Columbia, Canada, 2014. Sci Rep. 2015; 5: 9484. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nDalby A: Phylogenetic Analysis of the Influenza H5 hemagglutinins. Zenodo. 2015. Data Source\n\nDalby A: Phylogenetic analysis of the influenza N8 neuraminidases. Zenodo. 2015. Data Source"
}
|
[
{
"id": "17349",
"date": "02 Nov 2016",
"name": "Catherine Macken",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe author examines the emergence of avian influenza A viruses (AIAV) having the H5N8 subtype, using genomic sequences from a publicly available influenza sequence database. He points out, correctly, that isolates of H5N8 AIAV has been reported sporadically, but infrequently, since 1983, with the majority of detections occurring during the recent outbreak of H5N8 viruses in Korea. These H5N8 viruses carry a hemagglutinin (HA) segment from the so-called “Guangdong lineage”, the lineage associated with the highly pathogenic AIAV H5N1 viruses that were first detected in Asia in 1996.\n\nIn order to demonstrate the sporadic emergence of H5N8 AIAV over 30+ years of history, the author conducts a phylogenetic analysis all full-length HA from all influenza A viruses having H5Nx subtype, and the complementary analysis of all full-length neuraminidase (NA) from all influenza A viruses having HxN8 subtype.\n\nThe analytical approach is appropriate. Perhaps other phylogenetic inference methods could give stronger results, but I doubt that alternatives will change the basic conclusions. It is clear that AIAV having H5N8 subtype have been seen sporadically in the past, but not until 2010 were these found in conjunction with HA from the Guangdong lineage.\n\nUsing his extensive HA(H5) and NA(N8) phylogenies, the author attempts to infer the subtype of the donor viruses of the sporadically occurring H5N8 viruses. In some instances, the evidence is reasonably strong. 
However, circumstantial evidence based on collocation of subtypes with high bootstrap support in a phylogeny does not constitute proof of the donor subtypes, a point that the author recognizes. (Such proof is extremely difficult to obtain, requiring a field sample with a mixture of the relevant donors and reassortant viruses.)\n\nI would like the author to have provided some context for his results. How “unusual” is this pattern of sporadic emergence of newly formed reassortants having a particular mixture of HA and NA subtypes? For example, the database contains four sequences from H5N4 viruses, one collected from each of 2006 and 2009, and two collected in 2010. Given the time period separating these detections, it is highly likely that at least three separate reassortment events gave rise to these four viruses. The author could also describe the various other NA subtypes that have reassorted with HA from the Guangdong lineage. The fact that the Guangdong HA’s have been circulating for two decades and only recently have they reassorted with another subtype to produce a “successful” novel H5Nx subtype is of interest.\n\nFrequent reassortment among avian influenza viruses has been well documented. For the most part, novel AIAV reassortants do not persist long before they reassort again, as is seen in this paper. An important question is why a particular combination of segments becomes “successful”, i.e. able to spread widely in its host species. The answer to this question is beyond the scope of the author’s work. This paper does show, for one subtype, the history of emergence, disappearance and re-emergence, including a recent outbreak.\n\nI have a number of minor comments.\nI found the tip labels of phylogenetic trees confusing. I would prefer the format “strain name (subtype).”\n\nPlease collapse subtrees in Fig. 2.\n\nNumerical support for branches: I would prefer these values to be given only when the branch is relevant to the thesis of the paper. 
Reporting support values that are very low or even 0, such as in Figures 5 and 8, is not helpful.\n\nI would like Figs 5-7 to be combined, so that I can see how these subtrees relate to the overall evolution of the Guangdong HA/H5.\n\nA number of the sequences from old isolates are duplicates. These should be removed before analysis.\n\nThe author criticizes phylogenetic analysis based on a specific subtype. I disagree with this criticism. I believe that the choice of dataset depends on the hypothesis of interest. It may be most appropriate to focus on A/H3N2 viruses from humans or A/H5N1 viruses from chickens when, for example, considering antigenic drift in the respective hemagglutinin proteins.\n\nSurveillance of avian influenza is a major focus of the CEIRS program. It extends beyond the study in the Delaware Bay. Subtypes not represented in the public sequence database are likely to be rare in the host population. Except in restricted situations such as outbreaks in a domestic poultry flock, it is unlikely that the subtype of an avian virus is known before sampling and laboratory identification. Therefore, sampling bias is unlikely to be a source of lack of sequences from a particular subtype.",
"responses": [
{
"c_id": "2269",
"date": "03 Nov 2016",
"name": "Andrew Dalby",
"role": "Author Response",
"response": "I would like to thank Dr Macken for her comments and review. While it is not possible to know for sure the ancestors of a reassortment event collocation in a specific time-frame is strongly suggestive especially if reassortment can be placed within the same time frame. The bootstrap is often treated as a measure of the accuracy of a phylogenetic tree when it is actually a measure of the precision of that sample of data producing that tree topology with that tree construction method. Tree reconstruction is limited by the quality of the sampling of the data, and no number of bootstraps can create the correct tree from a poor sample. I do not consider bootstraps as a measure of likely ancestry but more as a measure of clustering of sequences and their position within clades. If there is a clade with different subtypes then these are more likely to have a shared ancestry. The H5 trees contain the sporadic occurrence of many other H5 containing subtypes and show that even H5N2 and H5N1 have multiple reassortment events. The reasons for the focus on H5N8 is because there are such large breaks in the historical record, because of the global distribution of the subtype and most importantly because there was finally a sustained outbreak in Korea and then later Taiwan. It was this different behaviour between the Korean outbreak and previous cases of H5N8 that drew my attention. This does point to the actual question of interest, which is why some reassortments thrive but many do not. Even for the Korean outbreak when the Guangdong containing sequence reached North America it rapidly reassorted to produce a H5N2 subtype. This implies that while in North America the N2 is preferred, there is no equivalent N2 replacement for N8 in Korea or Taiwan. This shows a specific preference for neuraminidase lineages for hemagglutinin lineages that goes beyond the level of subtype. 
The reason for criticising studies that use just sequences from a single subtype is that you do not then get a contiguous sample of the hemagglutinin or neuraminidase sequence changes. If you consider the H5 hemagglutinin in H5N8, for example: if H5N8 becomes H5N2 and then returns to H5N8, you are missing the sequence changes that occur in the hemagglutinin while it is in the H5N2 subtype. This will affect antigenic studies as much as phylogenetic studies (and possibly even more severely, depending on the number of changes that occur in the H5N2 hemagglutinin), because you will be sampling irregularly in time. For the best possible sampling you need to see all of the individual nucleotide sequence changes. Where steps involve more than a single change, there are several different possible orders for these changes, which cannot be resolved unambiguously.\n\nBy sampling bias I mean the relatively small number and narrow geographical focus of current sampling efforts. I was not questioning the experimental validity of studies such as CEIRS or implying that the experiments are biased. My meaning was that most sampling is convenience sampling, and this is inherently biased. For example, Africa and South America have very few sequences available, and only now is there increased sampling in Russia, where the migratory breeding grounds for many long range bird migrants are found. While a subtype might be rare locally this might not be true globally. For example, the Guangdong H5 lineage was not known in North America until the H5N8 outbreak, and localisation of lineages seems to occur in many subtypes including H5 and H9."
}
]
},
{
"id": "18901",
"date": "25 Jan 2017",
"name": "Justin Bahl",
"expertise": [],
"suggestion": "Not Approved",
"report": "Not Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe author describes the history of the H5N8 avian influenza subtype by performing phylogenetic analysis of all H5 hemaglutinin and N8 neuraminidase gene sequences regardless of subtype. The objective of this analysis is to demonstrate reassortment events between the H5 and N8 gene segments to explain gaps in collection history of the H5N8 subtype. However, the quality of this analysis is doubtful for several reasons: foundational research on this subject appears to be ignored, assumptions are unclear and/or incorrect, the methods are subjective, strong assertions and conclusions are made with weak evidence, and the presented figures are often uninterpretable. In comparing this study to the published literature, the methods used are qualitative and highly sensitive to bias. I recommend the author uses other, more quantitative methods rather than reading the tree.\n\nInfluenza A virus has been an important model organism to study the mechanisms of evolutionary change. In addition to mutation, migration, natural selection and genetic drift, genomic reassortment of influenza gene segments is a well-known, well-established mechanism of influenza evolution and diversity1-8. Scientists have been reporting reassortment events since the 1970’s, including several that have resulted in pandemic disease. While the author no doubt knows this, results and conclusions are posed in such a way as to lead the reader to believe reassortment is a novel discovery. 
Reframing these findings in the context of previous studies on reassortment would be a helpful means to avoid this. Alternatively, the author may wish to approach the subject with a question that delves deeper into the phenomenon of reassortment itself, or to further investigate the methods of its analysis, rather than stopping at the conclusion that reassortment exists. The author has also overlooked some important systematic studies demonstrating that persistence of LPAI subtypes through time is rare, if it occurs at all. The author asks whether the H5 and N8 associations are randomly distributed across the tree or whether they cluster, assuming adequate sampling (a dubious assumption). With the understanding that LPAI subtypes rarely persist in wild birds, the signal of persistence in domestic birds might be an artifact of systematic surveillance versus outbreak investigation of an HPAI subtype.\n\nThe methods in this analysis are also unclear and subjective. The author hopes to estimate whether or not the H5 and N8 subtype genes associate persistently through time and space, immediately ruling out sampling bias and “cryptic expression” as causes for these associations. But what is meant by cryptic expression? And how was it assessed? Perhaps branch lengths? To evaluate reassortment of the H5N8 subtype, the author compares phylogenetic analyses of the H5 and N8 genes. This is a valid method to investigate reassortment, especially if the goal of the manuscript is to report circulating genotypes. But the author makes inferences and suggestions whose strength has not been assessed. 
For example, the author, referring to the early H5N8, states, \"Given that its sequence neighbours both before 1983 and afterward are from the H5N2 subtype it is plausible that the H5N8 hemagglutinin originated in the H5N2 subtype.\" Two immediate issues with this statement become clear: 1) no scale bar is provided to allow readers to assess how much evolution occurred on the branch leading to the H5N8, and 2) an H5N8 ancestral state is equally likely. The author also makes inferences about the Gochang and Buan groups, but again, no scale bar is provided, the relationship of these two groups is not depicted, and the NA analysis shows them to be monophyletic (i.e., a single introduction into HPAI H5). It is unclear how robust this assessment of multiple reassortment histories is.\nBootstrapping, as presented here, only provides support for the estimated bifurcation at a particular node, but provides no information on the strength of evidence for a reassortment event. For instance, a single H5N8 sequence within a clade of sequences from a different subtype may indicate a reassortment event (assuming systematic surveillance, as the author does). However, the evidence for reassortment becomes less clear in cases such as the author’s claim that the Gochang and Buan sequences developed from a local reassortment. The author provides no evidence to test this claim except for the heterogeneity/homogeneity of the clade based on bootstrap values, which can be influenced by phylogenetic error. The author should consider using BaTS to assess the probability of association given the phylogenies generated. Furthermore, the analysis provides no information on the extent of circulation, the context of surveillance, or the number of infected hosts or duration of circulation. All of these can affect branch lengths, which might lead to incorrect inference of the ancestral subtype. 
Additionally, the author might want to look at all H5N8 genomes to assess the relative diversity resulting from reassortment, using for example tree-to-tree comparison methods. Within the past several years a variety of computational methods have been developed that can help test these claims of reassortment, both dependent on5, 9, 10 and independent of11-13 phylogeny. These methods would provide statistical support for reassortment events that would go beyond the subjective claims made by the author.\n\nOne last recommended improvement for this article would be to focus on making the figures more readable and easier to interpret. Bootstrap values are often difficult to read as they are overlapped by branches and nodes. Coloring nodes by bootstrap value is probably not necessary since the values are listed, but if the color coding must be included, the nodes should be enlarged; otherwise it is difficult to tell the colors apart. In several cases these figures are uninterpretable. It also seems slightly misleading to keep the Gochang and Buan clades together for the N8 phylogeny but separate them into two figures when analyzing H5, especially without providing a tree to put these clades in context. There are also better ways to label taxa names than highlighting them in FigTree and taking a screen capture.\n\nWhile the reassortment history of the H5N8 subtype is an interesting subject, the analysis presented qualifies as an initial step in understanding the global dynamics of H5N8 reassortment and re-emergence. Improved analytical methods that provide quantitative support would greatly strengthen this paper and provide an empirical framework to test the robustness of the inferences made. Finally, those inferences must be presented in the context of previously published works.",
"responses": [
{
"c_id": "2885",
"date": "19 Jul 2017",
"name": "Andrew Dalby",
"role": "Author Response",
"response": "This is NOT a paper about foundations and the concept of reassortment. The reviewers are creating a straw-man argument and this is rhetoric and has no place in peer review. The title is clear is saying that this is a specific case of reassortment. The paper has one and ONE only objective. To try and explain why H5N8 appears and disappears from our records of circulating influenza subtypes. I am NOT saying anything at all about reassortment which I accept fully. This is well established within the community and reference to that assumed knowledge is irrelevant to the aims of the paper. I have made all of my assumptions explicit and I have made all of my data and calculations available. Everything that I have done is fully reproducible and can be checked by anyone who wishes. This is vital for this sort of work and I will not compromise on this. In fact some data cannot be included because of groups from Hong Kong who make access to their data particularly difficult. The methods are identical to those used by the WHO H5 nomenclature project. They produced their trees using Fasttree. I used Muscle for alignment as it is recommended by the authors of Fasttree. Fasttree has 17,000 cites it is well known and widely used. This is about the tone of the writing and if you can name a specific claim then the wording could be changed but this does not alter the underlying analysis only the way it is interpreted. All trees were supplied as vector images but it appears that the online version has bitmaps which is not what I was expecting. The need for large images is to reduce my subjective choices of what data I show to you. They are needed to make it clear that H5N8 sequences are distributed widely across the H5 hemagglutinin and H8 neuraminidase trees. It would be easier to take a clade based numerical approach IF clades existed for the North American sequences but as yet outside the Guangdong lineage clade naming does not exist. 
Being qualitative does not mean that the methods are susceptible to bias; I can be quantitative and still biased. Bias is something that affects sampling and signifies not taking an appropriately random sample from a population. What the referees mean is that the trees I have shown are selective and possibly subjective. I am fully aware of this, which is why all of the data from all of the trees is available. The referees can choose their own views of the data if they think that they can find a better way, but it will show exactly the same pattern of reassortment I have shown in the paper. The paper is framed in the very clear way of saying what I am doing. I am looking at H5N8 reassortment, not reassortment in general: reassortment in a subtype that is absent over a number of years while other H5-containing subtypes are circulating. There is nothing more, and I would not even vaguely attempt to take the results in a more general direction, because that would take a statistical analysis of all H5 subtypes, including the common H5N2 and H5N1 subtypes, which are of no interest to me. The persistence of LPAI subtypes is something that could and should be mentioned, but I note that the referees do not provide a reference for these studies. Having studied H9N2 before, I am not aware of breaks in the history of that subtype, which is a typical example of an LPAI. In fact I would dispute this claim strongly as completely unfounded based on the data. More recently I have been working on H7N2, which DOES show the same breaks in time but which is distinct because the sequence is not present in any other H7-containing subtypes either. This indicates that the issue is sampling and that the virus is circulating cryptically and is not sampled at all. I do not assert anything about a random distribution. I say that they are widely distributed because they are. The word random never appears in the text and this is another straw-man argument. 
If I thought that their distribution was a random process then I certainly would have mentioned it, as one of my research areas is stochastic processes. If H5N8 had been created by a single reassortment event then all H5N8 sequences would be in the same clade for both hemagglutinin and neuraminidase. I have made no hypothesis about their random distribution and I have made no attempt to test such a hypothesis by a combinatorial test, which could be done. The good thing about influenza is that if a tree is locally polyphyletic with different subtypes, and if the tree is correct, then there HAS TO BE REASSORTMENT. I cannot have a part of the tree containing H5N1, H5N2 and H5N8 without the H5 hemagglutinin having reassorted with the 3 different neuraminidase genes. N1 does not mutate into N2 or N8; they are gained or lost in reassortment. For this reason I considered the findings obvious and simple to see, but the referees seem to be showing me that this is not as obvious as I imagined. I dispute that LPAIs do not persist in wild birds; even if they do not, this is still not an objection that fits the observation that H5N8 is widely distributed in both the H5 and N8 trees. If it were in domestic birds you would know, because it is not cryptic: ducks stop laying and chickens die with H5N8. The HA and NA trees ARE sufficient evidence on their own. Adding the internal genes would give further evidence, and if the referees want to go through a tree with 35,000 sequences for each of the internal genes then I am happy to let them. I have actually done parts of this analysis and I can include it if required; it supports what I have stated 100%. These are reassortment events, and the PB2 genes at least show the same patterns as the H5 hemagglutinin and N8 neuraminidase. A problem with the internal genes is that they are often not sequenced properly or deposited. This is a particular problem for the PA segment. 
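The polyphyly argument above can be reduced to a trivial check (a minimal sketch of the stated logic only; the function name and subtype lists are hypothetical examples, not data from the paper):

```python
def implies_reassortment(tip_subtypes):
    """True if the tips of a single HA clade carry more than one NA
    subtype.  Since an N1 gene cannot mutate into N2 or N8 (whole
    segments are gained or lost in reassortment), mixed NA subtypes
    within one HA clade require at least one reassortment event."""
    na_types = {s.split("N")[-1] for s in tip_subtypes}
    return len(na_types) > 1

implies_reassortment(["H5N1", "H5N2", "H5N8"])  # mixed NAs within a clade
implies_reassortment(["H5N8", "H5N8"])          # a single NA subtype
```

The check says nothing about which lineage is ancestral, only that segment exchange must have occurred somewhere within the clade.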
At some point, if the entire clade is H5N2 and there is an H5N8 sequence within the clade, then there has to be a reassortment for the N8 to appear. N2 does not mysteriously transform into N8; the N8 has to come from somewhere. The suppositions where the tree is not well sampled are sketchier and, as stated, represent a plausible hypothesis rather than a definite event. The Gochang and Buan groups have been done to death and I myself have done a detailed BEAST analysis of them. These are NOT the focus of the paper, which is the US clade where the H5N8 subtype appears and disappears from the timeline. These groups actually provide a positive control, showing what I would expect to see if all of the US H5N8 sequences had originated in a single reassortment event. In fact these groups disappear in the analysis of the internal genes. I did not set out to show anything about Buan and Gochang as that is already accepted knowledge. Bootstrap values actually only tell you about the ambiguity of the tree generated by that software using that data. They are a test of reliability and not of accuracy. If you have a biased sample then the bootstrap will be biased as well. They tell you nothing at all about phylogenetic accuracy and personally I consider them a poor statistical measure, but it is impossible to get anything published in phylogenetics without including them. Susan Holmes, who worked with Brad Efron, has written extensively about what they can and cannot tell you, but this work is sadly under-cited. Mostly they tell you whether your sampling is adequate or inadequate, and whether you have regions of identical or near-identical sequences, as the order in which these are placed in the tree is ambiguous. I was being speculative about the ancestral subtype. The key finding is that there is an ancestor that is from another subtype and not H5N8, as the H5N8 has to come from somewhere. 
That it did not come from somewhere once but multiple times is the issue, and I can certainly shorten the paper by removing the speculative elements. I have used all of the available H5 and N8 sequences, which means all of the H5N8 sequences available at the time I carried out the analysis. There will be more now, showing exciting new findings in Taiwan and that H5N8 is once more circulating in migrating birds. But as I stated, this is not the point. I want to look at what happened in the past when H5N8 was sporadic. I am not looking at its present or its future. I want to say how it did reassort and evolve, and for those older sequences complete genomes are lacking. What the referees are talking about is a totally different type of study, which, while interesting, I will leave for others to carry out. They are not FigTree screen captures; all were submitted as PDF files with vector images and can be enlarged to whatever size the viewer needs in order to see them clearly. This is a problem of journal production. The Gochang and Buan clades are further apart in the H5 phylogeny, which is why they were divided. I can put them together, but I tried to make the figures as simple as I could."
}
]
}
] | 1
|
https://f1000research.com/articles/5-2463
|
https://f1000research.com/articles/5-2003/v1
|
16 Aug 16
|
{
"type": "Research Article",
"title": "Using diverse U.S. beef cattle genomes to identify missense mutations in EPAS1, a gene associated with high-altitude pulmonary hypertension",
"authors": [
"Michael P. Heaton",
"Timothy P.L. Smith",
"Jacky K. Carnahan",
"Veronica Basnayake",
"Jiansheng Qiu",
"Barry Simpson",
"Theodore S. Kalbfleisch",
"Timothy P.L. Smith",
"Jacky K. Carnahan",
"Veronica Basnayake",
"Jiansheng Qiu",
"Barry Simpson"
],
"abstract": "The availability of whole genome sequence (WGS) data has made it possible to discover protein variants in silico. However, existing bovine WGS databases do not show data in a form conducive to protein variant analysis, and tend to under represent the breadth of genetic diversity in U.S. beef cattle. Thus, our first aim was to use 96 beef sires, sharing minimal pedigree relationships, to create a searchable and publicly viewable set of mapped genomes relevant for 19 popular breeds of U.S. cattle. Our second aim was to identify protein variants encoded by the bovine endothelial PAS domain-containing protein 1 gene (EPAS1), a gene associated with high-altitude pulmonary hypertension in Angus cattle. The identity and quality of genomic sequences were verified by comparing WGS genotypes to those derived from other methods. The average read depth, genotype scoring rate, and genotype accuracy exceeded 14, 99%, and 99%, respectively. The 96 genomes were used to discover four amino acid variants encoded by EPAS1 (E270Q, P362L, A671G, and L701F) and confirm two variants previously associated with disease (A606T and G610S). The six EPAS1 missense mutations were verified with matrix-assisted laser desorption/ionization time-of-flight mass spectrometry assays, and their frequencies were estimated in a separate collection of 1154 U.S. cattle representing 46 breeds. A rooted phylogenetic tree of eight polypeptide sequences provided a framework for evaluating the likely order of mutations and potential impact of EPAS1 alleles on the adaptive response to chronic hypoxia in U.S. cattle. This public, whole genome resource facilitates in silico identification of protein variants in diverse types of U.S. beef cattle, and provides a means of translating WGS data into a practical biological and evolutionary context for generating and testing hypotheses.",
"keywords": [
"Beef cattle",
"Whole genome sequence",
"EPAS1",
"HIF2A",
"Pulmonary hypertension",
"Brisket disease"
],
"content": "Introduction\n\nThe number of disease-causing mutations discovered in bovine genes has increased with the advancement of next-generation sequencing, reference genomes, and high density single nucleotide polymorphisms (SNPs) arrays. There are currently 114 Mendelian traits with known causal mutations in 117 cattle genes1. The majority of these mutations cause disease by changing the gene’s protein sequence and thereby altering its normal function. As the list of major genes influencing bovine traits grows longer, there is increasing interest in the protein variants encoded by these genes, either in the source population, or in other uncharacterized populations. For example, in 2006 when a K211 mutation was discovered in the prion gene of a U.S. cow that developed atypical BSE, there was a pressing need to search for this potentially pathogenic allele in other U.S. cattle2,3. In principle, these searches can be accomplished in silico with access to population-scale gene sequence data.\n\nGene function can be affected by a wide range of genomic sequence differences including: large scale structural polymorphisms like translocations, inversions and copy number variants4; and small scale differences like methylation, insertions/deletions (indels), and SNPs5. Variants that alter amino acid sequences, such as missense, nonsense, frameshift, and splice site mutations, are among those most likely to affect function6. These variants are readily identified by aligning genomic sequences of animals to an annotated reference genome assembly. An essential first step in understanding a gene’s potential influence on a trait, is determining whether any protein variants are encoded in a set of reference individuals7,8.\n\nHowever, access to population-scale gene sequence data has been a limiting step for biomedical veterinary researchers studying U.S. cattle. 
The reagent costs for a traditional, single-gene approach with PCR-based Sanger exon resequencing of a population set can exceed tens of thousands of dollars, and take more than a year to complete9. In addition, exon resequencing is a recurring cost for each gene analyzed. Approaches that use whole exome sequencing are less costly per gene, but incompletely target some bovine genes. Gene coverage with whole genome sequence (WGS) approaches is limited only by the quality of the reference genome, and the amount of data produced. WGS approaches have the advantage of high genotype accuracy and a low cost per gene, as the sequencing only needs to be accomplished once. The primary disadvantages of WGS approaches are the cost of production, the need for computing and informatics systems, and expertise for complex analyses. Regardless of the sequencing approach chosen, selecting the appropriate individuals for study, and verifying their identity and data integrity is essential.\n\nPopulation-scale WGS data has been reported for a number of major cattle projects and is available at the National Center for Biotechnology Information (NCBI) through BioProjects and the Sequence Read Archive (SRA). Projects include 234 purebred cattle from global Holstein-Friesian, Fleckvieh, Jersey, and Angus breeds5; and 379 Canadian cattle from six purebred beef breeds, three crossbred beef breeds, and the Holstein dairy breed10. Key influential animals were chosen for these projects with the primary goal of using them to impute genetic variants and improve the accuracy of genomic prediction and genome wide association studies. However, these animals are not ideally suited for protein variant discovery across U.S. beef cattle populations, and the SRA data files are not readily searchable by gene. A third related population-scale WGS BioProject used 154 U.S. bulls from seven popular breeds and a mix of 116 crossbred and purebred animals from eight other breeds11. 
However, the average genome coverage for these influential sires was 2.5-fold, and thus insufficient for determining genotypes of individual animals.\n\nThe present report describes a publicly available and searchable set of mapped genomes for 96 beef sires from 19 breeds of U.S. cattle. These sires were chosen to minimize the relationships shared between pedigrees. The average read depth for these genomes was 14.8 and access is available without restrictions12. The genome sequences may be either viewed directly with open source, high-performance visualization software13,14 via United States Department of Agriculture (USDA), Agricultural Research Service (ARS) internet sites, or downloaded from the SRA at NCBI. Visualization software, in combination with online access, allows users to navigate to a gene, visually identify, and accurately record protein variants occurring in U.S. beef cattle populations. Thus, if a gene mutation is reported in one breed, it can quickly be evaluated in other breeds, and the gene can be searched for additional variants that may potentially affect protein structure.\n\nThe feasibility of this process was demonstrated for the bovine endothelial Per-ARNT-Sim (PAS) domain-containing protein 1 gene (EPAS1) encoding the hypoxia inducible transcription factor 2A (HIF2A). EPAS1 was selected for analysis because two linked missense mutations were reported to be associated with high-altitude pulmonary hypertension (PH) in Angus cattle15. PH causes right-sided heart failure (RHF) and has been recognized as an increasing problem in North American beef cattle in feedyards, and in dairy cattle16,17. Moreover, cattle treated for bovine respiratory disease complex (BRDC) in feedyard environments were three times more likely to die from RHF, and died earlier in the feeding period17. 
The linked missense mutations in EPAS1 encode threonine (T) and serine (S) at amino acid positions 606 and 610, respectively, and were associated with high-altitude PH when compared to the more common allele encoding alanine (A) and glycine (G) at these positions15. Animals carrying one or two copies of the EPAS1 T606, S610 allele were significantly more likely to develop PH at high altitudes. Although other genes are likely involved, EPAS1 appears to be a major gene affecting PH in cattle15.\n\nThe protein encoded by EPAS1 is one of three HIF-alpha proteins that plays an important role in transcriptional regulation of the hypoxic response in metazoans, and is highly conserved in mammals (reviewed in 18). HIF2A is an oxygen sensing subunit that forms a heterodimer with a common, constitutively expressed beta subunit, HIF1B. Each HIF heterodimer binds to specific hypoxia responsive elements and transcriptionally activates different sets of genes. HIF2A controls the chronic hypoxia response by binding small molecules, interacting with proteins, and binding to DNA. Amino acid substitutions in HIF2A have the potential to drastically affect those critical interactions.\n\nIn humans and mice, EPAS1 missense mutations affecting the HIF2A oxygen-dependent degradation domain (ODDD) cause a gain-of-function, activate the hypoxia pathway, and can cause familial PH19,20. The EPAS1 substitutions at positions 606 and 610 reported by Newman et al. are also in the ODDD of HIF2A, although not at the same positions as those in humans and mice15. Although no additional EPAS1 variants were previously observed in exons from the 20 Angus cattle sequenced15, a broader reference set of U.S. beef cattle was not evaluated. Here, we report the discovery of four additional EPAS1 missense mutations, a rooted phylogenetic tree of eight distinct HIF2A sequences, a genetic test for typing them, and HIF2A variant frequencies in a separate collection of 1154 U.S. cattle representing 46 breeds. 
The results illustrate the utility of the approach, and provide a resource for evaluating protein variants in specified genes of interest. Knowledge of cattle protein variants affecting function is critical for transitioning from a descriptive phase of genomics to an applied phase where animal health, welfare, and production may be improved.\n\n\nMethods\n\nThis article contains no studies performed with animal subjects. Archival DNA was used from extracts of samples that were either: purchased from commercial sources that collected them for artificial insemination of cattle and not for research, purchased from individuals that collected them privately for their purposes (such as food), or donated to the U.S. Meat Animal Research Center (USMARC) by private individuals that collected them privately for their own purposes.\n\nThe discovery panel consists of 96 unrelated individuals from 19 popular U.S. beef breeds (USMARC Beef Diversity Panel version 2.9 [MBCDPv2.9], Figure 1). The current panel design was based on a previous set of commercially-available sires from 16 breeds with minimal pedigree relationships (MBCDPv2.1)21. For both panels, pedigrees were obtained from leading suppliers of U.S. beef cattle semen and analyzed to identify unrelated individuals for inclusion. On the basis of the number of registered progeny, the breeds in the MBCDPv2.1 were estimated to represent greater than 99% of the germplasm used in the US beef cattle industry, contain more than 187 unshared haploid genomes, and allow a 95% probability of detecting any allele with a frequency greater than 0.016 (ref. 21). As previously described, this “threshold” frequency was defined as the minimum allele frequency at which the probability of observing the allele at least once in an animal group was 0.95. 
The probability of observing an allele at least once is 1 − (1 − p)^n, where “p” is the frequency of the allele and “n” is the number of independent samplings, or, in this case, the number of unshared haploid genomes for diploid organisms. This assumes that samplings (haploid genomes) are independent and identically distributed (the same p applies to all animals). Setting power, or the probability of observing the allele at least once, to 0.95 results in the equation: 0.95 = 1 − (1 − p)^n. Solving this equation for p yields p = 1 − (0.05)^(1/n) for all p between 0 and 1. The panel was updated to increase the number of beef breeds from 16 to 19, and remove the Holstein breed, which was well represented in other WGS datasets. To make room for three additional beef breeds (Braunvieh, Corriente, and Tarentaise), the maximum number of sires within a breed was reduced from eight to six (NCBI BioProject PRJNA324822).\n\nThis group of 96 registered beef sires was chosen to have a minimum of pedigree relationships within each of the 19 breeds.\n\nA separate set of cattle samples was used to validate results obtained from the above discovery panel. The validation panel consisted of samples from male and female registered purebred cattle with diverse pedigrees. Samples were from semen, blood, or hair follicles, depending on gender and availability as previously described3. Where possible, animals within breed were chosen so they did not share parents or grandparents, and none were closely related to the 96 sires in the MBCDPv2.9. 
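The threshold-frequency algebra above can be checked numerically (a minimal sketch; the function name is illustrative, while n = 187 is the panel's stated count of unshared haploid genomes):

```python
def threshold_allele_frequency(n, power=0.95):
    """Smallest allele frequency p that is observed at least once
    with the given probability in n independent haploid genomes,
    from solving power = 1 - (1 - p)**n for p."""
    return 1 - (1 - power) ** (1.0 / n)

# With more than 187 unshared haploid genomes, any allele at a
# frequency above this threshold is seen with 95% probability.
p_min = threshold_allele_frequency(187)
```

With n = 187 this evaluates to roughly 0.016, matching the detection threshold quoted for the MBCDPv2.1 panel.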
The breeds and samples used were: Angus (n = 24), Ankole-Watusi (n = 24), Ayrshire (n = 24), Beefmaster (n = 24), Belgian Blue (n = 24), Blonde d'Aquitaine (n = 24), Brahman (n = 24), Brahmousin (n = 24), Braunvieh (n = 24), Brangus (n = 24), Brown Swiss (n = 26), Charolais (n = 24), Chianina (n = 24), Corriente (n = 24), Devon (n = 24), Dexter (n = 24), Gelbvieh (n = 24), Guernsey (n = 25), Hereford (n = 24), Highland (n = 24), Holstein (n = 86), Indu-Brazil (n = 24), Jersey (n = 28), Limousin (n = 24), Maine-Anjou (n = 24), Marchigiana (n = 24), Mini-Hereford (n = 24), Mini-Zebu (n = 24), Montbeliard (n = 24), Murray Grey (n = 21), Nelore (n = 24), Piedmontese (n = 24), Pinzgauer (n = 24), Red Angus (n = 24), Red Poll (n = 24), Romagnola (n = 24), Salers (n = 24), Santa Gertrudis (n = 24), Senepol (n = 23), Shorthorn (n = 24), Simmental (n = 24), Tarentaise (n = 24), Texas Longhorn (n = 24), Texas Longhorn, Cattlemen’s Texas Longhorn Registry (CTLR, n = 22), Tuli (n = 24), and Wagyu (n = 24).\n\nDNA was extracted from commercial semen with a typical phenol:chloroform method and stored at 4°C in 10 mM TrisCl, 1 mM EDTA (pH 8.0) as previously described22. Approximately 5 μg of bovine genomic DNA was fragmented by focused-ultrasonication to generate fragments less than 800 bp long (Covaris, Inc. Woburn, Massachusetts USA). These fragments were used to make an indexed, 500 bp paired-end library according to the manufacturer’s instructions (TruSeq DNA PCR-Free LT Library Preparation Kits A and B, Illumina, Inc., San Diego, California USA). After construction, indexed libraries were pooled in groups of four to eight, and sequenced with a massively parallel sequencing machine and high-output kits (NextSeq500, two by 150 paired-end reads, Illumina Inc.). After sequencing, the raw reads were filtered to remove adaptor sequences, contaminating dimer sequences, and low quality reads. 
Pooled libraries with compatible indexes were repeatedly sequenced until 40 GB of data with greater than Q20 quality was collected for each sire. In preliminary trials, 40 GB of Q20 data consistently resulted in greater than 10-fold read coverage for each animal. Previous results showed that this level of coverage provided scoring rates and accuracies that exceeded 99%23.\n\nThe DNA sequence alignment process was similar to that previously reported23. Briefly, FASTQ files corresponding to a minimum of 40 GB of Q20 sequence were aggregated for each animal. DNA sequences from FASTQ files were aligned individually to UMD3.1 (ref. 24) with the BWA aln algorithm version 0.7.12 (ref. 25), then merged and collated with bwa sampe. The resulting sequence alignment map (SAM) files were converted to binary alignment map (BAM) files, and subsequently sorted via SAMtools version 1.3.1 (ref. 26). Potential PCR duplicates were marked in the BAM files using the Genome Analysis Toolkit (GATK) version 3.6 (ref. 27). Regions in the mapped dataset that would benefit from realignment due to small indels were identified with the GATK module RealignerTargetCreator, and realigned using the module IndelRealigner. The BAM files produced at each of these steps were indexed using SAMtools. The resulting indexed BAM files were made immediately available via the Intrepid Bioinformatics genome browser http://www.intrepidbio.com/ with groups of animals linked at the USMARC WGS browser http://www.ars.usda.gov/Services/Docs.htm?docid=25585. The raw reads were deposited at NCBI BioProject PRJNA324822. Mapped datasets for each animal were individually genotyped with the GATK UnifiedGenotyper with arguments “--alleles” set to the VCF file (File S1), “--genotyping_mode” set to “GENOTYPE_GIVEN_ALLELES”, and “--output_mode” set to “EMIT_ALL_SITES”. 
Lastly, some SNP variants were identified manually by inspecting the target sequence with IGV software version 2.1.28 (described in the Methods section entitled ‘Identifying protein variants encoded by EPAS1’). In these cases, read depth, allele count, allele position in the read, and quality score were taken into account when the manual genotype determination was made.

Genotypes from a set of 121 reference SNPs were used as an initial verification of the WGS datasets. Many of these DNA markers have been widely used for parentage determination, animal identification, and disease traceback (Table S1)21,28,29. The 121 reference SNPs were previously genotyped across the MBCDPv2.9 by multiple PCR-Sanger sequencing reactions, two independent designs of multiplexed matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) genotyping assays, and multiple bead array platforms, and are tabulated in Table S2. The error rate in the WGS data was estimated by comparing the consensus genotypes for these SNPs to the WGS genotypes. An animal’s WGS dataset passed initial verification when the accuracy of the WGS genotypes exceeded 97%, and the average mapped read depth was proportional to the amount of WGS data collected. Animals’ datasets that failed this initial verification were inspected closely for contaminating and/or missing files. Electronic file transfer errors resulted in contaminated and missing data for approximately one third of the 96 WGS datasets and required systematic testing, correction, and reprocessing. Linear regression analysis was accomplished in Excel version 2016. Access to the sequence via the USDA internet site (http://www.ars.usda.gov/Services/Docs.htm?docid=25585) and the Intrepid Bioinformatics site (http://server1.intrepidbio.com/FeatureBrowser/customlist/record?listid=7686214634) was provided as soon as the .BAM files were produced.
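The 97% pass criterion above amounts to a simple genotype-concordance computation. A minimal sketch, assuming genotypes are represented as unordered allele sets keyed by SNP name (the data representation is an assumption for illustration, not the authors' format):

```python
# Concordance check between consensus reference genotypes and WGS-derived
# genotypes, mirroring the 97%-accuracy initial-verification rule.

def genotype_accuracy(reference, wgs):
    """Fraction of reference SNP genotypes reproduced by the WGS calls.
    Genotypes are unordered allele sets, e.g. frozenset({'C', 'T'})."""
    matches = sum(1 for snp, gt in reference.items() if wgs.get(snp) == gt)
    return matches / len(reference)

def passes_initial_verification(reference, wgs, threshold=0.97):
    """True when WGS genotype accuracy exceeds the threshold."""
    return genotype_accuracy(reference, wgs) > threshold

# Toy data: three of three genotypes concordant -> dataset passes.
ref = {"snp1": frozenset("CT"), "snp2": frozenset("A"), "snp3": frozenset("G")}
print(passes_initial_verification(ref, dict(ref)))  # True
```

A dataset failing this check would then be inspected for contaminating or missing files, as described above.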
Because the raw datasets were available online as they were produced, the FASTQ files were deposited in the NCBI SRA only after they were validated as described above. These 96 sets of files can be accessed through BioProject PRJNA324822 in the Project Data table under the Resource Name: SRA Experiments. SNPs from the BovineHD BeadChip (Illumina Inc.) were selected for comparison because they were numerous, uniformly distributed across the bovine genome, and available. Based on the nucleotide sequence of the probes obtained from the manufacturer, the positions of the SNPs were verified via a BLAT process as previously described23. A total of 772,990 variant positions were successfully mapped with this process, with 54 positions being discrepant when compared to those in the manufacturer’s most recent release of probe descriptions. The VCF file for these 772,990 variants is provided (File S1). The genotypes from the WGS data were compared to those from the high-density bead array with a custom program written specifically for this operation. Three classes of discordant genotypes were identified. The first class consisted of genotypes scored as homozygous in the WGS data and heterozygous in the bead array data. These could have resulted from low coverage in the WGS data at that position, or errors in the bead array caused by probes hybridizing to repeated sequences. The second class consisted of genotypes scored as heterozygous in the WGS data and homozygous in the bead array data. These could have resulted from allele-specific probe hybridization problems in the bead array platform. The final class consisted of missing genotypes in the bead array data, which were likely caused by errors in the conversion of the manufacturer’s “AB” genotype calls to the nucleotide calls.

Using public internet access to USMARC sites, the nucleotide variation in the exon regions of EPAS1 was visualized with open source software installed on a laptop computer and recorded manually in a spreadsheet.
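The three discordance classes above can be assigned mechanically from the two genotype calls at each site. A minimal sketch, assuming genotypes are allele sets and a missing bead array call is `None` (this representation, not the authors' custom program, is the assumption here):

```python
# Classify one WGS/bead-array genotype pair into the discordance classes
# described above, or report it as concordant.

def classify_discordance(wgs_gt, array_gt):
    """wgs_gt and array_gt are allele sets, e.g. {'C', 'T'}; a missing
    bead array genotype is passed as None."""
    if array_gt is None:
        return "missing array call"      # class 3: "AB" conversion failures
    if wgs_gt == array_gt:
        return "concordant"
    if len(wgs_gt) == 1 and len(array_gt) == 2:
        return "hom WGS / het array"     # class 1: low depth or repeat probes
    if len(wgs_gt) == 2 and len(array_gt) == 1:
        return "het WGS / hom array"     # class 2: allele-specific hybridization
    return "other discordance"

print(classify_discordance({"C"}, {"C", "T"}))  # hom WGS / het array
```

Counting these labels across the 730 k compared sites would reproduce the per-class tallies discussed in the Results.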
Briefly, a Java Runtime Environment (Oracle Corporation, Redwood Shores, CA) was first installed on the computer. When links to the data were selected from the appropriate web page, IGV software13,14 automatically loaded from a third-party site (Intrepid Bioinformatics, Louisville KY) and the mapped reads were loaded in the context of the bovine UMD3.1 reference genome assembly. For viewing EPAS1 gene variants, WGS from a set of eight animals of different breeds was loaded (“mixed groups of 8”, http://www.ars.usda.gov/Research/docs.htm?docid=25586) and the IGV browser was directed to the appropriate genome region by entering “EPAS1” in the search field. The IGV zoom function was used to view the first exon at nucleotide resolution with the “Show translation” option selected in IGV. The exon sequences were visually scanned for polymorphisms that would alter amino acid sequences, such as missense, nonsense, frameshift, and splice site mutations. Once identified, the nucleotide position corresponding to a protein variant was viewed and recorded for all 96 animals. Using IGV, codon tables, and knowledge of the HIF2A protein sequence (NP_777150), the codons affected by nucleotide alleles were translated into their corresponding amino acids and their positions noted. Haplotype-phased protein variants were assigned unambiguously in individuals that were homozygous and in individuals with only one variant amino acid. A maximum parsimony phylogenetic tree was constructed manually from the unambiguously phased protein variants and used to infer phase in any remaining variants with simple maximum parsimony assumptions.

WGS datasets from five closely related Bovinae species were mapped to the cattle reference assembly UMD3.1 with a process similar to that previously reported23. These mapped Bovinae samples included two each of yak, gaur, and banteng; and one sample each of plains bison and water buffalo.
The mapped genomes were visually inspected across the EPAS1 exons in the same browser environment as the cattle data, and variant codons were recorded. Information about the source and the content of the WGS datasets is provided in Table S3. Because reference SNP genotypes are not readily available for these species, verification of the integrity and quality of the newly sequenced Bovinae WGS datasets was limited. For each dataset, the mapped read density in conserved exons was estimated and compared to the amount of Q20 sequence collected for that animal. No inconsistencies were noted between the expected and observed read depths. In addition, distinctive homozygous “species-specific” nucleotides were observed for each species, and these same nucleotides were not observed in the other species. The genomes for all eight animals were made viewable by IGV at http://www.ars.usda.gov/Services/Docs.htm?docid=25585. They are also available at NCBI BioProjects: PRJNA325061, PRJNA221623, and PRJNA207334.

A single multiplex assay was designed for the six EPAS1 missense SNPs using the information in Table 1 and software provided by the manufacturer (Agena Biosciences, San Diego, California, USA). The oligonucleotide sequences and assay conditions are provided in Table S4. After design and validation with bovine control DNAs for each SNP, the MBCDPv2.9 DNA was tested in a blinded experiment in which the true genotypes were unknown by those typing the samples. Assay design and genotyping were performed at GeneSeek (Lincoln, Nebraska, USA) with the MassARRAY platform and iPLEX Gold chemistry according to the manufacturer’s instructions (Agena Biosciences). MALDI-TOF MS genotypes for six SNPs are provided for the MBCDPv2.9 and 1154 of 1168 cattle from 46 breeds in Table S5.

aThe bovine EPAS1 gene is oriented in the sense direction with regard to the UMD3.1 reference assembly.
All sequences presented are from the sense strand.

bHIF2A protein domain abbreviations: PAS-B, Per-Arnt-Sim domain B; ID, interdomain; and ODDD, oxygen-dependent degradation domain.

cIUPAC/IUBMB ambiguity codes used for nucleotides: R = a/g, Y = c/t, M = a/c, K = g/t, S = c/g, W = a/t39.

dThe major allele is listed first.

eMinor allele frequency in MBCDPv2.9.

fMissense mutations associated with high altitude PH15.


Results

A beef cattle diversity panel was designed to broadly sample the genetic diversity of U.S. populations, while fitting within the constraints of a 96-sample format often used for automated DNA sequencing and genotyping. The composition and design of the panel was updated from a previously reported set as described in the Methods. A minimum of four sires were included for each breed, with the more popular U.S. breeds having five or six animals (Figure 1). There was relatively little power for detecting rare variants within breed, since not more than 12 haploid genomes were sampled (95% probability of detecting any polymorphism with a frequency greater than 0.22, Methods). Despite the modest power within breed, sequencing the entire panel significantly increased the chances of detecting relatively rare variants segregating in U.S. beef cattle. With more than 187 of the 192 haploid genomes in the 96 sires being unshared, it was estimated that there was a 95% probability of observing polymorphisms with a frequency greater than 0.016. Thus, the power for allele detection in this beef diversity panel was derived from having exceedingly few pedigree relationships within breed, and essentially none between breeds.

The WGS was generated by sequencing indexed pools of libraries whose composition was adjusted iteratively across multiple instrument runs to achieve at least 40 GB of FASTQ sequence. The average amount of total sequence per sample was 48.3 GB (±12.0) and varied between 40.2 GB and 109.4 GB.
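Both detection probabilities quoted above follow from the standard formula P = 1 − (1 − p)^n for observing at least one copy of an allele of frequency p among n sampled haploid genomes. A quick check of both figures:

```python
# Probability of observing at least one copy of an allele of frequency p
# among n independently sampled haploid genomes.

def detection_probability(p, n):
    return 1 - (1 - p) ** n

# Within one breed: 12 haploid genomes, allele frequency 0.22.
print(round(detection_probability(0.22, 12), 3))    # 0.949, i.e. ~95%
# Across the panel: 187 unshared haploid genomes, frequency 0.016.
print(round(detection_probability(0.016, 187), 3))  # 0.951, i.e. ~95%
```

Both values land at the stated 95% threshold, confirming the power calculations as described.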
This approach reduced the overall data production cost; however, each animal had data files from multiple sequencing runs that required manual collation prior to analysis, and thus increased the labor cost. In addition to the usual challenges of sample contamination, sample switches, missing data, variable quality data, and data transfer errors, the FASTQ files produced by the instrument had identical names across multiple machine runs. This added another layer of complexity to maintaining file provenance. The process of manually aggregating and transferring an average of 42 similarly-named FASTQ files for each animal was inherently prone to error and unavoidable given the instrument and the institutional network security restrictions.

Thus, to verify the WGS data integrity at the end of the process, genotypes from a set of 121 reference SNPs were used as a first test. These SNPs are distributed across the genome, are highly informative in U.S. beef cattle, and have been widely used for bovine parentage testing (Methods). The WGS-derived genotypes for these 121 SNPs were obtained by viewing an animal’s mapped reads at the relevant genome coordinates, with public software, a third-party database, and web links created for this task (illustrated in Figure 2A, http://www.ars.usda.gov/Research/docs.htm?docid=25586). As described in the Methods, data inconsistencies of multiple types were discovered by comparison with the known reference genotypes and corrected in approximately one third of the file sets. Comparison to the reference SNP genotypes also provided a check for the expected linear relationship between the amount of sequence collected and the depth of reads mapped to the reference assembly. Regression analysis showed that the average read depth at the 121 reference SNPs was directly proportional to the amount of sequence collected (Figure 2B). The 48.3 GB of sequence collected for each animal resulted in an average of 14.4-fold depth of mapped read coverage.
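The stated proportionality implies a slope of roughly 14.4-fold per 48.3 GB, or about 0.3× coverage per GB of Q20 data. A least-squares fit through the origin illustrates the check; the (GB, depth) pairs below are illustrative stand-ins, not the actual per-animal measurements.

```python
# Fit depth = b * gigabases through the origin by least squares and use it
# to predict coverage at the 40 GB minimum. Data points are made up for
# demonstration, chosen to match the proportionality stated in the text.

def fit_through_origin(points):
    """Least-squares slope b for depth = b * gigabases."""
    num = sum(g * d for g, d in points)
    den = sum(g * g for g, _ in points)
    return num / den

points = [(40.2, 12.0), (48.3, 14.4), (60.0, 17.9)]  # illustrative pairs
b = fit_through_origin(points)
print(round(b * 40, 1))  # ~11.9-fold at the 40 GB minimum
```

The predicted coverage at 40 GB is consistent with the greater-than-10-fold target established in the preliminary trials.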
The overall accuracy of WGS genotypes for the 121 reference SNPs was 99.5%, with 56 sires having 100% concordance (Figure 2C). The few WGS genotype errors observed were typically caused by undetected heterozygous alleles at sites with low read coverage. Thus, the use of 121 reference SNPs was effective for discovering and repairing errors in these WGS datasets, and for verifying the coverage.

Panel A: Computer screen image of one animal’s WGS data aligned to bovine reference assembly UMD3.1 at a reference SNP site. The heterozygous C/T genotype is shown as viewed with the IGV software13,14. Panel B: Linear relationship between mapped read depth and the amount (Gb) of Q20 WGS data collected. At each SNP position, the read depth and genotypes were visualized and manually recorded for 121 parentage SNPs. A list of these 121 parentage SNPs and their sequence information is provided in Table S1. Panel C: Genotype scoring accuracy for 121 parentage SNPs in 96 sires. Consensus reference genotypes (n = 11,616) for the parentage SNPs were previously determined by multiple methods (Table S2).

A broader characterization of the coverage and quality of each dataset was accomplished by comparing an average of 730,410 SNP genotypes from each sire to those from a high-density bead array (Methods). The distribution of average read depths was slightly positively skewed, with a mode of 12.5 when combined for all animals (Figure 3A). The average read depth for these 730 k SNPs (14.8) was in close agreement with that for the 121 reference SNPs (14.4), confirming that the smaller SNP set was not a biased subset of the larger set. Averaged over all animals, the concordance between WGS genotypes and those from bead arrays was high (98.8%, Figure 3B) and also agreed well with results from the 121 reference SNPs (99.5%). A surprising feature of this analysis was that the genotype concordance reached a maximum at approximately 99%, in spite of increasing coverage.
Thus, WGS datasets with 13-fold and 33-fold coverage had 99.1% and 99.2% concordance, respectively, possibly reflecting the percentage of bead array genotypes with problems. One notable exception was Corriente sire 19202900, which had a concordance of 91.8% (Figure 3B). However, the 121 reference SNP genotypes for this same animal were 98.4% accurate (119/121). This result suggests that the lower genotype concordance in the Corriente sire may have been caused by the quality of the bead array data. For all other animals, the discordant genotypes were infrequent, with “allele dropouts” being the most common type (Methods). Allele dropouts were inferred at a SNP site when one allele of a heterozygote was not detected (i.e., “dropped”). Although rare, there were more dropped alleles observed in the bead array data (1.1%) than in the WGS data (0.7%). Taken together, the analyses indicate that the WGS datasets from these 96 diverse beef sires are of sufficient quality and coverage for use in identifying and decoding gene variants in U.S. beef cattle.

Panel A: The distribution of average WGS read depth across 730 k SNP sites for 96 sires combined. Panel B: A comparison of the average WGS read depth per animal to the average genotype concordance between 730 k WGS and bead array genotypes.

The 96 sets of aligned WGS data were visually analyzed in the EPAS1 coding region to identify potential HIF2A protein variants (Methods). EPAS1 consists of 16 exons spanning 90 kb of genomic DNA and encodes an 870 amino acid protein with multiple functional domains (Figure 4A and Figure 4B). Viewing the aligned sequences and detecting variants was simple, fast, and accurate with the IGV software and a browser developed for this purpose (Figure S1).
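Once a variant codon is spotted in IGV, converting it to protein notation with a codon table is mechanical. A minimal sketch: the codon table below is a small subset of the standard genetic code, and the specific reference/alternate codons shown are illustrative assumptions (the text does not list them), though each pair differs by the single nucleotide change consistent with the reported substitution.

```python
# Translate a codon change into missense notation (e.g. "E270Q"), as done
# manually with IGV's "Show translation" view and a codon table. Only the
# codons needed for these examples are tabulated; codon choices are
# hypothetical but each pair differs by one nucleotide.
CODONS = {"GAG": "E", "CAG": "Q", "CCG": "P", "CTG": "L",
          "GCC": "A", "GGC": "G"}

def substitution(ref_codon, alt_codon, position):
    """Format a missense change as <ref AA><position><alt AA>."""
    return f"{CODONS[ref_codon]}{position}{CODONS[alt_codon]}"

print(substitution("GAG", "CAG", 270))  # E270Q
print(substitution("CCG", "CTG", 362))  # P362L
print(substitution("GCC", "GGC", 671))  # A671G
```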
Four previously undescribed missense mutations were discovered and predicted to cause the substitution of glutamine (Q) for glutamate (E) at position 270; leucine (L) for proline (P) at position 362; glycine (G) for alanine (A) at position 671; and phenylalanine (F) for leucine (L) at position 701 (Table 1 and Figure 4B). The two additional amino acid variants previously associated with high-altitude PH were also observed (A606T and G610S). No other missense, nonsense, frameshift, splice site, or indel variants affecting the coding region were detected. Haplotypes encoding seven predicted HIF2A variants were translated and placed in the context of a phylogenetic tree (Figure 4C). Five of the seven predicted HIF2A protein variants (variants “2”, “4”, “5”, “6”, and “7”) were previously unreported, and accounted for 17% of the total in the beef cattle diversity panel.

Panel A, genomic DNA map of EPAS1: blue arrows, exon regions; grey horizontal lines, intron regions. Panel B, map of HIF2A domains in relationship to missense mutations found in cattle: bHLH, basic helix-loop-helix domain; PAS-A and PAS-B, Per-Arnt-Sim domains; ODDD, oxygen-dependent degradation domain; N-TAD, N-terminal transactivation domain; C-TAD, C-terminal transactivation domain. Panels C and D represent results from the 96-member, 19-breed diversity panel and the 1154-member, 46-breed set, respectively. The most frequent HIF2A isoform (“variant 1”) was used as the reference sequence for the trees. For “variants 1” through “8”, each node in the tree represents a different isoform of HIF2A that varies by one amino acid compared to adjacent nodes. The areas of the circles are proportional to the variant frequency in the group of cattle tested. “Variant 3” (pink circle; T606, S610) is identical to that associated with high-altitude PH in Angus cattle15.
“Variant 2” (Q270) is identical to the 870 amino acid protein encoded by the bovine reference assembly UMD3.1.

To verify the accuracy of EPAS1 genotypes and determine the protein variant frequencies in a larger set of U.S. cattle, MALDI-TOF MS assays were developed for the six missense SNPs (Methods). In a blinded test, 575 of 576 (99.8%) EPAS1 MALDI-TOF MS genotypes from the 96 sires were concordant with those from WGS, confirming that the newly discovered SNPs were authentic and that the WGS and MALDI-TOF MS genetic tests were accurate. The average HIF2A variant frequencies in a set of 1154 purebred cattle from 46 breeds were similar to those observed in the beef cattle diversity panel (Table 2 and Figure 4D), with a call rate of 98.8%.

aHIF2A protein variant allele definitions are shown in Figure 4.

bThe bolded residues are those differing from “variant 1”.

cThe coefficient of determination for these frequencies (r2) was 99.9.

The HIF2A isoform associated with an increased risk for high-altitude PH in Angus cattle (T606, S610; “variant 3”) was observed in 18 of 46 breeds, with four breeds having frequencies higher than Angus (Table 3). The Guernsey dairy breed had the highest proportion of the risk allele, with 18 of 26 animals (69%) having one or two copies of “variant 3” (Table S5). Notably, all 96 animals from the Bos indicus breeds (Brahman, Nelore, Indu-Brazil, and mini-zebu) were homozygous for the most common HIF2A “variant 1” (Table 4). An important result of typing the extended 46-breed set of cattle was the discovery of an unlinked T606 mutation (“variant 8”, Figure 4D) present in Romagnola, Chianina, and Maine-Anjou cattle (Table S5). The discovery of an eighth variant brought the number of possible HIF2A diploid combinations to 36, and underscored the importance of accurate HIF2A typing in animals used to study PH and RHF in beef cattle.

aHIF2A protein variant allele definitions are shown in Figure 4.
A hyphen indicates an allele frequency of zero.

bHIF2A protein “variant 3” contains the T606, S610 missense mutations previously associated with high-altitude pulmonary hypertension15.

aHIF2A protein variant allele definitions are shown in Figure 4. A hyphen indicates an allele frequency of zero.

bHIF2A protein “variant 3” contains the T606, S610 missense mutations previously associated with high-altitude pulmonary hypertension15.

To determine the most likely phylogenetic root of the HIF2A tree, and thus establish a possible order of mutational events, HIF2A sequences were analyzed in eight individuals from closely related species in the Bos, Bison, and Bubalus genera. HIF2A “variant 1” was the likely ancestral root, based on its similarity to HIF2A from the most closely related species (Figure 5). Thus, the S610 mutation likely occurred on the T606 haplotype and is the more recent mutation of the two. Identifying breeds and individuals that have the HIF2A T606 allele provides the opportunity for future comparisons of the relative effects of T606 alone (“variant 8”), or in combination with S610 (“variant 3”).

The cattle HIF2A “variant 1” was used as the reference sequence for comparison with HIF2A from five other species (Methods). Cattle HIF2A residues were highly conserved between these species and only differed at 11 total sites. In “variant 1”, the cattle residues at these 11 positions were: V442, K613, T663, L644, M649, R656, M661, L668, F678, V693, H733. For cattle “variants 1” through “8”, the areas of the circles shown are proportional to the variant frequency in the group of 1250 cattle tested. The nodes derived from analysis of other species are indicated with black filled circles and do not represent frequency information.

The high conservation of HIF2A amino acid residues across vertebrates provides insight into the potential impact of missense mutations in cattle, because invariant residues tend to be critical for protein function.
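Ranking candidate sites by how widely the reference residue is conserved is a simple computation over the cross-species alignment. A minimal sketch; the conservation fractions below are illustrative stand-ins, not counts taken from Table S6:

```python
# Rank variant sites from most to least conserved cattle reference residue.
# Each value is the (hypothetical) fraction of aligned species carrying the
# cattle residue at that site.

def rank_by_conservation(sites):
    """Sort variant site labels by descending conservation fraction."""
    return sorted(sites, key=sites.get, reverse=True)

conservation = {"E270Q": 70 / 70, "L701F": 60 / 70, "G610S": 45 / 70,
                "P362L": 30 / 70, "A671G": 20 / 70, "A606T": 10 / 70}
print(rank_by_conservation(conservation))
# ['E270Q', 'L701F', 'G610S', 'P362L', 'A671G', 'A606T']
```

With the actual per-site counts from the 70-species alignment, this ordering reproduces the predicted deleteriousness ranking discussed below.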
The 870 amino acid sequence of cattle HIF2A is highly similar to those from sheep, whale, human, mouse, and alligator (97, 90, 88, 83, and 73% identity, respectively). Alignment of cattle HIF2A sequences with 70 available species of the Gnathostomata superclass showed that a third of the residues (288 of 870) were perfectly conserved throughout (Table S6). Of the six HIF2A variant sites identified in cattle, the most conserved residue was glutamate at the E270Q site, which was present in all 70 Gnathostomata tested, 37 of which are shown in Figure 6. The leucine residue of the L701F variant site was less conserved, but still present throughout the Amniota, with the phenylalanine variant being present in the Tetrapoda and higher. The glycine residue at the G610S variant site was conserved in the Laurasiatheria, with the notable exception of S610 in swine, a species known for a marked pulmonary vasoconstrictive response to hypoxia. The proline residue of the P362L variant site was conserved through Cetartiodactyla, with leucine present in Perissodactyla and higher. The alanine residue of the A671G variant site was conserved in the Bovidae, with threonine and other residues present in Cetartiodactyla and higher. Variant A606T was the least conserved of all the variant sites, with the alanine residue conserved only in the Bovinae, the threonine residue present in other ruminants, and isoleucine present in Cetartiodactyla and higher. Based exclusively on the degree of conservation across vertebrate species, the predicted ranking for potentially deleterious EPAS1 missense mutations in cattle was: E270Q > L701F > G610S > P362L > A671G > A606T. However, the actual impact of these polymorphisms on cattle is dependent on additional factors, some of which are discussed in the next section.

HIF2A sequences from a representative subset of 37 species from the Gnathostomata superclass were deduced from WGS or downloaded from GenBank.
Symbols are as follows: letters, IUPAC/IUBMB codes for amino acids; dot, amino acid residues identical to those in cattle HIF2A “variant 1”; dash, polypeptide region missing in shark HIF2A.


Discussion

Our primary goals were to create a searchable and publicly viewable genomics resource consisting of 96 sires representing a broad cross section of U.S. beef cattle, and to demonstrate its use for identifying missense mutations in EPAS1, a bovine gene associated with high-altitude PH and RHF15. To that end, we first determined the amount of WGS required to achieve accurate diploid genotyping when aligned to the bovine reference genome. Our findings verified that 40 GB of short read, paired-end DNA sequence data from the NextSeq500 instrument provided an average of 12-fold genome coverage. Accordingly, we used 40 GB as a minimum for each animal, aligned the sequences, and made them available online for downloading or viewing with a custom visualization tool that supports accurate assessment of genetic variation. The average coverage of the beef cattle diversity panel was 14.8-fold and resulted in an average genotype accuracy of approximately 99%. These numbers were consistent with results obtained with 379 Canadian beef and dairy cattle, 65 Swiss dairy cattle, and 96 human WGS datasets, sampled at various coverage levels and compared to bead array data10,30,31.

Surprisingly, the amount of effort required to verify the identity and accuracy of the datasets was nearly equal to that required to produce them. Comparing WGS genotypes from 121 well-distributed, highly informative reference SNPs was sufficient to identify and correct errors in the datasets. However, larger SNP datasets from commercial DNA microarrays provided an additional level of confidence that was useful.
Considering the time and resources invested in collecting the WGS, it seemed prudent to have a significant set of independently obtained reference SNP genotypes available for each animal in the group, and to use them to validate each WGS dataset.

An important secondary goal of this project was to provide biomedical veterinary researchers the ability to readily inspect gene mutations reported in one breed, evaluate them in other breeds, and search for any additional mutations that may affect protein structure. A web-based platform was created to achieve this goal, and then tested by analyzing EPAS1, a gene where missense mutations had been previously documented. Six missense mutations, including the two that were previously reported to be associated with high-altitude PH15, were readily identified by viewing the aligned raw sequence. The variants were validated by designing MALDI-TOF MS genetic tests and genotyping a broader population of animals. Determining the haplotype phase of these mutations in a total of 1250 individuals from 46 breeds resulted in eight predicted HIF2A protein variants, five of which were previously unknown. Comparing HIF2A variant frequencies in the 96-member, 19-breed diversity panel with those of the 1154-member, 46-breed set confirmed that the smaller panel was a good approximation of the larger one (r2 = 99.9). Together, the five newly identified HIF2A variants comprised 16.4% of the total alleles in 46 breeds, and represented a significant proportion of the HIF2A variation in cattle. The MALDI-TOF MS genetic test for EPAS1 was designed to facilitate future study of these variants, and to provide a way to control for EPAS1 stratification in studies of bovine respiratory diseases.
The ability to identify the full range of protein variants in a population is critical for designing studies intended to test the candidate gene’s influence on a trait.

The report by Newman et al.15, describing an EPAS1 T606, S610 gene variant associated with high-altitude PH, raises intriguing questions about the biological mechanisms leading to disease. Among them is the possibility that only one of the two linked missense mutations is pathogenic. Our findings suggest that the S610 variant would be the more plausible candidate of the two. Five lines of evidence are consistent with this hypothesis. First, the S610 missense mutation appears to have arisen on an existing EPAS1 T606 haplotype, making it the more recent mutation. Younger mutations in functional genes are more likely to be deleterious than older ones32. Second, the G610 residue is conserved across ruminant species, which is consistent with the removal of deleterious alleles by strong purifying selection. Third, among the Laurasiatheria representatives, swine is the only one besides cattle to have the S610 allele. Swine also develop severe PH when exposed to chronic hypoxia33. Fourth, the T606 residue occurs in sheep and goats, which have only mild responses to chronic hypoxia33,34. Fifth, the T606 residue is also present in the Tibetan antelope, a species that evolved at high altitude and does not suffer from PH. These observations are consistent with the hypothesis that the S610 missense mutation is pathogenic and causes high-altitude PH in cattle. Alternatively, both alleles may be required in combination to cause disease, or they may be in genetic linkage with an undiscovered cause of high-altitude PH.
Regardless, the pathogenic mutation hypothesis is testable with appropriate individuals from the Chianina and Maine-Anjou cattle breeds, since examples of HIF2A T606, S610 (“variant 3”) and HIF2A T606 (“variant 8”) are present in both breeds.

Of the remaining EPAS1 missense mutations, E270Q stands out as having the greatest potential for affecting the function of HIF2A. This prediction is based on the observation that the E270 residue was invariant across all 70 vertebrate species evaluated. The E270Q variant is located in the PAS-B domain, the second of two tandemly positioned PAS domains. Mutations in murine PAS-B have been shown to affect the ability of HIF2A to sense chemical signals via ligand binding, and thus stabilize the transcriptionally active heterodimer in response to hypoxia, although the E270 residue was not specifically tested35. EPAS1 haplotypes encoding Q270 residues accounted for 22% of the total in 80 Holstein sires used in the present study, and could be a potential cause of the PH observed in Holstein cattle16. These 80 Holstein cattle were also devoid of the EPAS1 T606, S610 haplotype associated with PH in Angus cattle. Overall, EPAS1 Q270 alleles are present at a frequency of 8% in the U.S. cattle tested (HIF2A variants “2”, “5”, and “7”, Figure 4) and are predicted to have deleterious biological consequences.

Another highly conserved residue was L701, which was invariant through the Amniota. However, the F701 substitution was present only on the Q270 haplotype (“variant 7”) and is located in an interdomain region of HIF2A of unknown function. Moreover, the F701 substitution was only observed in one of 1250 animals tested (Salers, no. 19999882). The WGS for this animal at position chr11:28663897, together with concordant genotypes by MALDI-TOF MS, confirmed the authenticity of this SNP. However, its exceedingly low frequency makes it unlikely that this substitution, no matter how disruptive, would have a significant impact on U.S. cattle.
The remaining missense mutations, P362L and A671G, were neither highly conserved in vertebrates nor located in HIF2A regions of known significance. However, their combined frequency in cattle was not insignificant at nearly 8%. Either of these could potentially affect HIF2A function and thereby influence traits associated with EPAS1. The substitution of leucine for proline is a particularly significant change that has been shown to cause functional disruptions in other proteins such as T4 lysozyme and caspase-936,37. However, the impact of this substitution remains unknown.

After more than 20 years of selection for herd sires with low pulmonary artery pressures (PAP), 50% of calf mortalities were still attributed to PH on some high-altitude ranches38. We hypothesize that selection for the most common, ancestral HIF2A sequence will enrich for the most favorable allele for U.S. beef cattle and complement efforts that employ PAP testing. This HIF2A sequence contains residues E270, P362, A606, G610, A671, and L701 (“variant 1”, Figure 4) and has the highest amino acid sequence identity when compared to HIF2A from yak, gaur, banteng, and bison. The frequency of “variant 1” was 100% in the Brahman, Brown Swiss, Indu-Brazil, Nelore, and mini-zebu breeds; greater than 90% in Ankole-Watusi, Beefmaster, Brahmousin, Devon, Marchigiana, Piedmontese, Romagnola, Simmental, Tuli, and Wagyu; and 78% overall. Thus, if employed, selection for HIF2A “variant 1” would not be an exercise in introgression, but rather an effort to remove a minority of potentially deleterious alleles. This may help reduce the overall incidence of PH and the problems associated with it in cattle.


Conclusion

In summary, the WGS resources described here are suitable for use in identifying and decoding gene variants in the vast majority of U.S. beef cattle.
When applied to EPAS1, the findings suggest that there may be deleterious alleles circulating in U.S. cattle, in addition to those previously associated with high-altitude PH. These resources, including the web interface, underlying sequence data, genetic tests, and the associated information, are available to researchers, companies, veterinarians, and producers for use without restriction.\n\n\nData availability\n\nValidated FASTQ files are available in the NCBI SRA under accession numbers SRR4001609-SRR4002095; SRR4004613-SRR4004644; SRR4002950-SRR4003067; SRR4003069-SRR4003073; SRR4003075-SRR4003079; SRR4003081-SRR4003085; SRR4003087-SRR4003094; SRR4003096-SRR4003139; SRR4003141-SRR4003146; SRR4003148-SRR4003152; SRR4003154-SRR4003158; SRR4003160-SRR4003164; SRR4003166-SRR4003170; SRR4003172-SRR4003177; SRR4003179-SRR4003182; SRR4003184-SRR4003188; SRR4003190-SRR4003451; SRR4004645-SRR4004679; SRR4004680-SRR4004734; SRR4004736-SRR4004891; SRR4004893-SRR4004920; SRR4004922-SRR4004948; SRR4004950-SRR4004982; SRR4004991-SRR4004992; SRR4004994-SRR4004997; SRR4005006-SRR4005012; SRR4005021-SRR4005026; SRR4005044-SRR4005048; SRR4005057-SRR4005062; SRR4005071-SRR4005195. The data have also been deposited with links to BioProject accession number PRJNA324822 in the NCBI BioProject database (https://www.ncbi.nlm.nih.gov/bioproject/).\n\nIn addition, access to the aligned sequences is available via the USDA internet site: http://www.ars.usda.gov/Services/Docs.htm?docid=25585. Download access to the .BAM files is available at the Intrepid Bioinformatics site: http://server1.intrepidbio.com/FeatureBrowser/customlist/record?listid=7686214634.",
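The semicolon-separated accession ranges listed under Data availability can be expanded programmatically before fetching runs from the SRA. A minimal sketch, assuming the constant-width `SRRnnnnnnn` naming used in these lists (this is an illustration, not a tool provided by the article):

```python
def expand_sra_range(spec):
    """Expand an accession range like 'SRR4001609-SRR4002095' into
    individual SRA run accessions; a lone accession passes through."""
    if "-" not in spec:
        return [spec]
    start, end = spec.split("-")
    prefix = start[:3]                    # 'SRR'
    width = len(start) - len(prefix)      # zero-padded suffix width
    lo, hi = int(start[len(prefix):]), int(end[len(prefix):])
    return [f"{prefix}{i:0{width}d}" for i in range(lo, hi + 1)]

# First range listed above: 4002095 - 4001609 + 1 = 487 runs.
runs = expand_sra_range("SRR4001609-SRR4002095")
```

The resulting accession list can be passed to a downloader of choice (e.g., the SRA Toolkit) one run at a time.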
"appendix": "Author contributions\n\n\n\nConceived and designed the experiments: MPH, TPLS, TSK.\n\nPerformed the experiments: BS, JKC, JQ, MPH, TPLS, TSK, VB.\n\nAnalyzed the data: BS, JKC, JQ, MPH, TPLS, TSK, VB.\n\nContributed reagents/materials/analysis tools: BS, JKC, JQ, MPH, TPLS, TSK, VB.\n\nWrote the paper: MPH, TPLS, TSK.\n\n\nCompeting interests\n\n\n\nThe authors declare the following interests: co-authors V.B., J.Q, and B.S. are full-time employees at GeneSeek, a Neogen company that provides agrigenomic and veterinary diagnostic services. T.S.K. is the CEO of Intrepid Bioinformatics, a company that provides web-based systems to privately store, analyze, curate, share, and remotely access genetic data. There are no patents, products in development, or marketed products to declare. These interests do not alter the authors’ adherence to all the journal’s policies on sharing data and materials.\n\n\nGrant information\n\nFunding for this research was provided by the USDA, ARS appropriated projects 5438-32000-029-00D (MPH) and 5438-31320-012-00D (TPLS). This work was conducted in part using the resources of the University of Louisville’s research computing group and the Cardinal Research Cluster.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nWe thank R. Godtel and K. McClure for outstanding technical assistance, and J. Watts for secretarial support. We thank Drs. D. Grotelueschen and P. Hackett for thoughtful discussions and improvements to the manuscript; Dr. L. Cundiff for advice in designing the MARC beef cattle diversity panel, and Dr. W. Laegreid for supporting it; Dr. D. Coover (SEK Genetics, Inc., Galesburg, KS), D. Frank (ABS Global, Inc., DeForest, WI.), B. House (Select Sires, Inc., Plain City, OH), and C. Rugg (Bovine Elite, Inc., College Station, TX.) D. Trimmer (Accelerated Genetics, Baraboo, WI), The Braunvieh Assoc. of America, J. 
Spawn (North American Corriente Assoc., and the American Tarentaise Assoc.) for assistance in sire selection; Drs. C. Chase (Senepol), H. Neibergs (Wagyu), and T. Sonstegard and C. Van Tassell (Holstein) for providing samples from additional breeds; D. Davis (Cattlemen’s Texas Longhorn Registry) for providing an alternate set of Texas Longhorn samples; Drs. D. Armstrong and E. Louis from Omaha's Henry Doorly Zoo for providing gaur blood samples, Drs. L. Chemnick and O. Ryder from the San Diego Zoo's Beckman Center for Conservation Research for providing banteng DNA samples; Mr. M. Swartz from the International Yak Association for providing a frozen hide sample of Queen Allante D171, after she died of natural causes; Mr. M. Bredthauer from the Straight Arrow Bison ranch for providing a liver sample from an animal processed for food; and Mr. R. Goode (Goode Cattle Co. Damon, TX) for his generous and dedicated assistance in collecting hair samples from rare U.S. breeds. Mention of trade names or commercial products in this publication is solely for the purpose of providing specific information and does not imply recommendation or endorsement by the USDA. The USDA is an equal opportunity provider and employer.\n\n\nSupplementary material\n\nTable S1. Features of 121 USMARC bovine parentage SNPs.\n\nClick here to access the data\n\nTable S2. Summary WGS statistics and reference genotypes for 121 parentage SNPs with the USMARC Beef Diversity Panel Version 2.9.\n\nClick here to access the data\n\nTable S3. Source information for individuals from Bovinae species mapped to the cattle UMD3.1 reference assembly.\n\nClick here to access the data\n\nTable S4. MALDI-TOF MS assay design for six missense mutations in bovine EPAS1.\n\nClick here to access the data\n\nTable S5. HIF2A protein variant diplotypes in 1154 animals from 46 breeds of U.S. cattle.\n\nClick here to access the data\n\nTable S6. 
Alignment of HIF2A sequences from Gnathostomata species.\n\nClick here to access the data\n\nFigure S1. Screen image of IGV software displaying EPAS1 E270Q genotype data for eight cattle.\n\nClick here to access the data\n\nFile S1. VCF file of 772,990 SNP variants used in comparing WGS genotypes to those from a high-density bead array.\n\nClick here to access the data\n\n\nReferences\n\nNicholas FW, Hobbs M: Mutation discovery for Mendelian traits in non-laboratory animals: a review of achievements up to 2012. Anim Genet. 2014: 45(2): 157–70. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRicht JA, Hall SM: BSE case associated with prion protein gene mutation. PLoS Pathog. 2008; 4(9): e1000156. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHeaton MP, Keele JW, Harhay GP, et al.: Prevalence of the prion protein gene E211K variant in U.S. cattle. BMC Vet Res. 2008; 4: 25. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBickhart DM, Liu GE: The challenges and importance of structural variation detection in livestock. Front Genet. 2014; 5: 37. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDaetwyler HD, Capitan A, Pausch H, et al.: Whole-genome sequencing of 234 bulls facilitates mapping of monogenic and complex traits in cattle. Nat Genet. 2014; 46(8): 858–65. PubMed Abstract | Publisher Full Text\n\n1000 Genomes Project Consortium, Abecasis GR, Altshuler D, et al.: A map of human genome variation from population-scale sequencing. Nature. 2010; 467(7319): 1061–73. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJordan DM, Ramensky VE, Sunyaev SR: Human allelic variation: perspective from protein function, structure, and evolution. Curr Opin Struct Biol. 2010; 20(3): 342–50. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMacArthur DG, Manolio TA, Dimmock DP, et al.: Guidelines for investigating causality of sequence variants in human disease. Nature. 2014; 508(7497): 469–76. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nHeaton MP, Clawson ML, Chitko-Mckown CG, et al.: Reduced lentivirus susceptibility in sheep with TMEM154 mutations. PLoS Genet. 2012; 8(1): e1002467. PubMed Abstract | Publisher Full Text | Free Full Text\n\nStothard P, Liao X, Arantes AS, et al.: A large and diverse collection of bovine genome sequences from the Canadian Cattle Genome Project. Gigascience. 2015; 4: 49. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSnelling WM, Bennett GL, Keele JW, et al.: A survey of polymorphisms detected from sequences of popular beef breeds. J Anim Sci. 2015; 93(11): 5128–43. PubMed Abstract | Publisher Full Text\n\nKalbfleisch TS: Intrepid Bioinformatics. 2006. Reference Source\n\nRobinson JT, Thorvaldsdóttir H, Winckler W, et al.: Integrative genomics viewer. Nat Biotechnol. 2011; 29(1): 24–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nThorvaldsdóttir H, Robinson JT, Mesirov JP: Integrative Genomics Viewer (IGV): high-performance genomics data visualization and exploration. Brief Bioinform. 2013; 14(2): 178–92. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNewman JH, Holt TN, Cogan JD, et al.: Increased prevalence of EPAS1 variant in cattle with high-altitude pulmonary hypertension. Nat Commun. 2015; 6: 6863. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMalherbe CR, Marquard J, Legg DE, et al.: Right ventricular hypertrophy with heart failure in Holstein heifers at elevation of 1,600 meters. J Vet Diagn Invest. 2012; 24(5): 867–77. PubMed Abstract | Publisher Full Text\n\nNeary JM, Booker CW, Wildman BK, et al.: Right-Sided Congestive Heart Failure in North American Feedlot Cattle. J Vet Intern Med. 2016; 30(1): 326–34. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLendahl U, Lee KL, Yang H, et al.: Generating specificity and diversity in the transcriptional response to hypoxia. Nat Rev Genet. 2009; 10(12): 821–32. 
PubMed Abstract | Publisher Full Text\n\nGale DP, Harten SK, Reid CD, et al.: Autosomal dominant erythrocytosis and pulmonary arterial hypertension associated with an activating HIF2 alpha mutation. Blood. 2008; 112(3): 919–21. PubMed Abstract | Publisher Full Text\n\nTan Q, Kerestes H, Percy MJ, et al.: Erythrocytosis and pulmonary hypertension in a mouse model of human HIF2A gain of function mutation. J Biol Chem. 2013; 288(24): 17134–44. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHeaton MP, Chitko-McKnown CG, Grosse WM, et al.: Interleukin-8 haplotype structure from nucleotide sequence variation in commercial populations of U.S. beef cattle. Mamm Genome. 2001; 12(3): 219–26. PubMed Abstract | Publisher Full Text\n\nHeaton MP, Grosse WM, Kappes SM, et al.: Estimation of DNA sequence diversity in bovine cytokine genes. Mamm Genome. 2001; 12(1): 32–7. PubMed Abstract | Publisher Full Text\n\nKalbfleisch T, Heaton MP: Mapping whole genome shotgun sequence and variant calling in mammalian species without their reference genomes [version 2; referees: 2 approved]. F1000Res. 2013; 2: 244. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZimin AV, Delcher AL, Florea L, et al.: A whole-genome assembly of the domestic cow, Bos taurus. Genome Biol. 2009; 10(4): R42. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLi H, Durbin R: Fast and accurate long-read alignment with Burrows-Wheeler transform. Bioinformatics. 2010; 26(5): 589–95. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLi H, Handsaker B, Wysoker A, et al.: The Sequence Alignment/Map format and SAMtools. Bioinformatics. 2009; 25(16): 2078–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcKenna A, Hanna M, Banks E, et al.: The Genome Analysis Toolkit: a MapReduce framework for analyzing next-generation DNA sequencing data. Genome Res. 2010; 20(9): 1297–303. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nMatukumalli LK, Lawley CT, Schnabel RD, et al.: Development and characterization of a high density SNP genotyping assay for cattle. PLoS One. 2009; 4(4): e5350. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHeaton MP, Keen JE, Clawson ML, et al.: Use of bovine single nucleotide polymorphism markers to verify sample tracking in beef processing. J Am Vet Med Assoc. 2005; 226(8): 1311–4. PubMed Abstract | Publisher Full Text\n\nCheng AY, Teo YY, Ong RT: Assessing single nucleotide variant detection and genotype calling on whole-genome sequenced individuals. Bioinformatics. 2014; 30(12): 1707–13. PubMed Abstract | Publisher Full Text\n\nBaes CF, Dolezal MA, Koltes JE, et al.: Evaluation of variant identification methods for whole genome sequencing data in dairy cattle. BMC Genomics. 2014; 15: 948. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKiezun A, Pulit SL, Francioli LC, et al.: Deleterious alleles in the human genome are on average younger than neutral alleles of the same frequency. PLoS Genet. 2013; 9(2): e1003301. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTucker A, McMurtry IF, Reeves JT, et al.: Lung vascular smooth muscle as a determinant of pulmonary hypertension at high altitude. Am J Physiol. 1975; 228(3): 762–7. PubMed Abstract\n\nTucker A, Rhodes J: Role of vascular smooth muscle in the development of high altitude pulmonary hypertension: an interspecies evaluation. High Alt Med Biol. 2001; 2(2): 173–89. PubMed Abstract | Publisher Full Text\n\nCorrada D, Soshilov AA, Denison MS, et al.: Deciphering Dimerization Modes of PAS Domains: Computational and Experimental Analyses of the AhR:ARNT Complex Reveal New Insights Into the Mechanisms of AhR Transformation. PLoS Comput Biol. 2016; 12(6): e1004981. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nGray TM, Arnoys EJ, Blankespoor S, et al.: Destabilizing effect of proline substitutions in two helical regions of T4 lysozyme: leucine 66 to proline and leucine 91 to proline. Protein Sci. 1996; 5(4): 742–51. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKundu A, Bag S, Ramaiah S, et al.: Leucine to proline substitution by SNP at position 197 in Caspase-9 gene expression leads to neuroblastoma: a bioinformatics analysis. 3 Biotech. 2013; 3(3): 225–34. Publisher Full Text | Free Full Text\n\nNeary JM, Gould DH, Garry FB, et al.: An investigation into beef calf mortality on five high-altitude ranches that selected sires with low pulmonary arterial pressures for over 20 years. J Vet Diagn Invest. 2013; 25(2): 210–8. PubMed Abstract | Publisher Full Text\n\nNC-IUB: Nomenclature for incompletely specified bases in nucleic acid sequences. Recommendations 1984. Nomenclature Committee of the International Union of Biochemistry (NC-IUB). Proc Nat Acad Sci USA. 1986; 83(1): 4–8. PubMed Abstract | Free Full Text"
}
|
[
{
"id": "15711",
"date": "30 Aug 2016",
"name": "Joseph M. Neary",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nHeaton et al. are to be commended for a thorough description of a novel approach to discovering bovine protein variants in silico. Further, the authors demonstrate the application of their newly created and publicly accessible mapped genomes of 19 beef breeds by identifying protein variants encoded by EPAS1, a gene recently associated with susceptibility to pulmonary hypertension (PH) in cattle at altitudes of 5,200-7,850 ft. (Newman et al., 2015).\n\nThe authors also provide an excellent discussion of the plausible in vivo effects of the amino acid variants encoded by EPAS1. Of particular interest are the different variant frequencies reported among the various breeds (Figure 4; Table 4). Interestingly, Holstein cattle are susceptible to hypoxia-induced PH and serve as a favorable animal model of the equivalent human condition; yet, as discussed by the authors, PH in Holstein cattle may be due to an amino acid variant (Q270) that differs from the S610 variant that contributes to PH susceptibility in Angus cattle. This demonstrates the importance of having a publicly searchable set of mapped genomes for popular breeds of U.S. cattle.\n\nMinor revisions:\nThird paragraph, first line of Introduction: remove conjunction (“However”) from the start of the sentence and place mid-sentence (suggestion only). 
This may improve sentence flow.\n\nI am uncomfortable with the term “high-altitude pulmonary hypertension” because it implies that this physiological phenomenon is only problematic in high-altitude regions when, in fact, it occurs at even moderate altitudes in both dairy and beef breeds of cattle (Malherbe et al., 2012; Neary et al., 2016). The Newman study was performed on measurements collected at 5,200-7,850 ft., which, in my opinion, is more consistent with moderate than high altitude. “High-altitude” also implies that hypobaric hypoxia is the sole etiology; however, as you discuss, other causes of hypoxia and inflammation, such as respiratory disease, may also be involved. The term pulmonary hypertension (PH) ought to be sufficient.",
"responses": [
{
"c_id": "2206",
"date": "29 Sep 2016",
"name": "Michael Heaton",
"role": "Author Response",
"response": "We agree with the reviewer’s suggestions and have made these changes. Thus, we have removed “high altitude” as a modifier of “pulmonary hypertension” in the title and throughout the article."
}
]
},
{
"id": "16383",
"date": "23 Sep 2016",
"name": "Matthew C. McClure",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nOverall a very nice paper on EPAS1 and high-altitude pulmonary hypertension (HAPH). Only time will tell whether the more important outcome of this paper is the determination that the G610S allele is likely the causative mutation for HAPH (over A606T), that multiple EPAS1 alleles could together cause/impact some version of HAPH, or that there is finally a public database to compare whole genome sequence data between studies. As cattle producers and breed associations move to use more genetic tests to minimize genetic variants that cause economic loss in the breed, the first 2 points above will be very useful, especially with “50% of calf mortalities… attributed to PH in some high-altitude ranches”, as noted in the manuscript. For researchers studying other genetic diseases the latter point will be more important, as such a public database is needed to help filter out non-causative alleles. Currently such researchers have had to turn to other scientists that have their own WGS databases, or to semi-private databases like the 1000 Bulls dataset, where you must be a member to access the data. 
In the past I personally have had to use the former option and was grateful that the bovine community is full of gracious scientists that would help.\n\nThis is a very well written paper and was a pleasure to review.\n\nOverall I do approve of the manuscript to be indexed and advise only minor edits and some suggestions listed below.\n\nSuggestions:\nIn the abstract you state that the “existing bovine WGS dabases… diversity in US beef cattle”. Suggest this be changed to global beef cattle or similar as this issue is not confined to the USA.\n\nPage 3, 4th paragraph: While it’s a semi-private WGS dataset, given its size, I do recommend including the 1000 Bulls project when talking about population-scale WGS data. You could mention that it’s there but not publicly available, which is one of the main advantages your WGS data has as reported in the manuscript.\n\nRecommended:\nIntroduction: You state that “114 Mendelian traits with known causative mutations” and reference a 2014 paper. I advise updating that number, as OMIA (23/9/2016) lists 130 Mendelian disorders with known causative mutations in cattle. I’m fine if you keep the reference you have and include an OMIA reference, as your reference is an excellent review paper.\n\nTable 1: You present a large amount of data about each variant and this is wonderful as so many publications would only list one or two of your columns. I would like to see the flanking sequence be increased from 5bp per side to 15bp. Ensembl returns 0 hits for all 6 of your flanking sequences when BLASTing against the UMD3.1 assembly. Increasing it to my recommended length allows BLAST to find unique placements of the sequence. For future researchers to be able to test for disease-causing variants, having minimal information listed in a publication makes it a challenge to be sure diagnostic probes are designed correctly. 
I speak from experience in designing diagnostic probes for the IDB cattle chip used in Ireland.\n\nTable 3 appears to be only a subset of Table 4 with no additional information. Recommend removing Table 3 and only using Table 4. If you have a justifiable reason for keeping Table 3, then I advise listing the breeds in alphabetical order and not in variant 3 frequency order.\n\nTable 4. In this you show no animals having the variant 7 allele, yet in the 5th paragraph on page 14 you state that “the F701 substitution was only observed in one of the 1250 animals tested (Salers,…)\". Please include the frequency of this allele in the Salers row in Table 4.\n\nFigure 5: Why is node ‘3’ in grey? It's not stated in the Figure.\n\nFor multiple Supplementary tables: Advise including this information from Table 1: cIUPAC/IUBMB ambiguity codes used for nucleotides:…..\n\nOr state that IUPAC ambiguity codes are used; right now the reader just has to figure out that an S means the individual is G/C for that SNP.\n\nTable S1: Don’t see why columns C, G, or H are needed: Distance between markers (Mb), Nearest gene (UMD3.1), Distance to nearest gene (kbp), respectively. The rest of the data in the table is excellent and I wish more papers provided SNP data at this level. Really like that the ambiguous SNPs in the flanking sequence are shown in bold and capital. I recommend removing columns C, G, and H as they add no information about the parentage SNPs.\n\nTable S2: You need to note that a “A, C, T, or G” shown as the genotype indicates that this is a homozygous SNP. One can figure it out since you also show the IUPAC ambiguous letters for heterozygous genotypes, but it would be better to state.\n\nTable S5: Column E is labelled as “EPAS1 protein variant diplotype”. In Table 1, 2, 3 you use the phrase “HIF2A protein variant\". I advise using the same here, otherwise you can easily confuse the reader. 
Especially as HIF2A is the name of the protein encoded by the EPAS1 gene; if you are referring to protein variants, HIF2A is more appropriate than EPAS1.",
"responses": [
{
"c_id": "2207",
"date": "29 Sep 2016",
"name": "Michael Heaton",
"role": "Author Response",
"response": "We have provided point-by-point responses to the reviewer's suggestions and recommendations below: Reviewer suggestion: “In the abstract you state that the ‘existing bovine WGS dabases… diversity in US beef cattle’. Suggest this be changed to global beef cattle or similar as this issue is not confined to the USA.” Authors response: We agree and have made this change. Reviewer suggestion: “Page 3, 4th paragraph: While it’s a semi-private WGS dataset, given its size, I do recommend including the 1000 Bulls project when talking about population-scale WGS data. You could mention that it’s there but not publicly available which is one of the main advantages your WGS data has as reported in the manuscript” Authors response: We agree. Please note that the first reference provided in this paragraph (Daetwyler et al. Nat Genet. 2014; 46(8): 858–65) is the current published reference for the 1000 bulls project. Their sequence is public and available at http://www.ncbi.nlm.nih.gov/bioproject/PRJNA238491. In addition, the WGS from our article is available for inclusion in subsequent rounds of the 1000 bulls project. Reviewer recommendation: “Introduction: You state that “114 Mendelian traits with known causative mutations” and reference a 2014 paper. I advise updating that number as OMIA lists (23/9/2016) that 130 Mendelian disorders have their causative mutation known in cattle. I’m fine if you keep the reference you have and include a OMIA reference as your reference is an excellent review paper.” Authors response: The reviewer correctly points out that the number of Mendelian traits is increasing. We have updated the text to reflect the 130 entries currently listed on the OMIA web site: http://omia.angis.org.au/home/. Reviewer recommendation: “Table 1: You present a large amount of data about each variant and this is wonderful as so many publications would only list one or two of your columns. 
I would like to see the flanking sequence be increased from 5bp per side to 15bp. Ensemble returns 0 hits for all 6 of your flanking sequences with BLASTing against the UMD3.1 assembly. Increasing it to my recommended length allows BLAST to find unique placements of the sequence. For future researchers to be able to test for disease causing variants having minimal information listed in publication makes it a challenge to make sure you are correctly designing diagnostic probes. I speak from experience in designing diagnostic probes for the IDB cattle chip used in Ireland.” Authors response: We agree. To ensure a robust BLAST result, we have included 50 bp of flanking genomic sequence on each side of the variant SNPs in Table 1 (101 bp total). Reviewer recommendation: “Table 3 appears to be only a subset of Table 4 with no additional information. Recommend removing Table 3 and only using Table 4. If you have a justifiable reason for keeping Table 3 than advise listing the breeds in alphabetical order and not in variant 3 frequency order.” Authors response: We have deleted Table 3 and have adjusted the text accordingly. Reviewer recommendation: “Table 4. In this you show no animals having the variant 7 allele, yet in the 5th paragraph on page 14 you state that “the F701 substitution was only observed in one of the 1250 animals tested (Salers,…)\". Please include the frequency of this allele in the Salers row in Table 4.” Authors response: Table 4 contains genotypes for 1154 animals, including 23 Salers. None of these animals have the F701 variant. There are an additional five Salers in the 96-member beef cattle diversity panel. One of those five Salers has the allele. We have reworded the sentence on page 14 to clarify: “Moreover, the F701 substitution was only observed in one of 1250 animals tested (Salers sire no. 19999882 in MBCDPv2.9).” Reviewer recommendation: Figure 5: Why is node ‘3’ in grey? It's not stated in the Figure. 
Authors response: “Variant 3” (pink circle; T606, S610) is identical to that associated with PH in Angus cattle. We have added this sentence to the figure legend. Reviewer recommendation: “For multiple Supplementary tables: Advise including this information from Table 1: cIUPAC/IUBMB ambiguity codes used for nucleotides:….. Or stating that IUPAC ambiguity codes are used, right now the reader just have to figure it out that a S means the individual is G/C for that SNP. Authors response: We agree and will add this information as footnotes to Supplementary Material Tables S2 and S5. Reviewer recommendation: “Table S1: Don’t see why columns C, G, or H are needed: Distance between markers (Mb), Nearest gene (UMD3.1), Distance to nearest gene (kbp), respectively. The rest of the data in the table is excellent and wish more papers provided SNP data at this level. Really like that the ambigious SNP in the flanking sequence are shown in bold and capital. I recommend removing column C, G, and H as they are meaningless as to information on Parentage SNP.” Authors response: The information in Column C (genomic spacing) is an important feature of parentage markers since close proximity between two markers reduces the likelihood of meiotic recombination and thus affects the power for parentage assignment. However, the SNPs distance can be derived from UMD3.1 position so we have removed this redundant column. Regarding the “Distance to nearest gene” column, many parentage SNPs are in genes and thus the gene name is sometimes used as an identifier for parentage SNP. Because the parentage SNPs are heavily used on bead arrays and in stand-alone assays, their proximity to genes and their location in a gene is information that gets used quite a lot. For these reasons, we prefer not to delete these columns because this information is tedious to extract from other sources. 
Reviewer recommendation: “Table S2: You need to note that a “A, C, T, or G” shown as the genotype indicates that this is a homozygous SNP. One can figure it out since you also show the IUPAC ambiguous letters for heterozygous genotypes, but it would be better to state.” Authors response: We agree and have added this sentence as a footnote to Table S2: “Homozygous genotypes are listed as a single letter corresponding to the nucleotide at those sites, whereas IUB ambiguity codes denote heterozygous genotypes at those sites: Y = C/T; R = A/G; M = A/C; K = G/T; W = A/T; S = C/G.” Reviewer recommendation: “Table S5: Column E is labelled as “EPAS1 protein variant diplotype”. In Table 1, 2, 3 you use the phrase “HIF2A protein variant\". I advise using the same here, otherwise you can easily confuse the reader. Especially as HIF2A is the name of the protein coded by the EPAS1 gene, so if you are referring to protein variants HIF2A is more appropriate than EPAS1” Authors response: We agree and have changed the heading for Column E to “HIF2A diplotype”. We have also added this footnote to cell G6: “Homozygous genotypes are listed as a single letter corresponding to the nucleotide at those sites, whereas IUB ambiguity codes denote heterozygous genotypes at those sites: Y = C/T; R = A/G; M = A/C; K = G/T; W = A/T; S = C/G. Genotypes listed as \"O\" indicate missing information.”"
}
]
}
] | 1
|
https://f1000research.com/articles/5-2003
|
https://f1000research.com/articles/5-2031/v1
|
18 Aug 16
|
{
"type": "Research Article",
"title": "Collecting clinical data in primary ciliary dyskinesia- challenges and opportunities",
"authors": [
"Israel Amirav",
"Mary Roduta Roberts",
"Huda Mussaffi",
"Avigdor Mandelberg",
"Yehudah Roth",
"Revital Abitbul",
"Anthony Luder",
"Hannah Blau",
"Soliman Alkrinawi",
"Micha Aviram",
"Marta Ben-Ami",
"Moshe Rotschild",
"Lea Bentur",
"David Shoseyov",
"Malena Cohen-Cymberknoh",
"Eitan Kerem",
"Avraham Avital",
"Chaim Springer",
"Avigdor Hevroni",
"Husein Dabbah",
"Arnon Elizur",
"Elie Picard",
"Shmuel Goldberg",
"Joseph Rivlin",
"Galit Livnat",
"Moran Lavie",
"Nael Alias",
"Ruth Soferman",
"Heike Olbrich",
"Johanna Raidt",
"Julia Wallmeier",
"Claudius Werner",
"Niki T. Loges",
"Heymut Omran"
],
"abstract": "Rationale: Primary ciliary dyskinesia (PCD) is underdiagnosed and underestimated. Most clinical research has used some form of questionnaires to capture data, but none has been critically evaluated, particularly with respect to its end-user feasibility and utility.\nObjective: To critically appraise a clinical data collection questionnaire for PCD used in a large national PCD consortium in order to apply conclusions in future PCD research.\nMethods: We describe the development, validation and revision process of a clinical questionnaire for PCD and its evaluation during a national clinical PCD study with respect to data collection and analysis, initial completion rates and user feedback.\nResults: 14 centers participating in the consortium successfully completed the revised version of the questionnaire for 173 patients, with varying completion rates across items. While content and internal consistency analysis demonstrated validity, there were methodological deficiencies impacting completion rates and end-user utility. These deficiencies were addressed, resulting in a more valid questionnaire.\nConclusions: Our experience may be useful for future clinical research in PCD. Based on the feedback collected on the questionnaire through analysis of completion rates, judgmental analysis of the content, and feedback from experts and end users, we suggest a practicable framework for development of similar tools for future PCD research.",
"keywords": [
"response rate",
"cohorts",
"symptoms",
"questionnaire",
"clinical trial"
],
"content": "\n\nAt a Glance Commentary:\n\nWhat is the key question? It has been suggested that clinical data may be the first source of information when evaluating patients with PCD yet, only a few instruments to collect clinical data have been developed, and none have been critically evaluated.\n\nWhat is the bottom line? Challenges in the development, validation and administration process of a clinical questionnaire for PCD are described.\n\nWhy read on? Based on the feasibility results, validity analysis and feedback collected, a newly revised validated questionnaire tool is now available providing opportunities for PCD research in additional patient populations and contexts.\n\nAbbreviations: PCD- Primary Ciliary Dyskinesia, HVM- High Speed Video Microscopy, TEM-Transmission Electron microscopy, NO-Nitric oxide, IF-Immunofluorescence, LFT-Lung Function Tests, CT-Computed Tomography, RDS-Respiratory Distress Syndrome, CF-Cystic Fibrosis, ENT-Ear Nose & Throat.\n\n\nIntroduction\n\nPrimary ciliary dyskinesia (PCD) is a genetic disease affecting the motile cilia in the respiratory system1–3. Clinical manifestations include neonatal respiratory distress, recurrent otitis media, sinusitis, and recurrent lung infections. As the clinical manifestations are variable and commonly encountered in children, the diagnosis is often delayed, particularly in the absence of situs inversus4,5. Clinical criteria have been suggested as an aid to PCD diagnosis5–11. Only a few large scale studies have used questionnaires to gather clinical information for PCD. For example, one of the largest pan-European studies5 used a questionnaire to survey numbers and some characteristics of pediatric PCD patients (i.e., age, sex, age of diagnosis and presence of situs inversus) while the North American PCD consortium have used a much more detailed questionnaire in their prospective clinical studies8,9. 
A clinical internet-based tool is currently being developed10.\n\nAs PCD clinical research is evolving, it is important that instruments for clinical data collection include relevant questions that generate accurate data, are valid for their intended purpose and use, are user friendly, and can be completed within a reasonable time.\n\nImproved data collection instruments are of great importance, particularly for rare diseases such as PCD, where collaboration and data sharing are paramount. Unfortunately, none of the existing data collection tools have been critically evaluated. The purpose of this article is to describe the challenges of developing an improved clinical questionnaire for PCD, and evaluate its feasibility and potential utility for the end user. It is anticipated that this critical assessment of a PCD questionnaire will be useful for the development and refinement of similar tools to collect clinical data in various PCD studies.\n\n\nMethods\n\nIn 2011 we decided to characterize the clinical features of PCD patients in Israel. A prospective National Israeli PCD Consortium (NIPC) study was conducted between the years 2011–2013 in subjects presenting with the typical clinical phenotype of PCD in 14 pediatric pulmonology centers (for details see ref. 12). Ethical approval was obtained from Institutional Review Boards at each center that collected patients’ data (Ziv Medical Center, Schneider Children’s Medical Center, Edith Wolfson Medical Center, Soroka Medical Center, Hadassah-Hebrew University Medical Centers, Rambam Medical Center, Western Galilee Hospital, Saint Vincent De-Paul Hospital, Assaf Harofeh Medical Center, Shaare Zedek Medical Center, Edmond & Lili Safra Children’s Hospital, Sheba Medical Center, Carmel Medical Center, Dana Children’s Hospital) as well as from the Israeli Ministry of Health, and all patients and guardians signed written informed consent. The study was registered on ClinicalTrials.gov (NCT 01070914). 
To complement the clinical phenotypic data and in order to verify the diagnosis, the subjects had a comprehensive study visit whereby they also underwent a series of tests including nasal NO (nNO), nasal brushing of samples for transmission electron microscopy (TEM), immunofluorescence (IF), high-speed video microscopy (HVM) and blood sampling for genetics. In this study, the presence of at least 2 abnormal results on these tests was used as the criterion to define PCD3. To capture the most accurate clinical information in this study, a special questionnaire had to be developed.\n\nOnly a few questionnaires have been previously used to specifically collect clinical data in PCD and none of them, as far as we know, has been systematically and critically evaluated5–11.\n\nTo validate the content of the NIPC questionnaire, a national expert panel was created. It consisted of three pediatric pulmonologists, one adult pulmonologist and one ENT surgeon with expertise in PCD. All members had been in practice at least 15 years after certification and had run PCD clinics for at least 10 years.\n\nThe initial draft of the NIPC questionnaire was developed by the panel members after reviewing existing questionnaires5,6,8–11 and selecting content. Across multiple iterations the panel provided feedback regarding the content relevance and representativeness with respect to the clinical presentation of PCD. Discussions between panel members were conducted electronically and by conference calls. Based on the panel’s feedback, modifications to the questionnaire were made after each iteration. The major concern of the panel was the excessive length and detail of the initial draft. 
It was felt that both organizational and content issues had to be addressed in order to make completion of the questionnaire feasible within a reasonable amount of time.\n\nAs the proposed study had planned to enroll subjects who were clinically suspected of having PCD, there were many items the panel found unnecessary or irrelevant. For example, history probing about meconium aspiration was considered irrelevant for the diagnosis of PCD. Likewise, items regarding disorders of family members extending to relatively remotely-related members (e.g., biological maternal grandfather) were also deemed unnecessary. While genetic predisposition is important in PCD, the panel felt that there are more direct ways to obtain such information (e.g. questions on consanguinity and pedigree creation). Another problem identified by the panel was overlapping content. Many questions were repeated in various sections while providing similar information. For example, radiological evidence for sinus abnormality appeared both in the medical history and in the test sections. Consensus on questionnaire content was reached after approximately 10 iterations. During the revision process, we tried to balance the need to add new clinical data against the burden of a long and impractical questionnaire.\n\nThe main sections of the revised NIPC questionnaire were: A. Demographic details (9 items), B. Family history (8 items), C. Past medical history (52 items) and D. Physical examination and basic tests (radiology, spirometry, sputum etc.) (38 items). Response options were in the form of YES/NO/Do not know, commentary, or multiple choice. Some items included subsidiary questions. 
For example, when enquiring about the presence of chronic cough, a subsidiary question followed asking when the symptom began.\n\nBeyond the history regarding PCD diagnosis, the questionnaire included several \"rule out\" questions such as normal sweat tests for cystic fibrosis.\n\nThe NIPC questionnaire was emailed to the local principal investigator (PI) of each center (n=14) approximately one week prior to a scheduled visit. The local PI was asked to forward the questionnaire to the other physicians in their group (e.g. a center may have three physicians, each responsible for the care of a few patients with PCD). The physicians were asked to print out the questionnaire and complete it for each of their scheduled patients within one week.\n\nThe national PI and 2 or 3 research assistants (RA) joined the local PIs during actual patient visits (study sessions) in order to assist in the ancillary diagnostic test procedures (nasal brushing, video-microscopy etc.). There were 20 study sessions in all of the participating centers spanning an almost two-year period during which some centers had more than one session. Completed questionnaires were physically collected from each center by the study PI (IA) at the end of each session. Questionnaires were scanned and reviewed for missing or unclear data by the study PI and a RA. Unanswered questions were noted and a reminder to complete the questions was e-mailed to the local PI within 2 weeks of collection.\n\nA second attempt to complete unanswered questions was made by transferring the received questionnaire data to a Microsoft Excel 2010 table, marking all missing or unclear answers, and e-mailing the table to the responsible physician. When a third e-mail reminder was necessary, the first author also personally phoned the local PI at each center to encourage them to complete the unanswered questions.\n\nQuantitative analysis. 
Completion rates of each item on the questionnaire, as an indicator of feasibility, were calculated as a percentage, and then averaged across items for each section of the questionnaire (Microsoft Excel 2010). Completed items were coded as 1, whereas uncompleted items were coded as either relevant (code 2) or not relevant to the particular patient (code 3, e.g. an uncompleted item about fertility in a child). If an uncompleted item included a N/A answer, it was also coded as 3. Data were analyzed for the total group, and for those eventually diagnosed with PCD.\n\nQualitative analysis. Qualitative evaluation of the questionnaire feasibility and utility was completed from two post-hoc perspectives. The first was based upon guidelines on best practices for questionnaire development13. These guidelines focus on: a) features of the questionnaire (visual presentation, language and format, mode of administration), b) characteristics of the participant completing the questionnaire (their workflow, their degree of cooperation, their relationship with the researcher), and c) interaction of the participant with the questionnaire (i.e., what must the participant do, and what data sources are required, to complete the questionnaire?).\n\nThe second was the physicians’ post-hoc user feedback. This feedback was collected through electronic communications, oral discussions and through an electronic survey using http://www.questionpro.com/a/listSurveys.do.\n\nThe survey is included under Data availability (Dataset 1). Participants were first presented with specific items with a low completion rate and then were asked to suggest potential reasons for this. They were also surveyed about more general topics such as relevancy and representativeness of the items for PCD, preference for electronic vs paper records, time required to complete the questionnaire and future use in their practice.\n\nContent validation was conducted as previously described during the development stage. 
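The item coding and completion-rate calculation just described, together with the Cronbach's-alpha internal-consistency check applied to the two major subsections, can be sketched in Python. This is a hypothetical illustration with invented item codes; the study itself performed these analyses in Microsoft Excel 2010 and IBM SPSS.

```python
# Sketch of the study's quantitative analyses (illustrative only; the
# actual analyses were run in Microsoft Excel 2010 and IBM SPSS).

def completion_rate(codes):
    """Percent of items coded 1 (completed) among the relevant items.

    Coding follows the questionnaire scheme: 1 = completed,
    2 = not completed but relevant, 3 = not relevant (incl. N/A answers).
    Code-3 items are excluded from the denominator."""
    relevant = [c for c in codes if c in (1, 2)]
    return 100.0 * relevant.count(1) / len(relevant) if relevant else None

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns of equal length.

    Assumes listwise deletion has already removed incomplete records,
    mirroring the SPSS default mentioned in the Methods."""
    k = len(items)                       # number of items
    n = len(items[0])                    # number of respondents

    def var(xs):                         # sample variance (ddof=1), as in SPSS
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(var(col) for col in items) / var(totals))

# Invented example: an item completed for 2 of 3 relevant patients
# (the fourth response is coded 3, i.e. not relevant to that patient).
print(round(completion_rate([1, 1, 2, 3]), 1))   # 66.7
```

The alpha function implements the standard formula α = k/(k−1) · (1 − Σσᵢ²/σ_total²), where σᵢ² is the variance of each item and σ_total² the variance of the respondents' total scores.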
To assess internal consistency, we first computed frequencies and descriptive statistics for all items using IBM SPSS Statistics for Windows, Version 23.038. Attention was paid to the proportion of responses coded as not completed or unknown and not relevant. Items with a large proportion of responses coded as not relevant were often subsidiary questions to a lead question. Judgments were made regarding the added value of retaining these subsidiary questions for the analysis of internal consistency, over and above the information already provided by the lead question. Items with a large proportion (i.e., greater than 0.25) of responses coded as not completed or unknown also warranted a closer look at the item with judgments made on whether it should be included in further investigations of internal consistency. Internal consistency for the two major subsections (medical history and physical examination) was then examined using Cronbach’s alpha. Missing data were handled using the default option of listwise deletion in IBM SPSS Statistics for Windows, Version 23.038.\n\nTaken together, the results were used to inform revisions to the PCD questionnaire.\n\n\nResults\n\nTwenty-two physicians in 14 centers completed questionnaires for 173 subjects (of whom 104 were eventually confirmed as having PCD). Table 1 summarizes the initial completion rates for each subsection of the PCD questionnaire by group (i.e., total, PCD, non-PCD).\n\nItem codes: 1=completed, 2=not completed yet item relevant, 3=not completed and item not relevant.\n\nOverall, the average initial response rate was 82% and increased to 88% following two reminders. Subsections A (range=95–97%) and B (range=75–80%) had high initial completion rates. Subsections C (range=52–59%) and D (range=55–59%) had lower completion rates (p<0.01). 
This pattern of completion rates corresponded to items that required recall of readily available information about the individual and their past/family medical history (subsections A and B), whereas subsections C and D required more specific test information. No significant differences in response rates were observed between PCD and non-PCD groups. Figure 1 illustrates the changes in response rate (%) between initial completion and following reminders for major individual questions.\n\nAll questions ending with a (?) had yes/no/unknown answers. The rest had free text answers (e.g. birth date). A, B, C and D are the questionnaire sections. Abbreviations: NRDS-Neonatal Respiratory Distress Syndrome; LFT- Lung Function Test; FVC- Forced Vital Capacity; FEV1-Forced Expiratory Volume in 1 second; CT- Computed Tomography; CF- Cystic Fibrosis.\n\nAmong the uncompleted items, seven were identified where a response was relevant and required (i.e., item coded as 2). One item was in the family history subsection B, five items were in the medical history subsection C, and one item was in the physical examination subsection D. Two of these items, both in the medical history subsection C, had a non-response of > 50%. The first of these items (64% non-response rate) enquired about the patient’s hearing loss with sub-questions to identify its nature: conductive, neuronal, or unknown. A potential reason why this item might not have been answered is that the response options were not exhaustive. If the patient did not have hearing loss, then none of the response options would have been the appropriate answer. Therefore, leaving the item unanswered was one course of action. The second item with >50% non-response was a sub-question about lung transplant which was found to be confusing. It was only post-hoc analysis that revealed the problematic nature of these items.\n\nFive uncompleted yet relevant items had non-response rates of 25–50%. 
For example, one item enquired whether the patient had family members who suffered from any of 12 various symptoms. Although the symptoms were specific (e.g., chronic rhinitis, fertility disorders), patients may not have known their family history to that degree of detail, or the information may not have been entered in the chart. Other questions with low response rates asked for the results of specific tests such as the sweat chloride concentration. If this information was not readily available from the chart or from the patient, this question would not have been answered.\n\n\nQualitative analysis\n\nFeatures of the questionnaire. There were two areas which could have been improved upon. First was consideration of terminology used and its consistency in interpretation across participants. For example, questions about “hearing loss” and its sub-categories may not have a universally accepted definition, thus resulting in differing interpretations11. Second was consideration of questionnaire length. At 98 questions, the NIPC questionnaire was an improvement in length when compared with previously used questionnaires, increasing the chances of collecting complete data14.\n\nCharacteristics of participants. In terms of the participant group, the questionnaire content and length may have impacted completion given how busy physicians’ practices are. However, continuous email and phone support provided by the study PI to the physicians likely helped to achieve completion rates of almost 90% using personal reminders.\n\nInteraction between the participant and questionnaire. The effort to complete the clinical questionnaire ranged from minimal (e.g., completing demographic section) to at least moderate (e.g., completing family history or physical examination). Answers for some questions required specific data for which information within the records/charts was not easily available. 
In these cases, the physician had to complete the questionnaire during or after the actual patient visit. Analysis of free text answers was challenging and was not contributory in most cases.\n\nSeventeen out of the 22 end users (i.e., physicians) provided feedback and identified reasons for low completion rates of some items. For example, 70% identified lack of available data as a reason for a low completion rate of the question about polysplenia. Twelve percent suggested that this item was irrelevant. Low completion rates for the items enquiring about onset of chronic cough and onset of wheezing were attributed to lack of available data by 53% of responders. This was also the case with “hearing disorders” (response rate of 47%). Fifty-eight percent thought that the fertility question was irrelevant to their subjects (children). Most (63%) reported that it took them less than an hour to complete the questionnaire in one session; 31% reported they required several sessions to do it; 81% preferred the use of electronic rather than paper versions and 75% said they would use the questionnaire as a diagnostic tool in their practice.\n\nSeveral items had a high proportion (>50%) of responses coded as not relevant. All these items were subsidiary questions to lead questions and were carefully reviewed before internal structure analysis. For example, the lead question C5 asks whether situs inversus is present. The follow-up questions C6 to C9 had 67.1% responses coded as not relevant, likely because situs inversus was not present in their patient. The PI reviewed these items individually and determined that question C5 captures the presence of this primary feature (i.e., situs inversus) within the patient sample whereas questions C6–C9 are subsumed conceptually by C5 and could be excluded from further analysis of internal structure. 
For the medical history subsection, Cronbach’s alpha was 0.63 and for the physical examination/tests subsection Cronbach’s alpha was 0.72.\n\n\nDiscussion\n\nPrevious studies have collected some clinical data in describing their PCD population without the use of a specific questionnaire. To better contextualize our study within the current state of acquiring clinical data in PCD, we systematically reviewed studies published in the English literature over the past 5 years (2010–2015) that have detailed clinical characteristics of PCD patients. In brief, two authors (IA, AM) independently searched MEDLINE and EMBASE followed by additional bibliographies of all selected studies. Disease-specific terms (primary ciliary dyskinesia, Kartagener syndrome, immotile ciliary syndrome, immotile cilia syndrome) were combined with clinical data-specific terms (clinical features, characteristics, symptoms, gender, age, family history, consanguinity, ethnicity, pneumonia, rhinitis, otitis, situs, bronchiectasis, atelectasis, clubbing, polyps, sinusitis) as search parameters. Inclusion criteria included all studies published before the start of this search on December 10, 2015 on PCD patients (n≥30) that systematically detailed clinical characteristics. Categories of clinical items collected in each study were then tabulated (Table 2).\n\nAbbreviations:\n\nNRDS-Neonatal Respiratory Distress Syndrome\n\nLRTI-Lower respiratory Tract Infection\n\nLFT- Lung Function Test\n\nTEM-Transmission Electron Microscopy\n\nENT-Ear, Nose & Throat\n\nCF- Cystic Fibrosis\n\nDifferences in judgment were resolved by consensus. Twenty-two studies6,8,11,15–34 totaling 1799 patients with PCD met our inclusion criteria. The most striking finding of our review was the wide variability and lack of standardization in collecting clinical data in PCD studies. 
While there were a few clinical items that were collected frequently [e.g., situs (in 100% of studies), age (91%), otitis and bronchiectasis (in 86%)], there was no consistency with many other clinical items. This review was very informative and supported the motivation for our work in developing improved instruments (such as a questionnaire) for collecting clinical data in PCD. Here we present the first report and discuss what we have learned about the development and field evaluation of a clinical questionnaire for PCD. As clinical data collection is fundamental not only for clinical care but also for registry development and for studies such as cohort, intervention, genetic and epidemiological studies, these lessons are applicable across many other settings.\n\nQuestionnaires must have a high response rate since the ultimate goal is to obtain accurate and high quality clinical data14. The quantitative analysis in the present study identified questions that had low response rates or might have been completed inconsistently. Reasons for low completion included issues such as perceived relevance of the item to the patient, problematic wording and formatting of the question, information not readily available in the patient chart, and difficulties of the patient or parents recalling pertinent medical information, the latter already identified in a previous PCD study35. Questions with low response rates could be given back to the end users or expert panel to render a judgment on question relevance or format.\n\nBased on guidelines for questionnaire development, we suggest that, as for any other disease, PCD questionnaires should be developed with target users in mind, considering their characteristics and workflow to assure as high a response rate as possible while minimizing the burden of participation. 
Despite the “generic” nature of the clinical questions, the specific purpose of the clinical questionnaire and instructions for its completion should still be explicit and clear in any future individual study. Given that some questions will require very specific information, access to medical charts should be encouraged upfront. Since the present study targeted subjects with a high suspicion of PCD, the expert panel suggested that in such populations questions be focused more toward specific PCD symptoms, taking into account that these subjects have probably undergone considerable previous evaluation, and that, in particular, queries regarding conditions associated with respiratory illness be removed (e.g., previous probing about pancreatic insufficiency).\n\nBefore distribution to end users, the tool should be pilot tested at least once with a small representative sample (e.g., clinicians or research assistants)14. Pilot testing should be done to check on formatting, clarity of wording, content coverage and to test whether the questions are working in the ways originally intended. One group of testers should typically be experts in the field to inform content validity whereas another group should be the target end users. The more feedback received about the questions from these two groups, the more likely the tool will be in a form that is relevant to the data collection purpose and readily acceptable to participants, in order to achieve completion rates that are as high as possible.\n\nThe end-user feedback in this study helped us to revise many questions. Moreover, based on this feedback, analysis of items coded as 2 (i.e., not completed but relevant) suggested that while content validation was adequate, the expert panel did not sufficiently attend to other methodological aspects such as item wording, clarity, interpretations and structure. These issues might have been better addressed by questionnaire experts. 
Of interest, but not surprising, is the fact that most of the end users preferred electronic rather than paper versions. Such an internet-based PCD registry, likely to reduce cost and time36, is indeed currently being developed10.\n\nOver time, ongoing analysis of question completion rates in various PCD populations and contexts may show that the items flagged in this paper continue to yield low response rates. Such empirical evidence would prompt further discussion as to whether these items should remain in the questionnaire or at the very least, discussion on how to improve response rates beyond what was outlined previously.\n\nAlthough beyond the scope of the present study, it would be useful in the future to correlate the various questions with individual patients’ data and to evaluate their predictive diagnostic or prognostic value.\n\n\nConclusion\n\nThis is the first critical appraisal of a clinical questionnaire for PCD. Based on analysis of questionnaire completion rates, validation of the questionnaire, literature review and feedback from both experts and end users, we have now developed a shorter, clearer and more user-friendly updated generic version. This newly revised version may be freely downloaded from the Data availability section of this paper (Dataset 2) by any researcher and/or clinician interested in collecting clinical data about PCD. Indeed, it was recently used in a multi-national PCD European study37. Clinical research in PCD will gain much benefit from future use and further validation of this tool with additional patient populations and contexts.\n\n\nData availability\n\nF1000Research: Dataset 1. PCD- Post Study Feedback questionnaire to physicians, 10.5256/f1000research.9323.d13191039\n\nF1000Research: Dataset 2. Newly (2016) Revised Suggested PCD Questionnaire, 10.5256/f1000research.9323.d13191140",
"appendix": "Author contributions\n\n\n\nIsrael Amirav had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. He drafted and wrote the manuscript.\n\nIA, MRR and HO: contributed to study conception and design, acquisition and analysis of data, and assembly of the manuscript.\n\nIA, MRR, HO, AM, RA and CW critically reviewed and revised the manuscript. HO, NTL, JR, JW, CW and HOl performed TEM, IF and genetic analysis.\n\nIA, HM, AM, YR, SA, MA, MR, LB, DS, AA, AH, HD, AE, EP, SG, JR, GL, EK, MCC, HB, MBA, AL, CS, ML and NA prepared and submitted individual institutional applications for ethical approval and substantially contributed to the patients' recruitment and collection of their clinical data and material.\n\nAll authors have read and approved the final manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nGrant #3-6176 to Israel Amirav from Chief Scientist, Israel Ministry of Health.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgement\n\nWe thank all the children and families who participated in the study. This work was supported by grant no. 3-6176 from the Chief Scientist Office of the Ministry of Health, Israel. Dr Michael Newhouse and The (US) Genetic Disorders of Mucociliary Clearance Consortium are to be thanked for their useful comments and help.\n\n\nReferences\n\nKnowles MR, Daniels LA, Davis SD, et al.: Primary ciliary dyskinesia. Recent advances in diagnostics, genetics, and characterization of clinical disease. Am J Respir Crit Care Med. 2013; 188(8): 913–922. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLucas JS, Burgess A, Mitchison HM, et al.: Diagnosis and management of primary ciliary dyskinesia. Arch Dis Child. 2014; 99(9): 850–6. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nWerner C, Onnebrink JG, Omran H: Diagnosis and management of primary ciliary dyskinesia. Cilia. 2015; 4(1): 2. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLucas JS, Leigh MW: Diagnosis of primary ciliary dyskinesia: searching for a gold standard. Eur Respir J. 2014; 44(6): 1418–1422. PubMed Abstract | Publisher Full Text\n\nKuehni CE, Frischer T, Strippoli MP, et al.: Factors influencing age at diagnosis of primary ciliary dyskinesia in European children. Eur Respir J. 2010; 36(6): 1248–58. PubMed Abstract | Publisher Full Text\n\nNoll EM, Rieger CH, Hamelmann E, et al.: Questionnaire to preselect patients with a high probability of primary ciliary dyskinesia. Klin Padiatr. 2011; 223(1): 22–6. PubMed Abstract | Publisher Full Text\n\nBehan L, Dimitrov BD, Kuehni CE, et al.: PICADAR: a diagnostic predictive tool for primary ciliary dyskinesia. Eur Respir J. 2016; 47(4): 1103–12. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNoone PG, Leigh MW, Sannuti A, et al.: Primary ciliary dyskinesia: diagnostic and phenotypic features. Am J Respir Crit Care Med. 2004; 169: 459–67. PubMed Abstract | Publisher Full Text\n\nLeigh MW, Ferkol TW, Davis SD, et al.: Clinical Features and Associated Likelihood of Primary Ciliary Dyskinesia in Children and Adolescents. Ann Am Thorac Soc. 2016. PubMed Abstract | Publisher Full Text\n\nWerner C, Lablans M, Ataian M, et al.: An international registry for primary ciliary dyskinesia. Eur Respir J. 2016; 47(3): 849–59. PubMed Abstract | Publisher Full Text\n\nSommer JU, Schäfer K, Omran H, et al.: ENT manifestations in patients with primary ciliary dyskinesia: prevalence and significance of otorhinolaryngologic co-morbidities. Eur Arch Otorhinolaryngol. 2011; 268(3): 383–8. PubMed Abstract | Publisher Full Text\n\nAmirav I, Mussaffi H, Roth Y, et al.: A reach-out system for video microscopy analysis of ciliary motions aiding PCD diagnosis. BMC Res Notes. 
2015; 8(1): 71. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDillman DA, Smyth JD, Christian LM: Mail and Internet Surveys: The Tailored Design Method. Third edition. New York: John Wiley and Sons. 2009.\n\nEdwards P: Questionnaires in clinical trials: guidelines for optimal design and administration. Trials. 2010; 11: 2. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDavis SD, Ferkol TW, Rosenfeld M, et al.: Clinical features of childhood primary ciliary dyskinesia by genotype and ultrastructural phenotype. Am J Respir Crit Care Med. 2015; 191(3): 316–24. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDjakow J, Kramná L, Dušátková L, et al.: An effective combination of sanger and next generation sequencing in diagnostics of primary ciliary dyskinesia. Pediatr Pulmonol. 2016; 51(5): 498–509. PubMed Abstract | Publisher Full Text\n\nHosie PH, Fitzgerald DA, Jaffe A, et al.: Presentation of primary ciliary dyskinesia in children: 30 years' experience. J Paediatr Child Health. 2015; 51(7): 722–6. PubMed Abstract | Publisher Full Text\n\nYiallouros PK, Kouis P, Middleton N, et al.: Clinical features of primary ciliary dyskinesia in Cyprus with emphasis on lobectomized patients. Respir Med. 2015; 109(3): 347–356. PubMed Abstract | Publisher Full Text\n\nBoon M, Smits A, Cuppens H, et al.: Primary ciliary dyskinesia: critical evaluation of clinical symptoms and diagnosis in patients with normal and abnormal ultrastructure. Orphanet J Rare Dis. 2014; 9: 11. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCao Y, Shao C, Song Y, et al.: Clinical analysis of patients with primary ciliary dyskinesia in mainland China. Clin Respir J. 2015. PubMed Abstract | Publisher Full Text\n\nCohen-Cymberknoh M, Simanovsky N, Hiller N, et al.: Differences in disease expression between primary ciliary dyskinesia and cystic fibrosis with and without pancreatic insufficiency. Chest. 2014; 145(4): 738–744. 
PubMed Abstract | Publisher Full Text\n\nMullowney T, Manson D, Kim R, et al.: Primary ciliary dyskinesia and neonatal respiratory distress. Pediatrics. 2014; 134(6): 1160–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVallet C, Escudier E, Roudot-Thoraval F, et al.: Primary ciliary dyskinesia presentation in 60 children according to ciliary ultrastructure. Eur J Pediatr. 2013; 172(8): 1053–60. PubMed Abstract | Publisher Full Text\n\nKim RH, A Hall D, Cutz E, et al.: The role of molecular genetic analysis in the diagnosis of primary ciliary dyskinesia. Ann Am Thorac Soc. 2014; 11(3): 351–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBusquets RM, Caballero-Rabasco MA, Velasco M, et al.: Primary ciliary dyskinesia: clinical criteria indicating ultrastructural studies. Arch Bronconeumol. 2013; 49(3): 99–104. PubMed Abstract | Publisher Full Text\n\nDemarco RC, Tamashiro E, Rossato M, et al.: Ciliary ultrastructure in patients with chronic rhinosinusitis and primary ciliary dyskinesia. Eur Arch Otorhinolaryngol. 2013; 270(7): 2065–2070. PubMed Abstract | Publisher Full Text\n\nZietkiewicz E, Bukowy-Bieryllo Z, Voelkel K, et al.: Mutations in radial spoke head genes and ultrastructural cilia defects in East-European cohort of primary ciliary dyskinesia patients. PLoS One. 2012; 7(3): e33667. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBlanchon S, Legendre M, Copin B, et al.: Delineation of CCDC39/CCDC40 mutation spectrum and associated phenotypes in primary ciliary dyskinesia. J Med Genet. 2012; 49(6): 410–416. PubMed Abstract | Publisher Full Text\n\nKnowles MR, Leigh MW, Carson JL, et al.: Mutations of DNAH11 in patients with primary ciliary dyskinesia with normal ciliary ultrastructure. Thorax. 2012; 67(5): 433–441. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPifferi M, Bush A, Caramella D, et al.: Agenesis of paranasal sinuses and nasal nitric oxide in primary ciliary dyskinesia. Eur Respir J. 
2011; 37(3): 566–571. PubMed Abstract | Publisher Full Text\n\nTheegarten D, Ebsen M: Ultrastructural pathology of primary ciliary dyskinesia: report about 125 cases in Germany. Diagn Pathol. 2011; 6: 115. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMarthin JK, Petersen N, Skovgaard LT, et al.: Lung function in patients with primary ciliary dyskinesia: a cross-sectional and 3-decade longitudinal study. Am J Respir Crit Care Med. 2010; 181(11): 1262–1268. PubMed Abstract | Publisher Full Text\n\nZiętkiewicz E, Nitka B, Voelkel K, et al.: Population specificity of the DNAI1 gene mutation spectrum in primary ciliary dyskinesia (PCD). Respir Res. 2010; 11: 174. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPrulière-Escabasse V, Coste A, Chauvin P, et al.: Otologic features in children with primary ciliary dyskinesia. Arch Otolaryngol Head Neck Surg. 2010; 136(11): 1121–1126. PubMed Abstract | Publisher Full Text | Free Full Text\n\nStannard WA, Chilvers MA, Rutman AR, et al.: Diagnostic testing of patients suspected of primary ciliary dyskinesia. Am J Respir Crit Care Med. 2010; 181(4): 307–314. PubMed Abstract | Publisher Full Text\n\nUhlig CE, Seitz B, Eter N, et al.: Efficiencies of Internet-based digital and paper-based scientific surveys and the estimated costs and time for different-sized cohorts. PLoS One. 2014; 9(10): e108441. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPanayiotis Y: Setting up and implementing PCD diagnostics: Expertise from Cyprus, Greece and Poland. Presented at BESTCILIA final event & international PCD conference, Limassol Cyprus. 2016.\n\nIBM Corp: Released 2016. IBM SPSS Statistics for Windows, Version 24.0. Armonk, NY: IBM Corp. Reference Source\n\nAmirav I, Roberts MR, Mussaffi H, et al.: Dataset 1 in: Collecting clinical data in primary ciliary dyskinesia-challenges and opportunities. F1000Research. 2016. 
Data Source\n\nAmirav I, Roberts MR, Mussaffi H, et al.: Dataset 2 in: Collecting clinical data in primary ciliary dyskinesia-challenges and opportunities. F1000Research. 2016. Data Source"
}
|
[
{
"id": "15746",
"date": "19 Aug 2016",
"name": "Bruce Rubin",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is an outstanding study demonstrating validation of a PCD clinical questionnaire by an expert group of clinicians from Israel. The data are well presented and the results will be valuable to clinicians caring for children and adults with PCD.\nHowever this is still quite a long questionnaire (98 questions and an hour to complete) and the authors have not made it clear how this questionnaire is to be used:\nAs a diagnostic screening tool this could be stratified and streamlined to identify criteria sufficient for patients to be referred for additional testing such as nNO, TEM and video, or genetics.\n\nAs a tool to monitor disease progression, many of the background questions would be redundant on repetition.\n\nAs a tool to evaluate the outcomes of specific interventions (results of therapy) many of these questions are irrelevant.\n\nAs a \"general PCD questionnaire\" it is long and time consuming to complete and may have limited application for an individual patient or clinician.\nI would appreciate it if Prof Amirav would discuss how he would see this questionnaire used in \"real life\" and if shortened versions might be more relevant for specific clinical applications.",
"responses": [
{
"c_id": "2188",
"date": "16 Sep 2016",
"name": "Israel Amirav",
"role": "Author Response",
"response": "We thank Prof Rubin for his thoughtful comments. We completely agree that for \"real life\" applications the questionnaire can be modified and adapted. In any of these situations, the items can be streamlined to represent the specific goal in mind. Recent papers have identified the role of few clinical features that are highly associated with proven PCD (7,9). Using these suggested more selective items and removing all others from our proposed questionnaire, may indeed can substantially improve the response rate as well as the completion time. Similarly, for intervention studies there is clearly no need to include many of the Demographics (section A), Family history (section B) and Medical history (section C) items, but to focus on specific modifiable items (for example pulmonary function tests). We have now included this notion in our revised discussion."
},
{
"c_id": "2216",
"date": "05 Oct 2016",
"name": "Bruce Rubin",
"role": "Reviewer Response F1000Research Advisory Board Member",
"response": "Israel - Are you comfortable proposing what these modified questionnaires might consist of? Have you enough data from this validation study to do this? It would be very useful. Thanks Bruce"
}
]
},
{
"id": "16025",
"date": "09 Sep 2016",
"name": "Zorica Živković",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nI have read carefully the article on collecting clinical data in primary ciliary dyskinesia, the usefulness and comprehensiveness of the questionnaire used. I admire great effort and workload of the authors to set up this questionnaire and to approve its applicability in real life. From that point the topic has achieved its main goal. The other point is the low feedback responses and related questions that have been analyzed. Very interesting and valuable issues were raised. All of us working with questionnaires spent most of our time trying to avoid low response rate. Since this paper has no input on clinical diagnosis or treatment option in primary ciliary dyskinesia I would suggest this key problem to be pointed out. What would be the possible opportunities (as noted in the title) for solving this problem?\nIn the methodology section, I would like to add the way that physicians filled up the questionnaires with their patients, precise time frame that they have been used including the time for explanations and clearing up the complicated or hardly understandable questions. Having in mind very detailed time consuming questionnaire, the importance should be in national based registry making, which should be pointed as well.\nI must say that reviewing this manuscript was real honor and I strongly support its publishing.",
"responses": [
{
"c_id": "2187",
"date": "16 Sep 2016",
"name": "Israel Amirav",
"role": "Author Response",
"response": "Thanks for your favourable comments. The point of low response rate is well appreciated. We believe this is a major concern in all questionnaire type studies. We have highlighted its importance as we made it one of the major outcomes of our study (see quantitative analysis). The comments about opportunities for clinical diagnosis and possible treatments are correct yet these were beyond the purpose of this study. We nevertheless, just before the conclusion section we mentioned that \"Although beyond the scope of the present study, it would be useful in the future to correlate the various questions with individual patients’ data and to evaluate their predictive diagnostic or prognostic value.\" With regard to the time required to fill the questionnaire, we have partially referred to it in our evaluation based on feedback from end users: \"Most (63%) reported that it took them less than an hour to complete the questionnaire in one session; 31% reported they required several sessions to do it;\" The reviewer is correct that we have not specifically asked about the time needed for explanation and clearing up complicated questions. A good point for evaluation of future instruments! We completely agree that the questionnaire could be also used in national registries and we have mentioned it in the 2nd paragraph of the discussion: \"As clinical data collection is fundamental not only for clinical care but also for registry development and studies such as cohort, interventions, genetic and epidemiological, these lessons are applicable across many other settings.\""
}
]
}
] | 1
|
https://f1000research.com/articles/5-2031
|
https://f1000research.com/articles/5-2430/v1
|
03 Oct 16
|
{
"type": "Research Article",
"title": "Quercetin feeding protects plants against oxidative stress",
"authors": [
"Jasmina Kurepa",
"Timothy E. Shull",
"Jan A. Smalle",
"Jasmina Kurepa",
"Timothy E. Shull"
],
"abstract": "Background: Flavonoids are a complex group of plant-made phenolic compounds that are considered of high nutraceutical value. Their beneficial impacts on human health relate predominantly to their capacity to serve as antioxidants, thus protecting cells against the damaging impact of reactive oxygen species. Recent studies have also pointed at an essential role for flavonoids as antioxidants in plants. Results: Here we show that the flavonoid quercetin, which is known to protect human cells from oxidative stress, has the same effect on plant cells. Under oxidative stress conditions, Arabidopsis plants grown on quercetin-supplemented media grew better than controls and contained less oxidized proteins. This protection was also observed in the dicot Nicotiana tabacum and the aquatic monocot Lemna gibba. Conclusion: Quercetin can be used as a general antioxidant stress protectant for plants.",
"keywords": [
"Flavonoids",
"quercetin",
"antioxidants",
"Arabidopsis",
"tobacco",
"duckweed",
"paraquat"
],
"content": "Introduction\n\nCellular redox homeostasis is maintained by a complex antioxidant defense system, which includes antioxidant enzymes and low-molecular-weight scavengers1–4. Concerted action of the enzymatic and non-enzymatic components of this defense system counteracts excessive levels of reactive oxygen species (ROS), which can damage cellular components, while preserving adequate levels of ROS required for signaling and cellular redox regulation5–10.\n\nThe rapid and excessive generation of ROS is a common response to abiotic stresses and thus can be viewed as a converging point for stress signaling and defense responses11–14. One of the common responses to stress-induced ROS generation is increased flavonoid biosynthesis15–19. Although there is a large body of evidence that supports a role for flavonoids as ROS scavengers, the actual in vivo function of flavonoids as antioxidants in plants was a matter of debate20–22. The main points of contention were (1) that flavonoids are mainly found in vacuoles and are thus compartmentalized from the main site of ROS production in plant cells (i.e., chloroplasts), (2) that flavonoids are enriched in epidermal cells and thus cannot play a significant role in protecting cells of the majority of plant tissues, and (3) that plant cells have an elaborate and efficient antioxidant defense system that successfully suppresses ROS accumulation and therefore the putative antioxidant role of flavonoids would be redundant22. However, recent studies both in Arabidopsis and other plant species have shown that the in vivo antioxidant function of flavonoids is important for the survival of plants under abiotic stress22,23.\n\nRecent studies have also shown that those flavonoid species which, based on their chemical structure, are predicted to be the strongest antioxidants are indeed induced the most by stress22,23. 
These flavonoids, the dihydroxy B-ring-substituted flavonoids and their glycosides, are exemplified by quercetin and its derivatives22. Quercetin, one of the most abundant flavonoids in plants, also attracted significant attention in medical research because of its antioxidant, anti-inflammatory and anticancer effects with no human toxicity24,25. Here we have tested if quercetin feeding protects plants against the ROS-inducer paraquat (methyl viologen). Paraquat causes the formation of ROS in plants predominantly by impacting the chloroplastic electron transport systems1. Feeding Arabidopsis, tobacco and duckweed with quercetin indeed suppressed the toxic effects of paraquat, indicating that this flavonol can be used as an effective protectant against the harmful effects of ROS on plant growth.\n\n\nMethods\n\nAll plants were grown and treated under sterile conditions. Arabidopsis wild type Landsberg erecta (Ler) and transparent testa (tt) mutant lines tt3-1, tt4-1 and tt5-1 (all in Ler background) were grown on solid half-strength Murashige and Skoog (MS/2, Phytotechnology) media supplemented with 1% sucrose (pH 5.7). Nicotiana tabacum (Burley variety KT204LC) was grown on solid full-strength MS media with 3% sucrose (pH 5.7). Lemna gibba (Rutgers Duckweed Stock Cooperative ID 7749) was grown in liquid Schenk and Hildebrandt Basal Salt media (SH, Phytotechnology) without sucrose and vitamins. All tested compounds were added to the media after autoclaving. Paraquat and quercetin were obtained from Sigma. Plants were grown in continuous light with a light intensity of 80 µmol m−2 s−1 at 24°C. To measure fresh weight, at least seven pools of 10 plants per treatment were used. Chlorophyll levels were measured using a CCM-300 chlorophyll content fluorometer (Opti-Sciences). Data were analyzed using Prism 5.0a software (GraphPad Software Inc.) and are presented as mean ± SD of at least two independent experiments. 
One-way ANOVA with Bonferroni’s multiple comparison post-test was used to determine the significance of the difference between means.\n\nFor the protein carbonylation experiments, plants were grown on the denoted media for 2 weeks and then weighed. Tissue was disrupted with zirconium beads in a BeadBug bead beater (MidSci) in 2 volumes of extraction buffer (50 mM potassium phosphate buffer pH 7.0, 2 mM MgCl2, 5% glycerol and 5 mM 2-mercaptoethanol). Protein concentration was measured with a BioPhotometer (Eppendorf) using Bradford reagent (Bio-Rad) and bovine serum albumin (BSA, Bio-Rad) as the standard. Proteins were derivatized as described previously26. In brief, protein extracts containing the same amount of protein were mixed with one volume of 12% sodium dodecyl sulfate (SDS, Fisher Scientific) and 2 volumes of 20 mM dinitrophenylhydrazine (Sigma). Derivatization reactions were performed at room temperature in the dark for 60 minutes. Derivatization mixtures were then neutralized with 2 M Tris base and mixed with one volume of 2X SDS-PAGE loading buffer. Protein extracts used for the control gels were directly mixed with 1 volume of 2X SDS-PAGE loading buffer. After denaturation at 95°C for 5 minutes, protein samples were loaded onto SDS-PAGE gels (7.5% for derivatized proteins and 4–20% gradient for control proteins, both Mini-Protean TGX precast gels from Bio-Rad). Separated proteins were transferred to nitrocellulose membranes as previously described27. The commercial antibodies used were rabbit polyclonal anti-DNP antibody (D9656 Sigma; used at 1:1000), monoclonal anti-HSP70 1D9 (Enzo; used at 1:10,000) and polyclonal anti-BiP antibodies (Santa Cruz Biotechnology, sc-33757; used at 1:1000). Secondary antibodies (goat anti-rabbit IgG-HRP and goat anti-mouse IgG-HRP; Santa Cruz Biotechnology) were used at 1:1000. Immunoblots were developed using SuperSignal West Femto Maximum Sensitivity Substrate (Thermo Scientific). 
Signals were captured using ChemiDoc XRS (Bio-Rad).\n\n\nResults\n\nTo determine whether quercetin feeding protects plants from oxidative stress, we tested the response of Arabidopsis thaliana wild type and mutants with reduced flavonoid content to the ROS-generating compound paraquat in the presence or absence of quercetin. Paraquat is known to prevent germination at high concentrations and to retard growth and promote chlorosis at sub-lethal concentrations28–30. From the large collection of Arabidopsis flavonoid pathway mutants, we selected the three transparent testa (tt) mutants tt3-1, tt4-1 and tt5-1 in the Ler background31,32 and plated them on MS/2 media containing a range of paraquat doses (Figure 1A). The tt4-1 mutant, which carries a lesion in the first dedicated enzyme of the flavonoid biosynthesis pathway, has been previously tested for paraquat sensitivity and was shown to have a lower tolerance to paraquat than the wild type by monitoring loss of chlorophyll content as a measure of chloroplast damage23. Paraquat doses of 0.15 μM and 0.3 μM caused severe growth inhibition in both the wild type and tt mutants (Figure 1). Quercetin (at 100 μM) alone did not lead to any significant changes in fresh weight of any of the tested lines (Ler: 73.9±12.6 mg and 71.3±10.1 mg; tt4-1: 73.2±8.3 mg and 69.3±8.7 mg; tt5-1: 68.8±10.7 mg and 65.1±6.7 mg; tt3-1: 66.2±14.7 mg and 70.4±8.4 mg for 2-week-old plants grown on control and 100 µM quercetin-supplemented media, respectively). When plants were grown on plates with 100 µM quercetin and 0.15 µM or 0.3 µM paraquat, they were partially protected from the toxic effect of the herbicide (Figure 1). As expected from their genetic backgrounds, the wild-type, tt3-1 and tt5-1 seedlings were rescued more efficiently by 100 µM quercetin than the tt4-1 mutant, which has the strongest defect in flavonoid biosynthesis (Figure 1).\n\nA. 
Seeds of the Arabidopsis wild type Landsberg erecta (Ler) and transparent testa (tt) mutants were sown and grown on half-strength Murashige and Skoog media containing the denoted compounds paraquat (PQ) and quercetin (Q). Representative plants were transferred to a new plate for photography 2 weeks after sowing. B. Relative fresh weight of plants grown on paraquat media with and without quercetin. Fresh weight of plants grown on control media was assigned the value of 1. Two-week-old plants were weighed in pools of 10 and the data are presented as mean ± SD (n≥7).\n\nTo test if quercetin-dependent protection from oxidative stress can be detected at the molecular level, we analyzed protein oxidation. Protein carbonylation is an irreversible type of protein oxidation that leads to loss of protein function and is often used as an indicator of oxidative stress9,33–35. We grew Arabidopsis wild-type plants on control plates and plates containing 100 µM quercetin for 10 days. Plants were then harvested and incubated in either water or 100 µM paraquat for 4 hours. Proteins were isolated, derivatized with dinitrophenylhydrazine, separated on SDS-PAGE gels, transferred to membranes and probed with anti-dinitrophenylhydrazone antibodies. The protective effect of quercetin was apparent from the reduced accumulation of derivatized proteins in paraquat-treated plants grown on media containing quercetin (Figure 2).\n\nWild-type plants (Ler) were grown for 10 days on control media or media supplemented with 100 µM Q. Plants were then removed from the plates, weighed and incubated for 4 hours with a mock (water) or 100 µM paraquat (PQ) solution. A representative immunoblot of carbonylated proteins is shown. The arrowhead marks the position of the 50 kDa marker. HSP70 and BiP blots are shown to illustrate that the overall levels of proteostatic stress are not increased in the cytosol and endoplasmic reticulum, respectively. 
A region of the Ponceau S-stained membrane encompassing the RuBisCO large subunit (LSU) is shown as a loading control.\n\nNext, we tested if quercetin protects other plant species from paraquat-induced oxidative stress. We chose tobacco, a dicot species distantly related to Arabidopsis, and the aquatic monocot Lemna gibba (duckweed) (Figure 3). Dose-response experiments showed that quercetin counteracts the toxic effects of paraquat in tobacco (Figure 3A and B). Whereas lower doses of quercetin (e.g. 10 µM) did not reverse seedling growth inhibition or chlorophyll loss, seedlings grown on paraquat and higher quercetin doses (e.g. 50 µM and 100 µM) showed no symptoms of toxicity. Seedlings grown on paraquat and the highest tested dose of quercetin (500 µM) remained green but were stunted, suggesting that quercetin concentrations higher than 100 µM are suboptimal for tobacco growth.\n\nA. Tobacco KT204LC seeds were sown and grown on full-strength Murashige and Skoog media containing the denoted doses of PQ and/or Q. Plants were grown for 3 weeks prior to photography. Scale bar: 5 mm. B. Plants shown in A. were used to measure chlorophyll (Chl) content. One-way ANOVA was used to determine the significance of the difference between the PQ-treated sample and the PQ and Q-treated sample. ***, p<0.001. C. Lemna gibba plantlets were transferred to Schenk and Hildebrandt media containing the denoted doses of PQ and/or Q and incubated under continuous light for 5 days prior to photography. The arrowhead points to the newly grown fronds. Scale bar: 5 mm. D. Chl content was measured in all (young and mature) fronds from plantlets shown in C. One-way ANOVA was used to determine the significance of the difference between samples. ***, p<0.001.\n\nWe also observed a protective effect of quercetin against paraquat toxicity in the duckweed Lemna gibba. 
Duckweeds are the smallest, fastest-growing and most morphologically reduced flowering plants36,37. They have a frond (thalloid), no stem and one or more roots. When duckweed plantlets were grown for 36 hours in liquid media with 1 µM paraquat, new fronds emerged as chlorotic (Figure 3C). In contrast, newly grown fronds remained green when plantlets were grown in media containing 1 µM paraquat and 100 µM quercetin. Chlorophyll measurements showed that the overall chlorophyll level in paraquat-treated cultures decreased to ~50% of the control, whereas the chlorophyll level in cultures treated with paraquat and quercetin was the same as in the control plants (Figure 3D).\n\n\nConclusion\n\nHere we have shown that feeding plants with quercetin suppressed paraquat toxicity, indicating that this particular flavonoid and its derivatives have an important role in the protection of plant cells against increased ROS load. We also found that quercetin offers protection against ROS in a range of plant species, from the evolutionarily distant dicots Arabidopsis and tobacco to the monocot and aquatic plant Lemna gibba. Thus, we can conclude that quercetin can be used as a general stress protectant. Considering the relatively low cost of quercetin and the low concentration (100 μM) required for protection against ROS, we propose that the inclusion of quercetin in growth media could be beneficial to promote stress tolerance of agricultural plants grown in tissue or aquaculture.\n\n\nData availability\n\nF1000Research: Dataset 1. Raw data of quercetin feeding protecting plants against oxidative stress, 10.5256/f1000research.9659.d13696438.",
"appendix": "Author contributions\n\n\n\nJ.K., T.E.S. and J.A.S. designed the experiments. J.K. and T.E.S. performed the experiments. All authors analyzed the data, prepared figures and wrote the paper.\n\n\nCompeting interests\n\n\n\nAuthors declare no competing interest.\n\n\nGrant information\n\nThis work was funded by the Kentucky Tobacco Research and Development Center.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nAsada K: Production and scavenging of reactive oxygen species in chloroplasts and their functions. Plant Physiol. 2006; 141(2): 391–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDel Rio LA: ROS and RNS in plant physiology: an overview. J Exp Bot. 2015; 66(10): 2827–37. PubMed Abstract | Publisher Full Text\n\nNoctor G, Foyer CH: ASCORBATE AND GLUTATHIONE: Keeping Active Oxygen Under Control. Annu Rev Plant Physiol Plant Mol Biol. 1998; 49: 249–79. PubMed Abstract | Publisher Full Text\n\nHalliwell B: Reactive species and antioxidants. Redox biology is a fundamental theme of aerobic life. Plant Physiol. 2006; 141(2): 312–22. PubMed Abstract | Publisher Full Text | Free Full Text\n\nApel K, Hirt H: Reactive oxygen species: metabolism, oxidative stress, and signal transduction. Annu Rev Plant Biol. 2004; 55: 373–99. PubMed Abstract | Publisher Full Text\n\nBaxter A, Mittler R, Suzuki N: ROS as key players in plant stress signalling. J Exp Bot. 2014; 65(5): 1229–40. PubMed Abstract | Publisher Full Text\n\nFoyer CH, Noctor G: Redox homeostasis and antioxidant signaling: a metabolic interface between stress perception and physiological responses. Plant Cell. 2005; 17(7): 1866–75. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMittler R, Vanderauwera S, Gollery M, et al.: Reactive oxygen gene network of plants. Trends Plant Sci. 2004; 9(10): 490–8. 
PubMed Abstract | Publisher Full Text\n\nMøller IM, Jensen PE, Hansson A: Oxidative modifications to cellular components in plants. Annu Rev Plant Biol. 2007; 58: 459–81. PubMed Abstract | Publisher Full Text\n\nWrzaczek M, Brosché M, Kangasjärvi J: ROS signaling loops - production, perception, regulation. Curr Opin Plant Biol. 2013; 16(5): 575–82. PubMed Abstract | Publisher Full Text\n\nKnight H, Knight MR: Abiotic stress signalling pathways: specificity and cross-talk. Trends Plant Sci. 2001; 6(6): 262–7. PubMed Abstract | Publisher Full Text\n\nSewelam N, Kazan K, Schenk PM: Global Plant Stress Signaling: Reactive Oxygen Species at the Cross-Road. Front Plant Sci. 2016; 7: 187. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMiller G, Suzuki N, Ciftci-Yilmaz S, et al.: Reactive oxygen species homeostasis and signalling during drought and salinity stresses. Plant Cell Environ. 2010; 33(4): 453–67. PubMed Abstract | Publisher Full Text\n\nPastori GM, Foyer CH: Common components, networks, and pathways of cross-tolerance to stress. The central role of "redox" and abscisic acid-mediated controls. Plant Physiol. 2002; 129(2): 460–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPerez IB, Brown PJ: The role of ROS signaling in cross-tolerance: from model to crop. Front Plant Sci. 2014; 5: 754. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRossel JB, Wilson IW, Pogson BJ: Global changes in gene expression in response to high light in Arabidopsis. Plant Physiol. 2002; 130(3): 1109–20. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWinkel-Shirley B: Biosynthesis of flavonoids and effects of stress. Curr Opin Plant Biol. 2002; 5(3): 218–23. PubMed Abstract | Publisher Full Text\n\nVanderauwera S, Zimmermann P, Rombauts S, et al.: Genome-wide analysis of hydrogen peroxide-regulated gene expression in Arabidopsis reveals a high light-induced transcriptional cluster involved in anthocyanin biosynthesis. Plant Physiol. 
2005; 139(2): 806–21. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDixon RA, Paiva NL: Stress-Induced Phenylpropanoid Metabolism. Plant Cell. 1995; 7(7): 1085–97. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHernández I, Alegre L, Van Breusegem F, et al.: How relevant are flavonoids as antioxidants in plants? Trends Plant Sci. 2009; 14(3): 125–32. PubMed Abstract | Publisher Full Text\n\nPourcel L, Routaboul JM, Cheynier V, et al.: Flavonoid oxidation in plants: from biochemical properties to physiological functions. Trends Plant Sci. 2007; 12(1): 29–36. PubMed Abstract | Publisher Full Text\n\nAgati G, Azzarello E, Pollastri S, et al.: Flavonoids as antioxidants in plants: location and functional significance. Plant Sci. 2012; 196: 67–76. PubMed Abstract | Publisher Full Text\n\nNakabayashi R, Yonekura-Sakakibara K, Urano K, et al.: Enhancement of oxidative and drought tolerance in Arabidopsis by overaccumulation of antioxidant flavonoids. Plant J. 2014; 77(3): 367–79. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGupta SC, Kim JH, Prasad S, et al.: Regulation of survival, proliferation, invasion, angiogenesis, and metastasis of tumor cells through modulation of inflammatory pathways by nutraceuticals. Cancer Metastasis Rev. 2010; 29(3): 405–34. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHalliwell B: Dietary polyphenols: good, bad or indifferent for your health? Cardiovasc Res. 2007; 73(2): 341–7. PubMed Abstract | Publisher Full Text\n\nKurepa J, Toh-E A, Smalle JA: 26S proteasome regulatory particle mutants have increased oxidative stress tolerance. Plant J. 2008; 53(1): 102–14. PubMed Abstract | Publisher Full Text\n\nKurepa J, Karangwa C, Duke LS, et al.: Arabidopsis sensitivity to protein synthesis inhibitors depends on 26S proteasome activity. Plant Cell Rep. 2010; 29(3): 249–59. 
PubMed Abstract | Publisher Full Text\n\nKurepa J, Smalle J, Van Montagu M, et al.: Polyamines and paraquat toxicity in Arabidopsis thaliana. Plant Cell Physiol. 1998; 39(9): 987–92. PubMed Abstract | Publisher Full Text\n\nKurepa J, Smalle J, Van Montagu M, et al.: Oxidative stress tolerance and longevity in Arabidopsis: the late-flowering mutant gigantea is tolerant to paraquat. Plant J. 1998; 14(6): 759–64. PubMed Abstract | Publisher Full Text\n\nKurepa J, Smalle J, Van Montagu M, et al.: Effects of sucrose supply on growth and paraquat tolerance of the late-flowering gi-3 mutant. Plant Growth Regul. 1998; 26(2): 91–6. Publisher Full Text\n\nShirley BW, Kubasek WL, Storz G, et al.: Analysis of Arabidopsis mutants deficient in flavonoid biosynthesis. Plant J. 1995; 8(5): 659–71. PubMed Abstract | Publisher Full Text\n\nPeer W, Brown DE, Tague BW, et al.: Flavonoid accumulation patterns of transparent testa mutants of Arabidopsis. Plant Physiol. 2001; 126(2): 536–48. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMartínez-Sánchez G, Giuliani A, Pérez-Davison G, et al.: Oxidized proteins and their contribution to redox homeostasis. Redox Rep. 2005; 10(4): 175–85. PubMed Abstract | Publisher Full Text\n\nLevine RL, Wehr N, Williams JA, et al.: Determination of carbonyl groups in oxidized proteins. Methods Mol Biol. 2000; 99: 15–24. PubMed Abstract | Publisher Full Text\n\nJung T, Bader N, Grune T: Oxidized proteins: intracellular distribution and recognition by the proteasome. Arch Biochem Biophys. 2007; 462(2): 231–7. PubMed Abstract | Publisher Full Text\n\nWang W, Messing J: Status of duckweed genomics and transcriptomics. Plant Biol (Stuttg). 2015; 17(Suppl 1): 10–5. PubMed Abstract | Publisher Full Text\n\nCui W, Cheng JJ: Growing duckweed for biofuel production: a review. Plant Biol (Stuttg). 2015; 17(Suppl 1): 16–23. 
PubMed Abstract | Publisher Full Text\n\nSmalle J, Kurepa J, Timothy S: Dataset 1 in: Quercetin feeding protects plants against oxidative stress. F1000Research. 2016. Data Source"
}
|
[
{
"id": "16750",
"date": "10 Oct 2016",
"name": "Keith R. Davis",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis manuscript provides solid data indicating that quercetin can protect plants grown in tissue culture or aquaculture conditions from the oxidative stress imposed by paraquat, which interferes with chloroplast electron transport. The experiments are adequately described and scientifically valid, albeit, they are limited to basic assays of plant growth and overall protein oxidation levels. The data shown appear to be of good quality and appropriate statistical analyses have been provided in most cases. The major weaknesses of the manuscript are that the studies are rather preliminary and the major general conclusions are not entirely supported by the data. The manuscript would be strengthened if the authors addressed the following points by providing more data or limiting the conclusions to what is clearly supported by the data:\n\nThe conclusion that quercetin can be used as a general antioxidant stress protectant for plants is too broad and would require additional data to be accurate. First, only one type of oxidative stress was assessed; additional assays compatible with the tissue culture format (e.g. salt, metal, cold stress etc.) are needed to support this generalization. Second, the authors only point out at the end of the Discussion that the use of quercetin as a stress protectant is likely limited to tissue culture or aquaculture of plants. 
This point should be made clear earlier in the manuscript and in the abstract.\n\nIt isn’t clear how quercetin exerts the observed effects. The authors appear to believe that it is due to the well documented antioxidant properties of this flavonoid; however, no data are presented to indicate whether quercetin was taken up by the plants and if so, how the amounts accumulated compared to what is normally found in the test plant species. It is well documented in animal cells that quercetin affects a number of signaling pathways (e.g. JNK and other MAP kinases, Akt etc.) that are conserved in plants. Thus, it is possible that quercetin-induced tolerance may be more complex than simply scavenging ROS.\n\nSeveral minor points that would be helpful to clarify are:\n\nWhat treatments summarized in Figure 1 are statistically significantly different? The authors suggest that the rescue of tt4 plants was not as robust as that observed in wild-type or the other tt mutants tested; however, based on the error bars, it isn’t clear how significant the observed trend is.\n\nWhether the reduction in protein oxidation observed in the acute short-term treatments also occurred in plants treated under the conditions of the growth assays.",
"responses": []
},
{
"id": "18344",
"date": "08 Dec 2016",
"name": "Frank Van Breusegem",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis concise manuscript provides clear evidence that addition of quercetin in growth medium protects plants against Methyl Viologen stress (as monitored by growth phenotypes and decreased protein carbonylation) and that quercetin can alleviate stress sensitivity in flavonoid biosynthesis Arabidopsis mutants. The conclusions are sensible and justified on the basis of the described results.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2430
|
https://f1000research.com/articles/5-2106/v1
|
30 Aug 16
|
{
"type": "Research Article",
"title": "Factors affecting stone free rate of primary percutaneous nephrolithotomy on staghorn calculi: a single center experience of 15 years",
"authors": [
"Widi Atmoko",
"Ponco Birowo",
"Nur Rasyid",
"Nur Rasyid"
],
"abstract": "Objectives: Percutaneous nephrolithotomy on staghorn calculi is challenging for urologists because it is difficult to remove all of the stones. The purpose of this study was to evaluate the associated factors of stone-free rate after primary percutaneous nephrolithotomy on staghorn calculi in a large series of patients at a single, tertiary referral, endourologic stone center. Methods: We collected data from medical record between January 2000 and December 2015. A total of 345 primary percutaneous nephrolithotomy procedures were performed for patients with staghorn calculi. This study included both and made no distinction between partial and complete staghorn calculi. Stone-free is defined as the absence of residual stones after undergoing percutaneous nephrolithotomy for the first time. Significant factors from univariate analysis that correlated with stone-free rate after primary percutaneous nephrolithotomy of staghorn stone were further analyzed using multivariate regression analysis. Results: The mean patient age was 52.23±10.38 years. The stone-free rate of percutaneous nephrolithotomy monotherapy was 62.6%. The mean operating time was 79.55±34.46 minutes. The mean length of stay in hospital was 4.29±3.00 days. Using the chi-square test, history of ipsilateral open renal stone surgery (p = 0.01), stone burden (p = < 0.001), and type of anesthesia (p = 0.04) had a significant impact on the stone-free. From multivariate analysis, the history of ipsilateral open renal stone surgery [OR 0.48; 95% CI 0.28-0.81; p 0.01] and the stone burden [OR 0.28; 95% CI 0.18-0.45; p 0.00] were significant independent risk factors for stone-free.",
"keywords": [
"primary percutaneous nephrolithotomy",
"staghorn calculi",
"stone-free rate"
],
"content": "Introduction\n\nStaghorn calculus are large and branching kidney stones that occupy a large proportion of the renal pelvis and some or all of the renal calices. Surgical treatment of staghorn calculi involves complete stone removal minimising morbidity. Because untreated staghorn calculus have a tendency to destroy the kidney and cause life-threatening urosepsis, the American Urological Association (AUA) recommends to actively treat all newly diagnosed patients1,2. In patients with staghorn calculi who are treated conservatively, the mortality rates have been reported to range around 28% to 47.5%3–6. It is crucial to completely remove all staghorn calculi, because residual stones can form nuclei for stone recurrence (85% recurrence rate) that may lead to infection7.\n\nPercutaneous nephrolithotomy (PCNL) has become the recommended treatment for staghorn calculi as it has stone-free rate three times higher than extracorporeal shock wave lithotripsy (ESWL) and has lower morbidity, shorter length of hospital stay, shorter operating time, and time to return to work faster than open surgery1,8. Nevertheless, the management of staghorn calculi with PCNL remains challenging. Stone-free rates were lower, complications more frequent, and operative time and hospital stay were longer in patients with staghorn stones compared to nonstaghorn stones9.\n\nHowever, PCNL is still the mainstay treatment for staghorn calculi, despite the complete removal of staghorn calculi by PCNL being a high skill-demanding surgical procedure and a challenging task for urologists. In this study, we evaluated the stone-free rate and the factors that influence the effectiveness of primary PCNL performed in our national tertiary referral hospital.\n\n\nMethods\n\nFrom January 2000 to December 2015, the data from 345 patients with staghorn calculi who had undergone PCNL surgery at the Cipto Mangunkusumo Hospital by one of two surgeons (NR and PB) were reviewed. 
This study included both partial and complete staghorn calculi and made no distinction between them. Eligible patients were adults (≥ 18 years old) undergoing PCNL as primary treatment for nephrolithiasis who agreed to enroll by written informed consent. Patients meeting the following criteria were excluded: 1) patients with uncorrected systemic hemorrhagic disease; 2) patients with severe heart disease or pulmonary incompetence who could not undergo the operation; 3) patients with uncontrolled diabetes or hypertension, as well as tuberculosis patients; 4) patients with renal anatomic malformations, such as horseshoe and ectopic kidneys, with coexisting staghorn calculi; 5) patients with lordosis or scoliosis who could not tolerate the prone position; 6) patients with a history of ipsilateral PCNL undergoing secondary or tertiary PCNL. The study protocol was approved by the Ethical Committee, Faculty of Medicine, Universitas Indonesia (No.513/UN2.F1/ETIK/2016).\n\nPreoperative laboratory examinations included urinalysis, urine culture, serum creatinine, and complete peripheral blood count. Plain abdominal radiography of the kidneys, ureters, and bladder (KUB) and intravenous urography (IVU) were the primary radiological investigations. Non-contrast computed tomography (NCCT) was performed for patients with high serum creatinine (>1.6 mg/dL) or those allergic to iodinated contrast. Stone burden was assessed pre-operatively on imaging by multiplying the stone length and width. Patients with urinary tract infections were treated with antibiotics, selected according to the preoperative urine culture, for 5 days prior to PCNL. 
Patients with negative urine cultures received intravenous antibiotics prior to anesthesia.\n\nFollowing anesthesia, patients were placed in the lithotomy position and a 22.5F rigid cystoscope (OLYMPUS) was used to pass a 5F open-end ureteral catheter (Selectip, 62450200; Angiomed, Bard, Murray Hill, NJ) under fluoroscopic guidance into the renal pelvis, to allow injection of contrast material to delineate the intrarenal collecting system. A 16F Foley catheter was inserted into the bladder to provide drainage during the procedure, and the ureteral catheter was fixed to the Foley catheter. The patient was then moved to the prone position and the side of the kidney to be operated on was elevated 30°. Percutaneous puncture to gain access to the kidney was performed under C-arm fluoroscopic control. Calyx puncture was performed through a superior, middle, or inferior calyx using an 18-gauge, diamond-tip needle (Cook Urological, Spencer, IN). The needle was positioned so that the target puncture site, the needle tip, and the base of the needle were aligned. The depth of puncture was controlled using fluoroscopy in the anteroposterior position. After the puncture needle had been confirmed to be in the pelvicalyceal system, a 0.038 guidewire was inserted. The tract was then dilated to 30F using metal dilators (Telescope Bougie Set, 27290A, Karl Storz, Tuttlingen, Germany), a fascial dilator, and malleable dilators (Amplatz Renal Dilator Set, 075000, Cook Urological, Spencer, IN). After inspection with a 24F rigid nephroscope (HOPKINS Wide-Angle Straight Forward Telescope 6°, 27293 AA, Karl Storz, Tuttlingen, Germany), the stone was fragmented by mechanical lithotripsy (Vibrolith, Elmed, Orlando, FL). Stone forceps were used to extract hard stone fragments.\n\nPostoperative imaging was performed 1 or 2 days after PCNL with either kidney-ureter-bladder (KUB) radiography, computed tomography (CT), or antegrade pyelography (APG). 
Stone-free status was defined as the absence of residual stones after the first PCNL. Patients who required additional treatment after their first PCNL, such as secondary PCNL and/or ESWL, were automatically excluded from the stone-free group. We also evaluated the transfusion rate and the incidence of postoperative complications, such as infection, urine leakage from operative wounds, intestinal perforation, and bleeding.\n\nBivariate analysis was performed by correlating the variables with the stone-free rate. Those with a P value <0.25 were further analyzed with multivariate logistic regression. Data were analyzed using the Statistical Package for the Social Sciences, version 17 (SPSS Inc., Chicago, IL). Results were considered significant when P <0.05.\n\n\nResults\n\nFrom January 2000 to December 2015, a total of 345 patients with staghorn calculi underwent primary PCNL procedures at the Cipto Mangunkusumo Hospital. The mean patient age was 52.23±10.38 years. The stone-free rate of PCNL monotherapy, assessed just after the first stage of PCNL, was 62.6%. The mean operating time was 79.55±34.46 minutes. The mean length of hospital stay was 4.29±3.00 days. Perioperative transfusions were performed in 11% of patients (Table 1).\n\nIn the univariate analysis, there was a significant association of history of ipsilateral open renal stone surgery, stone burden, and type of anesthesia with the stone-free rate (p = 0.01; p < 0.001; p = 0.04, respectively). The univariate analyses are presented in Table 2. 
Stepwise multivariate regression analysis, which included variables with p-value < 0.25, showed that stone burden was the most influential predictor of stone-free status (OR 0.28, 95% CI 0.18–0.45, p = 0.00) (Table 3).\n\n*Chi-Square test\n\n\nDiscussion\n\nSince the introduction of PCNL to treat kidney stones, there has been rapid development in the techniques and instruments that can be used to treat staghorn calculi and complex stones. In 1983, Clayman et al. reported the capability and safety of PCNL in treating staghorn calculi10. Currently, PCNL is the preferred treatment option for patients with staghorn calculi, complex stones, and large stones1,11,12. The goal of staghorn calculus treatment is complete stone clearance with minimal morbidity1,11. PCNL in patients with staghorn calculi still represents a procedural challenge, requiring the surgeon to achieve complete removal of the stone while keeping morbidity to a minimum13.\n\nThe stone-free rate after PCNL monotherapy for staghorn calculi is reported to range from 49% to 78%13. In this study, the stone-free rate after PCNL monotherapy was 62.6%. This is higher than the stone-free rate reported by Al-Kohlany et al. (49%)8 because they considered and treated only complete staghorn calculi, whereas we included both partial and complete staghorn calculi without distinction. The stone-free rate in our study was similar to those reported by El-Nahas et al.14 (56.6%) and Desai et al.9 (56.9%), whose inclusion criteria were similar to ours, namely both complete and partial staghorn calculi14. However, the stone-free rate of our study was lower than the 78% reported by Soucy et al.13. 
That study included stones branching into just one calyx (borderline staghorn calculi), found in 67% of their patients, so the majority of patients had a lower stone burden and were easier to treat13.\n\nThe duration of the operation is an important factor in evaluating and comparing procedural techniques15, as the duration of anesthesia and the risk of postoperative pulmonary complications can indirectly affect operative outcomes (amount of blood loss, decrease in hemoglobin, and blood transfusion requirements)16,17 and complications18,19 associated with PCNL. The mean duration of surgery in this study was 79.55±34.46 minutes, with a median value of 60 (range 20–210) minutes. The mean operating time in the study by Huang et al.20 was 63.5±11.8 minutes, with a range of 29–103 minutes. The operating time in that study was shorter because Huang et al. did not use a ureteral catheter or balloon catheter before PCNL. According to Huang et al., direct puncture onto the stone without prior insertion of a ureteral catheter can be performed to save operating time and reduce complications15.\n\nPotentially significant morbidity and even mortality of PCNL have been reported in large-scale studies16,21,22. The kidney stone management panel of the AUA guidelines noted that staghorn calculi carry a complication rate of 7–27% and a transfusion rate reaching 18%1. Previous studies reported that blood transfusion was needed in 14–24% of PCNL procedures for staghorn calculi, depending on the surgical technique, patient population, indications for transfusion, and the surgeon's threshold for transfusion23,24. El-Nahas et al. reported that staghorn calculi are a risk factor for severe bleeding in PCNL25. The bleeding complications in our study that required transfusion were less frequent than previously reported. As shown in Table 1 and Dataset 1, we observed bleeding in 4.9% of cases and perioperative transfusion in 11% of cases. 
Total complications observed in our study amounted to 6.4%.\n\nEl-Nahas et al.26 found an association of stone burden (partial versus complete staghorn calculi) and secondary calyceal stones with the stone-free rate. In our study, no distinction was made between complete and partial staghorn calculi in the data entries, but we divided stone burden into two groups: ≤ 52 mm and > 52 mm. In our multivariate analysis, we found that stone burden was associated with the stone-free rate (OR 0.28; 95% CI 0.18-0.45; p = 0.00). We did not perform S.T.O.N.E. nephrolithometry, which has been found to be a predictor of the stone-free rate after PCNL for staghorn stones27. El-Nahas et al.26 stated that branched stones and secondary stones require multiple access tracts or flexible nephroscopy to achieve stone-free status, but sometimes these techniques are not enough. The surgeon must decide whether to increase the number of PCNL access tracts to remove all residual stones or to treat residual stones with ESWL26. The greater the number of PCNL access tracts, the higher the incidence of bleeding complications16.\n\nIn this study, we found that history of ipsilateral open renal stone surgery was significantly associated with the stone-free rate. This differs from the previous study by Kurtulus et al.28, which compared patients undergoing PCNL for the first time with patients who had a previous history of open renal stone surgery. In patients with a history of open renal stone surgery, infundibular stenosis, perinephric fibrosis, bowel displacement, and incisional hernia are the major factors that should be taken into account by the surgeon29,30. 
As long as the safety rules are strictly followed, PCNL can still be performed with minimal complications and high success rates despite the technical and access difficulties encountered in secondary or tertiary cases due to anatomic positional differences of the kidney and fibrosis, as noted by Kurtulus et al.28. In their study, the residual stone rate was not significantly different between patients with a previous history of ipsilateral open renal stone surgery and patients undergoing PCNL for the first time (5% vs 3%, p>0.05). Kurtulus et al. had difficulty dilating the percutaneous tract in patients with a history of ipsilateral open renal stone surgery. With the help of newly developed high-pressure balloons, fascial dilators, or mechanical dilators, this difficulty in establishing access may be overcome28. Some other studies reported that previous open stone surgery can increase the PCNL failure rate31, while others showed that it does not affect PCNL outcomes32–34.\n\nThe type of anesthesia was not significantly associated with the stone-free rate in our multivariate analysis. This finding is in accordance with other studies. Astram et al. compared 220 PCNL procedures performed under general anesthesia with 540 performed under spinal anesthesia; the stone-free rate in the general anesthesia group (71.37%) was similar to that in the spinal anesthesia group (72.97%; p > 0.05)35. Kuzgunbay et al.36 and Tangpaitoon et al.37 also found that regional (spinal-epidural or epidural) anesthesia is a feasible technique for PCNL, with efficacy and safety comparable to general anesthesia. Selection of anesthesia is important because it can affect the patient's postoperative recovery and informs the urologist's decision to discharge the patient from the hospital safely as soon as possible38. 
In our study, the majority of PCNL procedures were performed under spinal anesthesia (81.4%) and no conversion from spinal to general anesthesia was recorded. The use of spinal anesthesia has been found to reduce the need for postoperative analgesics after PCNL, decrease nausea39, and allow the patient to cooperate during the operation36. General anesthesia, on the other hand, may increase complications in PCNL when the patient changes position40. Additionally, performing PCNL on staghorn calculi under general anesthesia can induce dilutional anemia, hypothermia, and higher blood loss, as well as the possibility of fluid absorption and electrolyte imbalance38. In short, lower analgesic requirements, shorter duration of surgery, well-maintained hemodynamic stability during and after the operation, and faster patient recovery make spinal anesthesia a promising option for most PCNL procedures41.\n\nThis study bears the common problems of retrospective studies, including selection bias and missing important clinical data, such as whether stones were partial or complete staghorn calculi. The results reported here differ from those published by El-Nahas et al26. They found that independent risk factors for residual stones were complete staghorn calculi and the presence of secondary calyceal stones (relative risks 2.2 and 3.1, respectively). In our study, we did not distinguish between partial and complete staghorn calculi, so this type of analysis could not be done. In addition, the low rate of metabolic evaluation in our patients is a weakness of this study, because stone analysis and metabolic tests were not performed routinely in all patients. 
The lack of follow-up data on secondary treatments (such as ESWL, ureterorenoscopy (URS), and secondary PCNL) is also a shortcoming of this study, because those data would have allowed us to analyse the effectiveness of combination therapy with ESWL, the success rate of secondary PCNL, and other therapies.\n\n\nConclusions\n\nPercutaneous nephrolithotomy is the mainstay treatment for staghorn calculi. History of ipsilateral open renal stone surgery and stone burden are prognostic factors determining stone clearance after PCNL for staghorn stones.\n\n\nData availability\n\nF1000Research: Dataset 1. Raw data for Table 1, Table 2, and Table 3 of 'Factors affecting stone free rate of primary percutaneous nephrolithotomy on staghorn calculi: a single center experience of 15 years’, 10.5256/f1000research.9509.d13411742\n\n\nConsent\n\nWritten informed consent to participate in the study and to publish clinical data was obtained from the patients.\n\n\nAbbreviations & acronyms\n\nAPG: Antegrade Pyelography\n\nAUA: American Urological Association\n\nESWL: Extracorporeal Shock Wave Lithotripsy\n\nIVU: Intravenous Urography\n\nKUB: Kidneys, Ureters, and Bladder\n\nNCCT: Non Contrast Computed Tomography\n\nPCNL: Percutaneous Nephrolithotomy\n\nURS: Ureterorenoscopy",
"appendix": "Author contributions\n\n\n\nWA - study concepts, design of study, data acquisition, data interpretation, statistical analysis, manuscript preparation. PB – study concepts, design of study, manuscript review, funds collection. NR - design of study, manuscript review, funds collection. All authors have agreed to publish this final manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nPreminger GM, Assimos DG, Lingeman JE, et al.: Chapter 1: AUA guideline on management of staghorn calculi: diagnosis and treatment recommendations. J Urol. 2005; 173(6): 1991–2000. PubMed Abstract | Publisher Full Text\n\nHealy KA, Ogan K: Pathophysiology and management of infectious staghorn calculi. Urol Clin North Am. 2007; 34(3): 363–74. PubMed Abstract | Publisher Full Text\n\nKoga S, Arakaki Y, Matsuoka M, et al.: Staghorn calculi--long-term results of management. Br J Urol. 1991; 68(2): 122–124. PubMed Abstract | Publisher Full Text\n\nBlandy JP, Singh M: The case for a more aggressive approach to staghorn stones. J Urol. 1976; 115(5): 505–506. PubMed Abstract\n\nWojewski A, Zajaczkowski T: The treatment of bilateral staghorn calculi of the kidneys. Int Urol Nephrol. 1974; 5(3): 249–260. PubMed Abstract | Publisher Full Text\n\nPriestley JT, Dunn JH: Branched renal calculi. J Urol. 1949; 61(2): 194–203. PubMed Abstract\n\nMeng M: Struvite and staghorn calculi. Emedicine Nov 20, 2015. Accessed: August 22, 2016. Reference Source\n\nAl-Kohlany KM, Shokeir AA, Mosbah A, et al.: Treatment of complete staghorn stones: a prospective randomized comparison of open surgery versus percutaneous nephrolithotomy. J Urol. 2005; 173(2): 469–73. 
PubMed Abstract | Publisher Full Text\n\nDesai M, Lisa AD, Turna B, et al.: The clinical research office of the endourological society percutaneous nephrolithotomy global study: staghorn versus nonstaghorn stones. J Endourol. 2011; 25(8): 1263–1268. PubMed Abstract | Publisher Full Text\n\nClayman RV, Surya V, Miller RP, et al.: Percutaneous nephrolithotomy. An approach to branched and staghorn renal calculi. JAMA. 1983; 250(1): 73–5. PubMed Abstract | Publisher Full Text\n\nDesai M, Jain P, Ganpule A, et al.: Developments in technique and technology: the effect on the results of percutaneous nephrolithotomy for staghorn calculi. BJU Int. 2009; 104(4): 542–8; discussion 548. PubMed Abstract | Publisher Full Text\n\nMorris DS, Wei JT, Taub DA, et al.: Temporal trends in the use of percutaneous nephrolithotomy. J Urol. 2006; 175(5): 1731–6. PubMed Abstract | Publisher Full Text\n\nSoucy F, Ko R, Duvdevani M, et al.: Percutaneous nephrolithotomy for staghorn calculi: a single center's experience over 15 years. J Endourol. 2009; 23(10): 1669–73. PubMed Abstract | Publisher Full Text\n\nEl-Nahas AR, Eraky I, Shokeir AA, et al.: Percutaneous nephrolithotomy for treating staghorn stones: 10 years of experience of a tertiary-care centre. Arab J Urol. 2012; 10(3): 324–329. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFalahatkar S, Moghaddam KG, Kazemnezhad E, et al.: Factors affecting operative time during percutaneous nephrolithotomy: our experience with the complete supine position. J Endourol. 2011; 25(12): 1831–1836. PubMed Abstract | Publisher Full Text\n\nKukreja R, Desai M, Patel S, et al.: Factors affecting blood loss during percutaneous nephrolithotomy: Prospective study. J Endourol. 2004; 18(8): 715–722. PubMed Abstract | Publisher Full Text\n\nAkman T, Binbay M, Sari E, et al.: Factors affecting bleeding during percutaneous nephrolithotomy: Single surgeon experience. J Endourol. 2011; 25(2): 327–333. 
PubMed Abstract | Publisher Full Text\n\nde la Rosette JJ, Zuazu JR, Tsakiris P, et al.: Prognostic factors and percutaneous nephrolithotomy morbidity: a multivariate analysis of a contemporary series using the Clavien classification. J Urol. 2008; 180(6): 2489–2493. PubMed Abstract | Publisher Full Text\n\nOzden E, Mercimek MN, Yakupoglu YK, et al.: Modified Clavien classification in percutaneous nephrolithotomy: Assessment of complications in children. J Urol. 2011; 185(1): 264–268. PubMed Abstract | Publisher Full Text\n\nHuang SW, Chang CH, Wang CJ: Percutaneous nephrolithotomy for the treatment of complete staghorn stones. JTUA. 2005; 16: 169–173. Reference Source\n\nMichel MS, Trojan L, Rassweiler JJ: Complications in percutaneous nephrolithotomy. Eur Urol. 2007; 51(4): 899–906. PubMed Abstract | Publisher Full Text\n\nDuvdevani M, Razvi H, Sofer M, et al.: Third prize: contemporary percutaneous nephrolithotripsy: 1585 procedures in 1338 consecutive patients. J Endourol. 2007; 21(8): 824–9. PubMed Abstract | Publisher Full Text\n\nKoga S, Arakaki Y, Matsuoka M, et al.: Staghorn calculi--long-term results of management. Br J Urol. 1991; 68(2): 122–124. PubMed Abstract | Publisher Full Text\n\nStoller ML, Wolf JS Jr, St Lezin MA: Estimated blood loss and transfusion rates associated with percutaneous nephrolithotomy. J Urol. 1994; 152(6 Pt 1): 1977–1981. PubMed Abstract\n\nEl-Nahas AR, Shokeir AA, El-Assmy AM, et al.: Post-percutaneous nephrolithotomy extensive hemorrhage. A study of risk factors. J Urol. 2007; 177(2): 576–9. PubMed Abstract | Publisher Full Text\n\nel-Nahas AR, Eraky I, Shokeir AA, et al.: Factors affecting stone-free rate and complications of percutaneous nephrolithotomy for treatment of staghorn stone. Urology. 2012; 79(6): 1236–1241. PubMed Abstract | Publisher Full Text\n\nSfoungaristos S, Gofrit ON, Pode D, et al.: Percutaneous nephrolithotomy for staghorn stones: Which nomogram can better predict postoperative outcomes? World J Urol. 
2016; 34(8): 1163–8. PubMed Abstract | Publisher Full Text\n\nKurtulus FO, Fazlioglu A, Tandogdu Z, et al.: Percutaneous nephrolithotomy: primary patients versus patients with history of open renal surgery. J Endourol. 2008; 22(12): 2671–5. PubMed Abstract | Publisher Full Text\n\nShah HN, Mahajan AP, Hegde SS, et al.: Tubeless percutaneous nephrolithotomy in patients with previous ipsilateral open renal surgery: a feasibility study with review of literature. J Endourol. 2008; 22(1): 19–24. PubMed Abstract | Publisher Full Text\n\nMargel D, Lifshitz DA, Kugel V, et al.: Percutaneous nephrolithotomy in patients who previously underwent open nephrolithotomy. J Endourol. 2005; 19(10): 1161–1164. PubMed Abstract | Publisher Full Text\n\nJones DJ, Russell GL, Kellett MJ, et al.: The changing practice of percutaneous stone surgery. Review of 1000 cases 1981-1988. Br J Urol. 1990; 66(1): 1–5. PubMed Abstract | Publisher Full Text\n\nBasiri A, Karrami H, Moghaddam SM, et al.: Percutaneous nephrolithotomy in patients with or without a history of open nephrolithotomy. J Endourol. 2003; 17(4): 213–216. PubMed Abstract | Publisher Full Text\n\nSofikerim M, Demirci D, Gülmez I, et al.: Does previous open nephrolithotomy affect the outcome of percutaneous nephrolithotomy? J Endourol. 2007; 21(4): 401–403. PubMed Abstract | Publisher Full Text\n\nReddy SV, Shaik AB: Outcome and complications of percutaneous nephrolithotomy as primary versus secondary procedure for renal calculi. Int Braz J Urol. 2016; 42(2): 262–269. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAstram A, Birowo P, Rasyid N, et al.: Success of percutaneous nephrolithotomy: comparing spinal anesthesia with general anesthesia. Indonesian Journal of Urology. 2015; 22(2): 5–9. Reference Source\n\nKuzgunbay B, Turunc T, Akin S, et al.: Percutaneous nephrolithotomy under general versus combined spinal-epidural anesthesia. J Endourol. 2009; 23(11): 1835–8. 
PubMed Abstract | Publisher Full Text\n\nTangpaitoon T, Nisoog C, Lojanapiwat B: Efficacy and safety of percutaneous nephrolithotomy (PCNL): a prospective and randomized study comparing regional epidural anesthesia with general anesthesia. Int Braz J Urol. 2012; 38(4): 504–511. PubMed Abstract | Publisher Full Text\n\nRozentsveig V, Neulander EZ, Roussabrov E, et al.: Anesthetic considerations during percutaneous nephrolithotomy. J Clin Anesth. 2007; 19(5): 351–5. PubMed Abstract | Publisher Full Text\n\nAndreoni C, Olweny EO, Portis AJ, et al.: Effect of single-dose subarachnoid spinal anesthesia on pain and recovery after unilateral percutaneous nephrolithotomy. J Endourol. 2002; 16(10): 721–725. PubMed Abstract | Publisher Full Text\n\nBasiri A, Mehrabi S, Kianian H: Blind puncture in comparison with fluoroscopic guidance in percutaneous nephrolithotomy: a randomized controlled trial. Urol J. 2007; 4(2): 79–83; discussion 83–5. PubMed Abstract\n\nMovasseghi G, Hassani V, Mohaghegh MR, et al.: Comparison between spinal and general anesthesia in percutaneous nephrolithotomy. Anesth Pain Med. 2014; 4(1): e13871. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAtmoko W, Birowo P, Rasyid N: Dataset 1 in: Factors Affecting Stone Free Rate of Primary Percutaneous Nephrolithotomy on Staghorn Calculi: A Single Center Experience of 15 Years. F1000Research. 2016. Data Source"
}
|
[
{
"id": "15960",
"date": "09 Sep 2016",
"name": "Frederick Singer",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a large retrospective analysis of the recurrence rate over 15 years of staghorn calculi after removal of staghorn calculi by percutaneous nephrolithotomy which examines the factors which appear to influence the recurrence rate. Univariate analysis indicated that a history of ipsilateral open renal stone surgery, stone burden and the type of anesthesia were significant factors in determining a stone-free outcome whereas multivariate analysis eliminated the type of anesthesia as a risk factor. The authors pointed out that they did not have adequate data concerning the metabolic factors that could have made a significant contribution to the recurrence rate. Nevertheless this is an otherwise worthwhile contribution to understanding the appropriate treatment of staghorn calculi.",
"responses": []
},
{
"id": "15959",
"date": "16 Sep 2016",
"name": "Manint Usawachintachit",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors present a retrospective study looking for associated factors of stone free rate following PCNL on staghorn stones. The primary endpoint was stone free status evaluated at an early postoperative period. It’s interesting data in that the number of procedures is high (345).\n\nOverall, the paper was written concisely with a good methodology in research design and statistical analysis. I would recommend this article is indexed after addressing some issues enumerated below:\n\nIntroduction\nNo specific comments.\n\nMaterial and Methods\nThis is a retrospective study over a time period of 15 years. The authors didn’t mention about the surgeon’s level of experience so I wonder if there will be any effect on stone free rate by the surgeons’ learning curve. Stone free status was a primary endpoint and a main focus in this study. However, it was evaluated by either plain KUB radiograph, CT scan, or antegrade pyelography. Could there be any bias based on these images, since we probably missed 3-4 mm residual fragment on a plain KUB film?\n\nAdditionally, it could be difficult to evaluate stone free status accurately with plain KUB film in an early postoperative period. Fluid leakage around the kidney may obscure residual fragments.\n\nResults\nIn Table 2, data regarding Number of PCNL access was duplicated with the above row. How would nephrostomy tube size correlate to stone free status? 
I think the tube size most likely depends on surgeon’s preference and other factors such as bleeding or requirement for postoperative drainage.\n\nDiscussion\nNo specific comments.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2106
|
https://f1000research.com/articles/5-2103/v1
|
30 Aug 16
|
{
"type": "Research Article",
"title": "Whose sample is it anyway? Widespread misannotation of samples in transcriptomics studies",
"authors": [
"Lilah Toker",
"Min Feng",
"Paul Pavlidis",
"Min Feng"
],
"abstract": "Concern about the reproducibility and reliability of biomedical research has been rising. An understudied issue is the prevalence of sample mislabeling, one impact of which would be invalid comparisons. We studied this issue in a corpus of human transcriptomics studies by comparing the provided annotations of sex to the expression levels of sex-specific genes. We identified apparent mislabeled samples in 46% of the datasets studied, yielding a 99% confidence lower-bound estimate for all studies of 33%. In a separate analysis of a set of datasets concerning a single cohort of subjects, 2/4 had mislabeled samples, indicating laboratory mix-ups rather than data recording errors. While the number of mixed-up samples per study was generally small, because our method can only identify a subset of potential mix-ups, our estimate is conservative for the breadth of the problem. Our findings emphasize the need for more stringent sample tracking, and that re-users of published data must be alert to the possibility of annotation and labelling errors.",
"keywords": [
"Transcriptomics",
"misannotation",
"reproducibility",
"data quality",
"gene expression",
"mislabeling"
],
"content": "Introduction\n\nRecent years have seen an increase in concern about the quality of scientific research, along with efforts to improve reliability and reproducibility1,2. These issues are highly relevant to genomics studies, which deal with complex and often weak signals measured genome-wide. In transcriptomics studies (our focus here), mRNA is extracted from samples and processed using microarrays or RNA-seq, followed by statistical analysis to identify patterns of interest (e.g. differential expression). Much work has been done to raise awareness of technical issues in such studies such as RNA quality3 and batch effects4 and many investigators are aware of the need to address them5. Alongside, a great effort was put into establishing guidelines for annotation standards of expression data into public repositories6.\n\nA key step in many scientific experiments, which has received less attention, is the importance of maintaining an accurate correspondence between the experimental conditions or sources of the samples and the eventual data. Simply put, for the analysis to be valid, the samples must not be mixed up. If mix-ups are present but undetected, the conclusions of the analysis might be affected and pollute the literature, as well as create a lurking problem for those who re-use the data.\n\nThe obviousness of the need to avoid mix-ups suggests that investigators should be well aware of the risk, and take steps to reduce it, such as careful bookkeeping (e.g., permanent sample tube labels matched to data files). However, we recently became concerned that mix-ups might not be rare. Our concerns came to a head when we reanalyzed four publically available datasets of Parkinson’s disease subjects7. As part of our quality checks of the data, we examined expression levels of sex-specific genes (genes expressed only in males or in females), and compared these with the corresponding subject sex meta-data annotations from each of the papers. 
To our surprise, we found discordance between the sex predicted based on expression levels of sex-specific genes and the manuscript-annotated sex in two out of the four datasets7 (Supplementary Figure S1). This finding, and other anecdotal observations, led us to examine this issue more broadly.\n\nSex-specific genes are well-suited for this purpose. In genetics studies, genotypes of the sex chromosome are routinely used to identify mislabeled samples8,9; moreover, a sex check is a built-in option in some dedicated software10. Given that genetic abnormalities resulting in disagreement between genotypic and phenotypic sex are rare11, any disagreements are very likely to stem from errors and may also be indicative of other dataset quality issues. Using such genes for quality checks of transcriptome data is not widespread practice, but it is well known that several X- and Y-linked genes show sex-specific patterns of expression. A limitation of this approach is that mix-ups that do not yield conflicting sex labels (e.g., swapping two female samples) cannot be detected. But at the very least the sex-specific-gene-based approach can provide a lower bound for the amount of mix-ups, and if any are detected it should trigger a reassessment of the tracking of all samples in the study.\n\nIn this study, we focused on publicly available human expression profiling experiments that included individuals of both sexes. To our surprise, we found strong evidence of mix-ups in nearly half of them. Importantly, for the vast majority of the studies we were able to validate that the disagreement between metadata- and gene-based sex is present in the original manuscript. This indicates that the disagreements are not a result of erroneous sex description during data submission to the public repository. An additional 10% of the studies have samples of ambiguous gene-based sex, suggesting the possibility of samples being mistakenly combined or other quality problems. 
While it is possible that a small number of the cases we identify are due to sex chromosome abnormalities, we regard the most likely explanation for most to be laboratory mix-ups or errors in the meta-data annotations. Our findings suggest a widespread quality control issue in transcriptomics studies.\n\n\nMethods\n\nExcept where mentioned, data analysis was performed using the R/Bioconductor environment12,13. Source code for the analysis is available in a Github repository (https://github.com/min110/mislabeled.samples.identification). The archived version of the code at the time of publication can be accessed through Zenodo (mislabeled.samples.identification: doi:10.5281/zenodo.60313).\n\nWe identified datasets containing sex information as experimental factors by searching the Gemma database14. Out of an initial 121 datasets we focused on 79 studies run on the Affymetrix HG-U133_Plus_2 and HG-U133A platforms, as they have the same sex marker genes (GEO platform identifiers GPL570 and GPL96 respectively). The annotations in Gemma, which originate from GEO sample descriptions augmented with manual annotation, were re-checked against GEO, resulting in the correction of errors for 14 samples. Datasets that contained samples of only one sex, represented data from sex-specific tissues (e.g. ovary or testicle), or contained numerous missing values were excluded (nine datasets). A final set of 70 studies (a total of 4160 samples) met the criteria. Table 1 summarizes the data included, and full details of each study are in Supplementary Table S1. Whenever possible, data were reanalyzed from .CEL files. 
The signals were summarized using the RMA method from the Affymetrix “power tools” (http://media.affymetrix.com/partners_programs/programs/developer/tools/powertools.affx), log2 transformed and quantile normalized as part of the general Gemma pre-processing pipeline.\n\nUnclassified samples are samples with disagreement between their classification using k-means clustering and the median expression of the sex-specific probesets. Datasets were considered as “correctly annotated” only if they did not contain mismatched or unclassified samples. Eight of the datasets contained both mismatched samples and unclassified samples.\n\nProbeset selection: The male-specific genes KDM5D and RPS4Y1 are represented by a single probeset on both platforms included in our analysis. XIST is represented by two probesets on the GPL96 platform and by seven probesets on the GPL570 platform. With the exception of the 221728_x_at probeset, XIST probesets were highly correlated with each other, and negatively correlated with the KDM5D and RPS4Y1 expression in all of the datasets analyzed (Supplementary Figure S3). The poor-performing XIST probeset (221728_x_at) was excluded from further analysis. The final set was four probesets for GPL96 and eight probesets for GPL570.\n\nAssigning gene-based (biological) sex to samples: The expression data for the selected sex markers were extracted from the normalized data for each dataset. For each of these small expression matrices, we applied standard k-means clustering (using the “kmeans” function from the “stats” package in R15) to classify the samples into two clusters. We assigned the two clusters as “male” or “female”, based on the centroid values of each of the probesets: specifically, the cluster with higher XIST probeset centroid values and lower KDM5D and RPS4Y1 centroid values was assigned as the “female” cluster. 
To identify samples with ambiguous sex, we calculated the difference between the median expression level of the XIST probesets and the median expression level of the KDM5D and RPS4Y1 probesets. We compared this difference with the cluster-based sex, and validated that the difference is positive for samples assigned as females and negative for samples assigned as males. We excluded 34 samples that showed disagreement in this comparison since they could not provide a conclusive result for the gene-expression-based sex. We note that 12 (35%) of these would have been assigned to a cluster contradicting their annotated sex if we had retained them.\n\nManual validation of the discrepancy between the gene-based sex and the meta-data-based sex: For all the cases where a discrepancy was found between the gene-expression-based sex and the meta-data-based sex, we manually examined the original studies to check if the mismatch was due to incorrect annotation of the sample during the data upload to GEO, or was present in the original paper. Since most of the manuscripts only contain summary statistics of the demographic data (13/31, Supplementary Table S2), direct sample-by-sample validation was not possible for most studies. For these studies we used the highest-resolution group summary statistics provided in the publication to validate that the data in the paper corroborate the data in GEO. 
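The sex-assignment procedure described above (k-means into two clusters, with the higher-XIST/lower-Y-marker cluster called female, followed by the median-difference ambiguity screen) can be sketched as follows. The authors used R's "kmeans"; this is only an illustrative pure-Python re-implementation operating on hypothetical (XIST, Y-marker) median expression pairs, not the study's actual code.

```python
import random

def two_means(points, iters=100, seed=0):
    """Minimal Lloyd's algorithm (k=2) on (xist, y_marker) expression pairs."""
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(points, 2)]
    for _ in range(iters):
        groups = ([], [])
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            groups[d.index(min(d))].append(p)
        new = [
            [sum(v) / len(g) for v in zip(*g)] if g else c
            for g, c in zip(groups, centers)
        ]
        if new == centers:  # converged
            break
        centers = new
    labels = [
        min(range(2), key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
        for p in points
    ]
    return centers, labels

def gene_based_sex(samples):
    """samples: list of (median XIST, median KDM5D/RPS4Y1) log2 expression values.
    Returns 'F'/'M' per sample, or None when the sign of the XIST-minus-Y-marker
    difference disagrees with the cluster call (the paper's ambiguity screen)."""
    centers, labels = two_means(samples)
    # the cluster with higher XIST and lower Y-marker centroid is 'female'
    female = max(range(2), key=lambda i: centers[i][0] - centers[i][1])
    calls = []
    for (xist, y), lab in zip(samples, labels):
        call = 'F' if lab == female else 'M'
        if (xist - y > 0) != (call == 'F'):  # inconsistent -> unclassified
            call = None
        calls.append(call)
    return calls
```

Comparing these calls against the annotated metadata sex would then flag mismatched samples, as in the study.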
In addition, for all of the datasets with mismatched samples, we manually evaluated the expression values of the relevant probesets using the GEO2R tool on the GEO website.\n\nConfidence interval estimate for population proportion of studies with misannotated samples: We used the properties of the binomial distribution to compute the confidence interval for the population estimate of affected data sets using the “qbinom” function in R.\n\nAnalysis of Stanley Foundation datasets: CEL files and sample metadata were downloaded directly from the Stanley Medical Research Institute genomic database (https://www.stanleygenomics.org/stanley/). CEL files were pre-processed, quantile normalized and log2 transformed using the rma function from the “affy” package in R Bioconductor12,13.\n\n\nResults\n\nWe identified a corpus of 70 human gene expression studies that had sample sex annotation (4160 samples in total) run on two platforms. We developed a simple robust method for classifying samples by sex based on three sex specific genes – XIST, RPS4Y1 and KDM5D. XIST (X-inactive specific transcript) is expressed from the inactive X chromosome and acts to silence its expression and thus, is only expressed in female subjects. KDM5D (Lysine (K)-Specific Demethylase 5D) and RPS4Y1 (Ribosomal Protein S4, Y-Linked 1) are both located on the Y chromosome, and thus are only expressed in male subjects. Although additional sex-specific genes exist, we determined that KDM5D, RPS4Y1 and XIST are the only sex-specific genes consistently showing high expression levels in the associated sex in all tissues. Our method assigns a predicted sex based on gene expression to each sample, which we refer to as “gene-based sex” (see Methods). We also performed a second analysis to identify samples where a gene-based sex could not be confidently assigned. 
Such samples might reflect technical problems, but could also be due to true biological effects; for example, XIST expression is altered in some cancers and in early stages of development16. We then compared gene-based sex to the sex according to the provided sample annotations (“meta-data-based sex”) for the 70 studies, seeking samples with disagreements. Figure 1 shows examples of studies with no discrepant samples (1A) and with discrepancies (1B). Similar plots for all datasets analyzed are shown in Supplementary Figure S2. All calls of discrepant or ambiguous sex were followed by manual confirmation.\n\nExpression level of probesets representing the XIST (red), KDM5D (black) and RPS4Y1 (blue) genes. “MetaFemale” and “MetaMale” indicate the meta-data annotated sex of the samples and their total number in brackets. The “M” and “F” along the X axis indicate the gene-based sex of the samples, as determined by k-means clustering. Log2-transformed expression levels are plotted. (a) Representative dataset with no mismatched samples. (b) Representative dataset with two mismatched samples (highlighted with grey bars). Gene-based sex that contradicts the annotated sex of the sample is highlighted in bold at bottom.\n\nThe heatmap represents z-transformed expression values of KDM5D, RPS4Y1 and XIST probesets in four datasets of microarray data from the Stanley Array Collection cohort of subjects. The datasets are designated Study1 AltarA, Study3 Bahn, Study5 Dobrin, Study7 Kato, in correspondence to their names on the Stanley collection site. Each column represents a subject and each row represents a probeset. The four studies are represented on the left color bar on the side of the heatmap. The gene names corresponding to each probeset are shown by the right color bar on the side of the heatmap. Three of the studies (AltarA, Bahn and Kato) were performed on the GPL96 platform, on which XIST is represented by two probesets. 
The Dobrin dataset is on the GPL570 platform, which contains five additional XIST probesets, one of which was removed from the analysis. The annotated sex of each subject (metadata gender) is represented by the top color bar (females – pink, males – purple). Missing samples (samples that were excluded from the original studies) are shown in grey. Arrows point to the mismatched samples.\n\nWe found samples with a discrepancy between the meta-data sex information and the gene-based sex in 32/70 (46%) of the datasets (ambiguous samples excluded; summarized in Table 1; details in Supplementary Table S2). Although datasets containing mismatched samples were more prevalent among cancer datasets (53% vs 44%, cancer vs. non-cancer, respectively), the proportion of mismatched samples was similar in cancer and non-cancer samples (2.04% vs 1.97%; Table 1). This discrepancy might be explained by the on-average higher number of samples in cancer datasets from our corpus (Supplementary Table S1). As expected, the proportion of samples with ambiguous gene-based sex was much higher in cancer as compared to non-cancer samples: 23/1223 (1.97%) in cancer vs. 11/2937 (0.4%) in non-cancer (Table 1). In total, 34 samples were flagged as ambiguous, though we note that 12/34 (35%) would have been assigned to the discrepant sex by our method. Ambiguous samples were found in 15/70 (21%) of the studies (eight of which also contained mismatched samples).\n\nBecause the sex annotations we used up to this point were obtained from the sample descriptions in GEO, there was a possibility that the discrepancies we identified were due to mistakes introduced during the communication of the data from the submitter to GEO. If this was the case, the results in the original publication (29/31 of the affected studies had an associated publication) would be unaffected, though users of the GEO data would still be affected. 
To check this possibility, we went back to the 29 original publications to see if the sex labels provided in the paper matched those in GEO (detailed in Supplementary Table S2). This check was not always possible because many publications did not provide detailed meta-data in the paper or Supplementary materials; GEO provides the only record. In 12/29 cases, sufficient detail was provided for us to confirm that the discrepant sex labels were present in the publication, and in all of them there was agreement between the meta-data in the publication and the meta-data in GEO. In 13 cases only summaries were given in the publication (e.g. “10 males and nine females in group X”). In 10 of these 13 studies, the summary counts in the publication agree with GEO. In the other three, both GEO and gene-based totals disagree with the publication-based totals. In other words, there seems to have been miscommunication with GEO in addition to a sex annotation discrepancy in the original study report. Finally, for four datasets meta-data was not provided or ambiguously described in the paper. We failed to find any unambiguous case in which we would infer the only problem was a miscommunication with GEO.\n\nThe analysis presented cannot distinguish between actual sample mix-ups (e.g., tube swaps) and errors in the meta-data (incorrect recording of the subject’s sex). Fortuitously, we identified data sets where it can be determined that at least in some cases, samples were probably physically mixed up. In addition to the 70 datasets used above, we analyzed four datasets that all used human brain RNA from the same collection of subjects (Stanley Medical Research Institute, Array Collection, https://www.stanleygenomics.org/stanley/). In this case the meta-data is common across the four laboratories since they are all analyzing the same individuals (though not all studies analyzed all the individuals). 
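The reasoning used to separate the two explanations (a metadata error should reproduce in every study of the shared cohort, while a laboratory mix-up should appear in only some studies) can be illustrated with a small helper. This is a hypothetical sketch on made-up subject identifiers, not code from the study:

```python
def classify_discrepancies(studies):
    """studies maps a study name to (sampled_subjects, mismatched_subjects),
    where mismatched_subjects are those whose gene-based sex disagrees with
    the shared cohort metadata. A subject discrepant in every study that
    assayed it points to a metadata error; a subject discrepant in only some
    studies points to a laboratory mix-up in those studies."""
    all_subjects = set().union(*(sampled for sampled, _ in studies.values()))
    verdicts = {}
    for subj in sorted(all_subjects):
        # mismatch status in each study that actually sampled this subject
        hits = [subj in mism for sampled, mism in studies.values() if subj in sampled]
        if any(hits):
            verdicts[subj] = 'metadata error' if all(hits) else 'laboratory mix-up'
    return verdicts
```

In the Stanley comparison, the mismatched subjects differed between datasets and appeared correctly annotated elsewhere, i.e. the "laboratory mix-up" pattern.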
If the meta-data is incorrect, then all of the studies should show discrepancies for the same samples. If the samples were mixed up in a particular laboratory (or by the sample provider at the time they were sent to the laboratory), each study would have different discrepancies. We found that out of the four available datasets with data corresponding to the same subjects, two datasets contained mismatched samples (a single mismatched sample was identified in the “AltarA” study, and five in the “Dobrin” study; Figure 2). Importantly, the mismatched subjects differed between the datasets and samples from the same subjects appeared as correctly annotated in the other datasets. This suggests that the mismatched cases are likely to represent mislabeled samples rather than mistakes taking place during the recording of the subjects’ sex.\n\nWe were surprised that nearly 50% of studies had at least one labeling error, and were concerned that this might be an overestimate by chance, due to sampling error. To address this we computed confidence intervals for our estimate of the fraction of affected studies, yielding a 95%-confident lower bound of 36% and a 99% lower bound of 33% (upper confidence bounds were 56% and 60% respectively). We also note that our independent observations of 2/4 datasets containing misannotations described in Santiago et al.17 and in 2/4 of the Stanley data set are in agreement with a relatively high estimate. Thus we project that, with 99% certainty, if all expression studies in GEO could be checked for mix-ups based on sex-specific genes, the fraction affected would be at least 33%.\n\n\nDiscussion\n\nUsing a simple approach to compare sample annotations for sex to expression patterns, we found that nearly 50% of datasets we checked contain at least one discrepancy. 
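The confidence bounds reported above were computed by the authors with R's qbinom; a sketch of one way such a bound can be obtained is shown below, using a one-sided binomial quantile inversion on the observed 32 affected studies out of 70. The grid scan, step size, and function names are illustrative choices of this sketch, not the authors' exact procedure.

```python
from math import comb

def binom_quantile(q, n, p):
    """Smallest x with P(X <= x) >= q for X ~ Binomial(n, p), like R's qbinom."""
    cdf = 0.0
    for x in range(n + 1):
        cdf += comb(n, x) * p ** x * (1 - p) ** (n - x)
        if cdf >= q:
            return x
    return n

def lower_bound(k, n, conf, step=0.001):
    """Smallest population proportion p (on a grid) at which observing k or
    more affected studies out of n is still within the conf-level quantile."""
    p = step
    while p < 1:
        if binom_quantile(conf, n, p) >= k:
            return p
        p += step
    return 1.0

# one-sided lower bounds; the paper reports 36% (95%) and 33% (99%)
lb95 = lower_bound(32, 70, 0.95)
lb99 = lower_bound(32, 70, 0.99)
```

The higher the demanded confidence, the lower the bound that can be defended, so lb99 is below lb95, and both sit below the observed 32/70 proportion.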
Our findings are also in general agreement with another study that examined this issue in cancer datasets18, although in cancer there is an expectation of more ambiguity of sex marker expression16. In the case of the Stanley brain datasets, we could determine that the problem is likely to stem from laboratory mix-ups rather than an error in recording the subject’s sex. While our analysis is limited to a corpus of studies where sex information was available along with the presence of good markers on the microarray platform, our data suggest a widespread problem.\n\nWhat is the impact of this issue? Viewed optimistically, a single mixed-up sample is not likely to dramatically affect the conclusions of a well-powered study. In addition, our analysis suggests a lower (99% confident) estimate of “only” 33% of studies with a sex mislabeling, which might provide a small amount of comfort to optimists – it could be worse. However, the sample mislabeling we identified might be the tip of the iceberg, because sex-specific genes can only reveal mixed-up samples with differing sex. We also suggest that sample mix-ups might correlate with other quality problems. Indeed, many of the misannotated datasets we found have additional issues such as undocumented batch effects, outlier samples, other apparent sample misannotations (not sex-related), and discordance in sample descriptions reported in different parts of the relevant publication (Supplementary Table S2). The presence of samples with ambiguous gene-based sex in non-cancer samples is suggestive of even more quality problems. This is because expression patterns of sex-specific genes could be treated as a positive quality control for the expression data as a whole, serving as indicators for the reliability of other gene signals. Deviations from the expected pattern might indicate samples were mixed together, or suggest problems with RNA quality.\n\nOur conclusions are two-fold. 
First, there is an alarming degree of apparent mislabeling of samples in the transcriptomics literature. In at least the specific cases we identified, the trust in the reliability of the findings reported is certainly not improved. Second, because it is simple to check the expression patterns of sex markers, the tests we performed should become a routine part of all omics studies where sex can be inferred from the data.\n\n\nData and software availability\n\nAll data analyzed in this manuscript were previously published and can be accessed through the GEO repository (http://www.ncbi.nlm.nih.gov/gds) using accession numbers indicated in Table S1.\n\nStanley Medical Research Institute data can be accessed through https://www.stanleygenomics.org/stanley/.\n\nSource code for the analysis is available in a Github repository (https://github.com/min110/mislabeled.samples.identification).\n\nThe archived version of the code at the time of publication can be accessed through Zenodo (mislabeled.samples.identification: doi:10.5281/zenodo.60313)19",
"appendix": "Author contributions\n\n\n\nLT and PP conceived the study idea. MF and LT performed the analyses, LT and PP prepared the manuscript. All the authors agreed to the final content of the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe research was supported by grants to PP from NIH (GM076990) and the NeuroDevNet Network of Centres of Excellence, and the UBC Graduate program in Genome Sciences and Technology (MF).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe thank Patrick Tan for assistance with Gemma query and Ben Callaghan for assistance with the Stanley data sets.\n\n\nSupplementary material\n\nTable S1. Description of datasets included in the study.\n\nClick here to access the data.\n\nTable S2. Detailed description of all datasets with mislabeled samples.\n\nClick here to access the data.\n\nSantiago and Potashkin included four datasets in their metaanalysis. When available (three out of the four datasets) we used sample characteristics provided in the associated manuscripts to identify existence of mislabelled samples. Gene-based males are defined by high RPS4Y1 and low XIST expression. Cont – control subjects (green), PD – Parkinson’s disease (orange). In brackets, the corresponding number of females (F) and males (M) reported in the original manuscript. XIST and RPS4Y1 genes were present in datasets GSE22491, but only RPS4Y1 was present in GSE18838. (a) Based on the sex-genes expression, dataset GSE22491 contains at two 2 mislabelled samples. Of notice, in the pooled sample (indicated by an arrow) containing equal amount of males and females, the two genes are expressed at similar levels. (b) This is the only dataset for which sex of individual samples was available on GEO. Red – GEO annotated females, blue – GEO annotated males. 
Based on the manuscript’s sample characteristics there should be 8F, 3M controls, and 2F, 15M PD. However, the metadata provided on GEO describe 5F, 6M controls, and 4F, 13M PD. Both of these annotations disagree with the gene-based sex of the samples (Cont – 8F, 3M, PD – 5F, 12M).\n\nFigure S2. Expression of probesets corresponding to the sex-specific genes XIST, KDM5D and RPS4Y1 in datasets analyzed in the current study.\n\nEach plot represents a separate dataset. The mismatched samples are highlighted in grey. For presentation purposes, samples with undetermined gene-based sex were excluded. Each point represents a value of a single probeset in one sample. XIST – filled red circles, KDM5D – filled black circles, RPS4Y1 – open black circles. X axis shows the predicted gene-based sex of each sample.\n\nClick here to access the data.\n\nProbeset-probeset Pearson correlation of probesets corresponding to XIST, KDM5D and RPS4Y1 genes from all the datasets in the current study. High correlation was observed between the two probesets corresponding to male-specific genes. Six out of the seven XIST probesets showed very high positive correlation with each other, and high negative correlation with probesets corresponding to the male genes. Probeset 243712_at showed low positive correlation with other XIST probesets and relatively low negative correlation with male-specific probesets and thus was excluded from the analysis.\n\nSupplementary references\n\n1. Santiago JA, Potashkin JA: Network-based metaanalysis identifies HNF4A and PTBP1 as longitudinally dynamic biomarkers for Parkinson’s disease. Proc Natl Acad Sci U S A. 112, 2257–2262 (2015).\n\n2. Toker L, Pavlidis P: Metaanalysis of flawed expression profiling data leading to erroneous Parkinson’s biomarker identification. Proc Natl Acad Sci U S A. 112, E3637 (2015).\n\n\nReferences\n\nAllison DB, Brown AW, George BJ, et al.: Reproducibility: A tragedy of errors. Nature. 2016; 530(7588): 27–29. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nBegley CG, Ioannidis JP: Reproducibility in science: improving the standard for basic and preclinical research. Circ Res. 2015; 116(1): 116–126. PubMed Abstract | Publisher Full Text\n\nKauffmann A, Gentleman R, Huber W: arrayQualityMetrics--a bioconductor package for quality assessment of microarray data. Bioinformatics. 2009; 25(3): 415–416. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLeek JT, Scharpf RB, Bravo HC, et al.: Tackling the widespread and critical impact of batch effects in high-throughput data. Nat Rev Genet. 2010; 11(10): 733–739. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSEQC/MAQC-III Consortium: A comprehensive assessment of RNA-seq accuracy, reproducibility and information content by the Sequencing Quality Control Consortium. Nat Biotechnol. 2014; 32(9): 903–14. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEdgar R, Barrett T: NCBI GEO standards and services for microarray data. Nat Biotechnol. 2006; 24(12): 1471–1472. PubMed Abstract | Publisher Full Text | Free Full Text\n\nToker L, Pavlidis P: Metaanalysis of flawed expression profiling data leading to erroneous Parkinson’s biomarker identification. Proc Natl Acad Sci U S A. 2015; 112(28): E3637. PubMed Abstract | Publisher Full Text | Free Full Text\n\nQu C, Schuetz JM, Min JE, et al.: Cost-effective prediction of gender-labeling errors and estimation of gender-labeling error rates in candidate-gene association studies. Front Genet. 2011; 2: 31. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTzvetkov MV, Meineke I, Sehrt D, et al.: Amelogenin-based sex identification as a strategy to control the identity of DNA samples in genetic association studies. Pharmacogenomics. 2010; 11(3): 449–457. PubMed Abstract | Publisher Full Text\n\nPurcell S, Chang C: PLINK 1.9. Reference Source\n\nSax L: How common is intersex? a response to Anne Fausto-Sterling. J Sex Res. 2002; 39(3): 174–178. 
Gentleman RC, Carey VJ, Bates DM, et al.: Bioconductor: open software development for computational biology and bioinformatics. Genome Biol. 2004; 5(10): R80.\n\nHuber W, Carey VJ, Gentleman R, et al.: Orchestrating high-throughput genomic analysis with Bioconductor. Nat Methods. 2015; 12(2): 115–121.\n\nZoubarev A, Hamer KM, Keshav KD, et al.: Gemma: a resource for the reuse, sharing and meta-analysis of expression profiling data. Bioinformatics. 2012; 28(17): 2272–2273.\n\nR Core Team: R: The R Project for Statistical Computing. In: The R Project for Statistical Computing. 2015; [cited 5 Feb 2016].\n\nWeakley SM, Wang H, Yao Q, et al.: Expression and function of a large non-coding RNA gene XIST in human cancer. World J Surg. 2011; 35(8): 1751–1756.\n\nSantiago JA, Potashkin JA: Network-based metaanalysis identifies HNF4A and PTBP1 as longitudinally dynamic biomarkers for Parkinson’s disease. Proc Natl Acad Sci U S A. 2015; 112(7): 2257–2262.\n\nLohr M, Hellwig B, Edlund K, et al.: Identification of sample annotation errors in gene expression datasets. Arch Toxicol. 2015; 89(12): 2265–72.\n\nToker L: mislabeled.samples.identification. Zenodo. 2016."
}
|
[
{
"id": "15949",
"date": "13 Sep 2016",
"name": "Leonard P. Freedman",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is an excellent paper highlighting the importance of sample annotation as a critical contributor to reproducible research. Using transcriptomics of sex-specific gene expression levels as an example, the authors do a careful analysis to illustrate the issue of mislabeling. My one concern, which they candidly acknowledge, is that nearly half of the samples appear to have only one error, which may lead many readers to conclude that this is actually quite a respectable error rate and thus that this is not really a big problem. It was not entirely clear to me why, as the authors argue, this is in fact \"the tip of the iceberg\".",
"responses": [
{
"c_id": "2597",
"date": "29 Mar 2017",
"name": "Lilah Toker",
"role": "Author Response",
"response": "We thank the reviewer for the kind words. The reason we believe this is ‘the tip of the iceberg’ is that we were only able to detect cases where the mislabeling resulted in a sex discrepancy. Since most of the studies include unequal numbers of males and females, the chance of a same-sex mislabelling is in fact higher than 50%."
}
]
},
{
"id": "16144",
"date": "22 Sep 2016",
"name": "Hans van Bokhoven",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThis article by Toker et al. reports a retrospective biostatistical analysis of the data reported in 70 RNA expression studies to identify the possible misannotation of samples used in these studies. Their analysis is based on the expression of gender-specific genes (XIST for females; KDM5D and RPS4Y1 for males). It revealed apparent mismatches between the expression data and the annotated gender for 83 of 4160 samples (2%), encompassing 32 of the 70 studies (46%). This percentage is consistent with that of a previous analysis of cancer datasets (PUBMED: 26608184). While these figures are already alarming, the actual number of mismatches is likely to be higher, because the gender analysis can only identify discrepancies based on a gender mismatch and will not detect mislabelling of samples of the same gender or of case-control samples.\nFor most cases, the reason for the mismatches is not clear, but comparison of identical sample data presented in different publications revealed that sample mix-ups are likely to be involved.\nThe mislabelling of samples in transcriptomics studies has an immediate impact on the studies involved, which often have only a modest sample size. In addition, follow-up studies based on the results reported in such studies can suffer from it. Therefore, the use of controls to check the identity of samples is warranted. 
The gender test presented in this work is a simple test that should become routine in expression studies; another option is to use nucleic acid-based barcodes that can be added to the sample early in the processing.\nSome minor comments:\nOn page 4, second-last line: here 29/31 of the affected studies is mentioned. Shouldn't that be 32? I hope I missed it, but certainly in a report such as this one, the figures should be absolutely correct.\n\nSome typos in the legends: Figure 2, Stady 5 is Study 5; Supplementary Fig S1: Based on the sex-genes expression, dataset GSE22491 contains at two 2 mislabelled samples.",
"responses": [
{
"c_id": "2596",
"date": "29 Mar 2017",
"name": "Lilah Toker",
"role": "Author Response",
"response": "We thank the reviewer for his comments. It was indeed a typo and was corrected in version 2 of the manuscript, as was the typo in the legend. Regarding dataset GSE22491, when a single sample is mislabelled, it affects the number of samples from both genders – for example, if a sample annotated as a female is biologically male, the number of biological females would be the number of annotated females minus one, while the number of biological males would be the number of annotated males plus one."
}
]
}
] | 1
|
https://f1000research.com/articles/5-2103
|
https://f1000research.com/articles/5-2421/v1
|
29 Sep 16
|
{
"type": "Data Note",
"title": "A fully featured COMBINE archive of a simulation study on syncytial mitotic cycles in Drosophila embryos",
"authors": [
"Martin Scharm",
"Dagmar Waltemath"
],
"abstract": "COMBINE archives are standardised containers for data files related to a simulation study in computational biology. This manuscript describes a fully featured archive of a previously published simulation study, including (i) the original publication, (ii) the model, (iii) the analyses, and (iv) metadata describing the files and their origin. With the archived data at hand, it is possible to reproduce the results of the original work. The archive can be used for both educational and research purposes. Anyone may reuse, extend and update the archive to make it a valuable resource for the scientific community.",
"keywords": [
"COMBINE",
"data",
"containers"
],
"content": "Introduction\n\nIn systems biology and systems medicine, the steadily increasing size and complexity of simulation studies pose additional challenges to sharing reproducible results1. Repeated mentions of problems with replication and reproducibility2–4 led to new standards, tools, and methods for the transfer of reproducible simulation studies5–9. Several projects and initiatives already deal with reproducibility issues, such as COMBINE (co.mbine.org), FAIRDOM (fair-dom.org), and the Reproducibility Initiative (reproducibilityinitiative.org).\n\nThe Computational Modeling in Biology Network (COMBINE) coordinates the development of standard formats for various aspects of a simulation study: The Systems Biology Markup Language (SBML)10 and CellML11 encode the mathematical models; the Systems Biology Graphical Notation (SBGN)12 encodes the visual representation of models; the Simulation Experiment Description Markup Language (SED-ML)13 encodes the simulation recipes; and the Systems Biology Result Markup Language (SBRML)14 encodes numerical data and simulation results.\n\nToday’s studies consist of multiple, heterogeneous, and sometimes distributed data files, leading to the challenge of exchanging complete and thus reproducible results. To close this gap, the COMBINE community developed the COMBINE archive8. A COMBINE archive is a single file that aggregates all data files and information necessary to reproduce a simulation study in computational biology. The skeleton of a COMBINE archive consists of a manifest and a metadata file, specified by the Open Modeling EXchange format (OMEX).\n\nHere we describe a fully featured COMBINE archive, which encodes an investigation of the syncytial mitotic cycles in Drosophila embryos15. The study published by Calzone et al. proposes a dynamical model for the molecular events underlying rapid, synchronous, syncytial nuclear division cycles in Drosophila embryos. This particular study was chosen for several reasons. 
Firstly, the paper, the documentation, and the related data are openly accessible. Secondly, the model is available in two standard formats: The CellML encoding is available from the Physiome Model Repository16 at models.cellml.org/exposure/1a3f36d015121d5596565fe7d9afb332 and the SBML encoding is available from BioModels17 at www.ebi.ac.uk/biomodels-main/BIOMD0000000144. Thirdly, both model files are already curated, which increases the level of trust. Fourthly, the model describes a common biological system (cell cycle). Thus, the basic mechanisms of the encoded biology should be familiar to many researchers, reducing the effort of understanding the example.\n\nThis archive contains files that are openly available for download, as well as previously unpublished files that were generated using COMBINE-compliant software tools (see Section Materials and methods). When executed, it reproduces the original findings by Calzone et al.\n\n\nMaterials and methods\n\nThe fully featured COMBINE archive was created in three subsequent steps. Firstly, all available materials relating to the study were automatically retrieved from an online resource (initial archive). Secondly, the data files were organised into subdirectories, following the different aspects of a simulation study (documentation, model, experiment, result). Thirdly, missing files were manually retrieved from web resources or created using COMBINE-compliant software tools. The three steps are described in the following.\n\nThe initial version of the COMBINE archive was generated using the web-based software tool M2CAT18 Version 0.1 (m2cat.sems.uni-rostock.de). Among the suggested archives for the work by Calzone et al., we chose the simulation study containing a CellML model and a visualisation of the model in three different formats (PNG, SVG, AI). M2CAT automatically generated the initial COMBINE archive from these files. 
It also added metadata to the archive, such as annotations to creators, contributors, and modification times. M2CAT retrieved this metadata from the corresponding Git project in the Physiome Model Repository (git log).\n\nFor convenience, the files inside the COMBINE archive were structured in subfolders. The initial archive was therefore imported into the CombineArchiveWeb application (WebCAT9) Version 0.4.13 (webcat.sems.uni-rostock.de). WebCAT is a web interface to display and modify the files contained in an archive, together with metadata and file structures. The files inside the archive were organised in four directories, which reflect the different aspects of a simulation study:\n\n• documentation/: files that describe and document the model and/or experiment (empty)\n\n• model/: files that encode and visualise the biological system (4 files)\n\n• experiment/: files that encode the in silico setup of the experiment (empty)\n\n• result/: files that result from running the experiment (empty)\n\nAll files in the initial archive were stored in the model/ directory. However, these files alone are not sufficient to reproduce the study.\n\nTo make the encoded study reproducible, the COMBINE archive needs to be extended with additional files.\n\nThe article is typically the central object of a research study. For this study, the original publication by Calzone et al., together with available supplementary information, was retrieved from the homepage of the journal Molecular Systems Biology (msb.embopress.org/content/3/1/131). Using WebCAT, the files were uploaded to the documentation/ directory of the archive. The automatically added metadata was adjusted to attribute the authors of the publication and to state when and where the files were downloaded. In the background, WebCAT encoded the metadata in RDF/XML and added it to the archive.\n\nThe model is not only available in CellML format, but also in SBML format. 
The SBML file was retrieved from BioModels (www.ebi.ac.uk/biomodels-main/download?mid=BIOMD0000000144, SBML Level 2 Version 1) and uploaded to the model/ directory. Again, the metadata was corrected to attribute the original authors, curators, and contributors, as stated on the BioModels website (www.ebi.ac.uk/biomodels-main/BIOMD0000000144) and in the model document.\n\nThe simulation description is essential to run the experiment. It defines the simulation environment and the output of the in silico execution. As no simulation description was found in any of the open repositories, an initial version was created using the SED-ML Web Tools (SWT) Version 2.1 (bqfbergmann.dyndns.org/SED-ML_Web_Tools). SWT takes the model files and creates a default simulation description with standard settings. For this study, a default SED-ML file encodes instructions to generate 66 plots and a data table. Each plot describes the change of concentration in one species of the model. The data table contains all numerical values. Based on the default script, a second SED-ML file (Calzone2007-simulation-figure-1B.xml) was generated to reproduce Figure 1B of the original publication. Using WebCAT, both SED-ML scripts were added to the experiment/ directory of the archive. The metadata for the new files was added.\n\nFigure 1. This figure shows the SBGN-PD compliant reaction network, as encoded in the SBML model obtained from BioModels. The figure was generated and modified using SBGN-ED.\n\nThe simulation results reflect the behaviour of a model under certain conditions. The script defined in Calzone2007-simulation-figure-1B.xml was loaded into SWT and into the stand-alone software program COPASI Version 4.15 Build 9519. The plots generated by both tools show that the developed in silico experiment reproduces the results from the paper. Using WebCAT, the figures produced by SWT and COPASI were uploaded and added to the result/ directory of the archive. 
Metadata, such as the versions of the software tools, was added accordingly.\n\nThe visualisation of a model helps to understand the encoded biological system. For this study, an SBGN-compliant visualisation of the model was created using SBGN-ED Version 1.5.120 together with VANTED Version 2.1.021. SBGN-ED generated an automatic layout of the uploaded SBML model, which was then improved manually. The resulting Figure 1 was exported in different formats (GraphML22, GML (www.fim.uni-passau.de/index.php?id=17297&L=1), PNG image, PDF, and SBGN-ML23). Using WebCAT the files were uploaded to the model/sbgn directory and metadata was provided.\n\n\nData description\n\nThe archive consists of 21 files (Table 1). Among these files are the manifest.xml and the metadata.rdf, which form the skeleton of the archive. The manifest lists the files included in the archive. The metadata file contains additional information about the files in the archive, such as creators and descriptions. A third file, README.md, contains a description for visitors of the GitHub repository, where the archive is being developed (github.com/SemsProject/CombineArchiveShowCase). The remaining 18 files are organised in four directories, cmp. Section Organising the COMBINE archive. The original publication (PDF) is stored in the documentation/ directory. The encodings of the model (CellML, SBML, graph formats) are stored in the model/ directory. The simulation descriptions (SED-ML) are stored in the experiment/ directory. The simulation results (SVG, PNG) are stored in the result/ directory.\n\nThe table lists all files included in the presented COMBINE archive together with formats and descriptions. 
The indentation indicates the directory structure used to organise the files in the archive.\n\nThe latest version of the compiled COMBINE archive can be accessed through our web server at scripts.sems.uni-rostock.de/getshowcase.php.\n\n\nData validation\n\nThe COMBINE archive described in this data note reproduces the results of the study published by Calzone et al. To validate the reproducibility, we executed the archive in different simulation tools. For example, the encoded simulation study can be executed in COPASI, cmp. Figure 2(b). The archive can also be loaded into the SWT by opening a specific URL (bqfbergmann.dyndns.org/SED-ML_Web_Tools/Home/SimulateUrl?url=http://scripts.sems.uni-rostock.de/getshowcase.php). The simulation results will immediately be shown in the web browser, cmp. Figure 2(c). Moreover, users reported a successful reproduction of the simulation results using Tellurium24 (github.com/SemsProject/CombineArchiveShowCase/pull/2).\n\nFigure 2. The figure shows the simulation results included in the original publication (2(a)). Furthermore, the results generated by COPASI (2(b)) and the SWT (2(c)), using the SED-ML script Calzone2007-simulation-figure-1B.xml, are shown.\n\n\nConclusions\n\nThe presented COMBINE archive provides a reproducible simulation study for a previously published model on syncytial mitotic cycles in Drosophila embryos15. The archive contains several files that were collected from online resources, e.g. the CellML model from the Physiome Model Repository or the scientific publication from the publisher’s website. It also provides new files that did not exist previously, e.g. a SED-ML file to encode the simulation setup for Figure 1B of the original publication.\n\nThis fully featured archive allows scientists to reproduce the results obtained by Calzone et al. in software tools that can read COMBINE archives. For example, the archive was successfully executed in the SED-ML Web Tools and Tellurium. 
Figure 2 shows that the developed study is able to reproduce the original results.\n\nThis data note describes the fully featured COMBINE Archive as published on Figshare25. However, we expect the archive to evolve further. The latest version of the archive is available from GitHub at github.com/SemsProject/CombineArchiveShowCase. It can also be downloaded from our website at scripts.sems.uni-rostock.de/getshowcase.php. Extensions, refinements, and comments are very welcome. Please fork the project on GitHub and contribute pull requests.\n\n\nData availability\n\nThe latest version of the COMBINE archive: github.com/SemsProject/CombineArchiveShowCase/ (latest commit at the time of submission: a469197)\n\nThe fully featured COMBINE archive as at the time of publication: Figshare: COMBINE Archive Show Case, 10.6084/m9.figshare.3427271.v124",
"appendix": "Author contributions\n\nMS generated the data files for the archive and designed the initial version. DW and MS wrote the manuscript.\n\n\nCompeting interests\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work has been funded by the German Federal Ministry of Education and Research (BMBF) as part of the e:Bio programs SEMS (FKZ 031 6194) and SBGN-ED+ (FKZ 031 6181).\n\n\nAcknowledgements\n\nWe would like to thank Vasundra Touré for her help with creating the SBGN-compliant visualisations of the model and Matthias König for running and testing the archive in Tellurium.\n\n\nReferences\n\nWaltemath D, Wolkenhauer O: How modeling standards, software, and initiatives support reproducibility in systems biology and systems medicine. IEEE Trans Biomed Eng. 2016; (99).\n\nPrinz F, Schlange T, Asadullah K: Believe it or not: how much can we rely on published data on potential drug targets? Nat Rev Drug Discov. 2011; 10(9): 712.\n\nIoannidis JP, Allison DB, Ball CA, et al.: Repeatability of published microarray gene expression analyses. Nat Genet. 2009; 41(2): 149–155.\n\nBegley CG, Ellis LM: Drug development: Raise standards for preclinical cancer research. Nature. 2012; 483(7391): 531–533.\n\nSandve GK, Nekrutenko A, Taylor J, et al.: Ten simple rules for reproducible computational research. PLoS Comput Biol. 2013; 9(10): e1003285.\n\nCorcho Ó, Garijo Verdejo D, Belhajjame K, et al.: Workflow-centric research objects: First class citizens in scholarly discourse. In 2nd Workshop on Semantic Publishing. Informatica, 2012.\n\nBechhofer S, De Roure D, Gamble M, et al.: Research objects: Towards exchange and reuse of digital knowledge. The Future of the Web for Collaborative Science. 2010. 
Bergmann FT, Adams R, Moodie S, et al.: COMBINE archive and OMEX format: one file to share all information to reproduce a modeling project. BMC Bioinformatics. 2014; 15(1): 369.\n\nScharm M, Wendland F, Peters M, et al.: The CombineArchiveWeb application – A web-based tool to handle files associated with modelling results. In Proceedings of the 7th International Workshop on Semantic Web Applications and Tools for Life Sciences, Berlin, Germany, December 9–11, 2014. 2014.\n\nHucka M, Finney A, Sauro HM, et al.: The systems biology markup language (SBML): a medium for representation and exchange of biochemical network models. Bioinformatics. 2003; 19(4): 524–531.\n\nCuellar AA, Lloyd CM, Nielsen PF, et al.: An overview of CellML 1.1, a biological model description language. SIMULATION. 2003; 79(12): 740–747.\n\nLe Novère N, Hucka M, Mi H, et al.: The Systems Biology Graphical Notation. Nat Biotechnol. 2009; 27(8): 735–741.\n\nWaltemath D, Adams R, Bergmann FT, et al.: Reproducible computational biology experiments with SED-ML--the Simulation Experiment Description Markup Language. BMC Syst Biol. 2011; 5(1): 198.\n\nDada JO, Spasić I, Paton NW, et al.: SBRML: a markup language for associating systems biology data with models. Bioinformatics. 2010; 26(7): 932–938.\n\nCalzone L, Thieffry D, Tyson JJ, et al.: Dynamical modeling of syncytial mitotic cycles in Drosophila embryos. Mol Syst Biol. 2007; 3: 131.\n\nYu T, Lloyd CM, Nickerson DP, et al.: The Physiome Model Repository 2. Bioinformatics. 2011; 27(5): 743–44. 
Li C, Donizelli M, Rodriguez N, et al.: BioModels Database: An enhanced, curated and annotated resource for published quantitative kinetic models. BMC Syst Biol. 2010; 4: 92.\n\nScharm M, Waltemath D: Extracting reproducible simulation studies from model repositories using the CombineArchive Toolkit. In Datenbanksysteme für Business, Technologie und Web (BTW 2015) – Workshopband. Gesellschaft für Informatik, 2015; P-242: 137–142.\n\nHoops S, Sahle S, Gauges R, et al.: COPASI--a COmplex PAthway SImulator. Bioinformatics. 2006; 22(24): 3067–3074.\n\nCzauderna T, Klukas C, Schreiber F: Editing, validating and translating of SBGN maps. Bioinformatics. 2010; 26(18): 2340–2341.\n\nRohn H, Junker A, Hartmann A, et al.: VANTED v2: a framework for systems biology applications. BMC Syst Biol. 2012; 6(1): 139.\n\nBrandes U, Eiglsperger M, Herman I, et al.: GraphML progress report structural layer proposal. In Graph Drawing. Springer; 2002; 2265: 501–512.\n\nvan Iersel MP, Villéger AC, Czauderna T, et al.: Software support for SBGN maps: SBGN-ML and LibSBGN. Bioinformatics. 2012; 28(15): 2016–2021.\n\nSauro HM, Choi K, Medley JK, et al.: Tellurium: A Python Based Modeling and Reproducibility Platform for Systems Biology. bioRxiv. 2016; 054601.\n\nScharm M, Touré V: COMBINE Archive Show Case. Figshare. 2016."
}
|
[
{
"id": "16695",
"date": "03 Oct 2016",
"name": "Lars Juhl Jensen",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nGeneral comments:\nThe data note describes how COMBINE is used to make available a mathematical model for simulation of syncytial mitotic cycles in Drosophila. Although well written, the aim of the article is somewhat unclear to me.\nThe way the introduction is written and the archive is named suggest that the purpose of making the archive is to serve as a showcase for how to use COMBINE archives. However, the way the rest of the article is written focuses very much on how this specific archive was made and how it is organized. If the goal is to showcase COMBINE archives, I feel the manuscript lacks a description of not only how the specific archive was made but also why this would be a good way to make such archives in general. If, on the other hand, the aim is to make the specific model available, it lacks some more background on the specific model and possibly suggestions or examples of how it can be used.\n\nMajor comments:\nI am confused about what is to be considered the primary repository / access point. The article refers to a GitHub repository, an omex file on figshare, and a link to a php script on their own server where one can retrieve the latest version of the omex file. This leads to a number of questions:\nAm I right that GitHub is the primary place where development is done and improvements will be made? 
Is the omex file provided by the php script always up-to-date with the GitHub repository, or may the newest version on GitHub be even newer than the latest version made available as an omex file? Will the omex file on figshare be updated, or will there only be a v1 there?\nConsidering that figshare has both versioning and API for submission, I think the best solution would be to abandon the php script. This would eliminate redundancy, as the omex file would only be on figshare. Since figshare has both versioned and version-less DOIs for datafiles, this would allow the authors to provide a stable DOI that always points to the latest version of the omex file and at the same time allow users to always cite the specific versioned DOI of the omex file they used in their work. Given that figshare has a submission API, it should be possible to make sure that the omex file is automatically updated based on the GitHub repository whenever needed.\n\nTechnical issues:\nAccessing the URL \"bqfbergmann.dyndns.org/SED-ML_Web_Tools/Home/SimulateUrl?url=http://scripts.sems.uni-rostock.de/getshowcase.php\" by copy and paste works correctly. However, clicking the link within the article instead opens \"http://bqfbergmann.dyndns.org/SED-ML_Web_Tools/Home/Simulate\" and leads to an error page instead of the promised results.\nThe link \"www.ebi.ac.uk/biomodels-main/download?mid=BIOMD0000000144\" is similarly broken. When clicked, it instead tries to open \"https://f1000research.com/articles/5-2421/www.ebi.ac.uk/biomodels-main/download?mid=BIOMD0000000144\", which leads to an error.\nThe URL \"www.fim.uni-passau.de/index.php?id=17297&L=1\" leads me to an error page irrespective of whether I copy it or click it.\n\nMinor corrections:\nThe authors several times use the abbreviation \"cmp.\" to refer to other parts/figures in the article. I have never encountered this abbreviation before and was unable to find any other articles that use it. 
Given the context, I suspect \"cf.\" may be what is meant.",
"responses": []
},
{
"id": "16691",
"date": "10 Oct 2016",
"name": "Laurence Calzone",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe article describes a COMBINE archive of a previously published ODE model (Calzone et al.) of the oscillatory dynamics of cell cycle regulatory proteins in Drosophila embryos undergoing rapid syncytial nuclear division cycles.\n\nThe files are stored on GitHub in 4 subfolders, containing the original article, model files, the figures generated from the simulations, as well as metafiles. The XML and CellML files that were originally published along with the article were modified by adding metadata on the authors.\n\nIssues:\nThere is a clear advantage to such an archive for publication of such models, reproducibility etc., but the authors do not insist on it. This type of archive could be added to publications, and provided by the authors, with full documentation. This more general aim of the article should be more forcefully and clearly stated.\n\nWe tried to open the model file available on GitHub (XML) using COPASI (4.16) but some errors appeared. We downloaded the initial model from BioModels and it worked. Same issues with CellML files.\n\nThe link pointing to the website \"bqfbergmann.dyndns.org/SED-ML_Web_Tools/\" is broken, but if copy-pasted the figures are correctly displayed.",
"responses": []
},
{
"id": "16693",
"date": "11 Oct 2016",
"name": "Alan Garny",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nA major issue in computational biology is that of reproducibility. Here, the authors provide us with a concrete example of a reproducible simulation experiment using the Calzone et al. model (2007) of syncytial mitotic cycles in Drosophila embryos. To achieve their aim, they use established mark-up languages such as CellML, SBML and SED-ML, and package everything in a COMBINE archive. This COMBINE archive can then be used by anyone to reproduce the simulation experiment, as well as have access to the original paper, SBGN diagram, etc. This data note therefore shows us one possible, and viable, way to address the issue of reproducibility in computational biology and should, as such, be considered for indexing.\n\nMajor comments:\nHow do you envisage tackling the issue of provenance? Say that someone modifies your archive and makes it available to the community, how are we then supposed to know which one to use and what the differences are between the two versions? Considering that your COMBINE archive is on GitHub, I guess someone could always fork it, but maybe a better approach would be to take advantage of existing repositories in the community, such as BioModels.net and the Physiome Model Repository (PMR)? I am not sure about BioModels.net, but PMR inherently addresses the issue of provenance.\n\nIn their supplementary material, Calzone et al. provide some files that can be used to reproduce their different figures. 
So, in effect, they allow for their results to be reproduced (using XPPAUT). It would therefore make sense to have a comparison of their ‘approach’ to reproducibility compared with yours.\n\nThe two SED-ML files in your COMBINE archive refer to the SBML version of the Calzone model. Now, because you are also providing a CellML version of that model, it would be nice to also have a CellML-based version of your two SED-ML files.\n\nMinor comments:\nAbstract:\nYou might want to replace “previously published simulation study” with “model of syncytial mitotic cycles in Drosophila embryos”.\n\nKeywords:\nApparently, authors should supply up to eight keywords.\n\nIntroduction:\n1st paragraph: COmputational Modeling in BIology NEtwork (i.e. “O” instead of “o” and “E” instead of “e”).\n\n2nd paragraph: “the Systems Biology…” and not “The Systems Biology…”?\n\n2nd paragraph: wouldn’t it be better to reference NuML rather than SBRML? (On http://co.mbine.org/Standards, SBRML is only mentioned as part of the NuML description.)\n\n4th paragraph: “Here, we describe…” (i.e. missing comma).\n\nMaterials and methods:\nRetrieving an initial COMBINE archive:\nIt would be nice to know exactly what kind of search was done using M2CAT (probably one using the term “calzone”?) and, then, which COMBINE archive was used as an initial COMBINE archive.\n\nOrganising the COMBINE archive:\n“The initial archive was therefor[e] imported…” (therefore). It would be nice to know what those 4 initial files were (those in the model folder).\n\nExtending the COMBINE archive:\nThe article: “…to state when and [from] where…” (from). The simulation description:\nWhat is the point of having that default simulation experiment? To use it as a starting point for reproducing Figure 1B of the Calzone paper is fine, but I don’t see the point of including that default simulation experiment in the COMBINE archive. It would be nice to know what simulation parameters and solver (incl. 
its parameters) were used to reproduce Figure 1B of the Calzone paper. (I imagine they are the same as the ones used by the authors with XPPAUT?) You might want to use sysbioapps.dyndns.org/SED-ML_Web_Tools as the URL for SED-ML Web Tools (rather than bqfbergmann.dyndns.org/SED-ML_Web_Tools)? It would be nice to have all the other figures of the Calzone paper also encoded in SED-ML, just so that your COMBINE archive is not only fully featured, but also complete.\n\nThe visualisation of a model:\nAre you sure about the version of VANTED you used? Version 2.1.0 is somewhat old compared with the latest version available (version 2.6.3). Actually, looking at the contents of your COMBINE archive, I can see that Calzone2007.gml was generated using VANTED 2.6.2. (FWIW, SBGN-ED 1.6 has just been released.) The link for GML is broken (http://www.fim.uni-passau.de/index.php?id=17297&L=1).\n\nData description:\n“…in four directories, cmp. Section…”?\n\nRather than referring to scripts.sems.uni-rostock.de/getshowcase.php (in this section and elsewhere in the manuscript), you might want to refer to your GitHub repository (github.com/SemsProject/CombineArchiveShowCase) and make use of GitHub’s release feature? It might be safer in the long term.\n\nData validation:\n“…can be executed in COPASI, cmp. Figure 2(b).”?\n\nTo click on the bqfbergmann.dyndns.org/SED-ML_Web_Tools/Home/SimulateUrl?url=http://scripts.sems.uni-rostock.de/getshowcase.php link takes me to a page that reads “No model uploaded. You need to upload a model first prior to attempting to simulate it!”\n\n“… shown in the web browser, cmp. Figure 2(c).”?\n\nYou might want to provide a URL for COPASI and Tellurium?\n\nData availability:\nYou might want to reference your very latest commit (i.e. 6c34cc4) rather than commit a469197?\n\nFigure 1:\nI am not sure how useful this figure is. To me, it doesn’t bring anything to your data note. 
Not only that, but at its original size, one cannot read anything (I personally had to view it at 400% to be able to start reading the different labels).\n\nFigure 2:\nYou might want to remove the “B” in panel (a). It’s confusing.\n\nYou might want to mention that the range of the X and Y axes, as well as the colour of the different plots, cannot currently be specified in SED-ML, hence panels (b) and (c) don’t perfectly match panel (a)?",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2421
|
https://f1000research.com/articles/5-2417/v1
|
28 Sep 16
|
{
"type": "Research Article",
"title": "Maternal complications in a geographically challenging and hard to reach district of Bangladesh: a qualitative study",
"authors": [
"Animesh Biswas",
"Koustuv Dalal",
"Abu Sayeed Md Abdullah",
"Mervyn Gifford",
"MA Halim",
"Koustuv Dalal",
"Abu Sayeed Md Abdullah",
"Mervyn Gifford",
"MA Halim"
],
"abstract": "Background: Maternal complications contribute to maternal deaths in developing countries. Bangladesh still has a high prevalence of maternal mortality, which is often preventable. There are some geographically challenging and hard to reach rural districts in Bangladesh and it is difficult to get information about maternal complications in these areas. In this study, we examined the community lay knowledge of possible pregnancy complications. We also examined the common practices associated with complications and we discuss the challenges for the community. Methods: The study was conducted in Moulvibazar of north east Bangladesh, a geographically challenged, difficult to reach district. Qualitative methods were used to collect the information. Pregnant women, mothers who had recently delivered, their guardians and traditional birth attendants participated in focus group discussions. Additionally, in-depth interviews were conducted with the family members. Thematic analyses were performed. Results: The study revealed that there is a lack of knowledge of maternal complications. In the majority of cases, the mothers did not receive proper treatment for maternal complications.\n\nThere are significant challenges that these rural societies need to address: problems of ignorance, traditional myths and family restrictions on seeking better treatment. Moreover, traditional birth attendants and village doctors also have an important role in assuring appropriate, effective and timely treatment. Conclusions: The rural community lacks adequate knowledge on maternal complications. Reduction of the societal barriers including barriers within the family can improve overall practices. Moreover, dissemination of adequate information to the traditional birth attendant and village doctors may improve the overall situation, which would eventually help to reduce maternal deaths.",
"keywords": [
"Maternal complications",
"deaths",
"rural community",
"Bangladesh"
],
"content": "Introduction\n\nMaternal complications during pregnancy, delivery and after delivery contribute to deaths which are preventable in the majority of cases1,2. Every year, around 350,000 maternal deaths occur globally due to maternal complications, most of which are in developing countries3,4. Complications leading to maternal deaths can also lead to death of the baby if the mother dies during pregnancy. Bangladesh is facing some challenges to reduce maternal deaths. In Bangladesh there has been a reduction in maternal death during the last few decades and new goals have been set to meet the United Nations Sustainable Developmental Goal 3 by 20305. Recent literature has clearly shown that the majority of the pregnant mothers in rural Bangladesh who died had suffered post-partum haemorrhage and eclampsia6. Studies also showed that a lack of knowledge and literacy on maternal health issues including complications, health seeking behavior, and social and cultural beliefs also contribute to maternal deaths in rural Bangladesh3,7–11. Similar studies in other developing countries have shown that the rural communities have a lack of knowledge about maternal complications9–11. Despite deliveries at specialist units have increased in the last few decades, home deliveries remain the most common, and are conducted by traditional birth attendants or family relatives12–15. Thus, risks of maternal complications are still high. It has also been observed that the involvement of village doctors and traditional birth attendants (TBA) during delivery might delay hospital referral during any maternal complications12–19.\n\nA recent study from Bangladesh showed that family and community are important players in reducing maternal and neonatal deaths6. Therefore, the focus of the current study was to investigate the current knowledge, practices and challenges in the rural community. 
Information on such factors could be useful in designing interventions to reduce maternal deaths in Bangladesh.\n\n\nMethods\n\nThe Moulavibazar district of Bangladesh was chosen for the study. This district is one of the hard to reach districts of Bangladesh because of its geographical location. The district is surrounded by hills and a large number of tea plantations.\n\nThe people residing around the tea plantations were mostly from the local ethnic group. Three sub-districts (upazilas) were randomly chosen. Both focus group discussions (FGDs) and in-depth interviews (IDIs) were conducted (Table 1).\n\nIn Bangladesh, especially in the hard to reach areas, it is usual for parents, mothers-in-law and fathers-in-law to be involved in making major decisions in families. Considering the objective and study context, five focus group discussions (FGDs) were conducted: one group of pregnant mothers, one group of recently delivered mothers, one group of mothers-in-law and mothers (of the pregnant/mothers who delivered), one group of husbands, fathers-in-law and fathers (of the pregnant/mothers who delivered) and one group of traditional birth attendants (TBA). A total of 42 participants participated in the FGDs, where each group consisted of between seven and nine participants. The total number of pregnant mothers and recently delivered mothers (during the last three months before interview) having a live birth in the community was obtained from different sources including field level government health workers, community volunteers, traditional birth attendants and village doctors. The research officers consulted them to confirm the list of mothers and obtain the desired participants for the FGDs. The participants were initially approached by the researchers, who briefly explained the study objectives. The participants who were interested in enrolling were given further details of the objectives of the study. 
Fathers-in-law, fathers, mothers-in-law and mothers were selected from the families of the pregnant mothers or the mothers who had delivered. The traditional birth attendant (TBA) group participated in the FGDs in one upazila. The remaining FGDs were conducted in the other two upazilas.\n\nFor in-depth interviews, families of mothers who had died as a result of maternal complications were identified from government health records. In three upazilas of Moulvibazar district, 15 maternal deaths occurred between January and March 2014. Members of the families of the mothers who died were interviewed. We chose a family member who was present at the time of the maternal complications before the deaths, or who knew the mothers’ circumstances before the death. A total of 15 IDIs were conducted at the 15 deceased households.\n\nTwo research officers with expertise in conducting qualitative research from an anthropological perspective conducted the study during May 2014. The research officers were provided training on how to conduct FGDs and IDIs in the community. Guidelines were developed, modified and pre-tested in the field before actual data collection. The guidelines are available as Supplementary material. During the FGDs, one of the research officers was responsible for facilitating the session, whilst the other was assigned to take key notes. All FGDs were conducted at the tea garden catchment area within the community.\n\nFor IDIs, face-to-face interviews were conducted with the respondents at the deceased’s homes. The research officer used guidelines to conduct the interviews. The objectives of the research were clearly explained to the respondents before data collection.\n\nEach of the interviews was audio recorded and participants also gave informed consent for recording. Audio records were transcribed in the Bengali language and later translated into English. The accuracy was checked by translation and back-translation methods. 
Randomly selected transcripts were re-checked by the researchers and matched with the audio records and notes for accuracy. Initially, open coding of the data was performed and then selective coding was used. A number of themes were identified after reading and re-reading the data20,21 and finally a thematic analysis was performed. Results were presented under three broad themes, namely: perceptions, practices and challenges.\n\nIn Table 2 the content of the focus-group discussions and in-depth interviews is presented. The areas and prompts used during the sessions are also described (Table 2).\n\nWritten informed consent to participate in the study (or a thumb print for consent of the illiterate) was taken from each of the respondents before the FGDs and interview sessions. The study received ethical permission from the Institutional Review Board of the Centre for Injury Prevention and Research, Bangladesh.\n\n\nResults\n\nIgnorance, misperceptions and lack of knowledge, followed by deliveries conducted by local untrained birth attendants or family members at home, increase the risk of maternal death. Moreover, barriers in the society and traditional myths also influence the process. The findings from FGDs and IDIs are presented categorically under three sub-headings: perceptions, practices and challenges.\n\nFocus group discussions revealed a higher level of awareness of maternal complications among the pregnant mothers and the mothers who had recently delivered a live baby. This group also had some knowledge of what initiatives were required if any complications should arise. Female guardians of the families (mothers and mothers-in-law) had minimal knowledge of maternal complications, rather having a number of misperceptions. 
Male guardian groups (husbands, fathers and fathers-in-law) felt that the issues of maternal complications were not things men should be concerned about and that such issues were relevant only to the female members of the family.\n\nIn-depth interviews were performed with the family members of mothers who had died because of maternal complications. The majority of family members had no knowledge of maternal complications and there was a number of misconceptions. Many of the families felt that complications during pregnancy were common and they stated that they were resigned to such events as being in the hands of God.\n\nOne of the recently delivered mothers mentioned during the FGD\n\n“My legs were swollen and I had blurred vision in my last pregnancy. I knew these were complications of pregnancy and I went to the nearby government hospital with my husband for a health checkup.”\n\nA mother-in-law of a pregnant mother said during the FGD:\n\n“Vomiting, anorexia, some bleeding, fatigue, headaches, low or high blood pressure, loss of body weight, swelling of legs, and abdominal pain are usual during pregnancy, these are not complications. Only if there is a large amount of bleeding after delivery it is a complication for the mother”.\n\nOne of the mothers of a deceased mother said in the IDI:\n\n“My daughter died due to our bad luck. Her face and legs were swollen during pregnancy. Such symptoms are common during pregnancy. We discussed it with our village doctor and the Dai (traditional birth attendant) and both of them ensured us not to be worried about this”.\n\nPregnant mothers were asked how they obtained their information about maternal complications. During a FGD one of the pregnant mothers mentioned her source of knowledge:\n\n“I heard about maternal complications from a government health care provider during a courtyard meeting at our village. He talked about bleeding during pregnancy, swelling of legs and hands etc. 
I also learned such information on the television”.\n\nOne of the husbands of a recently delivered mother mentioned in the FGD:\n\n“All the complications are naturally occurring during pregnancy and this is to do with female issues”.\n\nAnother husband of a deceased mother said during the IDI:\n\n“As far as I know the mothers’ faces swell due to over feeding and taking too much rest during pregnancy”.\n\nThe majority of the responders in the female guardian and male guardian group mentioned during the FGD that they depended primarily on village doctors and traditional birth attendants for reducing complications such as swelling of the face and neck, and headaches and fevers. If major complications like bleeding or eclampsia occurred, then they went directly to the upazila or district hospital. Almost all the traditional birth attendants said they worked with the pregnant women without having received any training.\n\nThe majority of the female guardians (mothers and mothers-in-law) and male guardians (husbands and fathers-in-law) of the deceased mothers mentioned that when any complications in pregnancy occurred they depended solely on Kabiraj (quacks) and village doctors and traditional birth attendants for treatment. Some of them said they relied on the drug salesmen in the pharmacies. None of them said they had sought help from government or private agencies.\n\nOne of the mothers-in-law of a mother who had recently delivered mentioned during the FGD:\n\n“I provided a small amount of food and let my daughter-in-law to rest as her face and legs were swollen during her last pregnancy”.\n\nOne of the husbands of a mother who had recently delivered said:\n\n“My wife’s legs and face were swollen and she suffered from blurred vision and headaches during her last pregnancy. 
When she had not recovered with the village doctor’s treatment, I then took her to Upazila Government hospital and she was cured with treatment.”\n\nDuring the FGD one of the TBAs mentioned:\n\n“I conducted more than a hundred deliveries in the community. Due to cessation of menstruation during pregnancy heavy bleeding can occur after delivery. I can handle any difficult complications such as obstructed and prolonged labor without any instruments. I learn these traditionally without any training”.\n\nDuring an IDI one mother of a deceased mother said:\n\n“My daughter had high fever along with severe bleeding after delivery. The village Kabiraj (quack) provided her with herbal drugs but when my daughter didn’t get better with traditional treatments I contacted the village doctor and he gave some drugs. But unfortunately my daughter died with a high fever seven days after delivery”\n\nDuring an IDI one of the husbands said:\n\n“I heard from our village doctor that my wife’s face was swollen due to decreased urination and working less during pregnancy. I then advised her to work more but it is our bad luck that she died immediately after delivery at home”.\n\nFifty percent of the male respondents in FGDs and the majority of males who participated in IDIs had no knowledge about maternal complications. During discussions it emerged that males had a misperception on this issue and thought that complications are a normal phenomenon during pregnancy. In FGDs, about 90% of mothers who had recently delivered and pregnant women mentioned that the main source of problems during maternal complications are the female decision makers in the family including the mother and mother-in-law. Most of the participants in the IDIs mentioned that they depended on village doctors and traditional birth attendants for advice and treatment for complications. They also thought that treatment of maternal complications is expensive at government facilities. 
TBAs and village doctors are not very aware of maternal complications and consider complications during pregnancy normal. They still practice some traditional procedures based on myths. Almost all of the participants stated that information about maternal complications should be made available to the head of the family. They recommended raising community awareness of maternal complications through posters, videos, health camps and courtyard meetings. They also suggested more involvement of government health workers in the community to help inform older women in the family on maternal complications. The majority of the male groups in FGDs mentioned that they prefer home delivery by traditional birth attendants to the check up and deliveries conducted by male doctors in hospital.\n\nA mother who had recently delivered said during a FGD:\n\n“I had severe headaches during my pregnancy. My legs were swollen. I wanted to go to hospital but my husband and mother-in-law didn’t agree as they thought these symptoms were normal during pregnancy”.\n\nOne of the TBAs in a FGD stated:\n\n“I have conducted deliveries for many years. Swelling of legs, blurred vision, headaches, anemia and vomiting are normal signs during pregnancy. I even delivered a mother after a labor of three days”.\n\nOne of the husbands of a deceased mother said during an IDI:\n\n“When the traditional birth attendant could not advance my wife’s delivery I called the village doctor and he injected a drug to facilitate delivery but then after delivery she started to bleed and then the village doctor arrived and injected saline. One of my neighbors advised me to take my wife to hospital. But I have no enough money for hospital treatment. 
She died the day after delivery”.\n\n\nDiscussion\n\nOur study showed that mothers who had recently delivered and the pregnant women group have good knowledge of maternal complications but cannot put this into practice due to social and family barriers in their local communities. Both male and female guardians of mothers who recently delivered have more awareness of maternal complications and practices than the guardians of deceased mothers. Male guardians have less knowledge of practice during maternal complications. However, the mothers and mothers-in-law of deceased mothers traditionally believed that complications are normal during pregnancy.\n\nMothers who had recently delivered were more aware of maternal complications and had health checkups by community health care providers during any complication, compared to the families of deceased mothers, indicating that health care during pregnancy may reduce the chance of maternal mortality17.\n\nOur study found that the male guardians in the family such as husbands, fathers and fathers-in-law were not aware of maternal care during pregnancy. They were ignorant of maternal complication issues. Literature indicated that husbands are important for improving maternal health care and maternal complications11. Therefore, husbands could be encouraged to insist that their wives use health care facilities, especially during any maternal complications. Husbands also could provide financial support to their wives and encourage adequate nutrition.\n\nThere were many misperceptions and malpractices on maternal complications in the communities, which supported the finding of a study describing the prevalence of reported complications by analyzing the risk factors of obstetric complications in a rural setting of Bangladesh22.\n\nMany pregnant mothers and mothers who had recently delivered had good awareness of maternal complications, but could not act on it due to family barriers. 
Husbands' social and economic supports could eliminate or reduce barriers like high costs, poor transportation, and long distances to health care facilities. They could also facilitate other factors associated with improved utilization of delivery care23.\n\nThe information provided by all participants in the survey underlines the importance of mass awareness among community people, which, if improved, can eliminate misperceptions and malpractices during maternal complications. Appropriate health care seeking behavior among community people will be a first step to improve maternal health services and ultimately reduce maternal death24.\n\n\nConclusion\n\nCommunity perception of maternal complications was inadequate among the male and female guardian group of mothers who recently delivered and the group who had maternal deaths in their family. There was scarcity of knowledge among male respondents in both groups; therefore, decision making during maternal complications was missing in the majority of cases. Mothers and mothers-in-law in a family also required knowledge improvements to develop their practice. We observed that communities are still dependent on traditional birth attendants to deliver babies in most of the cases. Focused interventions are required by the government at the local level to improve the overall situation of maternal complications which will consequently reduce maternal deaths in Bangladesh.\n\n\nData availability\n\nRaw datasets have not been made available at the request of the ethics committee in order to maintain participant confidentiality. This data is stored at the Department of Reproductive and Child Health Unit of CIPRB, and is available upon request. Please contact the corresponding author for further information.",
"appendix": "Author contributions\n\n\n\nConceived and designed the study AB, KD, AS, AH. Analyzed the findings: AB, AH, KD. Wrote the paper: AB, KD, AS, MG, AH.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThere was no funding allocated for this study. This study was done by the Centre for Injury Prevention and Research, Bangladesh (CIPRB) within its own capacity.\n\n\nSupplementary material\n\nEnglish translation of the guidelines used for the qualitative study of maternal complications in a geographically challenging and hard to reach district of Bangladesh.\n\nClick here to access the data\n\n\nReferences\n\nMaternal or pregnancy comlication [Internet]. Reference Source\n\nThompson JF, Roberts CL, Currie M, et al.: Prevalence and persistence of health problems after childbirth: Associations with parity and method of birth. Birth. 2002; 29(2): 83–94. PubMed Abstract | Publisher Full Text\n\nWorld Health Organization (WHO): Trends in Maternal Mortality: 1990 to 2008. Estimates developed by WHO, UNICEF, UNFPA and The World Bank. 2010. Reference Source\n\nZahr CA, Tessa W: Antenatal care in developing countries: promises, achievements and missed opportunities: an analysis of trends, levels and differentials. World Health Organ. 2003. Reference Source\n\nSustainable Development Goals. Reference Source\n\nBiswas A: Maternal and Neonatal Death Review System to Improve Maternal and Neonatal Health Care Services in Bangladesh. Örebro University, Sweden, 2015. Reference Source\n\nGBD 2013 Mortality and Causes of Death Collaborators. Global, regional, and national age-sex specific all-cause and cause-specific mortality for 240 causes of death, 1990–2013: a systematic analysis for the Global Burden of Disease Study 2013. Lancet. 2015; 385(9963): 117–71. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nKarkee R, Baral OB, Khanal V, et al.: The role of obstetric knowledge in utilization of delivery service in Nepal. Health Educ Res. England; 2014; 29(6): 1041–8. PubMed Abstract | Publisher Full Text\n\nIyengar K, Yadav R, Sen S: Consequences of maternal complications in women's lives in the first postpartum year: a prospective cohort study. J Health Popul Nutr. 2012; 30(2): 226–40. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKhan AR, Jahan FA, Begum SF: Maternal mortality in rural Bangladesh: the Jamalpur District. Stud Fam Plann. United States; 1986; 17(1): 7–12. PubMed Abstract | Publisher Full Text\n\nAhmed A, Hossain SAS, Quaiyum A, et al.: Husbands’ knowledge on maternal health care in rural Bangladesh: An untapped resource? Trop Med Int Heal. 2011; 16: 291.\n\nBBS. 2001. Reference Source\n\nBBS. 2010. Reference Source\n\nShabnam J, Gifford M, Dalal K: Socioeconomic Inequalities in the use of delivery care services in Bangladesh: A comparative study between 2004 and 2007. Health. 2011; 3(12): 762–71. Publisher Full Text\n\nAndrews JY, Dalal K: Umbilical cord-cutting practices and place of delivery in Bangladesh. Int J Gynaecol Obstet. 2011; 114(1): 43–6. PubMed Abstract | Publisher Full Text\n\nRonsmans C, Graham WJ; Lancet Maternal Survival Series steering group: Maternal mortality: who, when, where, and why. Lancet. 2006; 368(9542): 1189–200. PubMed Abstract | Publisher Full Text\n\nMoran AC, Winch PJ, Sultana N, et al.: Patterns of maternal care seeking behaviours in rural Bangladesh. Trop Med Int Heal. 2007; 12(7): 823–32. PubMed Abstract | Publisher Full Text\n\nMarkos D, Bogale D: Birth preparedness and complication readiness among women of child bearing age group in Goba woreda, Oromia region, Ethiopia. BMC Pregnancy Childbirth. England; 2014; 14: 282. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nMoghani Lankarani M, Changizi N, Rasouli M, et al.: Prevention of pregnancy complications in iran following implementing a national educational program. J Family Reprod Health. Iran; 2014; 8(3): 97–100. PubMed Abstract | Free Full Text\n\nIrving S: Interviewing as Qualitative Research - A Guide for Researchers in Education and the Social Sciences. Teach Coll Columbia Univ USA. 2006. Reference Source\n\nBoyatzis RE: Transforming qualitative information: Thematic analysis and code development. Sage. 1998. Reference Source\n\nSikder SS, Labrique AB, Shamim AA, et al.: Risk factors for reported obstetric complications and near misses in rural northwest Bangladesh: analysis from a prospective cohort study. BMC Pregnancy Childbirth. England; 2014; 14: 347. PubMed Abstract | Publisher Full Text | Free Full Text\n\nStory WT, Burgard SA, Lori JR, et al.: Husbands’ involvement in delivery care utilization in rural Bangladesh: A qualitative study. BMC Pregnancy Childbirth. 2012; 12: 28. PubMed Abstract | Publisher Full Text | Free Full Text\n\nQuayyum Z, Khan MN, Quayyum T, et al.: “Can community level interventions have an impact on equity and utilization of maternal health care” - evidence from rural Bangladesh. Int J Equity Health. 2013; 12: 22. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "16673",
"date": "10 Oct 2016",
"name": "Puspa Raj Pant",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nI would like to congratulate the authors for their efforts to compile such evidences required for the improvement of maternal and child health service utilisation in Bangladesh as well similar communities in neighbouring countries.\n\nI find the manuscript well structured and written. However, I suggest some editing for languages (please see highlighted and suggested in the pdf text) for consistency and rearrange the quotations in the result section.\nI have also suggested to touch the situation of Antenatal Care service use by the women in the study area and briefly describe the existing government/nongovernmental structures available in the communities.\nI approve the manuscript to be indexed after the incorporation of the comments and suggestions.",
"responses": []
},
{
"id": "17019",
"date": "17 Oct 2016",
"name": "Asish Kumar Das",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nTitle and Abstract: The abstract is well written. The title of the study would perhaps have been a little more expressive if it had included the word “Rural”. The study has been conducted in rural areas, and there are differences between the rural and urban population in terms of knowledge, perception and practice.\n\nArticle Content: It is a well-designed study. The study population groups have been selected appropriately; the FGD groups were homogeneous in nature, which was crucial in order to get the best out of an FGD. The discussions were well directed. Analysis was done manually and no qualitative software was used, which is acceptable (and perhaps more meaningful). The study looked into the issue of maternal deaths from all possible perspectives. As a public health professional with program-based, hands-on experience in maternal and child health, I have repeatedly come across the issues/difficulties mentioned in the article. I consider it a really good one.\n\nConclusion: The conclusion was comprehensive and summarized the study efficiently.\n\nData: As this is a qualitative study, I was not expecting any set of data. The quotes from the respondents were sufficient.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2417
|
https://f1000research.com/articles/5-2415/v1
|
28 Sep 16
|
{
"type": "Research Note",
"title": "Pattern of triple negative epithelial ovarian cancer in indigenous African women ",
"authors": [
"Mustapha Akanji Ajani",
"Ayodeji Akeem Salami",
"Olutosin Alaba Awolude",
"Abideen Olayiwola Oluwasola",
"Ayodeji Akeem Salami",
"Olutosin Alaba Awolude",
"Abideen Olayiwola Oluwasola"
],
"abstract": "Background: Triple negative epithelial ovarian cancer (TNEOC) refers to ovarian carcinomas that do not express estrogen receptor (ER), progesterone receptor (PR) and human epidermal growth factor receptor-type 2 (HER-2/neu). The aim of this study is to determine the pattern of triple negative epithelial ovarian cancer in indigenous African women.\nMethods: We performed a retrospective review of ER, PR and HER-2/neu expression in 90 Nigerian patients with histologically diagnosed epithelial ovarian cancer. Lack of expression of ER, PR and HER2/neu antigens was used to identify carcinomas classified as TNEOC. We also compared the clinicopathological parameters (age, International Federation of Gynaecology and Obstetrics (FIGO) stage, grade and histological subtype) in patients with TNEOC and non-TNEOC.\nResults: Thirty-eight (42.2%) of the 90 tumours diagnosed as EOC were negative for ER, PR and HER2/neu expression. There was no significant association between TNEOC and other parameters such as age, FIGO stage and histological grade. Sixteen (66.7%) of the 24 mucinous carcinomas were triple negative, while only 21 (33.3%) of the 63 serous carcinomas were triple-negative and one (50%) of the two endometrioid carcinomas was triple negative. There was a significant association between triple-negative tumours and histological subtypes of EOC (p = 0.034).\nConclusions: A subtype of epithelial ovarian cancer that is negative for ER, PR and HER-2/neu has been identified in indigenous African women. The proportion of TNEOC is high and comparable to that of the triple negative breast cancer subtype seen in people of African ancestry. Future studies of TNEOC with larger sample sizes should be considered.",
"keywords": [
"Estrogen Receptor",
"HER-2/neu expression",
"Immunohistochemistry",
"Ovarian carcinoma",
"Progesterone Receptor",
"Triple negative"
],
"content": "Introduction\n\nEpithelial ovarian cancer (EOC) remains one of the leading causes of death among gynaecological malignancies in developed countries1–4. The initial symptoms of ovarian cancer are often ambiguous; therefore, it often goes undiagnosed until the disease is far advanced and has spread throughout the abdomen or to distant organs5,6.\n\nSteroid hormone receptor expression in epithelial ovarian cancers has been proposed to have therapeutic and prognostic relevance, as is the case in breast cancers7. The determination of tumour characteristics such as age, International Federation of Gynaecology and Obstetrics (FIGO) stage, grade and histological subtype has been associated with clinical behaviour and impact on treatment and prognosis, but has been found to be of limited value8. Among the biological parameters proposed as possible prognostic factors in ovarian cancer, estrogen receptor (ER), progesterone receptor (PR) and human epidermal growth factor receptor-type 2 (HER-2/neu) have been tested as potential biomarkers to guide individualized treatment of the cancer5,6,9. Epithelial ovarian carcinoma has been proposed to result from repeated ovulations, whereby the cumulative effects of each minor trauma on the ovarian epithelium can lead to malignant transformation10. PR has been observed to predict a better prognosis because of its protective effect against the development of ovarian carcinoma11,12. On the other hand, overexpression of ER has been found to be associated with poor prognosis due to its contribution to the initiation and/or promotion of ovarian carcinogenesis10,13. HER-2/neu has been shown to be over-expressed in approximately 20–30% of EOC, with an associated poor prognosis14–16.\n\nTriple negative epithelial ovarian cancer (TNEOC) cases have been found to be more aggressive and display a worse prognosis than non-TNEOC cases17. 
This was similarly observed in the studies of triple negative breast cancer18,19.\n\nThis study was designed to determine the pattern of TNEOC among indigenous African women and correlate it with clinicopathological parameters.\n\n\nMethods\n\nWe performed a retrospective review of ER, PR and HER-2/neu expression in 90 patients with histologically diagnosed epithelial ovarian cancer seen at the University College Hospital, Ibadan, Nigeria between January 2006 and December 2012. Non-epithelial primary ovarian cancers and metastatic cancers in the ovary were not included in this study. The demographic data and clinical history of these cases were obtained from the case notes, surgical daybooks, surgical pathology request forms, post-mortem records and Cancer Registry data. Formalin-fixed paraffin-embedded tissue blocks of histologically diagnosed solid EOC between January 2006 and December 2012 were retrieved and used for the study. The microscopic grading (three-grade system) of Shimizu and Silverberg was used, which assesses architectural pattern, nuclear pleomorphism and mitotic activity20. All histological classification of the EOC was based on the 2013 World Health Organisation (WHO) classification of ovarian tumours21. The FIGO staging of the cases used for this study was extracted from the case notes of the patients.\n\nThe ethical clearance for this study was obtained from the Joint University of Ibadan/University College Hospital Ethical Review Committee (approval number UI/EC/13/0050) according to the Declaration of Helsinki.\n\nThe immunostaining procedure for HER-2/neu was carried out in accordance with the previously published article22. For the immunostaining procedure, three sections each for ER, PR and HER-2/neu at 5µm were cut from each of the paraffin-embedded tissue blocks after deparaffinization in xylene (two aliquots for five minutes each with the xylene covering the slide entirely). 
The sections were then rehydrated in graded alcohol concentrations (two aliquots each of 100% and 95%, and a single aliquot of 70%) in 250 ml Coplin jars. The antibodies used were monoclonal mouse anti-human ERα (Dako USA; clone 1D5) and monoclonal mouse anti-human PR (Dako USA; clone PgR636), which identify the ER and PR nuclear protein antigens. The primary antibody used for the HER-2/neu antigen was polyclonal rabbit anti-human C-erbB-2 (MBO/TEG, Dako USA, 1:800). The tissue sections were immersed in EDTA buffer (pH 9.0) for ER, citrate buffer (pH 6.0) for PR and 1M Tris buffer (pH 9.0) for HER-2/neu. These slides were then incubated at room temperature for 20 minutes with primary monoclonal antibodies against ER (Dako USA, clone 1D5; 1:50), PR (Dako USA, clone PgR636; 1:50) and polyclonal rabbit anti-human C-erbB-2 (MBO/TEG, Dako USA, 1:800), followed by incubation in biotin-labelled secondary antibodies, polyclonal goat anti-mouse antibody for both ER and PR (Dako USA, REF: K0675, LOT: 10081219) and polyclonal goat anti-rabbit antibody for HER-2/neu (kitR, K5001, Dako Denmark), for 20 minutes, and streptavidin-peroxidase complex (Dako USA, REF: K0675, LOT: 10084687) for another twenty minutes. The antigen-antibody complex was precipitated with di-aminobenzidine (DAB) for light microscopy, with DAB substrate and DAB chromogen in a ratio of 1 ml to 1 drop, respectively. The sections were thereafter counterstained in Mayer’s haematoxylin (Dako USA). Dehydration of the sections was performed in ascending grades of alcohol, and the sections were cleared in xylene. The slides were coverslipped with DPX mountant. Known cases of breast cancer with positive reactions for ER, PR and HER-2/neu were used as positive controls. Negative controls were tumour sections that were pretreated in Tris but without primary antibody immunostaining. All slides were reviewed independently by three of the authors, and cases with discordant scores were re-evaluated to reach a consensus score. 
Grading of nuclear ER and PR staining was performed using an immunoreactive H-scoring system {none = 0 (negative); 1–25% = 1+ (weak); 26–50% = 2+ (moderate); >50% = 3+ (strong)}11. HER-2/neu membrane staining was graded according to the HercepTest protocol as 0, 1+, 2+ or 3+. Samples scored as 0 or 1+ were considered negative for HER-2/neu overexpression, 2+ was weakly positive and 3+ was strongly positive22,23. Photomicrographs of the specimens were taken using an Olympus digital camera (DP 21) at 400X magnification (Figure 1).\n\nThe data obtained were subjected to statistical analysis using Statistical Package for Social Sciences (SPSS) version 20, which was used to evaluate associations between TNEOC and clinicopathological parameters, i.e. age, FIGO stage, grade, and histological subtype. Continuous variables were compared using Student’s t-test and categorical variables were compared using the chi-square test, with the level of significance set at p < 0.05.\n\n\nResults\n\nThirty-eight (42.2%) of the 90 epithelial ovarian cancers (EOC) were negative for ER, PR and HER-2/neu expression (Figure 1). There was no significant association between triple-negative EOC and age (p = 0.218), FIGO stage (p = 0.425) or histological grade (p = 0.269). There were more TNEOC cases seen in patients older than 40 years than in those below 40 years of age. Of the 38 cases of TNEOC, 21 (55.3%) were found in the early stage (FIGO stage I and II) of epithelial ovarian cancer and 17 (44.7%) were at the advanced stage.\n\nHowever, sixteen (66.7%) of the 24 mucinous carcinomas were triple-negative, while only 21 (33.3%) of the 63 serous carcinomas were triple-negative and one (50%) of the two endometrioid carcinomas was triple-negative (Table 1). 
There was therefore a significant association between triple-negative tumours and histological subtypes of EOC (p = 0.034).\n\n\nDiscussion\n\nA subgroup of epithelial ovarian cancer that is negative for ER, PR and HER-2/neu expression has been identified among indigenous African women. This subgroup is known as triple negative epithelial ovarian cancer (TNEOC). Based on ER, PR and HER-2/neu expression, an analogous breast cancer subtype, known as triple negative breast cancer (TNBC), has been identified24.\n\nIn our study, triple negative tumours accounted for 42.2% of EOC. This value contrasts with the results of other studies17,25 and is comparable with the result of a previous study26. A high percentage (66.7%) of mucinous carcinomas were negative for ER, PR and HER-2/neu, and this association was statistically significant (p = 0.034). This finding contrasts with previous studies, in which there was no significant association between TNEOC and histological subtype17,25,26. Similarly, no significant association was found between TNEOC and histological grade, unlike the observations of Liu et al.17 and de Toledo et al.26, in which TNEOC was significantly correlated with histological grade. There was also no significant association between TNEOC and age or FIGO stage, in contrast to the findings of other studies17,25,26.\n\nOur findings are comparable with those of Huo et al., whose survey of population differences in breast cancer found an over-representation of triple-negative tumours (27%)19. In view of the fact that triple-negative breast cancers are more often seen in black Africans and African-Americans and are associated with a poorer prognosis than non-triple-negative breast cancers, further studies of TNEOC in different environments are required.\n\n\nConclusions\n\nA subtype of epithelial ovarian cancer that is negative for ER, PR and HER-2/neu has been identified in Nigeria. 
The proportion of TNEOC is high and comparable to that of the triple negative breast cancer subtype seen in people of African ancestry. Future studies of TNEOC with larger sample sizes should be considered.\n\n\nData availability\n\nF1000Research: Dataset 1. Raw data for ‘Pattern of triple negative epithelial ovarian cancer in indigenous African women’, 10.5256/f1000research.9632.d13677727",
"appendix": "Author contributions\n\n\n\nMAA conceived and designed the study. MAA, AAS and AOO carried out the research. MAA and AOO prepared the first draft of the manuscript. All authors contributed to the experimental design and preparation of the manuscript. All authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nSpecial thanks to Mr S Ajagboye and Mr SP Otegbade of the Department of Pathology, University of Ibadan and University College Hospital respectively for the technical assistance they rendered with the slides used for this study and Mr Abayomi Odetunde of Institute for Advanced Medical Research and Training, College of Medicine, University of Ibadan for carrying out the immunohistochemical staining on the cases.\n\n\nReferences\n\nHunn J, Rodriguez GC: Ovarian cancer: etiology, risk factors, and epidemiology. Clin Obstet Gynaecol. 2012; 55(1): 3–23. PubMed Abstract | Publisher Full Text\n\nJemal A, Bray F, Center MM, et al.: Global cancer statistics. CA Cancer J Clin. 2011; 61(2): 69–90. PubMed Abstract | Publisher Full Text\n\nBarrena Medel NI, Wright JD, Herzog TJ: Targeted therapies in epithelial ovarian cancer. J Oncol. 2010; 2010: 314326. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSwamy GG, Satyanarayana N: Clinicopathological analysis of ovarian tumors--a study on five years samples. Nepal Med Coll J. 2010; 12(4): 221–223. PubMed Abstract\n\nDinh P, Harnett P, Piccart-Gebhart MJ, et al.: New therapies for ovarian cancer: cytotoxics and molecularly targeted agents. Crit Rev Oncol Hematol. 2008; 67(2): 103–12. PubMed Abstract | Publisher Full Text\n\nYap TA, Carden CP, Kaye SB: Beyond chemotherapy: targeted therapies in ovarian cancer. Nat Rev Cancer. 2009; 14(3): 167–81. 
PubMed Abstract | Publisher Full Text\n\nScambia G, Ferrandina G, D’Agostino G, et al.: Oestrogen and progesterone receptors in ovarian carcinoma. Endocrine Rel Cancer. 1998; 5: 293–301. Publisher Full Text\n\nde Toledo MC, Barreta A: The Role of Steroid Receptors and HER2 in Ovarian Cancer. J Carcinog Mutagen. 2014; 5: 1. Publisher Full Text\n\nTuefferd M, Couturier J, Penault-Llorca F, et al.: HER2 status in ovarian carcinomas: a multicenter GINECO study of 320 patients. PLoS One. 2007; 2(11): e1138. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFathalla MF: Incessant ovulation--a factor in ovarian neoplasia? Lancet. 1971; 2(7716): 163. PubMed Abstract | Publisher Full Text\n\nAyadi L, Chaabouni S, Khabir A, et al.: Correlation between immunohistochemical biomarkers expression and prognosis of ovarian carcinomas in Tunisian patients. World J Oncol. 2010; 1(3): 118–128. Publisher Full Text\n\nLee P, Rosen DG, Zhu C, et al.: Expression of progesterone receptor is a favorable prognostic marker in ovarian cancer. Gynaecol Oncol. 2005; 96(3): 671–677. PubMed Abstract | Publisher Full Text\n\nZhao D, Zhang F, Zhang W, et al.: Prognostic role of hormone receptors in ovarian cancer: a systematic review and meta-analysis. Int J Gynecol Cancer. 2013; 23(1): 25–33. PubMed Abstract | Publisher Full Text\n\nHellström I, Goodman G, Pullman J, et al.: Overexpression of HER-2 in ovarian carcinomas. Cancer Res. 2001; 61(6): 2420–2423. PubMed Abstract\n\nCirisano FD, Karlan BY: The role of the HER-2/neu oncogene in gynecologic cancers. J Soc Gynecol Investig. 1996; 3(3): 99–105. PubMed Abstract\n\nYu D, Wolf JK, Scanlon M, et al.: Enhanced c-erbB-2/neu expression in human ovarian cancer cells correlates with more severe malignancy that can be suppressed by E1A. Cancer Res. 1993; 53(4): 891–898. PubMed Abstract\n\nLiu N, Wang X, Sheng X: The clinicopathological characteristics of ‘triple-negative’ epithelial ovarian cancer. J Clin Pathol. 2010; 63(3): 240–243. 
PubMed Abstract | Publisher Full Text\n\nChacón RD, Costanzo MV: Triple-negative breast cancer. Breast Cancer Res. 2010; 12(Suppl 2): S3. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHuo D, Ikpatt F, Khramtsov A, et al.: Population differences in breast cancer: survey in indigenous African women reveals over-representation of triple-negative breast cancer. J Clin Oncol. 2009; 27(27): 4515–4521. PubMed Abstract | Publisher Full Text | Free Full Text\n\nShimizu Y, Kamoi S, Amada S, et al.: Toward the development of a universal grading system for ovarian epithelial carcinoma. I. Prognostic significance of histopathologic features--problems involved in the architectural grading system. Gynecol Oncol. 1998; 70(1): 2–12. PubMed Abstract | Publisher Full Text\n\nTavassoli FA, Devilee P: Pathology and Genetics. Tumours of the Breast and Female Genital Organs. Lyon, IARC Press. 2003; 114. Reference Source\n\nAjani MA, Salami A, Awolude OA, et al.: The expression status of human epidermal growth factor receptor 2 in epithelial ovarian cancer in Ibadan, Nigeria. South Afr J Gynaecol Oncol. 2016; 8(1): 9–13. Publisher Full Text\n\nWolff AC, Hammond ME, Schwartz JN, et al.: American Society of Clinical Oncology/College of American Pathologists guideline recommendations for human epidermal growth factor receptor 2 testing in breast cancer. J Clin Oncol. 2007; 25(1): 118–145. PubMed Abstract | Publisher Full Text\n\nReis-Filho JS, Tutt AN: Triple negative tumours: a critical review. Histopathology. 2008; 52(1): 108–118. PubMed Abstract | Publisher Full Text\n\nDemir L, Yigit S, Sadullahoglu C, et al.: Hormone Receptor, HER2/NEU and EGFR expression in ovarian carcinoma--is there a prognostic phenotype? Asian Pac J Cancer Prev. 2014; 15(22): 9739–9745. 
PubMed Abstract | Publisher Full Text\n\nde Toledo MC, Sariana LO, Salluma LF, et al.: Analysis of the contribution of immunologically-detectable HER2, steroid receptors and of the “triple-negative” tumor status to disease-free and overall survival of women with epithelial ovarian cancer. Acta Histochemica. 2014; 116(3): 440–447. PubMed Abstract | Publisher Full Text\n\nAjani MA, Salami AA, Awolude OA, et al.: Dataset 1 in: Pattern of triple negative epithelial ovarian cancer in indigenous African women. F1000Research. 2016. Data Source"
}
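The subtype association reported in the Results above can be sketched in outline. This is an illustrative plain-Python computation, not the authors' SPSS procedure; the counts are those quoted in the paper's text, the critical value 5.991 (df = 2, α = 0.05) is the standard chi-square threshold, and the exact p-value will differ from the reported 0.034 depending on how the software handles the small expected counts in the endometrioid row:

```python
# Counts quoted in the Results above: triple-negative (TN) vs. non-TN by
# histological subtype. The three subtypes tabulated cover 89 of the 90
# tumours; the remaining case is not broken down in the text.
table = {
    "mucinous":     (16, 8),    # 16 of 24 TN
    "serous":       (21, 42),   # 21 of 63 TN
    "endometrioid": (1, 1),     # 1 of 2 TN
}

rows = list(table.values())
row_totals = [tn + non for tn, non in rows]
col_totals = [sum(r[j] for r in rows) for j in range(2)]
n = sum(row_totals)

# Pearson chi-square statistic: sum of (observed - expected)^2 / expected.
chi2 = 0.0
for row, rt in zip(rows, row_totals):
    for j, observed in enumerate(row):
        expected = rt * col_totals[j] / n
        chi2 += (observed - expected) ** 2 / expected

df = (len(rows) - 1) * (2 - 1)
# A chi2 above the 5.991 critical value (df = 2, alpha = 0.05) indicates a
# significant subtype/TN association, in line with the paper's conclusion.
print(f"chi2 = {chi2:.2f}, df = {df}, significant = {chi2 > 5.991}")
```

On these counts the statistic clears the 0.05 threshold, consistent with the paper's finding of a significant association between triple-negative status and histological subtype.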
|
[
{
"id": "16923",
"date": "11 Oct 2016",
"name": "Akinwumi Komolafe",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nI have gone through the whole paper and these are my comments:\nIt is a well-conceived and well-conducted research paper, with the aims, objectives and rationale for the study clearly spelled out.\n\nIt addresses the peculiarities of the African race, which is fundamental to adequate care.\n\nThe authors are very clear and consistent in the message they aimed to convey, and have communicated it effectively.\n\nAll parts of the research work fulfil the guidelines for sound scientific work.\n\nThe discussion addresses crucial details.\n\nThe conclusion is an excellent summation of the authors' scientific work.",
"responses": []
},
{
"id": "16697",
"date": "13 Oct 2016",
"name": "Saad Aliyu Ahmed",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThis is an interesting manuscript; it is well conceived and addresses the objective of the research.\n\nThe methodology is simple and reproducible.\n\nThe results were clearly outlined and the discussion was adequate.\n\nThe conclusion is apt.\n\nThis research finding will add to the limited literature on TNEOC in the population studied. It will serve as a baseline for future reference and, interestingly, the research has prognostic and therapeutic implications.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2415
|
https://f1000research.com/articles/5-2375/v1
|
26 Sep 16
|
{
"type": "Data Note",
"title": "Lies, irony, and contradiction — an annotation of semantic conflict in the movie \"Forrest Gump\"",
"authors": [
"Michael Hanke",
"Pierre Ibe",
"Pierre Ibe"
],
"abstract": "Here we extend the information on the structure of the core stimulus of the studyforrest project (http://studyforrest.org) with a description of semantic conflict in the “Forrest Gump” movie. Three observers annotated the movie independently regarding episodes portraying lies, irony or sarcasm. We present frequency statistics and inter-observer reliability measures that qualify and quantify semantic conflict in the stimulus. While the number of identified events is limited, this annotation nevertheless enriches the knowledge about the complex high-level structure of this stimulus, and can help to evaluate its utility for future studies and the usability of the existing brain imaging data regarding this aspect of cognition.",
"keywords": [
"semantic",
"cognition",
"brain imaging",
"frequency statistics"
],
"content": "Introduction\n\nDetection of semantic conflict is an important cognitive skill for human social interaction. It is required to identify lies (false statements made with the intention to deceive) but also to correctly interpret stylistic devices — such as sarcasm and irony (statements whose direct meaning is the opposite1 of, or contrary2 to, the implied semantic content). As the interpretation of such events is highly context-dependent, it is difficult to study how the brain processes them in the context of real-life-like interactions in complex natural environments.\n\nIn this study we explored occurrences of semantic conflict in the core stimulus of the studyforrest project (http://studyforrest.org) — the motion picture “Forrest Gump” — in order to evaluate whether the available brain imaging data3,4 can be readily used to study this aspect of cognition. We annotated the presence of contradictory statements, including lies and ironic statements, as well as the portrayal of cues, such as exaggeration or raised eyebrows, that are often associated with making ironic statements. Additionally, we recorded the context that allowed observers to classify an event as contradictory.\n\nDepending on the exact criterion used for identifying events across observers, we found only between 36 and 64 occurrences of semantic conflict or portrayal of irony cues in the entire movie stimulus. These are likely insufficient numbers for an investigation based on these data alone. However, these new annotations nevertheless contribute to a more comprehensive description of this complex movie stimulus5,6 and may be useful as confound variables in subsequent studies.\n\n\nMaterials and methods\n\nThe annotated stimulus was a slightly shortened (≈2 h) version of the movie Forrest Gump (R. Zemeckis, Paramount Pictures, 1994), with a dubbed German soundtrack, and is identical to the audio-visual movie annotated in 5,6. 
Further details on this particular movie cut, and how to reproduce it from commercially available sources, are available in 4.\n\nThree observers (all female, age 19–20) independently annotated the movie. They were also involved in the development of the concept for this annotation.\n\nObservers were instructed to watch the movie from beginning to end, replaying scenes as often as required, and to detect two types of events: 1) whenever a verbal statement is made that contradicts either the immediate context or the viewer’s body of knowledge at this point in the movie, or 2) whenever one or more cues associated with irony (predefined list, see below) are portrayed. In either case, observers had to describe the event by specifying its properties via a number of variable settings in a spreadsheet. The software video player VLC (http://www.videolan.org/vlc) was used to watch and navigate through the movie.\n\n\nData legend\n\nFor each annotated event, a total of 10 properties were recorded, each of which is described in the following sections.\n\nStart and end The duration of each event is recorded in start and end as the number of seconds from movie start (no sub-second precision, due to limitations of the video player time display). The time-points correspond to the onset and offset of the respective evidence. Both times can be identical in the case of events with less than one second duration. For contradictory statements, the duration covers the time from the onset of evidence of a contradiction until the end of the statement.\n\nSender and receiver The identity of a character making a contradictory statement or portraying an irony cue is encoded in sender using character labels listed in 5. In the case that the respective statement is directed at another movie character who is present, its identity is encoded in receiver.\n\nEvidence of a contradiction The contradiction flag indicates the presence of a contradiction in an event (1: present, 0: absent). 
The variable proof qualifies whether the current or previous events provide the viewer with information that allows the detection of this contradiction (see Table 1). If proof is empty, the movie itself does not contain such information (e.g. a common-sense contradiction).\n\nIrony cues The variable cues contains a space-separated list of labels for all irony cues present in a particular event. See Table 1 for a description of all possible labels.\n\nEvent category The category variable classifies events into lies, ironic statements, and other events (value empty).\n\nIntention Two more variables encode whether a contradiction was used deliberately and whether this was noticed by the receiver. The variable intended encodes the presence of evidence for deliberate use (1: yes, 0: no). The variable is empty if there is no evidence for either case. The second variable intention_decoded encodes, in the same way, whether a potential receiver noticed a deliberate ironic statement or lie.\n\nThe released annotations are three text-based, comma-separated-value (CSV) formatted tables (data/o??.csv), one for each observer.\n\nThe source code for all descriptive statistics included in this paper is available in code/descriptive_stats.py (Python script).\n\n\nDataset validation\n\nWe used an automated procedure to check the annotation records of individual observers for errors or potential problems. Observers submitted their annotations in tabular form to a script that generated a list of error and warning messages. Using this feedback, observers double-checked their annotations as often as necessary until no objective errors were found and all warning messages were confirmed to be false positives. The tests included, for example, the plausibility of timing information (no end time before the respective start time) and checks for unknown condition labels.\n\nIn order to assess inter-observer agreement of annotations, we used a two-step approach. 
First, the temporal locations of events depicting any relevant property were determined by comparing annotation timing across observers. The columns in Table 2 report agreement statistics for events defined by at least one, two, or all three observers recording an annotation for the same sender at the same time. In the case that individual observers reported events of different length, or with only partially overlapping duration, only the time-windows with the minimum number of observers reporting an event were considered.\n\nNumber of events and categorization agreement are presented for three levels of inter-observer agreement on the temporal location and the performing movie character. The number of events for any particular event property is determined by majority vote across observers, i.e. an event is counted when more observers indicate the presence of a property than its absence. Exhaustive technical detail on the statistical analysis can be found in the descriptive_stats.py Python script.\n\nIn the second step, we computed Fleiss’ Kappa7 for each individual property of an annotation separately with respect to being consistently assigned or non-assigned to the identified events (Table 2). We observe increasing inter-observer agreement of all annotated properties with increasing agreement of annotation timing, approaching “substantial” or “almost perfect” agreement — according to the conventions put forth by 8.\n\n\nData and software availability\n\nF1000Research: Dataset 1. Events of semantic conflict and occurrences of irony cues in the motion picture \"Forrest Gump\", 10.5256/f1000research.9635.d1362039\n\nF1000Research: Dataset 2. Python script, 10.5256/f1000research.9635.d13620410\n\nIn addition, the released data, code, and manuscript sources are available on Github (https://github.com/psychoinformatics-studyforrest-paper-ironyannotation).",
"appendix": "Author contributions\n\n\n\nMH contributed to the design of the annotation effort, performed the dataset validation, and wrote the paper; PI contributed to the design, coordinated the annotation effort, and wrote the paper. Both authors agreed to the final content of the paper.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nMichael Hanke was supported by funds from the German federal state of Saxony-Anhalt and the European Regional Development Fund (ERDF), Project: Center for Behavioral Brain Sciences.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe are grateful to Denise Naumann, Marisela Markarian, Jasmin Billowie, and Susann Bergmann for their contributions to the design and the execution of the annotation effort. We also appreciate Alex Waite for his seemingly unending willingness to edit papers.\n\n\nReferences\n\nAttardo S, Eisterhold J, Hay J, et al.: Multimodal markers of irony and sarcasm. Humor. 2003; 16: 243–260.\n\nHancock JT: Verbal irony use in face-to-face and computer-mediated conversations. Journal of Language and Social Psychology. 2004; 23(4): 447–463.\n\nHanke M, Baumgartner FJ, Ibe P, et al.: A high-resolution 7-Tesla fMRI dataset from complex natural stimulation with an audio movie. Sci Data. 2014; 1: 140003.\n\nHanke M, Adelhöfer N, Kottke D, et al.: Simultaneous fMRI and eye gaze recordings during prolonged natural stimulation – a studyforrest extension. Scientific Data. 2016.\n\nLabs A, Reich T, Schulenburg H, et al.: Portrayed emotions in the movie \"Forrest Gump\". F1000Res. 2015; 4: 92.\n\nHäusler CO, Hanke M: An annotation of cuts, depicted locations, and temporal progression in the motion picture \"Forrest Gump\" [version 1; referees: 1 approved]. F1000Research. 2016; 5: 2273.\n\nFleiss JL: Measuring nominal scale agreement among many raters. Psychol Bull. 1971; 76(5): 378–382.\n\nLandis JR, Koch GG: The measurement of observer agreement for categorical data. Biometrics. 1977; 33(1): 159–174.\n\nHanke M, Ibe P: Dataset 1 in: Lies, irony, and contradiction — an annotation of semantic conflict in the movie \"Forrest Gump\". F1000Research. 2016.\n\nHanke M, Ibe P: Dataset 2 in: Lies, irony, and contradiction — an annotation of semantic conflict in the movie \"Forrest Gump\". F1000Research. 2016."
}
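The majority-vote event counting and per-property Fleiss' Kappa described in the record above can be sketched as follows. This is a minimal illustration with hypothetical observer counts, not the authors' descriptive_stats.py; the `ratings` table (per-event counts of observers marking a property present vs. absent) is an assumed input format.

```python
# Minimal sketch of the agreement statistics described above, assuming a
# hypothetical N x 2 table `ratings`: for each identified event, how many of
# the n observers marked a property as present vs. absent.

def count_events(ratings):
    # Majority vote: an event counts when more observers indicate the
    # presence of a property than its absence.
    return sum(1 for present, absent in ratings if present > absent)

def fleiss_kappa(ratings):
    # Fleiss' Kappa (Fleiss, 1971) for a fixed number of raters per item.
    n_items = len(ratings)
    n_raters = sum(ratings[0])
    total = n_items * n_raters
    n_cats = len(ratings[0])
    # Overall proportion of assignments per category.
    p = [sum(row[j] for row in ratings) / total for j in range(n_cats)]
    # Per-item observed agreement.
    P = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
         for row in ratings]
    P_bar = sum(P) / n_items          # mean observed agreement
    P_e = sum(pj * pj for pj in p)    # chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Three observers, four events; present/absent counts per event.
ratings = [[3, 0], [0, 3], [3, 0], [1, 2]]
print(count_events(ratings))            # → 2
print(round(fleiss_kappa(ratings), 3))  # → 0.657
```

Note that the kappa is undefined when every assignment falls into a single category (chance agreement reaches 1, making the denominator zero), which cannot occur for the mixed present/absent tallies reported in Table 2.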
|
[
{
"id": "16597",
"date": "21 Oct 2016",
"name": "Dylan Wagner",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nIn this data note the authors provide a new annotation of the motion picture Forrest Gump focusing specifically on moments of semantic conflict in the movie. This annotation is designed to further enrich prior annotations of the movie and the associated functional neuroimaging dataset of participants viewing Forrest Gump. The authors do a fine job of describing exactly how semantic conflict events were segmented and provide details on the inter-rater reliability of these annotations. Moreover, providing the source code for calculating descriptive statistics and general quality assurance is much appreciated.\nI have only minor recommendations:\nPerhaps it is beyond the scope of a data note, but it may help the reader if the authors could expatiate on the various annotation categories. Specifically, the introduction could be expanded to say a few words as to why semantic conflict is interesting, why these particular dimensions were the ones chosen and what exactly each means in lay terms.\n\nSecond, the authors suggest in their introduction that there are insufficient semantic conflict events in Forrest Gump to be truly useful. Although I appreciate the candor, it's my opinion that we should first see what creative uses people can make of this annotation and the associated imaging dataset before we get too sullen!",
"responses": []
},
{
"id": "17893",
"date": "29 Nov 2016",
"name": "Stéphanie Caillies",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nIn this data note, the authors analyzed and reported semantic conflicts in the movie “Forrest Gump”. This data collection has been well conducted and is part of a larger project named the studyforrest project.\nAn explanation in a few lines of what the goal of the studyforrest project is and of the rationale behind this data note would be welcome. The authors mentioned that this data collection was conducted “in order to evaluate whether the available brain imaging data can be readily used to study this aspect of cognition” but it is quite hard to follow this sentence without information regarding the goal of the studyforrest project and without reading the recent publications of Hanke and Ibe. As a more minor suggestion, we think it could be useful to add some references justifying the cues used in this dataset (see Attardo et al., 2003).",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2375
|
https://f1000research.com/articles/5-2355/v1
|
22 Sep 16
|
{
"type": "Review",
"title": "Group B Streptococcus vaccine development: present status and future considerations, with emphasis on perspectives for low and middle income countries",
"authors": [
"Miwako Kobayashi",
"Johan Vekemans",
"Carol J. Baker",
"Adam J. Ratner",
"Kirsty Le Doare",
"Stephanie J. Schrag",
"Johan Vekemans",
"Carol J. Baker",
"Adam J. Ratner",
"Kirsty Le Doare",
"Stephanie J. Schrag"
],
"abstract": "Globally, group B Streptococcus (GBS) remains the leading cause of sepsis and meningitis in young infants, with its greatest burden in the first 90 days of life. Intrapartum antibiotic prophylaxis (IAP) for women at risk of transmitting GBS to their newborns has been effective in reducing, but not eliminating, the young infant GBS disease burden in many high income countries. However, identification of women at risk and administration of IAP is very difficult in many low and middle income country (LMIC) settings, and is not possible for home deliveries. Immunization of pregnant women with a GBS vaccine represents an alternate pathway to protecting newborns from GBS disease, through the transplacental antibody transfer to the fetus in utero. This approach to prevent GBS disease in young infants is currently under development, and is approaching late stage clinical evaluation. This manuscript includes a review of the natural history of the disease, global disease burden estimates, diagnosis and existing control options in different settings, the biological rationale for a vaccine including previous supportive studies, analysis of current candidates in development, possible correlates of protection and current status of immunogenicity assays. Future potential vaccine development pathways to licensure and use in LMICs, trial design and implementation options are discussed, with the objective to provide a basis for reflection, rather than recommendations.",
"keywords": [
"Group B Streptococcus",
"neonatal sepsis",
"maternal vaccination",
"vaccine development",
"low- and middle-income countries",
"phase III trial"
],
"content": "Introduction\n\nStreptococcus agalactiae is also known as Lancefield’s group B Streptococcus (GBS), and is a Gram-positive diplococcus, originally known for causing bovine mastitis1. GBS remains the leading cause of neonatal sepsis and meningitis, and is associated with significant mortality and morbidity, including long-term neurodevelopmental sequelae2. Disease risk is the highest during the first 3 months of life3, the primary target for GBS disease control efforts, but risk of invasive GBS disease increases again later in life, in particular among pregnant women and adults with underlying conditions or older age1.\n\nNeonatal infections (sepsis and pneumonia) contribute importantly to deaths among children under 5 years of age globally, with the highest rates in low income countries, followed by middle income countries4. The etiologies of neonatal infections in low income countries are poorly characterized but GBS likely contributes to this burden. A recent systematic review showed that neonatal GBS disease incidence and case fatality rates are highest among countries in sub-Saharan Africa. However, published data from this region remain sparse and the estimated numbers are still considered underestimates3. In high-income countries, GBS emerged as a leading cause of neonatal infection in the 1970s for reasons that remain poorly understood. Many resource-rich settings have experienced significant reductions in the incidence of early-onset disease (onset of disease during days 0–6 of life) after introduction of targeted administration of intrapartum intravenous antibiotics to women at risk of transmitting GBS to their newborns5,6. However, this intrapartum prophylaxis has not proven to be effective in preventing late-onset disease (disease onset during days 7–89 of life), and is not implemented in most high disease burden low- and middle-income countries (LMIC). 
Therefore, there has been a longstanding interest in developing a maternal vaccine against GBS to prevent disease in infants of vaccinated mothers.\n\nAmong various vaccine candidates, the glycoconjugate vaccines targeting GBS capsular polysaccharide (CPS) have been most studied, although common protein vaccines hold the appeal of broader coverage against circulating disease-causing strains. GBS vaccine development underwent an active phase in the 1990s. Although pre-clinical and early clinical studies showed promise, efforts slowed for a period, for a variety of reasons, including the strong success of intrapartum prophylaxis in reducing the early-onset disease burden in high income countries, and concerns about acceptance of, and liability coverage for, maternal immunization. Recent years have seen a wave of new activity in GBS vaccine development. Successes in rolling out pneumococcal conjugate, rotavirus, and Haemophilus influenzae type b vaccines to the world’s poorest countries through the GAVI alliance paved the way for future LMIC vaccine introductions. Finally, there is renewed interest in invigorating the maternal immunization platform, and several licensed products such as tetanus, influenza and pertussis vaccines are recommended for use among pregnant women in LMIC.\n\nThis review provides necessary background for non-GBS subject matter experts on issues of relevance to accelerating development of a GBS vaccine for LMIC. It draws almost exclusively on published literature or public information but alludes to some key activities of relevance for which publications are anticipated in the near future. First we provide an overview of GBS disease and the global burden with a focus on GBS disease in infants (days 0–90), the primary prevention target for a maternal immunization program. This is followed by a summary of GBS diagnostics, and a review of intrapartum antibiotic prophylaxis (IAP), standards of care, strategies and impact. 
The next three sections provide relevant background in GBS vaccine development including a brief review of GBS virulence factors, the history of GBS vaccine development, and a review of safety and immunogenicity of current vaccine candidates, primarily from phase II studies of a trivalent glycoconjugate vaccine formulation. This section also reviews issues related to measuring serologic endpoints and the current status of establishing immune correlates of protection. The final three sections address cost-effectiveness analysis and other potential contributions of mathematical modeling to GBS vaccine decision-making; options regarding the planning and conduct of a phase III efficacy study; and different possible vaccine development pathways. We conclude with a high level summary of key gaps in knowledge.\n\n\nDiseases and sequelae caused by GBS and population at risk\n\nGiven the purpose of this document, information used in the next two sections (“Diseases and sequelae caused by GBS and population at risk” and “GBS disease burden and serotype distribution”) is primarily from LMIC, supplemented with data from high-income countries whenever information from LMIC was not available.\n\nDefinition. Although definitions for early-onset neonatal disease vary, the most common include onset of GBS disease within 72 hours of birth or days 0–6 of life7. See section on ‘Considerations for licensure based on immune markers’ for candidate definitions for a phase III trial.\n\nTransmission. Early-onset disease is caused by vertical transmission through colonized mothers during or just before birth8. GBS can ascend from the vagina to the amniotic fluid after onset of labor or rupture of membranes9, although intrauterine infection without evidence of ruptured membranes has been reported10,11. GBS in the amniotic fluid can colonize the fetal skin or mucous membranes or can be aspirated into the fetal lungs, leading to an invasive infection12,13. 
Infants can also be exposed to GBS during passage through the birth canal and can become colonized at mucous membrane sites in the gastrointestinal or respiratory tracts. It has been estimated that in the absence of any intervention, approximately 50% of babies born to colonized mothers become colonized and 1–2% of them progress to develop invasive disease14–16.\n\nRisk factors. Risk factors for early-onset disease have been well described in resource-rich settings. A review of risk factors as established in United States studies showed that the strongest risk factor for neonatal disease was a positive maternal vaginal culture at delivery (Odds Ratio [OR]: 204)17. Other risk factors include prolonged rupture of membranes, preterm delivery, GBS bacteriuria during pregnancy, birth of a previous infant with invasive GBS disease, maternal chorioamnionitis as evidenced by intrapartum fever, young maternal age, and low levels of antibody to type-specific capsular polysaccharide antigens18–21. Although few risk factor analyses have been conducted in LMIC, epidemiologic characteristics of case series from these settings22–25, as well as a risk factor analysis of early-onset neonatal sepsis in South Africa26, suggest that the same risk factors play an important role in LMIC. Additionally, human immunodeficiency virus (HIV) infection in mothers has been shown to increase the risk of neonatal GBS disease. Recent studies from South Africa reported that HIV-infected women have lower GBS antibody concentrations and reduced transplacental antibody transfer compared to HIV-uninfected women27,28, and infants born to HIV infected mothers had lower anti-GBS surface binding antibody levels28. However, maternal HIV infection appears to be more of a risk factor for late-onset disease than for early-onset disease.\n\nDisease onset and clinical presentation. 
Most cases occur within the first 24 to 48 hours23,25,29–31, and more than half of early-onset disease occurs in term (≥37 weeks gestation) infants: studies from sub-Saharan Africa have reported the proportion of preterm infants among infants with early-onset disease to range from 10 to 46%22,24,25. Respiratory distress has been described as one of the most common presenting symptoms, ranging from 68% (South Africa, single hospital retrospective review, 1997–1999)25 to 83% (South Africa, surveillance at three secondary-tertiary care hospitals, 2012–2014)24. Early-onset disease most frequently manifests with bacteremia, and less frequently with meningitis. A study in Malawi demonstrated that about half (52%) of children with early-onset disease presented with sepsis (defined as GBS isolated from blood with no clinical evidence of pneumonia), 17% with meningitis, and 14% with probable meningitis (GBS isolated from blood and cerebrospinal fluid [CSF] findings consistent with meningitis)22. In the US, infants with early-onset GBS disease present primarily with bacteremia (80%), meningitis (6%), or pneumonia (7%)32. A recent study from Uganda suggests that neonatal GBS infection may be an under-recognized cause of cerebral hypoxic encephalopathy33, although more data are needed to confirm the nature of the link between the two.\n\nDisease outcomes. Case fatality ratios reported in hospital-based studies from sub-Saharan Africa have ranged from 20% to 38%22,24,25, with higher case fatality among preterm infants24. A multi-country observational study conducted in Panama, Dominican Republic, and Hong Kong showed that the average number of days hospitalized ranged from 6 (Dominican Republic) to 15 (Hong Kong), with case fatality ratios ranging from 10% (Hong Kong) to 33% (Dominican Republic)34. 
Case fatality ratios in resource-rich settings are notably lower (e.g., United States: 4–6%), yet a study from the United States reported a nearly eight times higher risk of death among preterm cases compared to term cases35.\n\nDefinition. Late-onset infections occur among infants aged 7–89 days of life9,36. In some instances the period from day 3–89 is considered7.\n\nTransmission. As with early-onset disease, development of late-onset GBS disease first requires adhesion of GBS to mucosal surfaces, followed by invasion across epithelial cells to gain entry to the bloodstream. Vertical transmission from colonized mothers can result in late-onset disease, although it is considered to play a less important role compared to early-onset disease37, and IAP has not impacted the late-onset disease burden in countries that provide IAP38. Nosocomial transmission, horizontal transmission from mother to infant after the perinatal period, and transmission from breast milk have also been described39–42, although it is unclear whether these are common routes of transmission38.\n\nRisk factors. Risk factors for late-onset disease are less understood than those for early-onset disease, and prevention strategies for late-onset infections have not yet been identified. Some of the identified risk factors are similar to those of early-onset disease, such as preterm delivery and maternal GBS colonization43,44. More recent studies have shown that preterm delivery may be a major factor for late-onset disease, with each week of decreasing gestation associated with an increased risk of late-onset disease44,45. 
Another prospective cohort study from Italy also showed that preterm infants had an increased risk for late-onset disease46.\n\nAs mentioned above, HIV exposure may be a greater risk for development of late-onset disease compared to early-onset disease: one study from South Africa reported that, compared to HIV-unexposed infants, the risk ratio of GBS disease incidence in HIV-exposed infants was 1.7 (95% CI: 1.3–2.2) for early-onset disease vs. 3.2 (95% CI: 2.3–4.4) for late-onset disease47. Another South African study reported that the incidence of early-onset disease was similar between HIV-exposed and unexposed infants (1.1 vs. 1.5; p=0.5) but there was a 4.7-fold greater risk (95% CI: 2.3 vs. 0.5; p<0.001) for late-onset disease24. Similar results were reported from a study conducted in Belgium48.\n\nDisease onset and clinical presentation. Studies reported different proportions of preterm infants (<37 weeks) among late-onset cases: 49% in the United States38, 25% in South Africa24, and 14% in Malawi22, suggesting this proportion may be lower in LMIC than in high-income countries. A study from Italy showed that term infants develop disease earlier (median 23 days, interquartile range [IQR] 15–42) compared to preterm infants (median 39 days, IQR 28–58)46.\n\nThe proportion of infants with late-onset disease presenting with meningitis is higher compared to infants with early-onset disease, and data from sub-Saharan Africa reported that meningitis is one of the leading clinical presentations for late-onset disease (33–59%)22,24,25. Data from the United States show that about 26% of infants with late-onset disease presented with meningitis, while 67% had bacteremia without a focus of infection38.\n\nDisease outcomes. Because of the higher proportion of meningitis cases among infants with late-onset disease, risk of long-term neurologic sequelae may be higher among survivors of late-onset disease compared with infants surviving early-onset disease49. 
A study from South Africa showed that GBS-affected infants were >13 times more likely to have neurological sequelae at 6 months of age compared to controls, defined as abnormal Denver-II assessments (in the following domains: gross motor, fine motor, language and personal/social) or presence of hypertonia or hyper-reflexia24. Results from the United Kingdom showed that 22% of survivors of neonatal meningitis (≤28 days of life) had mild to moderate sequelae (e.g., isolated hydrocephalus, isolated epilepsy, mild learning problems, mild cerebral palsy), and 14% had severe sequelae (e.g., cerebral palsy, global delay, significant learning problems) at 9–10 years of age50. Another multi-center study from the United States described similar percentages of neurologic sequelae among GBS meningitis survivors: 25% with mild-to-moderate impairment, and 19% with severe impairment at a mean age of 7 (range 3–12) years2.\n\nReported case-fatality ratios are lower compared to those of early-onset disease3,22,24,25,32. In the systematic review by Edmond et al., the pooled result of all studies reporting case fatality for early-onset disease was 12.1% (95% CI 6.2–18.3) and was 6.8% (95% CI 10.8–14.9) for late-onset disease. A more recent systematic review conducted by Sinha et al. reported that health facility-based studies from Malawi and South Africa reported case fatality ratios ranging from 20–38% for early-onset disease and 14–29% for late-onset disease (meta-analysis was not done due to heterogeneity in numerator and denominator)51.\n\nPreterm delivery. GBS colonization during pregnancy has been associated with preterm delivery52,53, although the association is less clear than the association between colonization and early-onset disease. 
A systematic review which included 20 studies from 10 different countries summarized results by study design: results from cross-sectional studies conducted at the time of delivery had a pooled OR of 1.75 (95% CI 1.43–2.14) for preterm delivery between GBS colonized mothers and non-colonized mothers, and 1.59 (95% CI 1.03–2.44) for case-control studies that matched mothers with preterm delivery with mothers with the same gestational age, but not in labor. Whether colonization causes preterm delivery is still a matter of debate. A systematic review of cohort studies evaluating the odds of preterm delivery according to colonization status during pregnancy was inconclusive (pooled OR: 1.06; 95% CI 0.95–1.19)54.\n\nStillbirth. GBS has also been associated with spontaneous abortions and stillbirths. A retrospective study conducted in Australia which reviewed causes of spontaneous abortions (between 16 and 26 weeks gestation) among those with autopsy and microbiological cultures available showed that GBS was the most significant pathogen, often being the sole pathogen recovered, and found both in babies born to women with intact as well as ruptured membranes55. A study using United States population-based surveillance data showed that 24% of invasive GBS infections during pregnancy resulted in septic abortions and/or stillbirths, a higher proportion than observed for pregnancy-associated invasive infections with Streptococcus pneumoniae (8%) or group A Streptococcus (6%)56. Invasive GBS infections are however infrequent among pregnant women, whereas GBS colonization is much more common. Estimating the burden of GBS-related stillbirths is challenging, even in high income countries.\n\nGBS can cause urinary tract infection, chorioamnionitis, endometritis, and bacteremia in women49. During pregnancy and shortly afterwards, women are at higher risk of developing invasive GBS disease compared to non-pregnant women of the same age group56. 
Data on pregnancy- and postpartum-associated GBS disease are limited, even in resource-rich settings, and we are not aware of data from LMIC. Data from United States population-based surveillance showed that GBS bacteremia without focus was the most common presentation both during pregnancy (43%) and the postpartum period (32%), followed by chorioamnionitis (33%) in pregnant women, and endometritis (25%) in the postpartum period56. Pneumonia and puerperal sepsis have also been reported35. Unlike influenza, invasive GBS infection during pregnancy or the postpartum period was not associated with a longer hospital stay, an indicator of disease severity, or increased mortality risk, compared to non-pregnant women56. Most (81%) of these pregnancy-associated invasive infections occurred in the absence of additional underlying conditions35.\n\nVaginal GBS colonization is considered to be a risk for maternal chorioamnionitis and postpartum endometritis57,58. Some studies have suggested that GBS bacteriuria during pregnancy, possibly an indicator for heavy colonization59,60, may be associated with an increased risk for adverse obstetric outcomes, such as habitual abortion, intrauterine growth restriction, preterm labor, chorioamnionitis and premature rupture of membranes14,61. However, other studies have shown that asymptomatic GBS bacteriuria during pregnancy correlates poorly with GBS genital cultures at 35–37 weeks62 or at delivery63. Both GBS vaginal colonization and bacteriuria are commonly asymptomatic among pregnant women.\n\nAn increasing incidence of invasive GBS disease has been reported among non-pregnant adults in recent years, primarily from high-income countries where surveillance for invasive GBS disease among all ages has been conducted35,64–67. Results from population surveillance from the United States showed a doubling of invasive GBS cases among non-pregnant adults (≥18 years) between 1990 and 200767. 
The only multi-province surveillance for invasive GBS disease among non-pregnant adults in LMIC that we are aware of comes from Thailand, where GBS was the leading pathogen in an invasive bacterial diseases surveillance system68.\n\nSkin and soft-tissue infections are among the most frequent clinical presentations in adults, although clinical syndromes associated with invasive GBS infections are variable, including bacteremia, pneumonia, bone and joint infections, urosepsis, endocarditis, meningitis, and intravenous catheter infections69. A study from South Africa showed that soft tissue abscesses and pneumonia accounted for 70% of the presentations and reported an overall 35% mortality among all identified GBS cases70. Another study from Malaysia showed that skin and soft-tissue infections accounted for >70% of all GBS infections71. The majority of disease occurs in people with significant underlying conditions, particularly diabetes mellitus. The proportion of subjects with diabetes reported in population-based surveillance for invasive disease ranged from 20% in Canada72 to ≥40% among non-pregnant adults aged ≥15 years in the United States35,67. Estimates of the prevalence of diabetes among adults with invasive GBS infections from other countries come primarily from single institution studies, and vary from 28% in Soweto, South Africa to 71% in Malaysia64,70,71,73,74. Other conditions associated with increased risk of invasive GBS disease among non-pregnant adults include atherosclerotic cardiovascular disease, obesity, cancer, heart failure, and renal disease67. 
Age is also a risk factor; data from the United States, Europe and Thailand have shown that incidence rates for invasive GBS are highest among adults aged ≥65 years65–68.\n\n\nGBS disease burden and serotype distribution\n\nQuantifying the burden of neonatal GBS disease remains a challenge even in high-income countries: clinical characteristics are non-specific and often difficult to differentiate from non-infectious causes75. Invasive infections are most commonly diagnosed based on isolation of GBS from a normally sterile site (e.g., blood, cerebrospinal fluid) in microbiological culture; however, sensitivity of blood culture varies depending on the bacterial load, blood collection, and culture method, and typically requires 36 to 48 hours for positive results to become available75. Estimating GBS disease burden in LMIC is even more difficult: a portion of births may occur outside of hospital settings; facility-born infants may be discharged quickly after birth; care seeking, particularly early in life, may be limited; access to care, particularly in rural areas may pose challenges; and health facilities may lack access to diagnostic tests or laboratory capacity or resources to diagnose GBS infection. As a result, particularly for early-onset disease, most of which occurs within the first 24–48 hours of life, GBS disease is likely underrepresented in studies from these settings76. Finally, incidence of neonatal GBS varies regionally3,14. IAP use should be considered in making regional comparisons, as IAP, an intervention known to reduce the risk of early-onset disease, is widespread in many resource-rich settings but rarely implemented in LMIC3.\n\nAs a result of these challenges, and the relative paucity of invasive disease data from LMIC, some researchers have focused on GBS colonization as a surrogate measure for neonatal disease. 
However, different studies in resource-rich settings have reported similar and high maternal colonization prevalence but different neonatal disease incidence3,14, suggesting that the relationship between maternal colonization and newborn disease is not simple.\n\nEstimating the invasive GBS disease burden in pregnant women is difficult due to the paucity of data from LMIC, and the common clinical practice of empiric treatment in the absence of a definitive diagnosis for postpartum infections. Estimating the burden of GBS-related stillbirths poses challenges even in high-income countries. First, there is inconsistency in the definition used for stillbirth. For international comparison, the World Health Organization (WHO) defines stillbirth as a baby born with no signs of life at or after 28 weeks gestation77. However, various definitions of stillbirth have been used, making comparisons difficult between countries, or even within the same country78–80. In addition, stillbirths are not reported in national surveillance systems in about 90 countries80,81, and even where stillbirth is included in vital reporting systems, the causes of stillbirth are generally not recorded82. Diagnostic procedures may not be conducted or may not be available, and even if performed, there may be difficulties in producing valid results, as pathologic changes could have occurred before the time of investigation. Because GBS is a common colonizing organism of the birth canal, distinguishing the presence of GBS (for example in amniotic fluid or placenta or even fetal tissue) due to colonization from a direct role of GBS in fetal death adds further challenge in identifying the cause of death82.\n\nInfants aged <90 days. 
Neonatal sepsis (in infants aged <1 month) is one of the leading causes of neonatal deaths globally83, and among the WHO regions, the burden is the highest in the African region, where it was estimated to cause 5.3 deaths per 1,000 live births in 201284.\n\nA group at the London School of Hygiene and Tropical Medicine is updating the global GBS disease burden estimates, focusing on neonatal and maternal disease; results are expected in 2017. The most recent systematic review and meta-analysis currently available of invasive GBS disease in infants aged <90 days was published in 2012. This review showed that incidence of GBS disease was the highest in Africa (1.21 per 1,000 live births, 95% CI 0.50–1.91), followed by the Americas (0.67 per 1,000 births, 95% CI 0.54–0.80), and lowest in southeast Asia (0.02 per 1,000 live births, 95% CI -0.03 to 0.07)3 (Table 1). While IAP is common in the Americas, it is rarely used in Africa or southeast Asia. Incidence of early-onset disease and late-onset disease was also the highest in Africa (0.53 and 0.24 per 1,000 live births, respectively). However, only four studies were available for incidence estimates in Africa (Kenya, Malawi, Nigeria, South Africa). A more recent systematic review based on additional studies from sub-Saharan Africa reported a somewhat higher estimated incidence of neonatal disease: 1.3 cases per 1,000 births for early-onset disease (Kenya, Malawi, Mozambique, Nigeria, South Africa, Zimbabwe) and 0.73 per 1,000 births for late-onset disease (Kenya, Malawi, Mozambique, Nigeria, South Africa, Zimbabwe)51, although the authors believe this is still an underestimation of the actual incidence given the challenges in collecting data in these countries. 
A recent study of early-onset sepsis in Soweto, South Africa that used both blood culture and a real-time polymerase chain reaction test for GBS on whole blood estimated an incidence of early-onset GBS disease of 1.8 per 1,000 live births, higher than the estimate of 1.3 per 1,000 live births based on blood culture detection alone (Sithembiso Velaphi, SANISA study, in preparation). This difference underscores that invasive disease estimates from blood culture are minimum estimates unless they take blood culture sensitivity into account.\n\nThe low reported incidence of neonatal GBS disease from South Asia poses a puzzle: is this an accurate reflection of the disease burden, or a reflection of under-ascertainment due to the challenges of capturing specimens from ill newborns, particularly on day 0 of life, in this region? The Aetiology of Newborn Infections in South Asia (ANISA)85 study attempted to fill this gap by conducting population-based surveillance and etiologic evaluation of possible serious infections among newborns <60 days of age in the community in catchment areas in Bangladesh, Pakistan and India86. The study faced some anticipated challenges in registering babies on day 0 of life (88% of all live births were enrolled in the surveillance and 74% of those enrolled were visited within 24 hours) and in capturing specimens from babies who died, particularly on day 0 of life. The study nevertheless included samples from over 970 babies with possible serious infection onset on day 0 of life. At all study sites, presence of GBS was assessed by blood culture, and by PCR on whole blood and on nasopharyngeal/oropharyngeal samples from both ill babies and healthy controls. Colonization was detected, providing evidence that GBS is present in this region, consistent with other studies in Bangladesh and India87. 
However, culture-confirmed infections were rare among babies with possible serious infection in the ANISA study; culture-confirmed GBS infections were identified at the Sylhet site (Bangladesh) and the Vellore site (India). Fuller results are being prepared for publication.\n\nEdmond and colleagues reviewed available data on global serotype distribution. Serotype III accounted for almost half of all isolates, followed by serotypes Ia, Ib, II, and V, and this trend was similar across all WHO regions. Five serotypes (Ia, Ib, II, III, V) accounted for more than 85% of serotypes in all regions with available data: 98% in Africa, 96% in the Americas, 93% in Europe, 89% in the Western Pacific, and 88% in the Eastern Mediterranean3 (Figure 1). However, serotype studies from low-income countries and Southeast Asia were not identified in this review (Table 2). Although not limited to neonates, a recent report from Vaccine Preventable Infections Surveillance conducted in Thailand sheds some light on the serotype distribution of GBS disease in the Southeast Asian region: among children aged <5 years with invasive GBS disease, serotype III was the most frequently isolated (approximately 50%), followed by Ia and Ib (approximately 13% each)88. In Edmond’s global review, the proportion of serotype III isolates was larger for late-onset than for early-onset disease (53% vs. 37%)3 (Figure 1). A more recent review from sub-Saharan Africa showed similar results: the five serotypes (Ia, Ib, II, III and V) accounted for 97% of early-onset disease and 98% of late-onset disease, and the proportion of serotype III was higher in late-onset cases (79%) than in early-onset cases (54%)51.\n\nThis figure has been reproduced with permission from Elsevier3. n indicates the number of articles included in the review for each region and disease category.\n\nMaternal colonization. 
Ascertainment of maternal genital GBS colonization varies according to the specimens collected (e.g., vaginal sampling only vs. rectovaginal sampling), the culture medium, and specimen transport and processing procedures and timing. In general, rectovaginal sampling has a higher yield than vaginal sampling only89,90, and selective broth media perform better than nonselective blood agar89. A review estimated the prevalence of genital colonization in pregnant women to be around 13% globally, although the included studies used various laboratory methods91. When restricted to studies considered to have used adequate methods (collection site including the vagina and use of selective broth media), the estimated overall prevalence was 18%, with regional variation: 12% in India/Pakistan, 19% in Asia/Pacific, 19% in sub-Saharan Africa, 22% in the Middle East/North Africa, and 14% in the Americas (no data from Europe were included)91. Regional variation was also reported in a multi-country cross-sectional study among pregnant women between 20 and 32 weeks gestation that used a standardized laboratory method (specimens were collected from the cervix, lower vaginal wall, and urine, and selective enrichment broth was used): overall colonization prevalence was 11%, ranging from 8% in Manila, the Philippines, to 22% in Philadelphia, United States, which may reflect geographic differences in disease burden92.\n\nGBS colonization is known to fluctuate during pregnancy, and a recent longitudinal study in South Africa reported that acquisition rates and the duration of colonization differ according to GBS serotype93. Serotype distribution in colonized mothers may not correlate directly with serotypes causing invasive neonatal disease, as invasiveness appears to differ by GBS serotype94. However, colonization data may provide some insight into circulating GBS serotypes in regions where data from invasive disease are limited, especially Southeast Asia. 
A recent systematic review from sub-Saharan Africa showed that serotype III (>30%), Ia and V (both >20%) were the most frequently isolated51. Another study, conducted on the Thai-Myanmar border, showed that serotype II was the most frequently isolated serotype (24%), followed by Ia, VI, III, and V95. Results from a multi-country study showed that overall, serotype III (17.8%) was the most frequently isolated serotype, followed by serotypes V (17%) and Ia (5%); however, serotype III was not isolated at two of the sites (Philadelphia, United States and Yangon, Myanmar), whereas serotype VII was the most frequently isolated serotype in Khon Kaen, Thailand, the only site that reported this serotype92. The GBS global serotype distribution appears more diverse than previously reported; a recent review of maternal GBS colonization showed significant heterogeneity across and within regions96. Additionally, the modeling team led by the London School of Hygiene and Tropical Medicine is including a comprehensive review of maternal GBS colonization, risk of neonatal disease, neonatal disease incidence, and impairment outcomes as part of their update of GBS disease burden estimates. They will also review data on GBS-related stillbirth and GBS-related preterm birth, and on the association of GBS disease with neonatal encephalopathy.\n\nPregnancy-associated GBS disease and stillbirths. As described above, few data are available on the incidence of invasive GBS disease among pregnant and postpartum women in low- and middle-income countries. 
A study from the United States showed that the incidence of invasive GBS disease was 0.04 (95% CI 0.03–0.05) per 1,000 women-years for pregnant women, 0.49 (95% CI 0.36–0.64) per 1,000 women-years for postpartum women, and 0.02 (95% CI 0.02–0.02) per 1,000 women-years for non-pregnant women aged 15–44 years56.\n\nA recent systematic review evaluated the incidence of GBS-related stillbirth (defined as stillbirth at ≥20 weeks gestation most likely caused by GBS infection, as confirmed by a GBS-positive culture sample from the placenta and/or amniotic fluid and/or a normally sterile site)97. GBS-related stillbirth rates ranged from 0.04–0.9 per 1,000 births, with the highest rate reported from a small study in the United States11, and the proportion of stillbirths attributable to GBS infection ranged from 0–12%97. However, the review was limited by inconsistencies in stillbirth definitions and diagnostic methods, and by the small number of studies meeting the inclusion criteria, particularly from low- and middle-income countries, limiting assessment of the burden of GBS-related stillbirth worldwide. Stillbirth data from Kenya were recently published98, and data from South Africa are currently being evaluated and are expected to be available soon. These were both prospective studies that attempted to capture meaningful specimens from stillbirths for diagnostics and that applied similar, although not identical, case definitions for a GBS-related stillbirth. In South Africa, preliminary estimates suggest GBS-related stillbirth incidence may be similar to that of early-onset GBS disease (personal communication, Dr. Shabir Madhi).\n\nDisease in non-pregnant adults. Annual incidence rates have been reported primarily from North America and Europe, ranging from 1.5 per 100,000 population (Spain, ages 21–100 years, 1992–1999)73 to 7.3 per 100,000 population (United States, ages 18–105 years, 2007)67, and rates tend to be higher with increasing age35,66,67. 
A population-based surveillance of invasive bacterial infections conducted in Thailand between 2010 and 2013 showed that the average annual incidence of invasive GBS disease was highest among those aged ≥70 years (23 per 100,000 population)68, similar to the incidence reported among adults aged ≥65 years in 2005 in the United States (25.3 per 100,000 population)35. As seen when evaluating GBS disease incidence in other age groups or GBS colonization, geographic variation has been noted in the serotype distribution: reports from North America show that serotype V is the most frequently isolated serotype in adult GBS disease, representing approximately 30% of isolates as opposed to 11% for serotype III67,72, whereas reports from Europe show that serotype V (approximately 20%) was less frequent than serotype III (25–30%)66,99. A surveillance report from Thailand shows that among adults aged 21 years or older, serotypes Ia, II, III, V and VI represented >90% of cases, with serotype III being the most common (approximately 48%)88.\n\n\nDiagnosis and treatment of GBS disease\n\nThis section summarizes clinical and laboratory methods commonly used for diagnosing GBS disease. Case definitions used in published studies, and candidate case definitions for phase III studies, are summarized and discussed separately.\n\nNewborns/Young infants\n\nClinical evaluation of sick children. The Integrated Management of Childhood Illness (IMCI) strategy was developed jointly by WHO and the United Nations International Children’s Fund (UNICEF) to promote the accurate identification and appropriate treatment of common childhood illnesses at first-level health facilities in low-income countries, where health workers rely on patients’ history and on signs and symptoms to determine a course of management. 
Infants <2 months of age are assessed for signs of very severe disease: not feeding well, convulsions, fast breathing [≥60 breaths/min], severe chest indrawing, fever [≥37.5°C], low body temperature [<35.5°C], movement only when stimulated or no movement at all. These clinical syndromes, which warrant urgent referral of young infants to hospitals, are defined as possible serious bacterial infection (PSBI)100. Rates of PSBI among newborns and young infants in LMIC can be very high (80 cases/1,000 live births or higher)101. Even in the upper-middle-income setting of South Africa, hospital admission for physician-suspected early-onset sepsis occurred at a rate of approximately 30 cases/1,000 live births15. WHO guidance recommends that newborns presenting with signs of PSBI should be admitted to hospital, and blood cultures and lumbar punctures should be obtained whenever possible before starting antibiotics102. Additionally, the United States Centers for Disease Control and Prevention (CDC) GBS prevention guidelines recommend a blood culture at birth for infants born to mothers with chorioamnionitis, even if the infant is well-appearing9.\n\nIMCI has a separate set of algorithms for children aged 2 through 59 months. Children are first assessed for general danger signs (unable to drink or breastfeed, vomits everything, lethargic or unconscious, has had convulsions or is actively convulsing). If stiff neck or general danger signs are present in a child with fever, administration of antibiotics and urgent referral are recommended103.\n\nLaboratory detection of invasive disease. Confirmation of invasive GBS disease requires isolation of GBS from a normally sterile site (e.g., blood, CSF), usually by culture. Automated blood culturing systems have improved the practice of blood culture: these systems detect microbial growth by monitoring microbial CO2 production, eliminating the need for manual inspection or examination. 
In addition, growth of aerobes and facultative anaerobes is promoted by agitating the culture bottles104. However, a recent review reported that many studies from LMIC used manual culture methods, with lower GBS incidence rates compared to studies using automated culture methods105. Therefore, differences in the culture methodology used can result in variation in reported GBS disease burden.\n\nRates of culture-confirmed infection are typically an order of magnitude or more lower than rates of clinical sepsis, although the culture positivity rate varies according to the criteria for collecting cultures and how samples were collected. The likelihood of pathogen isolation increases with the quantity of blood submitted for culture, and for neonates, at least 0.5 to 1 ml of blood is recommended7,106. Because of the small blood volumes that can be obtained from newborns and young infants, it is also important to use pediatric rather than adult blood culture bottles. In practice, many cultures contain inadequate amounts of blood107,108. The yield of blood culture also varies with organism density in the blood. A study in infants 0–2 months of age showed that about half of the cultures positive for GBS had a very low organism density (≤1.0 cfu/ml)109. Based on an estimate from a study comparing the yield of pathogens from blood cultures using blood samples of various volumes and bacterial loads, the sensitivity of blood culture to detect low-level GBS bacteremia (1.0 cfu/ml) could vary from 44% (0.5 ml collected) to 98% (2 ml)110.\n\nCSF analysis by lumbar puncture is the gold standard for diagnosing meningitis. Up to 23% of neonates with bacteremia are considered to have concomitant meningitis, and up to 38% of those with meningitis have a negative blood culture7. Therefore, children suspected of having meningitis should undergo lumbar puncture to assess the CSF whenever possible. Laboratory methods to identify GBS are summarized in Table 3. 
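The volume-sensitivity relationship described above can be approximated with a simple Poisson model of organism capture. This is a sketch under strong assumptions (organisms randomly dispersed in blood, and any organism drawn into the bottle yields growth), so it produces figures in the same range as, but not identical to, the published 44-98% estimates:

```python
import math

def detection_probability(density_cfu_per_ml: float, volume_ml: float) -> float:
    """Probability that at least one viable organism is in the sampled volume.

    Assumes a Poisson-distributed organism count with mean density * volume,
    and that any captured organism produces growth -- an idealization, so
    these are indicative numbers, not the published blood culture sensitivities.
    """
    return 1.0 - math.exp(-density_cfu_per_ml * volume_ml)

# Low-level bacteremia (1.0 cfu/ml) at the volumes discussed in the text:
for vol in (0.5, 1.0, 2.0):
    p = detection_probability(1.0, vol)
    print(f"{vol} ml -> {p:.0%}")  # 39%, 63%, 86%
```

The model reproduces the qualitative point: with 0.5 ml of blood at 1.0 cfu/ml, more than half of true bacteremias go undetected, which is why culture-based incidence estimates are minimums unless adjusted for sensitivity.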
Cloudy CSF, elevated CSF leukocyte counts, low CSF glucose (e.g., <1.5 mmol/litre or a ratio of CSF to serum glucose of ≤0.4), elevated CSF protein (e.g., >0.4 g/litre), and positive Gram stain results indicate the presence of meningitis, and treatment should be started immediately while awaiting culture results102.\n\nCAMP: Christie, Atkins, Munch-Petersen; GBS: group B Streptococcus; MALDI-TOF MS: Matrix-Assisted Laser Desorption Ionization-Time of Flight Mass Spectrometry; NAAT: nucleic acid amplification test\n\nOnce bacterial isolates suggestive of GBS are identified, various laboratory methods including culture-based methods and high-sensitivity latex agglutination tests can be used for GBS identification (see Table 3). More recently, nucleic acid amplification tests (NAAT) (e.g., polymerase chain reaction) have allowed direct GBS identification from clinical samples. Some studies have used NAAT in addition to culture in order to improve case detection.\n\nNeonatal colonization. It is estimated that about half of neonates exposed to GBS by their colonized mothers become colonized with GBS, and only a small proportion of those develop invasive disease. Neonatal GBS colonization results from exposure to and swallowing of GBS-infected amniotic fluid or maternal vaginal secretions. External auditory canal cultures are more likely to yield GBS in the first 24 hours of life than cultures from other sites, and isolation of organisms from the ear canal is a surrogate for the degree of contamination by amniotic fluid and vaginal secretions sequestered during the birth process. 
After the first 48 hours of life, throat and rectal sites are the best sources for detection of GBS, and positive cultures indicate true colonization (i.e., multiplication of organisms at mucous membrane sites), not just maternal exposure111.\n\nPregnant and postpartum women\n\nMaternal colonization. Maternal colonization can be assessed by collecting swabs from the vagina and the rectum of pregnant women9. Swabs are inoculated into a selective broth medium and subcultured onto an agar plate for bacterial isolation. If enriched selective broth media are used, GBS can be identified faster (see Table 3).\n\nChorioamnionitis. The diagnosis and reporting of chorioamnionitis vary widely. Some consider histopathologic diagnosis the gold standard112. Amniotic fluid sampling and culture can be used in the diagnosis of chorioamnionitis; however, the diagnosis is often made clinically due to challenges in accessing uncontaminated amniotic fluid or placenta for culture113,114. Culture of the fluid may be conducted, but may have limited clinical utility due to potential colonization of the amniotic fluid and the time it takes to obtain results114. In addition, the infectious etiology is often polymicrobial115. Fever in a pregnant woman is the most important clinical sign of chorioamnionitis. Other key clinical findings associated with clinical chorioamnionitis include uterine fundal tenderness, maternal tachycardia (>100/min), fetal tachycardia (>160/min), and purulent or foul amniotic fluid113. The WHO reference material lists fever (≥38.0°C), foul-smelling watery discharge after 22 weeks gestation, and abdominal pain as the symptoms typically present among pregnant women with chorioamnionitis116.\n\nEndometritis. The diagnosis of endometritis is also often made clinically, and the infection is often polymicrobial117. 
Clinically, endometritis presents as fever, uterine tenderness, abdominal pain, and purulent lochia or a positive culture of endometrial fluid or tissue118. Positive blood cultures may help identify the bacterial etiology, as bacteremia may be present in up to 20% of women118. In the WHO clinical guidance, fever after childbirth with lower abdominal pain, purulent, foul-smelling lochia, and a tender uterus are described as the signs and symptoms typically present in women with endometritis116.\n\nStillbirths. Identifying infection as a cause of stillbirth is challenging: it is often difficult to determine the cause of stillbirth, and isolation of an organism from the placenta or the surface of the fetus does not prove causality119. Pregnant women may be colonized with GBS, which can contaminate the fetus or the placenta after membrane rupture or through vaginal contamination during delivery119. In a recently published systematic review of 17 studies of GBS-related stillbirths, diagnosis was made based on a range of laboratory methods: culture confirmation from the placenta (eight studies), blood/CSF (12 studies), amniotic fluid (two studies), and internal organs (eight studies)97. Careful placental histologic examination and autopsy are considered more useful in identifying the cause of stillbirths119,120, and culture of fetal heart blood or of fluid from uncontaminated fetal sites during autopsy may help identify the infectious cause119.\n\nSerological methods. Serological classification of GBS is based on the identification of capsular polysaccharides and protein antigens121. Capsular polysaccharide is currently the most advanced glycoconjugate vaccine target, and ten serotypes have been described to date (Ia, Ib, II–IX). Several serological methods have been used for serotyping (e.g., Lancefield capillary precipitin method, double immunodiffusion122, coagglutination123, enzyme immunoassay124, latex agglutination125). 
The Lancefield capillary precipitin method is considered the “gold standard”1,122. One of the most common methods for capsular polysaccharide serologic typing is the latex agglutination method, using antibodies specific for the 10 recognized capsular polysaccharides126. In a recent report of a multicenter external quality assessment of molecular and serological typing conducted in 14 institutions in 13 European countries, the commercially available latex agglutination method was the most widely used typing method, with a typeability value (number of accurate results/total number of tests performed) of >90%121. Limitations of serological methods include failure to type an isolate (~4–9% are classified as non-typeable) due to lack of or low expression of capsular polysaccharide under experimental conditions, the presence of reversible non-encapsulated variants, or, although rare, expression of a new capsular serotype1,126. In addition, results depend on the quality of the antibodies used and on the experience of the laboratory126.\n\nMolecular typing methods\n\nSerotyping. As an alternative to serological serotyping methods, molecular approaches based on typing of the capsular genes have been developed in recent years. Molecular methods include polymerase chain reaction (PCR) in conjunction with sequencing, hybridization, or enzymatic restriction cleavage pattern analysis, and multiplex-PCR approaches1,126–128. These molecular approaches are attractive because they make it possible to assign a molecular serotype to isolates that are otherwise nontypeable by serologic methods, and because they are reproducible, specific, easy to perform, and suited for capsular polysaccharide typing in large-scale epidemiological studies1. However, PCR serotyping could potentially misclassify certain serotypes126. Also, PCR serotyping does not reveal whether the capsular polysaccharide gene locus detected is actually expressed as a polysaccharide capsule126. 
Recently, Sheppard and colleagues used whole genome sequencing to determine serotype, with promising results129. Although the method may not currently be cost-effective merely for determining serotype129, the whole genome sequencing platform can be used to obtain genotyping data for the strains, as described below, as well as for in-depth analyses of strains within clonal complexes130.\n\nGenotyping. Molecular typing methods have been used for further characterization of GBS and are useful in distinguishing different GBS strains in epidemiological studies121. Examples of methods include restriction fragment length polymorphism (RFLP)131, pulsed-field gel electrophoresis (PFGE)132, multilocus sequence typing (MLST)133 and, more recently, DNA microarray-based typing134. Whole genome sequencing has also enabled the investigation of large- and small-scale genetic changes in comprehensive collections of GBS strains, thereby permitting enhanced understanding of the diversity of the organism135. See Box 1 for definitions of serotype, genotype, strain, and clonal complex.\n\n\n\nSerotype: type of antigenically variable polysaccharide capsule\n\nGenotype: the genetic makeup of an organism or a group of organisms with reference to a single trait, set of traits, or an entire complex of traits\n\nStrain: a single isolate of any bacterial population and any laboratory-induced variants thereof\n\nClonal complex: a group of bacterial strains derived from a recent common ancestor that share many alleles at various phylogenetically informative loci. A clonal complex generally includes the ancestral genotype and strains with minor variation\n\nNewborns/young infants. IMCI recommends hospitalization and intramuscular or intravenous antibiotic treatment of all infants meeting the case definition for PSBI136. The recommended antibiotic selection for management of “serious bacterial infection” and “meningitis” in infants aged <2 months is ampicillin and gentamicin (Table 4)102. 
To date, GBS remains universally susceptible to beta-lactam antibiotics, so penicillin and ampicillin remain effective therapeutic agents. The 2010 CDC guidelines recommend providing antibiotic therapy pending culture results for well-appearing newborns whose mothers had suspected chorioamnionitis9. The WHO recommends providing prophylactic intramuscular (IM) or intravenous (IV) ampicillin and gentamicin in neonates with documented risk factors for infection (see Table 4)102.\n\nPregnant and postpartum women. For treatment of chorioamnionitis, the WHO recommends ampicillin and once-daily gentamicin100. A combination of clindamycin and gentamicin is recommended as first-line treatment of postpartum endometritis. Use of intrapartum antibiotic prophylaxis (IAP) to prevent early-onset neonatal disease is described in further detail in the section ‘Prevention of perinatal GBS disease through intrapartum antibiotic prophylaxis’.\n\nAntimicrobial susceptibility. Globally, GBS resistance to penicillin G or ampicillin has not been reported. Thus, beta-lactams are considered first-line antibiotics for GBS infection and IAP. However, isolates with increased minimum inhibitory concentrations to these antibiotics due to mutations in penicillin-binding proteins have been reported, primarily from Japan and North America137–141. Macrolide- and/or clindamycin-resistant strains have been increasing. Limited invasive GBS antimicrobial susceptibility data are available from LMIC, largely reflecting the relative paucity of invasive neonatal GBS disease surveillance in a majority of LMIC. 
A systematic review of community-acquired neonatal and infant sepsis in developing countries (Nigeria [4 studies], Iraq [1], Bangladesh [2], India [4], Nepal [1], Tanzania [2], Malawi [1], Uganda [1], Philippines [1], Ethiopia [1], Mozambique [1]) showed that among GBS isolates from neonates, 100% were susceptible to penicillin, 60% (95% CI 25–91%) were susceptible to chloramphenicol, and 65% (95% CI 0–100%) to third-generation cephalosporins142.\n\n\nPrevention of perinatal GBS disease through intrapartum antibiotic prophylaxis\n\nIn the 1980s, clinical trials and a large observational study demonstrated that administration of intravenous ampicillin or penicillin during labor to mothers with certain risk factors for GBS transmission was highly effective (efficacy estimates of 80–100%) at preventing invasive early-onset GBS disease143–145. Effectiveness estimates, although often somewhat lower than estimates from trial settings because a portion of women receive less than the optimal prophylaxis duration (at least 4 hours of a beta-lactam agent before delivery) or non-beta-lactam agents, are consistent with trial findings146,147. Based on this evidence, penicillin or ampicillin are often the first-line agents recommended for prophylaxis, with cefazolin, and in narrow instances clindamycin or vancomycin, as options for penicillin-allergic women. WHO recommends intrapartum antibiotic administration (first choice penicillin G or ampicillin) to women with GBS colonization based on observed clinical benefits for neonates (see Table 4); however, the guideline development group acknowledged the challenges in implementing GBS screening and provision of IAP, especially in low-resource settings100. 
WHO recommendations and feasibility in LMIC are discussed further in the two sections below.\n\nBecause only a portion of women are at elevated risk of transmitting GBS to their infants, universal prophylaxis of all deliveries is not an optimal strategy, particularly since antibiotic exposure is associated with low but non-zero risks. The most immediate risk is maternal anaphylaxis to penicillin, estimated to occur in 4 per 10,000 to 4 per 100,000 recipients148. In resource-rich hospital settings, anaphylaxis-related mortality is exceedingly rare, but in low- and middle-income countries the risks of complications from anaphylaxis, even for hospital births, may be higher. While there is no risk of anaphylaxis in the newborn, due to the very low probability of previous antibiotic exposure and the lack of transfer of maternal IgE antibodies across the placenta, intrapartum antibiotics do affect the microbiome of the maternal birth canal and thus the microbiome acquired by the newborn, particularly for vaginal births. Some studies suggest microbiome alterations, particularly at the time of birth, may result in health impacts well past the newborn period, although these have not yet been substantiated and the risks have not been quantified149.\n\nTwo major strategies have been employed to limit the portion of women exposed to intrapartum prophylaxis to those at most risk of transmitting GBS9. The risk-based strategy identifies women for antibiotic prophylaxis based on the presence of known risk factors for early-onset disease, including maternal fever, prolonged rupture of membranes, preterm delivery, previous birth of an infant with invasive GBS disease, and detection of GBS bacteriuria during the current pregnancy. 
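The risk-based criteria listed above can be sketched as a simple decision rule. The thresholds used here (fever ≥38°C, membrane rupture ≥18 hours, delivery before 37 weeks) are commonly applied values assumed for illustration, and the function and field names are hypothetical, not part of any published algorithm:

```python
from dataclasses import dataclass

@dataclass
class LaborPresentation:
    """Illustrative fields for the risk factors named in the text."""
    temperature_c: float
    rupture_of_membranes_hours: float
    gestational_age_weeks: float
    prior_infant_with_gbs_disease: bool
    gbs_bacteriuria_this_pregnancy: bool

def risk_based_iap_indicated(p: LaborPresentation) -> bool:
    """Sketch of risk-based IAP criteria; thresholds vary by country."""
    return (
        p.temperature_c >= 38.0                # maternal fever
        or p.rupture_of_membranes_hours >= 18  # prolonged membrane rupture
        or p.gestational_age_weeks < 37        # preterm delivery
        or p.prior_infant_with_gbs_disease
        or p.gbs_bacteriuria_this_pregnancy
    )

# An afebrile term delivery with no other risk factors is not flagged:
print(risk_based_iap_indicated(LaborPresentation(37.0, 2, 39, False, False)))
```

A rule of this shape makes the strategy's weaknesses concrete: every input must be ascertained at presentation, and a factor such as membrane rupture duration can cross its threshold only after admission.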
In different countries employing the risk-based approach, variations may exist in the risk factors screened for, or in the thresholds used to identify risk, based either on local epidemiology or on efforts to narrow the portion of women targeted for prophylaxis. Maternal fever is most commonly defined as ≥38°C, and prolonged membrane rupture is often defined as ≥18 hours. In contrast, the culture-based screening strategy identifies women with vaginal/rectal GBS colonization late in pregnancy as the basis for the antibiotic prophylaxis indication. Women who present in labor without a culture result are managed according to the risk-based strategy. Variants of the culture-based screening strategy also exist across countries, but most recommend screening at 35–37 weeks gestation.\n\nBoth strategies have been documented to result in significant declines in invasive early-onset GBS disease, both in single hospitals and in population-based analyses, in a range of resource-rich settings5. A population-based comparison of the two strategies in the United States found that the culture-based screening strategy was over 50% more effective than the risk-based strategy, primarily due to the high proportion of GBS-positive women who received intrapartum prophylaxis and to the frequency of colonized women without any noted risk factors (18% of the delivering population in the United States)150.\n\nAlthough current WHO recommendations do not specify an approach for identifying women at risk, antibiotic prophylaxis (erythromycin) for women with preterm pre-labor rupture of membranes is recommended as part of a strategy to improve the prognosis of preterm infants (strong recommendation based on moderate-quality evidence)100. Antibiotic prophylaxis is not recommended for women in preterm labor with intact amniotic membranes, or for women with pre-labor rupture of membranes at term or near term (36 weeks gestation and above). 
The latter recommendation is based on a review of evidence from studies in women with ruptured membranes for less than 12 hours, and it is acknowledged that there may be a benefit from antibiotic prophylaxis in women with prolonged rupture of membranes (>18 hours)100,151.\n\nNeither of the above strategies was designed in the context of LMIC, and both pose implementation challenges, particularly in low-income country settings. In low-income countries, safe administration of intravenous antibiotics may not always be affordable or feasible, particularly in settings where births do not occur in hospitals. Even where intrapartum prophylaxis is feasible, identifying candidates for prophylaxis poses unique barriers. The risk-based strategy has the appeal that the key variables for action can be captured at the time a woman presents in labor. However, even in middle-income countries such as South Africa, capture of these variables may prove challenging in a busy labor and delivery setting. For example, in a study of over 8,000 deliveries at the main public hospital serving Soweto, South Africa, less than 1% of women were noted as having intrapartum fever, suggesting under-ascertainment15. Additionally, gestational age is not always known, and clear distinctions between term and preterm deliveries may not always be straightforward. Moreover, because risk factors such as prolonged membrane rupture may evolve over the course of labor, prophylaxis may not always be administered to women who develop risk factors after admission. In resource-rich settings such as the United States, a lower proportion of women with risk factors have been noted to receive prophylaxis compared to GBS-colonized women150; this may prove even more challenging in LMIC, where providers care for a higher patient load. 
Finally, more women in LMIC than in resource-rich settings may present to facilities at a late stage in labor, leaving insufficient time for effective prophylaxis.

While the risk-based strategy poses challenges, few LMIC are positioned to overcome the implementation and cost challenges associated with late antenatal screening. In particular, LMIC settings rarely have access to a high proportion of women at 35–37 weeks gestation, a strong microbiology laboratory network to process antenatal samples, and systems for effective communication of results to labor and delivery staff.

To date, possible alternatives to intrapartum prophylaxis have not proven effective. Chlorhexidine wipes of the birth canal during labor and of the newborn at birth were evaluated in a large clinical trial in South Africa, with no evidence of efficacy against culture-confirmed or clinical neonatal sepsis15. Universal administration of intramuscular penicillin to newborns within 1 hour of birth is implemented at one large center in the United States152–154; however, the lack of a concurrent control makes it difficult to interpret effectiveness or generalizability. This strategy also exposes all newborns to antibiotics. IM penicillin given intrapartum does not achieve high enough concentrations rapidly enough, and antenatal use of oral or IM antibiotics has not shown impact146,155–157.

Virulence factors of GBS

GBS disease typically progresses through bacterial colonization, penetration of placental or epithelial barriers, and immune evasion that prevents clearance of GBS from the bloodstream. In the case of meningitis, the ability to cross the endothelial blood-brain barrier is also needed158,159. GBS expresses a number of virulence factors, which play different roles in these steps (summarized in Table 5), but one of the most prominent and best-studied is the capsular polysaccharide (CPS), which protects the bacteria from opsonization and subsequent phagocytosis and intracellular killing160,161.
The type-specific CPS contains a terminal sialic acid, which is identical to a sugar epitope widely displayed on the surface of mammalian cells. This terminal sialic acid allows GBS to mimic host cell structures and avoid immune recognition111,162. To date, 10 different CPS types (Ia, Ib, and II–IX) have been identified161,163,164.

More recently, multilocus sequence typing (MLST) analysis has shown that sequence type (ST) 17 is associated with enhanced invasiveness in neonates independent of capsular serotype, although most ST17 isolates are CPS type III165. ST17 displays meningeal tropism and has been referred to as the hypervirulent clone166.

It has been hypothesized that the GBS isolates causing invasive GBS disease in neonates worldwide emerged from a few successful clonal lineages, and virulence factor identification to date has focused on elements common across these clones161. Factors under investigation include the C5a peptidase, the AlphaC-like surface protein family, the Sip protein, and pilus islands, all of which have different roles in the infection process161 and have been investigated as vaccine targets. Pili mediate GBS resistance to cationic antimicrobial peptides (AMPs), components of the host innate immune system that play a critical role in combating bacterial infections167, and also facilitate adherence and attachment of the pathogen to host mucosal cells. More recently, a surface-anchored adhesin called hypervirulent GBS adhesin (HvgA) was identified through comparative expression analysis between clones of different virulence. HvgA is considered to be a specific virulence factor of hypervirulent ST17168.
In a manner similar to that of pili, HvgA mediates both colonization and invasion in the intestine, which appears to be a prerequisite for meningitis in the neonatal mouse model161.

GBS vaccine development

Prevention of neonatal GBS disease has been the primary focus for GBS vaccine development. Most cases of early-onset neonatal and young infant disease occur within the first 24 hours after birth. Therefore, maternal immunization rather than direct vaccination of newborns is required to prevent neonatal and young infant disease. In animal models, passive immunization (e.g., transferring sera of animals exposed to GBS disease) and active immunization (e.g., the mouse maternal vaccination-neonatal pup challenge model) have been shown to protect against development of GBS disease (see below for details on animal models). In humans, transplacental transfer of protective maternal antibodies against GBS was first reported by Baker and colleagues169. Their study showed that mothers whose infants developed invasive GBS disease from serotype III had significantly lower serum IgG levels to CPS III than mothers whose infants were exposed to type III but did not develop disease. Subsequent studies reported similar findings with other GBS serotypes170,171, and the association of low maternal GBS CPS-specific IgG levels with the risk of GBS disease in their infants was further described19. Attempts have been made to identify a threshold that would confer protection against GBS disease for vaccine development.

Maternal IgG is transmitted transplacentally to the fetus, mediated by the neonatal Fc receptor (FcRn): maternal IgG is endocytosed within placental syncytiotrophoblast cells and bound to FcRn in the acidic environment of the endosome, then released to the fetal side of the syncytiotrophoblast when exposed to physiological pH172,173.
Results from phase I/II trials have shown that maternal immunization with glycoconjugate vaccines results in increased CPS-specific antibody levels in the infants, which persist for about 2 to 3 months174,175. Transfer of maternal IgG initiates at around 17 weeks gestation, although most of the transfer occurs after 33 weeks gestation176. Therefore, infants born before 34 weeks gestation are likely to have decreased amounts of IgG177.

A study by Baker et al. using data from the United States estimated that third-trimester maternal immunization could prevent approximately 46% of all late-onset GBS infections, given that 63.3% of infants who developed late-onset disease were born ≥35 weeks gestation, and of those born ≥35 weeks gestation, approximately 72% of infections occurred during the first 6 weeks of life176. This is based on the assumption that infants born ≥35 weeks gestation would have acquired sufficient concentrations of maternal antibodies to protect them from GBS disease for the first 6 weeks of life (translating to two half-lives of antibody decay). The optimal timing of maternal immunization that would maximize protection of young infants requires further investigation.

Results from a phase II randomized controlled trial have shown that the III-TT vaccine delayed the acquisition of vaginal and rectal GBS III (NCT00128219)178. Another study reported an association between increased serum CPS IgG levels and reduced homotypic GBS rectovaginal acquisition179. If the vaccine reduces maternal colonization, maternal vaccination could further reduce the risk of neonatal disease by reducing exposure to GBS in the first months of life.

Opsonization, followed by phagocytosis (ingestion of the invading microorganism) and intracellular killing, are the main mechanisms of host defense against GBS infection180.
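Returning briefly to the Baker et al. estimate above: the 46% figure is simply the product of the two proportions quoted, and the 6-week protection window corresponds to two half-lives of maternal IgG (i.e., an assumed half-life of roughly 3 weeks). A quick check, using only the numbers from the text:

```python
# Proportions quoted in the text (Baker et al., United States data).
frac_lod_born_ge35wk = 0.633  # late-onset cases born >= 35 weeks gestation
frac_first_6wk = 0.72         # of those, infections in first 6 weeks of life

# Fraction of all late-onset disease potentially preventable.
preventable = frac_lod_born_ge35wk * frac_first_6wk
print(round(preventable * 100))  # -> 46 (% of late-onset disease)

# "Two half-lives of antibody decay" over the 6-week window:
# fraction of the birth antibody level remaining at 6 weeks.
print(0.5 ** 2)  # -> 0.25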
Opsonization requires the deposition of specific antibody and complement on the bacterial surface; antibody and complement do not kill GBS in the absence of phagocytes181. Type III GBS CPS was shown to prevent activation of the alternative complement pathway, but this effect can be overcome by the presence of a sufficient amount of CPS antibody182,183.

Immaturity of the immune system makes neonates more susceptible to infections: neutrophils have a small storage pool at birth and are less responsive to chemoattractants than later in life. Neonatal monocytes, which mature into macrophages, are impaired in their capacity to kill intracellular GBS184. Newborns have an impaired ability to form antibodies in general, and are particularly deficient in their ability to mount antibody responses against polysaccharide antigens185. Altogether, their capacity for GBS CPS antigen-specific protection is determined largely by the placental transfer of maternal IgG antibodies186. Therefore, the goal of maternal immunization is to induce GBS-specific antibody levels in the mother sufficient to achieve antibody levels in the child that confer protection during the first 3 months of life.

GBS disease models. Pre-clinical studies using animal models are important to obtain sufficient data on safety, immunogenicity and potential efficacy of candidate vaccines before proceeding to clinical trials. A wide range of animal models has been used to study GBS-host interactions and to provide means to test potential therapies and vaccine approaches. A sampling, rather than a comprehensive review, is provided below.

Mice have been commonly used to model GBS infections. The earliest animal model studies of GBS infections date to the 1930s122. In later studies, intraperitoneal or intravenous models of GBS infection in adult or neonatal mice were developed to simulate human infections187,188.
In some cases, oral inoculation has been used as a means of inducing systemic infection in mice189. Notably, in both mice and rats, there appears to be an age-related decrease in susceptibility to invasive GBS infection190,191.

A large number of other animal model systems have been explored, including chicken embryos192, rabbits193, sheep194–196, piglets197, and non-human primates198,199. Some of these models (especially the large animal models) have been used to provide insights that are difficult or impossible to obtain in mice. The sheep194–196 and piglet197 models are of particular relevance for the study of hemodynamic changes in host animals during GBS sepsis. Non-human primate models of GBS infection have been used sparingly, but they are of particular utility in modeling newborn infections and host responses in vivo198,199.

Animal models of GBS colonization. Fewer studies have used animals to model asymptomatic GBS carriage, despite the importance of the carrier state for maintenance of GBS in the population and the role of maternal colonization as the major risk factor for neonatal disease. Most recent work in this area has used murine models of vaginal or gastrointestinal colonization.

Vaginal colonization models have allowed determination of specific bacterial or host factors involved in carriage in the absence of invasive disease200–208. Gastrointestinal GBS colonization has been modeled in gnotobiotic mice and used as a means to understand the role of surface proteins in GBS carriage209. Neonatal mice have also been used as a model for gastrointestinal carriage, especially as a prelude to invasive disease168. Oral colonization of infant rats has been used to examine the utility of antibiotics to decrease mucosal bacterial load210.

Animal models of ascending infection and/or perinatal GBS transmission.
To examine the role of ascending infection in adverse pregnancy outcomes (e.g., preterm delivery, stillbirths), animal models simulating human infections have been explored. Examples include intracervical GBS inoculation of rabbits211–213 and murine intravaginal/intrauterine/intraperitoneal inoculation214–216; catheterization and intraamniotic instillation of GBS have been used to model chorioamnionitis in non-human primates and to study its effects on fetal lung tissue217–220. However, initial attempts to create an animal model of ascending infection secondary to chronic vaginal colonization, which is a better simulation of human infection, were unsuccessful213. Recently, Randis and colleagues developed a model of GBS ascending infection during pregnancy secondary to vaginal colonization in pregnant mice. This model may shed light on the role of bacterial virulence factors such as beta-hemolysin/cytolysin in causing adverse pregnancy outcomes associated with maternal GBS colonization203.

Preclinical studies of GBS vaccines in animal models

a. Passive immunization. Animal models have been used to examine the effect of antibody delivery (passive immunization) on invasive GBS disease in vivo. The first studies used generation of antibodies in rabbits followed by passive protection of mice exposed to systemic GBS infection122,221. Subsequent studies used hyperimmune serum or purified antibody preparations to provide protection to neonatal experimental animals199.

b. Active immunization. Active immunization formulations used in experimental animals have included whole killed GBS, purified bacterial components (generally capsular polysaccharide or purified proteins), or conjugate vaccine preparations.
Most studies have used systemic vaccination (intramuscular, subcutaneous, or intraperitoneal), though some investigations of mucosal vaccine delivery (intranasal, oral, rectal, or intravaginal) have been reported222–227.

Most vaccine studies have used protection from invasive infection as their outcome, but reports of potential effects of vaccination on GBS colonization have also been presented228. The most frequently used model to evaluate the efficacy of GBS vaccines is the mouse maternal vaccination-neonatal pup challenge model, in which female mice are actively vaccinated and their offspring are challenged with GBS229,230. Maternal IgG is transferred to the pups in a manner similar to maternal antibody transfer in humans, and most pups of mothers vaccinated with a range of conjugate formulations have survived challenge231,232. This model continues to be used to test new vaccine targets233–235. However, GBS strains isolated from human infections may be highly adapted to their human host, and results obtained from mouse models must be interpreted with caution236. For example, human GBS isolates may express surface proteins that specifically interact with human hosts but not with other animals236–238. In addition, the shorter gestational period of mice (19–22 days) should be taken into account when determining the timing of vaccination and of passive protection in neonates239.

The structure and function of antibodies induced by vaccination and the kinetics of maternal antibody transfer to the fetus are most similar between humans and non-human primates. Baboon models have been used in preclinical GBS vaccine studies229,230,240. As in mouse models, these studies showed that GBS conjugate vaccines induced CPS-specific antibodies230,240, and there was a correlation between maternal and infant baboon serum antibody levels230. Differences have been noted in the kinetics of antibody responses and waning between humans and baboons241.

Polysaccharide vaccines.
GBS capsular polysaccharide (CPS) has been the primary target for vaccine development. In the 1930s, studies demonstrated that CPS-specific rabbit sera could be used to protect mice against lethal challenge with GBS242. The first purified type III CPS vaccine underwent phase I testing in healthy adults in 1978243, and subsequently type Ia and II CPS vaccines were tested. Type II CPS was found to be the most immunogenic, while types Ia and III showed an immune response in about half of the recipients244. Most adults (nearly 90%) had very low serum concentrations of CPS-specific antibodies before immunization, which was considered to indicate immunologic naivety to GBS polysaccharides and was a partial predictor of a poor immune response244,245. Favorable safety of CPS vaccines was shown on a small scale in non-pregnant adults and among pregnant women245,246, and infant antibody levels in cord serum correlated with maternal antibody levels at delivery246.

Glycoconjugate vaccines. Immunogenicity of polysaccharides is enhanced by covalent conjugation to a carrier protein. Glycoconjugate vaccines have been developed for Haemophilus influenzae type b (Hib), Neisseria meningitidis and Streptococcus pneumoniae. Unlike the T-cell-independent B-cell activation elicited by non-conjugated polysaccharide antigens, glycoconjugate vaccines have the potential to induce both B- and T-cell memory and to produce a stronger and highly functional IgG response through antibody class switching160.

The first GBS glycoconjugate vaccine trial conducted in humans involved a GBS III CPS-tetanus toxoid (III-TT) glycoconjugate160,247. Healthy non-pregnant women were recruited and randomized to receive III-TT, type III CPS vaccine, or placebo247.
Results showed that the highest dose of III-TT produced higher levels of type III CPS-specific antibody measured two weeks after vaccination, and that the proportion of recipients achieving a ≥4-fold rise in antibody concentration was higher among those who received III-TT than among those who received the unconjugated type III CPS vaccine247, suggesting that glycoconjugate vaccines induce a more robust immune response than polysaccharide-only vaccines. Following this first trial, phase I trials of monovalent Ia-, Ib-, II- and V-TT conjugates showed immunogenicity of a single dose, suggesting no need for addition of an adjuvant241,248,249. In another randomized controlled study in healthy non-pregnant women, receipt of GBS III-TT was associated with protection against future acquisition of type-specific GBS colonization, with 36% vaccine efficacy for vaginal acquisition and 43% efficacy for rectal acquisition compared to controls who received tetanus and diphtheria toxoids (clinicaltrials.gov NCT00128219)178.

To achieve broader coverage against the GBS serotypes causing disease in humans, several multivalent vaccines have been developed and tested in humans. The immune response in subjects who received a bivalent vaccine containing II-TT and III-TT glycoconjugates did not differ statistically from the antibody responses to the monovalent vaccines250,251. Novartis (now GSK) has developed a trivalent (serotypes Ia, Ib, III) glycoconjugate vaccine conjugated to a CRM197 carrier and has conducted several phase I and II clinical trials in healthy non-pregnant and pregnant women (clinicaltrials.gov NCT01150123, NCT01193920, NCT01446289).

Initial GBS vaccine studies used tetanus toxoid (TT) as the carrier protein, but there have been concerns about possible immune interference and adverse events upon subsequent vaccination with TT-conjugated vaccines229,250.
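The colonization-efficacy figures quoted above (36% vaginal, 43% rectal) follow the standard definition of vaccine efficacy, VE = 1 − RR, where RR is the risk in vaccinees relative to controls. A minimal sketch; the acquisition risks below are hypothetical, chosen only to show how a 36% figure arises, and the trial itself used a time-to-event analysis rather than this simple ratio:

```python
def vaccine_efficacy(risk_vaccinated: float, risk_control: float) -> float:
    """Standard vaccine efficacy: VE = 1 - relative risk."""
    return 1 - risk_vaccinated / risk_control

# Hypothetical acquisition risks (illustrative, not the trial's data):
ve = vaccine_efficacy(0.16, 0.25)
print(round(ve * 100))  # -> 36 (% efficacy)
```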
CRM197, a nontoxic mutant of diphtheria toxin (DT), is another carrier protein and has been used in the investigational trivalent GBS vaccine that went through phase II studies (clinicaltrials.gov NCT01412801, NCT01446289)174,252. A study showed equivalent immunogenicity of CPS V-TT and CPS V-CRM197 against the target GBS antigen249.

While vaccines conjugated to TT (used in Menitorix® [MenC-TT/PRP-TT, GSK]) and DT (used in Menactra® [MenACWY-DT, Sanofi Pasteur] and in Synflorix® [PCV10, GSK]) have been shown to induce immunity against TT or DT, vaccines using CRM197 as the carrier protein (used in Prevnar13® [PCV13, Pfizer], Menveo® [MenACWY-CRM, GSK], and the investigational trivalent GBS vaccine [GSK]) have not been shown to induce immunity against DT174,253. Inducing protection against tetanus through a TT-conjugated GBS vaccine could be of interest where maternal and neonatal tetanus remain a concern.

Lastly, there have been concerns that use of CRM197 may interfere with responses to routine infant vaccines that use CRM197 as the carrier protein, such as PCV13, Hib, and MenACWY254–256. In addition, use of CRM197 may interfere with responses to routine diphtheria vaccination in infants, but results from a phase II study did not show any evidence of interference174.

Protein-based vaccines. Polysaccharide-based vaccines typically provide protection only against the CPS types included in the vaccine or closely related serotypes, and may be vulnerable to serotype replacement/switching. Therefore, efforts have been made to identify proteins common to all GBS as the basis of a vaccine that would confer broad protection against GBS250.

Until whole genome sequences of two GBS strains became available in 2002, only a limited number of proteins involved in GBS pathogenesis had been identified as potential vaccine candidates250. Rib and alpha are among the GBS surface proteins that have been studied extensively as possible vaccine targets257,258.
Recently, MinervaX, a privately held Danish biotech company, initiated phase I clinical trials with a protein vaccine based on a fusion of the N-terminal portions of two surface proteins, AlphaC and Rib (GBS-NN) (NCT02459262)259. MinervaX expects that GBS-NN will protect against up to 95% of GBS isolates, given the broad expression of AlphaC and Rib as well as cross-reactive proteins259.

During the past decades, the application of recombinant DNA techniques and the availability of complete bacterial genomes have allowed the use of genome-based vaccinology to identify new protein vaccine candidates250. Investigators from GSK used reverse vaccinology to identify conserved sequences encoding components of pilus proteins on the bacterial surface. A vaccine based on a combination of these proteins conferred protection against different GBS strains in a mouse model260. However, coverage against all GBS strains was not possible due to antigenic variation associated with the pilin subunits250,260. Structural vaccinology was successfully applied to design an optimized BP-2a protein, a subunit of the backbone protein of the GBS pilus known to have high gene variability250. The protective capacity of BP-2a is restricted to a small region (D3) of each variant; fusing this region from each variant into a single recombinant chimeric construct, expressed in Escherichia coli, conferred strong protection in challenged mice against all six strains expressing a BP-2a variant235.

GBS-specific antibody concentration and correlates of protection. Sero-epidemiological studies have shown some evidence in favor of an association between low maternal GBS CPS-specific IgG levels and the risk of GBS disease in offspring.
Associations between maternal GBS surface-protein antibody concentrations and invasive disease in their infants have not been as clearly established: among the surface proteins studied so far (surface immunogenic protein [Sip], resistance to proteases immunity group B [Rib], AlphaC protein, BetaC protein, fibrinogen-binding protein A, GBS immunogenic bacterial adhesin, and pilus-island surface protein antibodies), limited data suggest that antibodies against the AlphaC and Rib proteins may provide protection against invasive neonatal GBS disease258,261–265.

Evidence from sero-epidemiological studies. Most of the earlier studies comparing capsular antibody concentrations between cases and controls used small sample sizes (e.g., ≈10–50 cases total per capsular serotype). More recent studies with larger sample sizes (e.g., >50–300 cases total per capsular serotype) have attempted to identify a serotype-specific IgG level in mothers that would confer protection against infant disease due to the same serotype266–269. A summary of studies published after 2000 is shown in Table 6. Both studies by Lin and colleagues were case-control studies using data collected from multiple study sites in the United States266,269. Maternal and cord serum samples were collected from enrolled participants after delivery, and antibody levels were compared between cases (neonates who developed early-onset disease and their mothers) and controls (neonates who remained healthy despite being colonized with the same serotype and their mothers). The case-control study by Baker and colleagues was also a multi-center study in the United States and compared maternal serum samples from cases (mothers whose infants developed early-onset disease due to specific serotypes), matched by age and ethnicity with those from controls (mothers who were colonized with the same capsular serotypes but whose infants did not develop disease)267.
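Case-control comparisons of this kind reduce to a 2×2 table once maternal antibody concentration is dichotomized at a candidate threshold (cf. footnote 8 of Table 6, where an odds ratio is computed for IgG ≥0.5 μg/ml versus <0.1 μg/ml). A minimal sketch with hypothetical counts, not data from any of the studies cited:

```python
def odds_ratio(cases_above, cases_below, controls_above, controls_below):
    """Odds ratio of infant disease for mothers above vs. below an
    antibody threshold, from a 2x2 case-control table."""
    return (cases_above / cases_below) / (controls_above / controls_below)

# Hypothetical counts: few case mothers, but many control mothers,
# exceed the candidate threshold (illustrative only):
print(round(odds_ratio(5, 45, 40, 60), 2))  # -> 0.17
```

An odds ratio well below 1, as in this toy example, is the pattern the sero-epidemiological studies report: higher maternal IgG is associated with lower odds of infant disease.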
The study by Matsubara and colleagues was conducted at a single institution in Japan and compared serum antibody levels of pregnant women with serotype VIII colonization with stored serum samples from four mother-and-neonate pairs with early-onset serotype VIII infection270. Dangor and colleagues conducted a matched case-control study in South Africa; cases were infants with laboratory-confirmed invasive GBS disease at <90 days of age, and controls were age-matched healthy infants whose mothers were colonized with the same GBS CPS serotypes as the cases. Maternal and infant serum from cases were compared with those of controls (or cord serum in the case of controls for early-onset disease)268.

This table was adapted with permission from Taylor & Francis272.

CC: case-control; EOD: early-onset disease; LOD: late-onset disease; NR: not reported; OR: odds ratio

1 Case: number of infants with invasive disease, stratified by disease serotype

2 Control: number of GBS rectal/vaginal/cervical colonized mothers of healthy infants, stratified by colonizing serotype

3 Geometric mean concentration

4 Adjusted for maternal age <20 years old, primigravida, diabetes during pregnancy, insulin required during pregnancy, membranes ruptured ≥12 h before delivery, and delivery by Caesarean section, in the multivariable regression models

5 Adjusted for female sex of neonate, performance of invasive procedure(s) during labor, intrapartum antibiotics, maternal fever, and delivery by Caesarean section

6 None of the cases had an antibody level above the threshold

7 Median (interquartile range)

8 The suggested threshold was >1 μg/ml, but the OR of early-onset GBS disease in neonates was calculated using maternal serum capsular polysaccharide-specific IgG concentrations at delivery of ≥0.5 μg/ml compared to those with <0.1 μg/ml in a logistic regression model

The results showed that, in general, there was an inverse relationship between maternal 
serotype-specific IgG levels and the risk of their infants developing GBS disease (Table 6). Except for the studies by Baker and Dangor267,268, all studies used relative statistical measures to estimate thresholds, whereas the studies by Baker and Dangor used Bayesian modeling to determine the threshold. This method is considered to be robust with small sample sizes and does not depend on preselected reference values271. Only the study by Dangor et al. included late-onset disease, and due to the small sample size they were not able to assess correlates of protection separately for early-onset and late-onset disease268.

Functional antibody concentrations and other potential endpoints of relevance. While the above studies showed evidence of an association between antibodies and risk of invasive infection, some infants developed disease despite having high antibody levels. Measurement of functional antibodies rather than overall antibody concentrations may be important to shed further light on immune correlates of protection, as total antibody levels might include inactive antibodies272–274. An example of this is the opsonophagocytic killing assay (OPkA)182,275, which mimics the in vivo process of killing of the bacterium by host effector cells following opsonization by specific antibodies. Antibody-mediated bacterial killing has also been shown to protect infants from GBS disease and may be a more useful marker than purely measuring antibody quantity via an enzyme-linked immunosorbent assay (ELISA)-type assay267. Functional antibody assessed by OPkA appears to correlate more closely with protection from GBS colonization, a precursor to disease in infants, than does CPS-specific antibody concentration276. However, OPkA assays are laborious to perform and require large volumes of test sera. This is a critical issue in studies where sample volume is at a premium, such as neonatal studies.
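A note on quantitation: the antibody concentrations compared in the studies above are summarized as geometric mean concentrations (footnote 3 of Table 6), the usual summary for titers, which are roughly log-normally distributed. A minimal sketch, with hypothetical concentration values:

```python
import math

def geometric_mean(concentrations):
    """Geometric mean concentration (GMC): the exponential of the
    mean of the log concentrations."""
    logs = [math.log(c) for c in concentrations]
    return math.exp(sum(logs) / len(logs))

# Hypothetical titers spanning two 4-fold dilution steps:
print(round(geometric_mean([0.5, 2.0, 8.0]), 1))  # -> 2.0
```

Unlike the arithmetic mean (3.5 here), the GMC is not dominated by the occasional very high titer, which is why it is preferred for serological data.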
Other assays, including an antibody-mediated complement C3b/iC3b deposition assay28,277, have been developed that are less labor intensive and less variable, as they do not rely on human phagocytes, and that require small serum volumes. Avidity assays have also been explored, but results indicate no significant difference in median avidity between antibodies induced by unconjugated or conjugated vaccines, with a large range of values obtained for both vaccines278.

Differences in assay methods, antigen constructs and standards for quantitation of serotype-specific antibody levels267,269 have made comparison across studies challenging272. Different specific antibody concentrations that could be associated with protection from disease have been defined. However, these vary across studies and by GBS serotype (Table 6), and there has been significant controversy regarding appropriate laboratory methods to derive such thresholds reliably279.

Historically, the radioantigen binding assay (RABA) has been seen as the gold standard for the quantification of anti-GBS antibody, as it measures antibody in its native state169. However, the RABA has low sensitivity towards the lower limit of quantification and is unable to distinguish immunoglobulin of different isotypes and subclasses, and so offers an incomplete picture of immunoglobulin concentration. Several more sensitive isotype-specific ELISAs have subsequently been developed and have been used in the majority of vaccine studies to date; however, the estimated antibody concentration required to reduce the risk of GBS disease has varied266,267,269,270,280–284. These assay methods vary, resulting in difficulties in extrapolating data between studies. More recently, studies have used Luminex or Bioplex platforms to improve the sensitivity and throughput of these assays and to allow multiplexing. However, none of these ELISA or Luminex assays provide information on the ability of the antibodies to neutralize GBS.
Therefore, an ELISA alone may not be sufficient to predict protective immunity from GBS infection272. A possible solution may be the development of an effective functional antibody assay, such as OPkA, that could be used as an in vitro correlate of protection.

However, to achieve this goal for GBS, assay standardization is required for each GBS antigen of relevance and for each serotype (Table 6). It is also possible that proposed thresholds might vary depending on study population differences (e.g., the higher prevalence of HIV-positive patients in one study268). Efforts to standardize quantitative and functional immunoassays are needed for phase II and phase III GBS vaccine studies using immunogenicity endpoints.

The development of a GBS vaccine as considered here is unique in that the primary target population is pregnant women, in contrast to the vaccines that WHO currently recommends in pregnant women (e.g., tetanus toxoid, inactivated trivalent influenza vaccine, acellular pertussis vaccine), which were neither developed nor licensed to target pregnant women285–287. The anticipated vaccine development pathway will likely begin with preclinical studies relying on animal models to assess the immunogenicity and safety of the product. Potential adverse outcomes in both mothers and their offspring are evaluated, including reproductive and developmental toxicity associated with the product288. Upon favorable pre-clinical evaluation, first-in-human studies are conducted in healthy adults. Phase I testing could start in non-pregnant women of childbearing age, in a limited number of participants (e.g., <100)288. Phase II studies of up to several hundred subjects per trial typically provide more information on common local and systemic reactions and immunogenicity evaluations of dose range and dose schedule288. Evaluation in pregnant women would typically only start upon favorable evaluation in non-pregnant women.
In addition to providing further information on adverse events among mothers, phase II trials in pregnant women can provide initial information about safety effects in newborns, as well as information about IgG antibody transfer ratios to the newborn and duration/decay of these antibodies over time (see following section on endpoints of relevance in immunogenicity studies). Phase III trials would typically have a large enough sample size to provide data supportive of licensure288. Phase III pivotal licensure studies most classically include a well-defined primary clinical endpoint, but alternative pathways to licensure are being discussed in the case of GBS vaccines, considering the possibility of establishing a regulatorily acceptable immune correlate of protection. Post-licensure evaluations may play a critical role in characterizing rarer safety events and effectiveness under real-world conditions, as well as in special populations of interest.\n\nVaccines targeting maternal immunization during pregnancy must demonstrate favorable safety for the mother, the developing fetus and the newborn. Upon request by the WHO Strategic Advisory Group of Experts (the senior WHO vaccine governance board), the WHO Global Advisory Committee on Vaccine Safety (GACVS) recently reviewed safety data on existing vaccines for maternal immunization in pregnancy289,290. The GACVS concluded that there is no evidence of adverse pregnancy outcomes from the vaccination of pregnant women with currently licensed inactivated virus, bacterial, or toxoid vaccines. They concluded that pregnancy should not preclude women from immunization with these vaccines if medically indicated. As described in the previous paragraph, WHO currently recommends administration of tetanus toxoid, inactivated trivalent influenza vaccine, and acellular pertussis vaccine to pregnant women, although none of these vaccines were licensed for use in pregnant women285–287. 
Conjugate vaccines (either licensed or investigational), when conjugated with different carrier proteins (e.g., TT, DT, CRM197), as well as vaccine formulations including alum and oil-in-water emulsions as adjuvants, have been used in pregnant women, and favorable safety has been documented174,252,291,292. Further considerations on safety evaluation of GBS vaccine candidates are presented in the following sections.\n\n\nCurrent GBS vaccine candidates in development\n\nCPS-based vaccines have been the most extensively studied among vaccine candidates, and trivalent glycoconjugate vaccine candidates have gone through phase I and II trials. Currently, there are no plans for these trivalent vaccine candidates to move on to phase III studies.\n\nGBS protein vaccines using other target antigens293 and polysaccharide vaccines conjugated with different carriers (e.g., GBS80 pilus protein, peptide)233,294 have been tested in animal models. GBS-NN is undergoing phase I evaluation (NCT02459262). A summary of candidate vaccines is shown in Table 7.\n\nCPS: capsular polysaccharide, GSK: GlaxoSmithKline, NIH: National Institutes of Health, POC: proof of concept\n\nNon-pregnant women. Multiple polysaccharide and protein conjugate GBS vaccines have been tested in healthy non-pregnant women, although the number of volunteers included was usually small (e.g., ≤30 in each vaccine group). Earlier studies testing vaccine dose-response showed local pain or mild redness, which seemed to be more frequent upon immunization with higher doses241,247,248. More recently, a phase Ib randomized, observer-blind and placebo-controlled trial of a trivalent (serotypes Ia, Ib, III) GBS CPS-CRM197 conjugate vaccine was conducted among healthy non-pregnant women (NCT01150123)295. In this study, approximately 40 women were enrolled in each vaccine group, which consisted of different dosing schedules (e.g., one dose vs. 
two doses) and different use of adjuvants (no adjuvant, Al(OH)3, or MF59 [either half dose or full dose]). Results showed that local reactogenicity was increased in those who received vaccines with adjuvants (range: 40–42% in placebo group, 75–88% in vaccine group without adjuvants, 93–100% in those with Al(OH)3, 83–100% in those with half dose MF59, and 93–100% in those with full dose MF59); the proportion of solicited systemic reactions was less frequent (58–65% in the placebo groups, 50–85% across vaccine groups). Serious adverse reactions were similar among the vaccine and the placebo groups (5–11% in placebo group, 0–5% in vaccine group without adjuvants, 0–15% in those with Al(OH)3, 0–8% in those with half dose MF59, and 5–15% in those with full dose MF59), but none of them were considered related to vaccination, and there were no deaths or premature withdrawals due to adverse events (NCT01150123)295.\n\nPregnant women and newborns. The first phase I trial that used a glycoconjugate vaccine among pregnant women was conducted with the III-TT vaccine with a saline placebo control group175. A total of 30 participants were enrolled, and no vaccine-associated serious adverse events were observed. Mild to moderate pain at the injection site occurred in 70% of the vaccine recipients compared to 40% of placebo recipients; 10% had redness at the injection site in the vaccine group compared to 0% in the placebo group. Obstetrical complications, mostly related to need for cesarean section or postpartum fever, occurred in 35% of vaccine and 70% of placebo recipients. All neonates had an uncomplicated hospital course in both groups. Results from a phase II randomized, observer-blind, multicenter study using a trivalent (Ia, Ib, III) GBS polysaccharide-CRM conjugate vaccine among pregnant women have been published recently (NCT01446289)174. A total of 86 women at 24–35 weeks gestation were enrolled, of whom 51 were assigned to the vaccine group. 
Reports of solicited adverse reactions were similar between the groups, with 54% of the vaccine group vs. 53% of the placebo group reporting at least one solicited reaction. Reported rates of systemic reactions were similar, although more participants in the vaccine group reported local adverse reactions (40% in the vaccine group vs. 24% in the placebo group). All women gave birth to single, live born neonates, and obstetric outcomes were similar between the two groups. No infant deaths occurred during the study period, and serious adverse events were reported in 24% of the vaccine and 31% of the placebo group infants.\n\nEndpoints of relevance. The phase I/II trials using investigational trivalent GBS conjugate vaccines quantified GBS serotype-specific antibody levels using ELISA and reported them as geometric mean concentrations (GMC) (NCT01446289, NCT01150123, NCT01412801). None of these studies evaluated antibody functionality, but earlier GBS conjugate vaccine studies reported opsonophagocytic assay (OPA) evaluations175,241,247,248. An ongoing phase I trial of GBS-NN is using both ELISA and OPA to measure immunogenicity.\n\nEvidence from Phase I and II trials\n\na. Non-pregnant women (NCT01193920, NCT01150123) A phase Ib/II trial in which 40 non-pregnant women received two doses of trivalent GBS vaccine (Ia, Ib, III, 20/20/20μg) showed that, compared to the placebo group, the GMC of antibody measured by ELISA a month after the second vaccination was significantly higher for all measured serotypes (serotype Ia 40 μg/mL in vaccine group vs. 0.88 in placebo group; serotype Ib 5.3 vs. 0.25; serotype III 11 vs. 0.61), and remained higher a year after the first dose (serotype Ia 15 μg/mL in vaccine group vs. 0.86 in placebo group; serotype Ib 5.28 vs. 0.4; serotype III 7.03 vs. 0.3) (clinicaltrials.gov: NCT01193920)296. 
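The GMCs reported in these trials are geometric rather than arithmetic means, since antibody concentrations are roughly log-normally distributed: the GMC is the exponential of the mean of the log-transformed concentrations, which down-weights a few extreme high titers. A minimal sketch of the computation (function name and input values are illustrative only, not trial data):

```python
import math

def gmc(concentrations):
    """Geometric mean concentration: exp of the mean log-concentration.

    Antibody titers are approximately log-normal, so the geometric mean
    is the conventional summary rather than the arithmetic mean.
    """
    if not concentrations or any(c <= 0 for c in concentrations):
        raise ValueError("concentrations must be positive")
    return math.exp(sum(math.log(c) for c in concentrations) / len(concentrations))

# Illustrative only: four hypothetical post-vaccination titers in ug/mL.
print(gmc([2.0, 8.0, 32.0, 128.0]))  # geometric mean, approximately 16
```

Note the contrast with the arithmetic mean of the same values (42.5), which would be dominated by the single highest titer.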
In a study by Leroux-Roels and colleagues comparing vaccine groups with different antigen concentrations, adjuvants, and dosing schedules (NCT01150123)295, results showed that all vaccine groups had a higher GMC compared to placebo groups at both 61 days and 361 days after vaccination; a higher dose level, the presence of aluminum hydroxide adjuvant or a second dose did not significantly increase antibody concentration. The exception was a higher GMC against serotype III one year after vaccination in the group having received a second dose. When stratified by antibody concentrations at baseline, women who had undetectable antibody concentrations had lower antibody responses than those with detectable antibodies at baseline.\n\nb. Pregnant women and newborns (NCT01446289) The aforementioned phase I trial using III-TT vaccine in pregnant women reported that 19 of 20 recipients had 4-fold increases in III CPS-specific IgG after vaccination relative to pre-vaccination levels, that infant cord levels were approximately 70% of maternal values at delivery, and that opsonophagocytic killing measured in sera of infants born to vaccine- but not placebo-recipients persisted until 2 months of age, suggesting the potential to protect against both early- and late-onset GBS infant disease175. A phase II placebo-controlled trial using a single dose of trivalent (Ia, Ib, III, 5/5/5μg) GBS polysaccharide-CRM197 conjugate vaccine administered to pregnant women at 24–35 weeks gestation was conducted in Belgium and in Canada (NCT01446289)174. Levels of antibodies against serotypes Ia, Ib, and III at delivery were respectively 16-, 23- and 20-fold higher than pre-vaccination. Of note, those with baseline antibody concentrations below the lower limit of detection had lower antibody responses compared to those with higher antibody levels at baseline. 
Infants born to vaccinated mothers had significantly increased antibody levels at birth, which persisted above placebo group levels for at least 3 months after birth. Antibody concentrations decreased after birth and by day 91 were 22–25% of the levels measured at birth, but were still 5- to 8.5-fold higher than those observed in the placebo group. There was only one (2%) preterm infant in the vaccine group, and there was no clear relationship between time from vaccination to delivery and maternal or neonatal antibody concentrations at birth for any of the serotypes. GBS-specific antibody ratios between vaccinated mothers and infants (calculated as the paired ratio between the GBS-specific antibody concentration measured in the cord blood of the neonate and that measured in maternal sera at birth) ranged from 0.68 to 0.81 across the three serotypes (serotype Ia: 0.81, serotype Ib: 0.77, serotype III: 0.68). Currently, an extension study is underway to examine the safety and immunogenicity of a second dose of the trivalent vaccine administered to non-pregnant women after a time interval close to a typical inter-pregnancy interval (NCT02690181).\n\nHIV-infected mothers and their newborns. A non-randomized phase II open-label study using the trivalent (Ia, Ib, III) GBS polysaccharide-CRM conjugate vaccine was conducted in Malawi and South Africa among 270 pregnant women aged 18–40 years at 24–35 weeks gestation with or without HIV infection (NCT01412801)252. There was no control group. Enrollment stratification ensured that about half of the HIV-infected women were in a low CD4 cell count category [50–350 cells/μL] or high CD4 cell count category [>350 cells/μL]. Results showed that immune responses to the vaccine, as well as serotype-specific antibody levels in infants at birth, were lower in HIV-infected mothers and their infants. 
In mothers, the fold change in antibody concentrations was higher for the HIV-uninfected group than the HIV-infected groups, and those with undetectable antibody levels at baseline had lower antibody concentrations post-vaccination compared to those with detectable antibody concentrations at baseline. Transfer ratios (infant geometric mean antibody concentration in blood collected within 72 hours of birth divided by maternal geometric mean antibody concentration in blood collected at delivery) were similar across all three groups (0.49–0.72).\n\nRates of women reporting at least one solicited adverse reaction were highest in the HIV-uninfected group (67%), compared with HIV-infected women with a low CD4 cell count (44%) or high CD4 cell count (59%). Local reactions (most frequently injection site pain) were reported by 18–39% of women across the groups, and systemic reactions were reported by 40–59% of women (fatigue and headache were most frequent). Adverse events were reported by 74–78% of women, of which 7–23% were deemed to be caused by study vaccination. None of the reported serious adverse events (reported by 28–32% of women) or adverse events reported in infants (41–49%) were deemed to be caused by vaccination.\n\n\nCost-effectiveness evaluation for low- and middle-income countries\n\nGBS vaccine cost-effectiveness assessments may shed light on the potential investment case for GBS vaccines before phase III trials have been completed. Six analyses of GBS vaccine cost-effectiveness have been published to date, including four before the current era of GBS vaccine development297–300 and two recent analyses301,302.\n\nThe older studies evaluated cost-effectiveness in resource-rich settings (three in the United States and one in the United Kingdom). 
These documented the value of variants of screening- or risk-based intrapartum prophylaxis compared to ‘doing nothing’ and also assessed the potential value of a vaccine with assumed efficacy levels against GBS disease-causing serotypes, either as a maternal immunization strategy or as a vaccine delivered to adolescent females. The UK analysis300 found that if a vaccine were available, the most cost-effective prevention strategy would include vaccination of all pregnant women, in combination with IAP for all preterm deliveries and a subset of term deliveries with risk factors (19% of all women treated). This study also emphasized the need for additional information on key model parameters.\n\nTwo more recently published cost-effectiveness analyses301,302 focused on the conjugate trivalent vaccine (serotypes Ia, Ib, III) in clinical development at the time, assuming a single dose of GBS vaccine would be recommended during each pregnancy. The Oster analysis evaluated the addition of universal vaccination of pregnant women to the screening-based IAP strategy in the United States. Assuming a vaccine cost of $100 per dose and 75% vaccine efficacy against included serotypes among term deliveries, with reduced efficacy among preterm deliveries, this analysis found that the cost-effectiveness of maternal immunization may be comparable to other recently approved vaccines in the United States. A CDC-sponsored cost-effectiveness analysis for the United States is in progress, with results anticipated in late 2016.\n\nThe Kim analysis focused on the upper middle-income country of South Africa. This decision-analytic model simulated the natural history of GBS transmission from mothers to infants and compared four strategies: do nothing, risk factor-based IAP, maternal GBS vaccination, and vaccination plus risk factor-based IAP. 
National and hospital-based GBS prevention policies in South Africa are consistent with variants of the risk factor-based IAP approach, although the group of eligible women is quite narrow and implementation is limited. This analysis assumed a vaccine price per dose of 10–30 U.S. dollars (USD) and vaccine efficacy against included serotypes of 50–90% among term infants, with a reduction among preterm infants. The most influential parameters in one-way sensitivity analyses were vaccine price per dose and early-onset GBS disease incidence. This analysis concluded that maternal immunization would lead to important reductions in the burden of infant GBS disease and be considered very cost-effective (range 416–3,545 in 2010 USD/DALY averted comparing vaccination to doing nothing; range 461–5,491 in 2010 USD/DALY averted comparing vaccination to risk factor-based IAP). Notably, vaccination plus risk factor-based IAP was more effective and consistently very cost-effective. Risk factor-based IAP alone was also very cost-effective but prevented only a small burden of infant GBS disease.\n\nSinha and team are also in the process of conducting a GBS vaccine cost-effectiveness analysis for GAVI-eligible low-income sub-Saharan African countries. Thirty-seven countries in the region were clustered into four groups based on 24 measures of economic development, general health resources, and past success in public health programs. A decision-analytic model was built to compare a natural history arm (‘do nothing’) with maternal immunization as part of antenatal care. Risk factor-based IAP was not included in this assessment due to expert opinion that this was not feasible for these low-income birth settings. Results are expected in late 2016.\n\n\nMathematical modeling related to GBS vaccines\n\nMathematical models can inform decision-making related to vaccine development and implementation in several ways. 
For example, disease transmission models can shed light on the impact of varying key aspects of vaccine delivery such as age at vaccination, dosing schedules and method of delivery (e.g., vaccine campaigns versus incorporation into routine schedules). Models can also clarify the potential impact of a vaccine on unvaccinated members of the population (herd immunity) and predict potential unintended consequences of vaccine introduction, such as an increased age at first infection or the potential for replacement disease due to strains not included in the vaccine candidate. Finally, mathematical models can often highlight influential parameters where there would be value in a strengthened evidence base to allow for more accurate estimates.\n\nIn the context of maternal immunization for GBS, mathematical modeling to date is extremely limited. Some of the cost-effectiveness models developed have included a natural history arm that estimates disease burden based on a variety of maternal risk factors301. A non-dynamic compartmental model that estimates GBS-related outcomes based on maternal GBS colonization and risk of neonatal disease is under development as part of a global GBS disease burden estimation activity led by the London School of Hygiene and Tropical Medicine. It is possible that models could prove useful to better understand the impact of maternal vaccination timing on the preventable portion of newborn disease, particularly since earlier vaccination may offer protection to later preterm deliveries, depending on antibody transfer ratios and decay rates. If conjugate GBS vaccines have an appreciable effect on reducing acquisition of GBS colonization with vaccine-included serotypes, models may also help assess the consequences of reduced exposure of the newborn to GBS. 
Models may also help predict the impact of maternal immunization across LMIC settings with different prevalences of maternal HIV infection, levels of home vs. facility deliveries, and optimal windows for vaccination, considering also antenatal care-seeking behaviors. Models could contribute to the understanding of the potential impact of GBS vaccination on the burden of GBS-related stillbirth and preterm delivery.\n\n\nConsiderations about options to generate pivotal licensure data\n\nThe present document intends to provide an overview of available options and a framework for future reflection and should not be interpreted as guidance or recommendations.\n\nDouble-blind, individually randomized controlled trial designs generate the most robust data and minimize risks of bias. A relevant clinical endpoint288 supportive of efficacy evaluation provides the most direct evidence of the potential health impact. However, low baseline disease incidence may lead to very large sample size requirements for vaccine efficacy evaluation. A potential alternative option may be to use immunologic correlates of protection as the primary endpoint76,272. Correlates of protection have been used for licensure in future generations or variants of a licensed product, or in instances where direct efficacy against disease is not readily feasible and correlates of protection are well-established. The evidence supporting recognition of a correlate of protection may be derived from an efficacy trial, which provides the opportunity for nested immunogenicity evaluations and detailed analysis of the relationship between immune and clinical endpoints. Alternatively, as in the case of GBS, an immunological correlate of protection may be inferred from sero-epidemiological studies.\n\nCorrelates of protection have indeed been used for licensure of meningococcal, pneumococcal conjugate, and inactivated influenza vaccines. 
Group C meningococcal conjugate vaccine was licensed in the United Kingdom based on immunogenicity studies without efficacy data. These compared serum bactericidal assay titers induced by the experimental vaccine with those induced by a licensed serogroup C polysaccharide vaccine, which had established evidence of efficacy and correlates of protection274,303. The 10-valent pneumococcal conjugate vaccine (PCV10) and the 13-valent pneumococcal conjugate vaccine (PCV13) were licensed based on non-inferiority trials compared against PCV7 using serological endpoints304. Use of immunogenicity bridging studies comparing new vaccine products with those with established clinical efficacy is an accepted licensure pathway for inactivated seasonal influenza vaccine305,306. If licensure is granted based on a primary immunogenicity endpoint, there may be a regulatory requirement for post-licensure evaluations of effectiveness against disease endpoints. The optimal design of post-licensure trials needs careful consideration, as the inclusion of a non-vaccinated study arm may be deemed ethically unacceptable. Alternative case-control, cluster-randomized or ecological studies are possible.\n\nTrial with a disease endpoint\n\nGBS disease. Invasive GBS disease in young infants would likely be viewed as a relevant primary efficacy endpoint (see Table 8)76. Given that GBS-related stillbirths have a similar pathophysiology to neonatal GBS disease (ascending infection from a colonized mother), using a composite disease endpoint that includes GBS-related stillbirths is a possibility, which could help reduce the study sample size. Subgroup analyses may be used to assess the influence of various maternal factors (e.g., HIV infection, malaria, malnutrition, maternal age, multiparity) on protection. 
Factors that influence the extent of protection, such as when maternal vaccination occurs in relation to the birth of the child (allowing sufficient time for a maternal antibody response), the gestational age at birth (placental transfer will be less in those born prematurely) and the chronological age of the infant (antibody levels will wane over the first 2–3 months of life), may also need to be characterized76. Analyses of vaccine serotype-specific efficacy, as well as efficacy stratified by term vs. preterm birth and early-onset vs. late-onset disease, could be conducted76. Other endpoints of public health interest, such as prevention of prematurity, stillbirths, hospitalization, and mortality, could be considered but interpreted carefully in the context of multiple statistical testing and statistical power. See Table 8 for a summary of candidate case definitions.\n\nCSF: cerebrospinal fluid, GBS: group B streptococcus, IAP: intrapartum antibiotic prophylaxis, LMIC: low- and middle-income countries, PSBI: probable severe bacterial infection, WHO: World Health Organization\n\nColonization. Newborn GBS colonization or exposure from colonized mothers is a precursor to GBS disease. The demonstration of vaccine efficacy against maternal and newborn colonization may argue for a protective effect of GBS vaccination. If vaccination reduces vaginal GBS colonization with the targeted invasive strains at the time of delivery, the risk of developing early-onset disease, and potentially late-onset disease, caused by strains targeted by the vaccine would likely decrease76. However, other factors may play a role, as only a small proportion of colonized neonates develop disease. Further considerations on case definitions are provided in Table 8.\n\nTrial with immunologic correlates of protection. 
For glycoconjugate GBS vaccines, evidence from immune-epidemiological studies suggests that maternally-transmitted, functional IgG antibodies against GBS capsular polysaccharides, as measured by a quality-assured opsonophagocytic assay in serum from neonates and/or young infants, may constitute a candidate substitute endpoint (see immune correlates of protection section). Further evidence is needed to evaluate the possible role of immune markers of protection induced by protein vaccine candidates in the licensure pathway.\n\nConsiderations for licensure based on immune markers. While associations between antibody concentrations and risk of disease have been observed, the strength and nature of these associations require further investigation and continued assay standardization efforts. Several analytical frameworks for validating immune markers as substitute endpoints for protection against clinical disease have been developed274. The Prentice Criteria (see Box 2), originally designed for randomized-controlled trial data but extended by others to observational designs274, can be used to evaluate potential substitute endpoints.\n\n\n\n1. Protection against the clinical endpoint is significantly related to having received the vaccine\n\n2. The substitute endpoint (immune marker) is significantly related to vaccination status\n\n3. The substitute endpoint is significantly related to protection against the clinical endpoint\n\n4. The full effect of the vaccine on the frequency of the clinical endpoint is explained by the substitute endpoint, as it lies on the sole causal pathway\n\nThe evidence base to evaluate whether a substitute endpoint fulfils the Prentice Criteria would most typically come from a trial with a clinical disease endpoint and a nested immune marker study. For a GBS candidate vaccine, evidence for these criteria may need to be gleaned from a range of experimental and observational studies. 
The first criterion, that protection against the clinical endpoint is significantly associated with vaccine receipt, may derive from animal challenge studies. Evidence for the second criterion (the immune marker is significantly related to vaccination status) would likely derive from phase II studies. Evidence for the third (the substitute endpoint is significantly related to protection against the clinical endpoint) would likely derive from sero-epidemiological observational cohort and case-control studies. Evidence for the fourth criterion may come largely from existing knowledge about immune response and protection among young infants in the first 3 months of life.\n\nThe Prentice Criteria are not the only approach to evaluation of a substitute endpoint. The Qin framework307 can also be applied. This framework classifies associations between immune markers and clinical disease endpoints into three classes and, within these, offers further options for causal inference frameworks that can be applied. It also highlights whether a substitute endpoint is specific to a single population (the data derive from just one population) or general (meaning it has been observed in multiple populations).\n\nIsolation of GBS from a normally sterile site, such as blood or CSF, in an infant with possible sepsis or meningitis is a widely used definition for young infant invasive GBS disease22,24,32,37,38,46,146,147. GBS isolation by culture is considered the reference standard. Automated culture methods yield higher detection rates compared to manual culture methods105,308; minimizing the time between collection and inoculation of blood culture bottles, using pediatric bottles for young infants, and maximizing blood volumes are important for optimal results105. 
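The onset windows used in young-infant case definitions (onset during days 0–2 or 0–6 of life for early-onset disease; days 7–89 for late-onset disease) can be expressed as a small classification helper. This is an illustrative sketch, not a standard implementation; the configurable cutoff reflects the fact that studies differ on whether early onset ends at day 2 or day 6:

```python
def classify_onset(day_of_onset, early_cutoff=6):
    """Classify young-infant GBS disease by day of life at onset.

    Conventions vary across studies: early-onset is defined as onset on
    days 0-2 or days 0-6 of life, while days 7-89 (with the 0-6 early
    window) are counted as late-onset, so the cutoff is a parameter.
    """
    if day_of_onset < 0:
        raise ValueError("day of onset cannot be negative")
    if day_of_onset <= early_cutoff:
        return "early-onset"
    if day_of_onset <= 89:
        return "late-onset"
    return "outside young-infant window"

print(classify_onset(1))    # early-onset
print(classify_onset(30))   # late-onset
print(classify_onset(120))  # outside young-infant window
```

With `early_cutoff=2`, a case with onset on day 3 would instead fall into the late-onset window, illustrating how the choice of convention shifts cases between categories across studies.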
For GBS meningitis, in addition to positive CSF culture, case definitions have included detection of GBS antigens in CSF (e.g., latex agglutination)24,46, detection by PCR46, and GBS-positive blood culture with CSF findings consistent with meningitis22,46,47. As described, onset of disease during days 0–2 or 0–6 of life is commonly used for early-onset disease and onset during days 7–89 is used for late-onset disease3,24,32,38,46,147. Due to challenges in surveillance for invasive disease, some young infant studies have developed case definitions for probable GBS infection capturing infants with clinical sepsis and surface colonization with GBS309–311. Because surface colonization of young infants can be common, however, such definitions have limited specificity. Recently, some studies have used PCR on whole blood in addition to blood culture95,312. This can enhance detection, but blood samples from healthy controls provide an important context: a low percentage of healthy controls have been documented with positive PCR on blood in both South Africa and South Asia (SANISA and ANISA unpublished studies). Another option for newborn disease is clinical sepsis, for which several definitions have been used. PSBI, as defined by IMCI103, is sensitive but not specific: based on an experienced pediatrician’s assessment, its sensitivity is estimated at 85% and its specificity at 75%101. South Africa has used a more specific definition including both clinical and laboratory signs15. Use of chest X-rays may be considered if pneumonia is one of the outcomes of interest. Candidate case definitions are summarized in Table 8.\n\nThe number of young infant GBS cases at single institutions is relatively small, depending on the number of annual deliveries and the disease incidence rate. A trial with a disease endpoint would likely necessitate a multi-center trial. 
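The sample-size pressure created by low disease incidence can be illustrated with a rough event-driven calculation. This is a deliberate simplification (real licensure trials use exact binomial methods and adjust for attrition and follow-up); it assumes 1:1 randomization and uses the conditional-binomial approximation in which, given the total number of vaccine-serotype cases observed, the share occurring in the vaccine arm is tested against the null of 50%. The function and its parameterization are illustrative, not a trial design:

```python
import math

def required_enrollment(incidence_per_1000, vt_fraction, ve,
                        eligibility, z_alpha=1.959964, z_beta=1.281552):
    """Rough event-driven sample-size sketch for a 1:1 maternal VE trial.

    Given C total vaccine-serotype cases, the vaccine-arm share is
    Binomial(C, p) with p = (1 - VE) / (2 - VE); a normal approximation
    gives the cases needed to distinguish p from 0.5 (one-sided alpha
    0.025, 90% power by default), then enrollment is scaled by the
    expected cases per enrolled mother-infant pair and by eligibility.
    """
    p0 = 0.5
    p1 = (1 - ve) / (2 - ve)
    cases_needed = (z_alpha * math.sqrt(p0 * (1 - p0)) +
                    z_beta * math.sqrt(p1 * (1 - p1))) ** 2 / (p0 - p1) ** 2
    # Expected vaccine-serotype cases per enrolled pair, averaged over arms.
    vt_incidence = incidence_per_1000 / 1000 * vt_fraction
    cases_per_pair = 0.5 * (vt_incidence + (1 - ve) * vt_incidence)
    return math.ceil(cases_needed / cases_per_pair / eligibility)

# Scenario values from the text: 2.0 cases per 1,000 live births, 80% of
# disease caused by vaccine serotypes, 75% efficacy, 75% eligibility.
print(required_enrollment(2.0, 0.80, 0.75, 0.75))
```

Under these assumptions the calculation lands in the low tens of thousands of enrolled pregnant women, broadly consistent with the 34,000–44,000 range cited for a 2.0 per 1,000 incidence setting; rerunning it with incidence markedly below 1 per 1,000 pushes the requirement past 100,000, illustrating why trial siting drives feasibility.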
An efficacy trial conducted in settings where standards of care include screening-guided IAP would lead to very large sample size requirements. If acceptable, trial conduct in a high-incidence setting where screening-based IAP is not implemented as standard of care would reduce the sample size requirement. Adequate infection risk management in study participants would need to be discussed with relevant authorities and institutional review boards (IRBs), in consideration of local recommendations and WHO recommendations. Acceptability may be higher if favorable safety has already been established in a significant number of individuals. At a site with an incidence of 2.0 per 1000 live births for neonatal GBS disease <90 days of age, approximately 34,000–44,000 pregnant women would need to be enrolled (assuming that 75–85% of neonatal GBS disease is caused by vaccine serotypes, 70–80% are eligible, and 90% power to detect vaccine efficacy of 75% against vaccine serotypes), whereas >100,000 pregnant women would be needed in regions such as Europe and North America where IAP has reduced the incidence of early-onset disease to markedly less than 1 per 1000 live births76 (Table 9). For conjugate vaccines this number will vary depending on the GBS serotypes contained in the candidate vaccine and the serotype distribution at the study site. A licensure trial based on an established immune correlate of protection would require a smaller sample size, and the total pre-licensure exposed population would likely be determined by safety characterization requirements. Considerations for safety evaluation are described in “safety considerations” and “regulatory considerations and potential licensure pathways for low- and middle-income countries”.\n\n* Assuming 90% power to detect 75% vaccine efficacy, 75–85% of disease are vaccine types, 70–80% of approached participants are eligible per protocol\n\n1. Active Bacterial Core Surveillance, 2014\n\n2. Edmond et al. 20123\n\n3. Sinha et al. 
201651\n\n4. Cutland et al. 201547\n\nGBS: group B Streptococcus, U.S.: United States\n\nEvaluation of safety for a vaccine that will be specifically approved for use in pregnant women is unique given that: (1) the safety of both the mother and the fetus/child will need to be considered, and (2) complications of pregnancy may occur even in pregnancies considered “low risk”, regardless of vaccination status288. Therefore, the relative risk of common adverse pregnancy outcomes in the study population should be determined. Sample sizes must be adequate considering the baseline incidence of adverse pregnancy outcomes and may not be finalized until phase II safety data are available. Detection of rare adverse outcomes requires large sample sizes. Baseline studies can be useful to determine sample size needs, which should be discussed in advance with regulators.\n\nOne of the challenges in assessing safety of maternal immunization has been a lack of standard definitions for maternal immunization adverse events313. In 2014 WHO held a consultation jointly with the Brighton Collaboration to facilitate harmonization of existing key terms to support monitoring of vaccine safety in pregnant women and newborn children314. Key terms were chosen for discussion based on (1) frequency, (2) severity of health outcome, (3) public health relevance, and (4) measurability in different settings. The Global Alignment of Immunization Safety Assessment in pregnancy (GAIA) established working groups to review the evidence from the WHO-Brighton Collaboration landscape analysis, and has developed a set of interim case definitions according to the Brighton Collaboration process and format315. As of September 2015, the working group had developed 10 case definitions currently undergoing peer review316.\n\nThere is no regulatory or ethical prohibition on studies in pregnancy317–319. 
However, the concept of maternal vaccination, which may potentially pose harm to both the mother and the infant, may not be well received in countries where uptake of vaccines currently recommended for pregnant women by WHO is low320. If a randomized controlled study is designed, an important consideration is whether the control group should receive another vaccine that is currently recommended by WHO rather than placebo76.\n\nControversy exists over whether trials in low- and middle-income countries with a high burden of GBS disease should offer universal screening and IAP to their participants, the worldwide “best available” standard of care321. Arguments against this have been presented: provision of care that is not sustainable at the study site could produce results that are more generalizable to higher-income countries and have little social value for the host community321. Additionally, provision of a standard of care normally not available could coerce pregnant women into trial participation. Authors have suggested that study sites should adhere to local recommendations, in consideration of WHO guidelines100,102,116. The acceptability of a trial under local standards of care may depend on the benefit-risk assessment and the available safety data on the candidate vaccine. Whatever the approach, GCP trials should be authorized by and under the oversight of local IRBs and recognized authorities, with participant agreement documented through an informed consent process.\n\nPhase III studies with clinical outcomes as endpoints would likely need to be conducted in geographical locations with a high burden of neonatal invasive GBS disease (Table 9), which are likely to be in LMIC. Important trial site characteristics are reviewed in Table 10. 
Important aspects include the presence of experienced clinical trialists and oversight by established Ethics Review Committees (ERCs) and Regulatory Authorities (RAs), to ensure the highest compliance with Good Clinical Practice standards76; availability of clinical and laboratory infrastructure for optimal capture of PSBI cases for specimen collection, processing, and identification of GBS from collected specimens105; the proportion of home deliveries; access to care supportive of rapid clinical sepsis diagnosis and collection of appropriate specimens close to disease onset76; and the capacity to assess gestational age, provide sufficient medical care, and identify and respond to adverse events in both vaccinated pregnant women and their newborn infants76. Clinical management study algorithms can support standardized collection of safety events and endpoints of interest according to defined case definitions.\n\nANC: antenatal care, CSF: cerebrospinal fluid, GBS: group B Streptococcus, GCP: Good Clinical Practice, GCLP: Good Clinical Laboratory Practice, ERC: ethics review committee, IAP: intrapartum antibiotic prophylaxis, IMCI: integrated management of childhood illness, LMIC: low- and middle-income country, MLST: multilocus sequence typing, PCR: polymerase chain reaction, RA: regulatory authority\n\nPotential challenges. Several review articles have summarized challenges in conducting studies that involve pregnant women in low- and middle-income countries320,322. Reaching women during the early stages of their pregnancy may be challenging in societies where women are reluctant to reveal their pregnancy early322; such women may miss the window for enrollment and vaccine administration. In addition, accurate estimation of gestational age, which is important in assessing pregnancy outcomes (e.g., preterm birth), is often a challenge in resource-limited settings. 
Measurements that are typically used, such as those based on the last menstrual period or fundal height, often do not provide consistent results. It is important for participants to deliver their infants at predictable locations affiliated with the study and to complete the follow-up visits needed to assess study-specific adverse events. However, this can be challenging in settings where regular follow-up visits after delivery are not customary.\n\n\nRegulatory considerations and potential licensure pathways for low- and middle-income countries\n\nThe regulatory considerations for products seeking an indication for use in pregnant women differ between already-licensed products and new products seeking licensure expressly for use among pregnant women. While there are several examples of already-licensed products with public health recommendations for use during pregnancy, no products have yet achieved licensure for the specific indication of use during pregnancy. Respiratory syncytial virus (RSV) vaccines may become the first class of vaccines to gain an initial indication for immunization of pregnant women, as at least one RSV vaccine is ahead of GBS vaccines in its development timeline. Early dialogue between vaccine developers and regulators can play a particularly important role for maternal immunization product development. Major regulatory authorities, as well as the Council for International Organizations of Medical Sciences (CIOMS) in collaboration with WHO, have agreed that pregnant women should be presumed eligible for participation in research studies (CIOMS Guideline 17)323, and that this also applies to vaccines intended primarily to protect the offspring.\n\nLicensure in the United States and Europe can be requested through the respective Food and Drug Administration (FDA) and European Medicines Agency (EMA) processes. 
The Article 58 pathway provides a collaborative review framework between the EMA and WHO for products not intended to be used in Europe. Submissions in LMICs can be made as specified by the relevant national regulatory authorities (NRAs). The African Vaccine Regulatory Forum (AVAREF) is a collaborative forum of regulators from different African countries, constituted to enable information sharing between African NRAs. The Developing Country Vaccine Regulators’ Network (DCVRN) may also facilitate steps in regulatory processes in LMICs that are members.\n\nRegulatory considerations from the FDA on the clinical development of vaccines indicated for use in pregnancy have been presented elsewhere288. In addition to evidence to support safety and effectiveness claims, maternal immunization submission packages may need to include information on potential immune interference in the infant, due to the transfer of maternal antibodies to the vaccine antigen or to carrier proteins that may share epitopes with carriers used in the infant vaccine series. The role of immunological correlates of protection will need to be clarified, especially with regard to primary licensure. Relevant quality-assured immunogenicity endpoints may also be used to bridge across populations, for instance when considering generalizability across LMICs or between LMICs and resource-rich settings.\n\nSafety evaluations would be conducted considering vaccine effects on both pregnant women and their infants, taking into account background rates of common pregnancy complications (e.g., pre-eclampsia, miscarriage/spontaneous abortion, stillbirth, preterm delivery). Endpoints would include pregnancy and neonatal outcomes, serious adverse events, and new-onset maternal medical conditions. The duration of safety follow-up for pregnant women and for newborns needs to be determined. Phase II data may be needed for optimal determination of the Phase III sample size requirement for safety evaluation. 
Multiple factors, including accumulated safety data associated with the product to date, safety signals, and the overall benefit-risk assessment, would likely be taken into account324. See the previous section for considerations on safety in phase III trials. Lastly, the need for post-approval investigations should be considered. To maximize chances of success for a candidate vaccine, vaccine developers should plan ahead to overcome potential post-approval obstacles325. Target product profiles (TPPs) list desirable characteristics, features, and attributes of a candidate vaccine, and have long been used by biotechnological and pharmaceutical companies326.\n\n\nAreas for future research\n\nWe briefly summarize some of the leading scientific gaps relevant to GBS vaccine development and areas for future research based on the section topics addressed in this briefing document (Table 11).\n\nCSF: cerebrospinal fluid, GBS: group B Streptococcus, HIV: human immunodeficiency virus, HIC: high-income countries, IAP: intrapartum antibiotic prophylaxis, LIC: low-income countries, LMIC: low- and middle-income countries, SES: socioeconomic status",
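The enrollment figures quoted for a clinical-endpoint efficacy trial (approximately 34–44,000 pregnant women at an incidence of 2.0 per 1000 live births, 90% power, vaccine efficacy of 75%) can be reproduced approximately with the standard normal-approximation formula for comparing two proportions. The sketch below is an illustrative reconstruction under those stated assumptions, not the method actually used to derive the published figures; the function name and the mid-range parameter choices (80% vaccine serotypes, 75% eligible) are ours.

```python
from math import sqrt, ceil

def enrollment_needed(incidence_per_1000, vaccine_type_frac, eligible_frac, ve=0.75):
    """Approximate total enrollment for a 1:1 two-arm efficacy trial using the
    classic two-proportion normal approximation (two-sided alpha 0.05, 90% power)."""
    z_alpha = 1.959964  # standard normal quantile for two-sided alpha = 0.05
    z_beta = 1.281552   # standard normal quantile for 90% power
    p1 = incidence_per_1000 / 1000 * vaccine_type_frac  # vaccine-type disease risk, control arm
    p2 = p1 * (1 - ve)                                  # risk in the vaccine arm under true VE
    p_bar = (p1 + p2) / 2
    n_per_arm = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / (p1 - p2) ** 2
    # scale the randomized total up by the fraction of approached women who are eligible
    return ceil(2 * n_per_arm / eligible_frac)

# Mid-range assumptions: incidence 2.0/1000, 80% vaccine serotypes, 75% eligible;
# the result falls inside the 34,000-44,000 range quoted in the text
print(enrollment_needed(2.0, 0.80, 0.75))
```

As the text notes, the same arithmetic makes clear why settings with IAP-reduced incidence (markedly below 1 per 1000 live births) push the requirement above 100,000 participants: the required number of events is roughly fixed, so enrollment scales inversely with the vaccine-type disease risk.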
"appendix": "Author contributions\n\n\n\nMK, SS, and JV conceived the study. MK and SS prepared the first draft of the manuscript. All authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nPart of this work was supported by a grant to WHO from the Bill & Melinda Gates Foundation: Global Health Grant OPP1134011.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThe findings and conclusions in this report are those of the authors and do not necessarily represent the official position of the Centers for Disease Control and Prevention or the World Health Organization. We thank the following individuals generously reviewing sections of this document and providing their expertise: David Goldblatt, University College London; Bengt Johansson Lindbom, Lund University; Anushua Sinha, Rutgers School of Public Health; Lesley McGee, Centers for Disease Control and Prevention. We are also grateful to the following groups and individuals for graciously allowing us to share preliminary data from their ongoing studies: Joy Lawn, Neal Russel, and Anna Seale, London School of Hygiene and Tropical Medicine; Clare Cutland, Shabir Madhi, and Sithembiso Velaphi, Respiratory and Meningeal Pathogens Research Unit and University of the Witwatersrand; Saithip Bhengsri, Global Disease Detection Center, Thailand MoPH-US Collaboration, Ministry of Public Health, Nonthaburi, Thailand; Kip Baggett, and Christopher Gregory, Division of Global Health Protection, Centers for Disease Control and Prevention.\n\n\nReferences\n\nMcGee L, Beall B: Streptococci. In: de Filippis I, McKee ML, eds. Molecular typing in bacterial infections. New York: Springer. 2013; 109–126. 
Publisher Full Text\n\nLibster R, Edwards KM, Levent F, et al.: Long-term outcomes of group B streptococcal meningitis. Pediatrics. 2012; 130(1): e8–15. PubMed Abstract | Publisher Full Text\n\nEdmond KM, Kortsalioudaki C, Scott S, et al.: Group B streptococcal disease in infants aged younger than 3 months: systematic review and meta-analysis. Lancet. 2012; 379(9815): 547–556. PubMed Abstract | Publisher Full Text\n\nLiu L, Oza S, Hogan D, et al.: Global, regional, and national causes of child mortality in 2000–13, with projections to inform post-2015 priorities: an updated systematic analysis. Lancet. 2015; 385(9966): 430–440. PubMed Abstract | Publisher Full Text\n\nSchrag SJ, Verani JR: Intrapartum antibiotic prophylaxis for the prevention of perinatal group B streptococcal disease: experience in the United States and implications for a potential group B streptococcal vaccine. Vaccine. 2013; 31(Suppl 4): D20–26. PubMed Abstract | Publisher Full Text\n\nDi Renzo GC, Melin P, Berardi A, et al.: Intrapartum GBS screening and antibiotic prophylaxis: a European consensus conference. J Matern Fetal Neonatal Med. 2015; 28(7): 766–782. PubMed Abstract | Publisher Full Text\n\nSimonsen KA, Anderson-Berry AL, Delair SF, et al.: Early-onset neonatal sepsis. Clin Microbiol Rev. 2014; 27(1): 21–47. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBaker CJ, Barrett FF: Transmission of group B streptococci among parturient women and their neonates. J Pediatr. 1973; 83(6): 919–925. PubMed Abstract | Publisher Full Text\n\nVerani JR, McGee L, Schrag SJ: Prevention of perinatal group B streptococcal disease--revised guidelines from CDC, 2010. MMWR Recomm Rep. 2010; 59(RR-10): 1–36. PubMed Abstract\n\nDesa DJ, Trevenen CL: Intrauterine infections with group B beta-haemolytic streptococci. Br J Obstet Gynaecol. 1984; 91(3): 237–239. PubMed Abstract | Publisher Full Text\n\nKatz V, Bowes WA Jr: Perinatal group B streptococcal infections across intact amniotic membranes. 
J Reprod Med. 1988; 33(5): 445–449. PubMed Abstract\n\nHanley J: Neonatal infection: group B streptococcus. Clin Evid. 2006; (15): 476–479. PubMed Abstract\n\nMelin P: Neonatal group B streptococcal disease: from pathogenesis to preventive strategies. Clin Microbiol Infect. 2011; 17(9): 1294–1303. PubMed Abstract | Publisher Full Text\n\nLe Doare K, Heath PT: An overview of global GBS epidemiology. Vaccine. 2013; 31(Suppl 4): D7–12. PubMed Abstract | Publisher Full Text\n\nCutland CL, Madhi SA, Zell ER, et al.: Chlorhexidine maternal-vaginal and neonate body wipes in sepsis and vertical transmission of pathogenic bacteria in South Africa: a randomised, controlled trial. Lancet. 2009; 374(9705): 1909–1916. PubMed Abstract | Publisher Full Text\n\nYow MD, Leeds LJ, Thompson PK, et al.: The natural history of group B streptococcal colonization in the pregnant woman and her offspring. I. Colonization studies. Am J Obstet Gynecol. 1980; 137(1): 34–38. PubMed Abstract | Publisher Full Text\n\nBenitz WE, Gould JB, Druzin ML: Risk factors for early-onset group B streptococcal sepsis: estimation of odds ratios by critical literature review. Pediatrics. 1999; 103(6): e77. PubMed Abstract\n\nAdair CE, Kowalsky L, Quon H, et al.: Risk factors for early-onset group B streptococcal disease in neonates: a population-based case-control study. CMAJ. 2003; 169(3): 198–203. PubMed Abstract | Free Full Text\n\nBaker CJ, Edwards MS, Kasper DL: Role of antibody to native type III polysaccharide of group B Streptococcus in infant infection. Pediatrics. 1981; 68(4): 544–549. PubMed Abstract\n\nSchuchat A, Zywicki SS, Dinsmoor MJ, et al.: Risk factors and opportunities for prevention of early-onset neonatal sepsis: a multicenter case-control study. Pediatrics. 2000; 105(1 Pt 1): 21–26. PubMed Abstract\n\nOddie S, Embleton ND: Risk factors for early onset neonatal group B streptococcal sepsis: case-control study. BMJ. 2002; 325(7359): 308. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nGray KJ, Bennett SL, French N, et al.: Invasive group B streptococcal infection in infants, Malawi. Emerg Infect Dis. 2007; 13(2): 223–229. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTrotman H, Bell Y: Neonatal group B streptococcal infection at the University Hospital of the West Indies, Jamaica: a 10-year experience. Ann Trop Paediatr. 2006; 26(1): 53–57. PubMed Abstract | Publisher Full Text\n\nDangor Z, Lala SG, Cutland CL, et al.: Burden of invasive group B Streptococcus disease and early neurological sequelae in South African infants. PLoS One. 2015; 10(4): e0123014. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMadhi SA, Radebe K, Crewe-Brown H, et al.: High burden of invasive Streptococcus agalactiae disease in South African infants. Ann Trop Paediatr. 2003; 23(1): 15–23. PubMed Abstract | Publisher Full Text\n\nSchrag SJ, Cutland CL, Zell ER, et al.: Risk factors for neonatal sepsis and perinatal death among infants enrolled in the prevention of perinatal sepsis trial, Soweto, South Africa. Pediatr Infect Dis J. 2012; 31(8): 821–826. PubMed Abstract | Publisher Full Text\n\nDangor Z, Kwatra G, Izu A, et al.: HIV-1 Is Associated With Lower Group B Streptococcus Capsular and Surface-Protein IgG Antibody Levels and Reduced Transplacental Antibody Transfer in Pregnant Women. J Infect Dis. 2015; 212(3): 453–462. PubMed Abstract | Publisher Full Text\n\nLe Doare K, Allen L, Kampmann B, et al.: Anti-group B Streptococcus antibody in infants born to mothers with human immunodeficiency virus (HIV) infection. Vaccine. 2015; 33(5): 621–627. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGiannoni E, Berger C, Stocker M, et al.: Incidence and Outcome of Group B Streptococcal Sepsis in Infants in Switzerland. Pediatr Infect Dis J. 2016; 35(2): 222–224. PubMed Abstract | Publisher Full Text\n\nBaker CJ: Early onset group B streptococcal disease. J Pediatr. 
1978; 93(1): 124–125. PubMed Abstract\n\nJoubrel C, Tazi A, Six A, et al.: Group B streptococcus neonatal invasive infections, France 2007–2012. Clin Microbiol Infect. 2015; 21(10): 910–916. PubMed Abstract | Publisher Full Text\n\nSchrag SJ, Zywicki S, Farley MM, et al.: Group B streptococcal disease in the era of intrapartum antibiotic prophylaxis. N Engl J Med. 2000; 342(1): 15–20. PubMed Abstract | Publisher Full Text\n\nTann CJ, Nkurunziza P, Nakakeeto M, et al.: Prevalence of bloodstream pathogens is higher in neonatal encephalopathy cases vs. controls using a novel panel of real-time PCR assays. PLoS One. 2014; 9(5): e97259. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRivera L, Sáez-Llorens X, Feris-Iglesias J, et al.: Incidence and serotype distribution of invasive group B streptococcal disease in young infants: a multi-country observational study. BMC Pediatr. 2015; 15: 143. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPhares CR, Lynfield R, Farley MM, et al.: Epidemiology of invasive group B streptococcal disease in the United States, 1999–2005. JAMA. 2008; 299(17): 2056–2065. PubMed Abstract | Publisher Full Text\n\nVerani JR, Schrag SJ: Group B streptococcal disease in infants: progress in prevention and continued challenges. Clin Perinatol. 2010; 37(2): 375–392. PubMed Abstract | Publisher Full Text\n\nLin FY, Weisman LE, Troendle J, et al.: Prematurity is the major risk factor for late-onset group B streptococcus disease. J Infect Dis. 2003; 188(2): 267–271. PubMed Abstract | Publisher Full Text\n\nJordan HT, Farley MM, Craig A, et al.: Revisiting the need for vaccine prevention of late-onset neonatal group B streptococcal disease: a multistate, population-based analysis. Pediatr Infect Dis J. 2008; 27(12): 1057–1064. PubMed Abstract | Publisher Full Text\n\nFilleron A, Lombard F, Jacquot A, et al.: Group B streptococci in milk and late neonatal infections: an analysis of cases in the literature. 
Arch Dis Child Fetal Neonatal Ed. 2014; 99(1): F41–47. PubMed Abstract | Publisher Full Text\n\nKim HJ, Kim SY, Seo WH, et al.: Outbreak of late-onset group B streptococcal infections in healthy newborn infants after discharge from a maternity hospital: a case report. J Korean Med Sci. 2006; 21(2): 347–350. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBoyer KM, Vogel LC, Gotoff SP, et al.: Nosocomial transmission of bacteriophage type 7/11/12 group B streptococci in a special care nursery. Am J Dis Child. 1980; 134(10): 964–966. PubMed Abstract | Publisher Full Text\n\nMacFarquhar JK, Jones TF, Woron AM, et al.: Outbreak of late-onset group B Streptococcus in a neonatal intensive care unit. Am J Infect Control. 2010; 38(4): 283–288. PubMed Abstract | Publisher Full Text\n\nSchuchat A, Oxtoby M, Cochi S, et al.: Population-based risk factors for neonatal group B streptococcal disease: results of a cohort study in metropolitan Atlanta. J Infect Dis. 1990; 162(3): 672–677. PubMed Abstract | Publisher Full Text\n\nHeath PT, Balfour GF, Tighe H, et al.: Group B streptococcal disease in infants: a case control study. Arch Dis Child. 2009; 94(9): 674–680. PubMed Abstract | Publisher Full Text\n\nPintye J, Saltzman B, Wolf E, et al.: Risk Factors for Late-Onset Group B Streptococcal Disease Before and After Implementation of Universal Screening and Intrapartum Antibiotic Prophylaxis. J Pediatric Infect Dis Soc. 2015. pii: piv067. PubMed Abstract | Publisher Full Text\n\nBerardi A, Rossi C, Lugli L, et al.: Group B streptococcus late-onset disease: 2003–2010. Pediatrics. 2013; 131(2): e361–368. PubMed Abstract | Publisher Full Text\n\nCutland CL, Schrag SJ, Thigpen MC, et al.: Increased risk for group B Streptococcus sepsis in young infants exposed to HIV, Soweto, South Africa, 2004–2008. Emerg Infect Dis. 2015; 21(4): 638–645. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nEpalza C, Goetghebuer T, Hainaut M, et al.: High incidence of invasive group B streptococcal infections in HIV-exposed uninfected infants. Pediatrics. 2010; 126(3): e631–638. PubMed Abstract | Publisher Full Text\n\nGibbs RS, Schrag S, Schuchat A: Perinatal infections due to group B streptococci. Obstet Gynecol. 2004; 104(5 Pt 1): 1062–1076. PubMed Abstract | Publisher Full Text\n\nStevens JP, Eames M, Kent A, et al.: Long term outcome of neonatal meningitis. Arch Dis Child Fetal Neonatal Ed. 2003; 88(3): F179–184. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSinha A, Russell LB, Tomczyk S, et al.: Disease Burden of Group B Streptococcus Among Infants in Sub-Saharan Africa: A Systematic Literature Review and Meta-Analysis. Pediatr Infect Dis J. 2016; 35(9): 933–42. PubMed Abstract | Publisher Full Text\n\nRegan JA, Klebanoff MA, Nugent RP, et al.: Colonization with group B streptococci in pregnancy and adverse outcome. VIP Study Group. Am J Obstet Gynecol. 1996; 174(4): 1354–1360. PubMed Abstract\n\nFeikin DR, Thorsen P, Zywicki S, et al.: Association between colonization with group B streptococci during pregnancy and preterm delivery among Danish women. Am J Obstet Gynecol. 2001; 184(3): 427–433. PubMed Abstract | Publisher Full Text\n\nValkenburg-van den Berg AW, Sprij AJ, Dekker FW, et al.: Association between colonization with Group B Streptococcus and preterm delivery: a systematic review. Acta Obstet Gynecol Scand. 2009; 88(9): 958–967. PubMed Abstract | Publisher Full Text\n\nMcDonald HM, Chambers HM: Intrauterine infection and spontaneous midgestation abortion: is the spectrum of microorganisms similar to that in preterm labor? Infect Dis Obstet Gynecol. 2000; 8(5–6): 220–227. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nDeutscher M, Lewis M, Zell ER, et al.: Incidence and severity of invasive Streptococcus pneumoniae, group A Streptococcus, and group B Streptococcus infections among pregnant and postpartum women. Clin Infect Dis. 2011; 53(2): 114–123. PubMed Abstract | Publisher Full Text\n\nYancey MK, Duff P, Clark P, et al.: Peripartum infection associated with vaginal group B streptococcal colonization. Obstet Gynecol. 1994; 84(5): 816–819. PubMed Abstract\n\nKrohn MA, Hillier SL, Baker CJ: Maternal peripartum complications associated with vaginal group B streptococci colonization. J Infect Dis. 1999; 179(6): 1410–1415. PubMed Abstract | Publisher Full Text\n\nPersson K, Bjerre B, Elfström L, et al.: Group B streptococci at delivery: high count in urine increases risk for neonatal colonization. Scand J Infect Dis. 1986; 18(6): 525–531. PubMed Abstract | Publisher Full Text\n\nSánchez PJ, Siegel JD, Cushion NB, et al.: Significance of a positive urine group B streptococcal latex agglutination test in neonates. J Pediatr. 1990; 116(4): 601–606. PubMed Abstract | Publisher Full Text\n\nKessous R, Weintraub AY, Sergienko R, et al.: Bacteruria with group-B streptococcus: is it a risk factor for adverse pregnancy outcomes? J Matern Fetal Neonatal Med. 2012; 25(10): 1983–1986. PubMed Abstract | Publisher Full Text\n\nMcKenna DS, Matson S, Northern I: Maternal group B streptococcal (GBS) genital tract colonization at term in women who have asymptomatic GBS bacteriuria. Infect Dis Obstet Gynecol. 2003; 11(4): 203–207. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEdwards RK, Clark P, Duff P: Intrapartum antibiotic prophylaxis 2: positive predictive value of antenatal group B streptococci cultures and antibiotic susceptibility of clinical isolates. Obstet Gynecol. 2002; 100(3): 540–544. 
PubMed Abstract | Publisher Full Text\n\nMatsubara K, Yamamoto G: Invasive group B streptococcal infections in a tertiary care hospital between 1998 and 2007 in Japan. Int J Infect Dis. 2009; 13(6): 679–684. PubMed Abstract | Publisher Full Text\n\nBergseng H, Rygg M, Bevanger L, et al.: Invasive group B streptococcus (GBS) disease in Norway 1996–2006. Eur J Clin Microbiol Infect Dis. 2008; 27(12): 1193–1199. PubMed Abstract | Publisher Full Text\n\nLamagni TL, Keshishian C, Efstratiou A, et al.: Emerging trends in the epidemiology of invasive group B streptococcal disease in England and Wales, 1991–2010. Clin Infect Dis. 2013; 57(5): 682–688. PubMed Abstract | Publisher Full Text\n\nSkoff TH, Farley MM, Petit S, et al.: Increasing burden of invasive group B streptococcal disease in nonpregnant adults, 1990–2007. Clin Infect Dis. 2009; 49(1): 85–92. PubMed Abstract | Publisher Full Text\n\nBhengsri S, Gregory C, Thamthitiwat S, et al.: Oral abstract 64. Invasive bacterial infections in Thailand: finding from population-based surveillance in two provinces, 2010–2013. Paper presented at: IDWeek 2015. San Diego, 2015. Reference Source\n\nFarley MM: Group B streptococcal disease in nonpregnant adults. Clin Infect Dis. 2001; 33(4): 556–561. PubMed Abstract | Publisher Full Text\n\nPerovic O, Crewe-Brown HH, Khoosal M, et al.: Invasive group B streptococcal disease in nonpregnant adults. Eur J Clin Microbiol Infect Dis. 1999; 18(5): 362–364. PubMed Abstract | Publisher Full Text\n\nKarunakaran R, Raja NS, Hafeez A, et al.: Group B Streptococcus infection: epidemiology, serotypes, and antimicrobial susceptibility of selected isolates in the population beyond infancy (excluding females with genital tract- and pregnancy-related isolates) at the University Malaya Medical Centre, Kuala Lumpur. Jpn J Infect Dis. 2009; 62(3): 192–194. 
PubMed Abstract\n\nTyrrell GJ, Senzilet LD, Spika JS, et al.: Invasive disease due to group B streptococcal infection in adults: results from a Canadian, population-based, active laboratory surveillance study--1996. Sentinel Health Unit Surveillance System Site Coordinators. J Infect Dis. 2000; 182(1): 168–173. PubMed Abstract | Publisher Full Text\n\nBolaños M, Cañas A, Santana OE, et al.: Invasive group B streptococcal disease in nonpregnant adults. Eur J Clin Microbiol Infect Dis. 2001; 20(11): 837–839. PubMed Abstract | Publisher Full Text\n\nBlancas D, Santin M, Olmo M, et al.: Group B streptococcal disease in nonpregnant adults: incidence, clinical characteristics, and outcome. Eur J Clin Microbiol Infect Dis. 2004; 23(3): 168–173. PubMed Abstract | Publisher Full Text\n\nBedford Russell AR, Kumar R: Early onset neonatal sepsis: diagnostic dilemmas and practical management. Arch Dis Child Fetal Neonatal Ed. 2015; 100(4): F350–354. PubMed Abstract | Publisher Full Text\n\nMadhi SA, Dangor Z, Heath PT, et al.: Considerations for a phase-III trial to evaluate a group B Streptococcus polysaccharide-protein conjugate vaccine in pregnant women for the prevention of early- and late-onset invasive disease in young-infants. Vaccine. 2013; 31(Suppl 4): D52–57. PubMed Abstract | Publisher Full Text\n\nWorld Health Organization: Stillbirths. Maternal, newborn, child and adolescent health. Accessed February 5, 2016. Reference Source\n\nFrøen JF, Friberg IK, Lawn JE, et al.: Stillbirths: progress and unfinished business. Lancet. 2016; 387(10018): 574–86. PubMed Abstract | Publisher Full Text\n\nJoseph KS, Kinniburgh B, Hutcheon JA, et al.: Rationalizing definitions and procedures for optimizing clinical care and public health in fetal death and stillbirth. Obstet Gynecol. 2015; 125(4): 784–788. 
PubMed Abstract | Publisher Full Text\n\nAminu M, Unkels R, Mdegela M, et al.: Causes of and factors associated with stillbirth in low- and middle-income countries: a systematic literature review. BJOG. 2014; 121(Suppl 4): 141–153. PubMed Abstract | Publisher Full Text\n\nLawn JE, Blencowe H, Waiswa P, et al.: Stillbirths: rates, risk factors, and acceleration towards 2030. Lancet. 2016; 387(10018): 587–603. PubMed Abstract | Publisher Full Text\n\nCnattingius S, Stephansson O: The epidemiology of stillbirth. Semin Perinatol. 2002; 26(1): 25–30. PubMed Abstract | Publisher Full Text\n\nWorld Health Organization: Causes of child mortality. Global Health Observatory (GHO) data. 2015; Accessed January 19, 2016. Reference Source\n\nWHO-CHERG estimates for child causes of death, 2000–2012. Accessed January 19, 2016. Reference Source\n\nSaha SK, El Arifeen S, Schrag SJ: Aetiology of Neonatal Infection in South Asia (ANISA): An Initiative to Identify Appropriate Program Priorities to Save Newborns. Pediatr Infect Dis J. 2016; 35(5 Suppl 1): S6–8. PubMed Abstract | Publisher Full Text\n\nIslam MS, Baqui AH, Zaidi AK, et al.: Infection Surveillance Protocol for a Multicountry Population-based Study in South Asia to Determine the Incidence, Etiology and Risk Factors for Infections Among Young Infants of 0 to 59 Days Old. Pediatr Infect Dis J. 2016; 35(5 Suppl 1): S9–S15. PubMed Abstract | Publisher Full Text\n\nChan GJ, Baqui AH, Modak JK, et al.: Early-onset neonatal sepsis in Dhaka, Bangladesh: risk associated with maternal bacterial colonisation and chorioamnionitis. Trop Med Int Health. 2013; 18(9): 1057–1064. PubMed Abstract | Publisher Full Text\n\nVaccine Preventable Infections Surveillance (VPIS) Project. Annual Report: Vaccine Preventable Infections Surveillance September 1, 2013–August 31, 2014. 2014. Reference Source\n\nPhilipson EH, Palermino DA, Robinson A: Enhanced antenatal detection of group B streptococcus colonization. Obstet Gynecol. 1995; 85(3): 437–439. 
PubMed Abstract | Publisher Full Text\n\nEl Aila NA, Tency I, Claeys G, et al.: Comparison of different sampling techniques and of different culture methods for detection of group B streptococcus carriage in pregnant women. BMC Infect Dis. 2010; 10: 285. PubMed Abstract | Publisher Full Text | Free Full Text\n\nStoll BJ, Schuchat A: Maternal carriage of group B streptococci in developing countries. Pediatr Infect Dis J. 1998; 17(6): 499–503. PubMed Abstract | Publisher Full Text\n\nWhitney CG, Daly S, Limpongsanurak S, et al.: The international infections in pregnancy study: group B streptococcal colonization in pregnant women. J Matern Fetal Neonatal Med. 2004; 15(4): 267–274. PubMed Abstract | Publisher Full Text\n\nKwatra G, Adrian PV, Shiri T, et al.: Serotype-specific acquisition and loss of group B streptococcus recto-vaginal colonization in late pregnancy. PLoS One. 2014; 9(6): e98778. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMadzivhandila M, Adrian PV, Cutland CL, et al.: Serotype distribution and invasive potential of group B streptococcus isolates causing disease in infants and colonizing maternal-newborn dyads. PLoS One. 2011; 6(3): e17861. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTurner C, Turner P, Po L, et al.: Group B streptococcal carriage, serotype distribution and antibiotic susceptibilities in pregnant women at the time of delivery in a refugee population on the Thai-Myanmar border. BMC Infect Dis. 2012; 12: 34. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKwatra G, Cunnington MC, Merrall E, et al.: Prevalence of maternal colonisation with group B streptococcus: a systematic review and meta-analysis. Lancet Infect Dis. 2016; 16(9): 1076–1084, pii: S1473-3099(16)30055-X. PubMed Abstract | Publisher Full Text\n\nNan C, Dangor Z, Cutland CL, et al.: Maternal group B Streptococcus-related stillbirth: a systematic review. BJOG. 2015; 122(11): 1437–1445. 
PubMed Abstract | Publisher Full Text\n\nSeale AC, Koech AC, Sheppard AE, et al.: Maternal colonization with Streptococcus agalactiae and associated stillbirth and neonatal disease in coastal Kenya. Nat Microbiol. 2016; 1(7): 16067. PubMed Abstract | Publisher Full Text\n\nLambertsen L, Ekelund K, Skovsted IC, et al.: Characterisation of invasive group B streptococci from adults in Denmark 1999 to 2004. Eur J Clin Microbiol Infect Dis. 2010; 29(9): 1071–1077. PubMed Abstract | Publisher Full Text\n\nGeneva: World Health Organization: WHO Recommendations for Prevention and Treatment of Maternal Peripartum Infections. Geneva, Switzerland. 2015. PubMed Abstract\n\nSeale AC, Blencowe H, Manu AA, et al.: Estimates of possible severe bacterial infection in neonates in sub-Saharan Africa, south Asia, and Latin America for 2012: a systematic review and meta-analysis. Lancet Infect Dis. 2014; 14(8): 731–741. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGeneva: World Health Organization: Pocket Book of Hospital Care for Children: Guidelines for the Management of Common Childhood Illnesses. 2nd edition, 2013. PubMed Abstract\n\nWorld Health Organization: Integrated Management of Childhood Illness chart booklet. Geneva, Switzerland. 2014. Reference Source\n\nHan XY: Automated blood cultures. In: Tang Y-W, Stratton CW eds. Advanced techniques in diagnostic microbiology. New York, NY: Springer. 2006; 3–10. Publisher Full Text\n\nDagnew AF, Cunnington MC, Dube Q, et al.: Variation in reported neonatal group B streptococcal disease incidence in developing countries. Clin Infect Dis. 2012; 55(1): 91–102. PubMed Abstract | Publisher Full Text\n\nSarkar S, Bhagat I, DeCristofaro JD, et al.: A study of the role of multiple site blood cultures in the evaluation of neonatal sepsis. J Perinatol. 2006; 26(1): 18–22. PubMed Abstract | Publisher Full Text\n\nConnell TG, Rele M, Cowley D, et al.: How reliable is a negative blood culture result? 
Volume of blood submitted for culture in routine practice in a children's hospital. Pediatrics. 2007; 119(5): 891–896. PubMed Abstract | Publisher Full Text\n\nButtery JP: Blood cultures in newborns and children: optimising an everyday test. Arch Dis Child Fetal Neonatal Ed. 2002; 87(1): F25–28. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKellogg JA, Ferrentino FL, Goodstein MH, et al.: Frequency of low level bacteremia in infants from birth to two months of age. Pediatr Infect Dis J. 1997; 16(4): 381–385. PubMed Abstract | Publisher Full Text\n\nSchelonka RL, Chai MK, Yoder BA, et al.: Volume of blood required to detect common neonatal pathogens. J Pediatr. 1996; 129(2): 275–278. PubMed Abstract | Publisher Full Text\n\nEdwards MS, Nizet V, Baker CJ: Group B Streptococcal Infections. In: Wilson CB, Nizet V, Maldonado YA, Remington JS, Klein JO, eds. Remington and Klein's Infectious Diseases of the Fetus and Newborn Infant. 8th ed. Philadelphia, PA: Elsevier Saunders; 2016: 411–456. Reference Source\n\nOkike IO, Johnson AP, Henderson KL, et al.: Incidence, etiology, and outcome of bacterial meningitis in infants aged <90 days in the United kingdom and Republic of Ireland: prospective, enhanced, national population-based surveillance. Clin Infect Dis. 2014; 59(10): e150–157. PubMed Abstract | Publisher Full Text\n\nTita AT, Andrews WW: Diagnosis and management of clinical chorioamnionitis. Clin Perinatol. 2010; 37(2): 339–354. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJohnson CT, Farzin A, Burd I: Current management and long-term outcomes following chorioamnionitis. Obstet Gynecol Clin North Am. 2014; 41(4): 649–669. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLapinsky SE: Obstetric infections. Crit Care Clin. 2013; 29(3): 509–520. PubMed Abstract | Publisher Full Text\n\nWorld Health Organization: Managing complications in pregnancy and childbirth: a guide for midwives and doctors. Geneva: World Health Organization; 2000. 
Reference Source\n\nMorgan J, Roberts S: Maternal sepsis. Obstet Gynecol Clin North Am. 2013; 40(1): 69–87. PubMed Abstract | Publisher Full Text\n\nTharpe N: Postpregnancy genital tract and wound infections. J Midwifery Womens Health. 2008; 53(3): 236–246. PubMed Abstract | Publisher Full Text\n\nMcClure EM, Goldenberg RL: Infection and stillbirth. Semin Fetal Neonatal Med. 2009; 14(4): 182–189. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGibbs RS, Roberts DJ: Case records of the Massachusetts General Hospital. Case 27-2007. A 30-year-old pregnant woman with intrauterine fetal death. N Engl J Med. 2007; 357(9): 918–925. PubMed Abstract | Publisher Full Text\n\nAfshar B, Broughton K, Creti R, et al.: International external quality assurance for laboratory identification and typing of Streptococcus agalactiae (Group B streptococci). J Clin Microbiol. 2011; 49(4): 1475–1482. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLancefield RC: A Serological Differentiation of Specific Types of Bovine Hemolytic Streptococci (Group B). J Exp Med. 1934; 59(4): 441–458. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHåkansson S, Burman LG, Henrichsen J, et al.: Novel coagglutination method for serotyping group B streptococci. J Clin Microbiol. 1992; 30(12): 3268–3269. PubMed Abstract | Free Full Text\n\nArakere G, Flores AE, Ferrieri P, et al.: Inhibition enzyme-linked immunosorbent assay for serotyping of group B streptococcal isolates. J Clin Microbiol. 1999; 37(8): 2564–2567. PubMed Abstract | Free Full Text\n\nPark CJ, Vandel NM, Ruprai DK, et al.: Detection of group B streptococcal colonization in pregnant women using direct latex agglutination testing of selective broth. J Clin Microbiol. 2001; 39(1): 408–409. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYao K, Poulsen K, Maione D, et al.: Capsular gene typing of Streptococcus agalactiae compared to serotyping by latex agglutination. J Clin Microbiol. 2013; 51(2): 503–507. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nPoyart C, Tazi A, Réglier-Poupet H, et al.: Multiplex PCR assay for rapid and accurate capsular typing of group B streptococci. J Clin Microbiol. 2007; 45(6): 1985–1988. PubMed Abstract | Publisher Full Text | Free Full Text\n\nImperi M, Pataracchia M, Alfarone G, et al.: A multiplex PCR assay for the direct identification of the capsular type (Ia to IX) of Streptococcus agalactiae. J Microbiol Methods. 2010; 80(2): 212–214. PubMed Abstract | Publisher Full Text\n\nSheppard AE, Vaughan A, Jones N, et al.: Capsular Typing Method for Streptococcus agalactiae Using Whole-Genome Sequence Data. J Clin Microbiol. 2016; 54(5): 1388–90. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFlores AR, Galloway-Peña J, Sahasrabhojane P, et al.: Sequence type 1 group B Streptococcus, an emerging cause of invasive disease in adults, evolves by small genetic changes. Proc Natl Acad Sci U S A. 2015; 112(20): 6431–6436. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHauge M, Jespersgaard C, Poulsen K, et al.: Population structure of Streptococcus agalactiae reveals an association between specific evolutionary lineages and putative virulence factors but not disease. Infect Immun. 1996; 64(3): 919–925. PubMed Abstract | Free Full Text\n\nRolland K, Marois C, Siquier V, et al.: Genetic features of Streptococcus agalactiae strains causing severe neonatal infections, as revealed by pulsed-field gel electrophoresis and hylB gene analysis. J Clin Microbiol. 1999; 37(6): 1892–1898. PubMed Abstract | Free Full Text\n\nJones N, Bohnsack JF, Takahashi S, et al.: Multilocus sequence typing system for group B streptococcus. J Clin Microbiol. 2003; 41(6): 2530–2536. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNitschke H, Slickers P, Müller E, et al.: DNA microarray-based typing of Streptococcus agalactiae isolates. J Clin Microbiol. 2014; 52(11): 3933–3943. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nTeatero S, Athey TB, Van Caeseele P, et al.: Emergence of Serotype IV Group B Streptococcus Adult Invasive Disease in Manitoba and Saskatchewan, Canada, Is Driven by Clonal Sequence Type 459 Strains. J Clin Microbiol. 2015; 53(9): 2919–2926. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGeneva: World Health Organization: Guideline: Managing possible serious bacterial infection in young infants when referral is not feasible. Geneva, Switzerland: World Health Organization, 2015. PubMed Abstract\n\nDahesh S, Hensler ME, Van Sorge NM, et al.: Point mutation in the group B streptococcal pbp2x gene conferring decreased susceptibility to beta-lactam antibiotics. Antimicrob Agents Chemother. 2008; 52(8): 2915–2918. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLongtin J, Vermeiren C, Shahinas D, et al.: Novel mutations in a patient isolate of Streptococcus agalactiae with reduced penicillin susceptibility emerging after long-term oral suppressive therapy. Antimicrob Agents Chemother. 2011; 55(6): 2983–2985. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPersson E, Berg S, Bergseng H, et al.: Antimicrobial susceptibility of invasive group B streptococcal isolates from south-west Sweden 1988–2001. Scand J Infect Dis. 2008; 40(4): 308–313. PubMed Abstract | Publisher Full Text\n\nSimoes JA, Aroutcheva AA, Heimler I, et al.: Antibiotic resistance patterns of group B streptococcal clinical isolates. Infect Dis Obstet Gynecol. 2004; 12(1): 1–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKimura K, Suzuki S, Wachino J, et al.: First molecular characterization of group B streptococci with reduced penicillin susceptibility. Antimicrob Agents Chemother. 2008; 52(8): 2890–2897. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nDownie L, Armiento R, Subhi R, et al.: Community-acquired neonatal and infant sepsis in developing countries: efficacy of WHO's currently recommended antibiotics--systematic review and meta-analysis. Arch Dis Child. 2013; 98(2): 146–154. PubMed Abstract | Publisher Full Text\n\nBoyer KM, Gotoff SP: Prevention of early-onset neonatal group B streptococcal disease with selective intrapartum chemoprophylaxis. N Engl J Med. 1986; 314(26): 1665–1669. PubMed Abstract | Publisher Full Text\n\nTuppurainen N, Hallman M: Prevention of neonatal group B streptococcal disease: intrapartum detection and chemoprophylaxis of heavily colonized parturients. Obstet Gynecol. 1989; 73(4): 583–587. PubMed Abstract\n\nGarland SM, Fliegner JR: Group B streptococcus (GBS) and neonatal infections: the case for intrapartum chemoprophylaxis. Aust N Z J Obstet Gynaecol. 1991; 31(2): 119–122. PubMed Abstract | Publisher Full Text\n\nLin FY, Brenner RA, Johnson YR, et al.: The effectiveness of risk-based intrapartum chemoprophylaxis for the prevention of early-onset neonatal group B streptococcal disease. Am J Obstet Gynecol. 2001; 184(6): 1204–1210. PubMed Abstract | Publisher Full Text\n\nFairlie T, Zell ER, Schrag S: Effectiveness of intrapartum antibiotic prophylaxis for prevention of early-onset group B streptococcal disease. Obstet Gynecol. 2013; 121(3): 570–577. PubMed Abstract | Publisher Full Text\n\nPetri W: Penicillins, Cephalosporins, and other B-lactam Antibiotics. In: Brunton L, Lazo J, Parker K, eds. Goodman & Gilman's the Pharmacologicical Basis of Therapeutics, 11th Edition. New York: McGraw-Hill; 2006.\n\nCox LM, Yamanishi S, Sohn J, et al.: Altering the intestinal microbiota during a critical developmental window has lasting metabolic consequences. Cell. 2014; 158(4): 705–721. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchrag SJ, Zell ER, Lynfield R, et al.: A population-based comparison of strategies to prevent early-onset group B streptococcal disease in neonates. N Engl J Med. 2002; 347(4): 233–239. PubMed Abstract | Publisher Full Text\n\nGeneva: World Health Organization: WHO Recommendations on Interventions to Improve Preterm Birth Outcomes. Geneva, Switzerland. 2015. PubMed Abstract\n\nSiegel JD, Cushion NB: Prevention of early-onset group B streptococcal disease: another look at single-dose penicillin at birth. Obstet Gynecol. 1996; 87(5 Pt 1): 692–698. PubMed Abstract | Publisher Full Text\n\nVelaphi S, Siegel JD, Wendel GD Jr, et al.: Early-onset group B streptococcal infection after a combined maternal and neonatal group B streptococcal chemoprophylaxis strategy. Pediatrics. 2003; 111(3): 541–547. PubMed Abstract | Publisher Full Text\n\nWendel GD Jr, Leveno KJ, Sanchez PJ, et al.: Prevention of neonatal group B streptococcal disease: A combined intrapartum and neonatal protocol. Am J Obstet Gynecol. 2002; 186(4): 618–626. PubMed Abstract | Publisher Full Text\n\nBland ML, Vermillion ST, Soper DE: Late third-trimester treatment of rectovaginal group B streptococci with benzathine penicillin G. Am J Obstet Gynecol. 2000; 183(2): 372–376. PubMed Abstract | Publisher Full Text\n\nEasmon CS, Hastings MJ, Deeley J, et al.: The effect of intrapartum chemoprophylaxis on the vertical transmission of group B streptococci. Br J Obstet Gynaecol. 1983; 90(7): 633–635. PubMed Abstract | Publisher Full Text\n\nWeeks JW, Myers SR, Lasher L, et al.: Persistence of penicillin G benzathine in pregnant group B streptococcus carriers. Obstet Gynecol. 1997; 90(2): 240–243. PubMed Abstract | Publisher Full Text\n\nLandwehr-Kenzel S, Henneke P: Interaction of Streptococcus agalactiae and Cellular Innate Immunity in Colonization and Disease. Front Immunol. 2014; 5: 519. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nMaisey HC, Doran KS, Nizet V: Recent advances in understanding the molecular basis of group B Streptococcus virulence. Expert Rev Mol Med. 2008; 10: e27. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChen VL, Avci FY, Kasper DL: A maternal vaccine against group B Streptococcus: past, present, and future. Vaccine. 2013; 31(Suppl 4): D13–19. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBerardi A, Cattelani C, Creti R, et al.: Group B streptococcal infections in the newborn infant and the potential value of maternal vaccination. Expert Rev Anti Infect Ther. 2015; 13(11): 1387–1399. PubMed Abstract | Publisher Full Text\n\nMacauley MS, Crocker PR, Paulson JC: Siglec-mediated regulation of immune cell function in disease. Nat Rev Immunol. 2014; 14(10): 653–666. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSlotved HC, Kong F, Lambertsen L, et al.: Serotype IX, a Proposed New Streptococcus agalactiae Serotype. J Clin Microbiol. 2007; 45(9): 2929–2936. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPaoletti LC, Madoff LC, Kasper DL: Surface structures of group B Streptococcus important in human immunity. In: Fischetti VA, Novick RP, Ferretti JJ, Portnoy, DA, Rood JI, eds. Gram-positive pathogens. Washington, D.C.: ASM Press; 2000: 137–153.\n\nJones N, Oliver KA, Barry J, et al.: Enhanced invasiveness of bovine-derived neonatal sequence type 17 group B streptococcus is independent of capsular serotype. Clin Infect Dis. 2006; 42(7): 915–924. PubMed Abstract | Publisher Full Text\n\nTazi A, Bellais S, Tardieux I, et al.: Group B Streptococcus surface proteins as major determinants for meningeal tropism. Curr Opin Microbiol. 2012; 15(1): 44–49. PubMed Abstract | Publisher Full Text\n\nHancock RE, Diamond G: The role of cationic antimicrobial peptides in innate host defences. Trends Microbiol. 2000; 8(9): 402–410. 
PubMed Abstract | Publisher Full Text\n\nTazi A, Disson O, Bellais S, et al.: The surface protein HvgA mediates group B streptococcus hypervirulence and meningeal tropism in neonates. J Exp Med. 2010; 207(11): 2313–2322. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBaker CJ, Kasper DL: Correlation of maternal antibody deficiency with susceptibility to neonatal group B streptococcal infection. N Engl J Med. 1976; 294(14): 753–756. PubMed Abstract | Publisher Full Text\n\nChristensen KK, Christensen P, Dahlander K, et al.: Quantitation of serum antibodies to surface antigens of group B streptococci types Ia, Ib, and III: low antibody levels in mothers of neonatally infected infants. Scand J Infect Dis. 1980; 12(2): 105–110. PubMed Abstract | Publisher Full Text\n\nChristensen KK, Christensen P, Lindberg A, et al.: Mothers of infants with neonatal group B streptococcal septicemia are poor responders to bacterial carbohydrate antigens. Int Arch Allergy Appl Immunol. 1982; 67(1): 7–12. PubMed Abstract | Publisher Full Text\n\nPalmeira P, Quinello C, Silveira-Lessa AL, et al.: IgG placental transfer in healthy and pathological pregnancies. Clin Dev Immunol. 2012; 2012: 985646. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLindsey B, Kampmann B, Jones C: Maternal immunization as a strategy to decrease susceptibility to infection in newborn infants. Curr Opin Infect Dis. 2013; 26(3): 248–253. PubMed Abstract | Publisher Full Text\n\nDonders GG, Halperin SA, Devlieger R, et al.: Maternal Immunization With an Investigational Trivalent Group B Streptococcal Vaccine: A Randomized Controlled Trial. Obstet Gynecol. 2016; 127(2): 213–221. PubMed Abstract | Publisher Full Text\n\nBaker CJ, Rench MA, McInnes P: Immunization of pregnant women with group B streptococcal type III capsular polysaccharide-tetanus toxoid conjugate vaccine. Vaccine. 2003; 21(24): 3468–3472. 
PubMed Abstract | Publisher Full Text\n\nEdwards MS, Rench MA, Baker CJ: Relevance of age at diagnosis to prevention of late-onset group B streptococcal disease by maternal immunization. Pediatr Infect Dis J. 2015; 34(5): 538–539. PubMed Abstract | Publisher Full Text\n\nFischer GW: Immunoglobulin therapy of neonatal group B streptococcal infections: an overview. Pediatr Infect Dis J. 1988; 7(5 Suppl): S13–16. PubMed Abstract\n\nBaker CJ, Carey V, Edwards M, et al.: Women receiving group B Streptococcus serotype III tetanus toxoid (GBS-III-TT) vaccine have reduced vaginal and rectal acquisition of GBS type III. Infectious DIseases Society of America 47th Annual Meeting; October 30, 2009, Philadelphia, Pennsylvania. 2009. Reference Source\n\nKwatra G, Adrian PV, Shiri T, et al.: Natural acquired humoral immunity against serotype-specific group B Streptococcus rectovaginal colonization acquisition in pregnant women. Clin Microbiol Infect. 2015; 21(6): 568.e513–521. PubMed Abstract | Publisher Full Text\n\nHastings MJ, Neil J, Easmon CS: Opsonization of group B Streptococcus type III: studies using clinical strains and maternal sera. Br J Exp Pathol. 1985; 66(3): 317–323. PubMed Abstract | Free Full Text\n\nWilson CB: Immunologic basis for increased susceptibility of the neonate to infection. J Pediatr. 1986; 108(1): 1–12. PubMed Abstract | Publisher Full Text\n\nEdwards MS, Baker CJ, Kasper DL: Opsonic specificity of human antibody to the type III polysaccharide of group B Streptococcus. J Infect Dis. 1979; 140(6): 1004–1008. PubMed Abstract | Publisher Full Text\n\nEdwards MS, Kasper DL, Jennings HJ, et al.: Capsular sialic acid prevents activation of the alternative complement pathway by type III, group B streptococci. J Immunol. 1982; 128(3): 1278–1283. PubMed Abstract\n\nMaródi L, Leijh PC, van Furth R: Characteristics and functional capacities of human cord blood granulocytes and monocytes. Pediatr Res. 1984; 18(11): 1127–1131. 
PubMed Abstract | Publisher Full Text\n\nSiegrist CA, Aspinall R: B-cell responses to vaccination at the extremes of age. Nat Rev Immunol. 2009; 9(3): 185–194. PubMed Abstract | Publisher Full Text\n\nHenneke P, Berner R: SIRS and group-B streptococcal sepsis in newborns: pathogenesis and perspectives in adjunctive therapy. Semin Fetal Neonatal Med. 2006; 11(5): 333–342. PubMed Abstract | Publisher Full Text\n\nWennerstrom DE, Schutt RW: Adult mice as a model for early onset group B streptococcal disease. Infect Immun. 1978; 19(2): 741–744. PubMed Abstract | Free Full Text\n\nRodewald AK, Onderdonk AB, Warren HB, et al.: Neonatal mouse model of group B streptococcal infection. J Infect Dis. 1992; 166(3): 635–639. PubMed Abstract | Publisher Full Text\n\nLounsbery JL, Green LR, Furtado D: Type III group B streptococcal infections in mice: bacteremia and meningitis following oral inoculation. Can J Microbiol. 1981; 27(11): 1150–1155. PubMed Abstract | Publisher Full Text\n\nMancuso G, Midiri A, Beninati C, et al.: Dual role of TLR2 and myeloid differentiation factor 88 in a mouse model of invasive group B streptococcal disease. J Immunol. 2004; 172(10): 6324–6329. PubMed Abstract | Publisher Full Text\n\nFerrieri P, Burke B, Nelson J: Production of bacteremia and meningitis in infant rats with group B streptococcal serotypes. Infect Immun. 1980; 27(3): 1023–1032. PubMed Abstract | Free Full Text\n\nTieffenberg J, Vogel L, Kretschmer RR, et al.: Chicken embryo model for type III group B beta-hemolytic streptococcal septicemia. Infect Immun. 1978; 19(2): 481–485. PubMed Abstract | Free Full Text\n\nHerting E, Strayer DS, Jarstrand C, et al.: Lung function and bacterial proliferation in experimental neonatal pneumonia in ventilated rabbits exposed to monoclonal antibody to surfactant protein A. Lung. 1998; 176(2): 123–131. PubMed Abstract | Publisher Full Text\n\nRojas J, Green RS, Hellerqvist CG, et al.: Studies on group B beta-hemolytic Streptococcus. II. 
Effects on pulmonary hemodynamics and vascular permeability in unanesthetized sheep. Pediatr Res. 1981; 15(6): 899–904. PubMed Abstract | Publisher Full Text\n\nHellerqvist CG, Rojas J, Green RS, et al.: Studies on group B beta-hemolytic Streptococcus. I. Isolation and partial characterization of an extracellular toxin. Pediatr Res. 1981; 15(6): 892–898. PubMed Abstract | Publisher Full Text\n\nRojas J, Larsson LE, Ogletree ML, et al.: Effects of cyclooxygenase inhibition on the response to group B streptococcal toxin in sheep. Pediatr Res. 1983; 17(2): 107–110. PubMed Abstract | Publisher Full Text\n\nBressack MA, Morton NS, Hortop J: Group B streptococcal sepsis in the piglet: effects of fluid therapy on venous return, organ edema, and organ blood flow. Circ Res. 1987; 61(5): 659–669. PubMed Abstract | Publisher Full Text\n\nLarsen JW Jr, London WT, Palmer AE, et al.: Experimental group B streptococcal infection in the rhesus monkey. I. Disease production in the neonate. Am J Obstet Gynecol. 1978; 132(6): 686–690. PubMed Abstract | Publisher Full Text\n\nHemming VG, London WT, Fischer GW, et al.: Immunoprophylaxis of postnatally acquired group B streptococcal sepsis in neonatal rhesus monkeys. J Infect Dis. 1987; 156(4): 655–658. PubMed Abstract | Publisher Full Text\n\nCox F: Prevention of group B streptococcal colonization with topically applied lipoteichoic acid in a maternal-newborn mouse model. Pediatr Res. 1982; 16(10): 816–819. PubMed Abstract | Publisher Full Text\n\nPatras KA, Rösler B, Thoman ML, et al.: Characterization of host immunity during persistent vaginal colonization by Group B Streptococcus. Mucosal Immunol. 2015; 8(6): 1339–1348. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCarey AJ, Tan CK, Mirza S, et al.: Infection and cellular defense dynamics in a novel 17β-estradiol murine model of chronic human group B streptococcus genital tract colonization reveal a role for hemolysin in persistence and neutrophil accumulation. 
J Immunol. 2014; 192(4): 1718–1731. PubMed Abstract | Publisher Full Text\n\nRandis TM, Gelber SE, Hooven TA, et al.: Group B Streptococcus β-hemolysin/cytolysin breaches maternal-fetal barriers to cause preterm birth and intrauterine fetal demise in vivo. J Infect Dis. 2014; 210(2): 265–273. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFurtado D: Experimental group B streptococcal infections in mice: hematogenous virulence and mucosal colonization. Infect Immun. 1976; 13(5): 1315–1320. PubMed Abstract | Free Full Text\n\nPatras KA, Wang NY, Fletcher EM, et al.: Group B Streptococcus CovR regulation modulates host immune signalling pathways to promote vaginal colonization. Cell Microbiol. 2013; 15(7): 1154–1167. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWang NY, Patras KA, Seo HS, et al.: Group B streptococcal serine-rich repeat proteins promote interaction with fibrinogen and vaginal colonization. J Infect Dis. 2014; 210(6): 982–991. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCheng Q, Nelson D, Zhu S, et al.: Removal of group B streptococci colonizing the vagina and oropharynx of mice with a bacteriophage lytic enzyme. Antimicrob Agents Chemother. 2005; 49(1): 111–117. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCox F, Taylor L, Eskew EK, et al.: Prevention of group B streptococcal colonization and bacteremia in neonatal mice with topical vaginal inhibitors. J Infect Dis. 1993; 167(5): 1118–1122. PubMed Abstract | Publisher Full Text\n\nLalioui L, Pellegrini E, Dramsi S, et al.: The SrtA Sortase of Streptococcus agalactiae is required for cell wall anchoring of proteins containing the LPXTG motif, for adhesion to epithelial cells, and for colonization of the mouse intestine. Infect Immun. 2005; 73(6): 3342–3350. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMillard DD, Shulman ST, Yogev R: Rifampin and penicillin for the elimination of group B streptococci in nasally colonized infant rats. 
Pediatr Res. 1985; 19(11): 1183–1186. PubMed Abstract | Publisher Full Text\n\nDavies JK, Paoletti LC, McDuffie RS, et al.: A randomized trial of conjugated group B streptococcal type Ia vaccine in a rabbit model of ascending infection. Am J Obstet Gynecol. 1999; 181(4): 803–808. PubMed Abstract | Publisher Full Text\n\nMcDuffie RS, Gibbs RS: Animal models of ascending genital-tract infection in pregnancy. Infect Dis Obstet Gynecol. 1994; 2(2): 60–70. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcDuffie RS Jr, Gibbs RS: Ascending group B streptococcal genital infection in the rabbit model. Am J Obstet Gynecol. 1996; 175(2): 402–405. PubMed Abstract | Publisher Full Text\n\nAncona RJ, Ferrieri P: Experimental vaginal colonization and mother-infant transmission of group B streptococci in rats. Infect Immun. 1979; 26(2): 599–603. PubMed Abstract | Free Full Text\n\nWhidbey C, Vornhagen J, Gendrin C, et al.: A streptococcal lipid toxin induces membrane permeabilization and pyroptosis leading to fetal injury. EMBO Mol Med. 2015; 7(4): 488–505. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEquils O, Moffatt-Blue C, Ishikawa TO, et al.: Pretreatment with pancaspase inhibitor (Z-VAD-FMK) delays but does not prevent intraperitoneal heat-killed group B Streptococcus-induced preterm delivery in a pregnant mouse model. Infect Dis Obstet Gynecol. 2009; 2009: 749432. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRubens CE, Raff HV, Jackson JC, et al.: Pathophysiology and histopathology of group B streptococcal sepsis in Macaca nemestrina primates induced after intraamniotic inoculation: evidence for bacterial cellular invasion. J Infect Dis. 1991; 164(2): 320–330. PubMed Abstract | Publisher Full Text\n\nAdams Waldorf KM, Gravett MG, McAdams RM, et al.: Choriodecidual group B streptococcal inoculation induces fetal lung injury without intra-amniotic infection and preterm labor in Macaca nemestrina. PLoS One. 2011; 6(12): e28972. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcAdams RM, Vanderhoeven J, Beyer RP, et al.: Choriodecidual infection downregulates angiogenesis and morphogenesis pathways in fetal lungs from Macaca nemestrina. PLoS One. 2012; 7(10): e46863. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcAdams RM, Bierle CJ, Boldenow E, et al.: Choriodecidual Group B Streptococcal Infection Induces miR-155-5p in the Fetal Lung in Macaca nemestrina. Infect Immun. 2015; 83(10): 3909–3917. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBaltimore RS, Kasper DL, Vecchitto J: Mouse protection test for group B Streptococcus type III. J Infect Dis. 1979; 140(1): 81–88. PubMed Abstract | Publisher Full Text\n\nHordnes K, Digranes A, Haugen IL, et al.: Systemic and mucosal antibody responses to group B streptococci following immunization of the colonic-rectal mucosa. J Reprod Immunol. 1995; 28(3): 247–262. PubMed Abstract | Publisher Full Text\n\nShen X, Lagergård T, Yang Y, et al.: Preparation and preclinical evaluation of experimental group B streptococcus type III polysaccharide-cholera toxin B subunit conjugate vaccine for intranasal immunization. Vaccine. 2000; 19(7–8): 850–861. PubMed Abstract | Publisher Full Text\n\nShen X, Lagergård T, Yang Y, et al.: Systemic and mucosal immune responses in mice after mucosal immunization with group B streptococcus type III capsular polysaccharide-cholera toxin B subunit conjugate vaccine. Infect Immun. 2000; 68(10): 5749–5755. PubMed Abstract | Publisher Full Text | Free Full Text\n\nShen X, Lagergård T, Yang Y, et al.: Effect of pre-existing immunity for systemic and mucosal immune responses to intranasal immunization with group B Streptococcus type III capsular polysaccharide-cholera toxin B subunit conjugate. Vaccine. 2001; 19(25–26): 3360–3368. 
PubMed Abstract | Publisher Full Text\n\nShen X, Lagergård T, Yang Y, et al.: Group B Streptococcus capsular polysaccharide-cholera toxin B subunit conjugate vaccines prepared by different methods for intranasal immunization. Infect Immun. 2001; 69(1): 297–306. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHordnes K, Tynning T, Brown TA, et al.: Nasal immunization with group B streptococci can induce high levels of specific IgA antibodies in cervicovaginal secretions of mice. Vaccine. 1997; 15(11): 1244–1251. PubMed Abstract | Publisher Full Text\n\nSantillan DA, Andracki ME, Hunter SK: Protective immunization in mice against group B streptococci using encapsulated C5a peptidase. Am J Obstet Gynecol. 2008; 198(1): 114.e1–6. PubMed Abstract | Publisher Full Text\n\nPaoletti LC, Madoff LC: Vaccines to prevent neonatal GBS infection. Semin Neonatol. 2002; 7(4): 315–323. PubMed Abstract | Publisher Full Text\n\nPaoletti LC, Pinel J, Kennedy RC, et al.: Maternal antibody transfer in baboons and mice vaccinated with a group B streptococcal polysaccharide conjugate. J Infect Dis. 2000; 181(2): 653–658. PubMed Abstract | Publisher Full Text\n\nMadoff LC, Paoletti LC, Tai JY, et al.: Maternal immunization of mice with group B streptococcal type III polysaccharide-beta C protein conjugate elicits protective antibody to multiple serotypes. J Clin Invest. 1994; 94(1): 286–292. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPaoletti LC, Wessels MR, Rodewald AK, et al.: Neonatal mouse protection against infection with multiple group B streptococcal (GBS) serotypes by maternal immunization with a tetravalent GBS polysaccharide-tetanus toxoid conjugate vaccine. Infect Immun. 1994; 62(8): 3236–3243. PubMed Abstract | Free Full Text\n\nNilo A, Morelli L, Passalacqua I, et al.: Anti-Group B Streptococcus Glycan-Conjugate Vaccines Using Pilus Protein GBS80 As Carrier and Antigen: Comparing Lysine and Tyrosine-directed Conjugation. ACS Chem Biol. 
2015; 10(7): 1737–1746. PubMed Abstract | Publisher Full Text\n\nMaione D, Margarit I, Rinaudo CD, et al.: Identification of a universal Group B streptococcus vaccine by multiple genome screen. Science. 2005; 309(5731): 148–150. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNuccitelli A, Cozzi R, Gourlay LJ, et al.: Structure-based approach to rationally design a chimeric protein for an effective vaccine against Group B Streptococcus infections. Proc Natl Acad Sci U S A. 2011; 108(25): 10278–10283. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLindahl G, Stålhammar-Carlemalm M, Areschoug T: Surface proteins of Streptococcus agalactiae and related proteins in other bacterial pathogens. Clin Microbiol Rev. 2005; 18(1): 102–127. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBohnsack JF, Chang JK, Hill HR: Restricted ability of group B streptococcal C5a-ase to inactivate C5a prepared from different animal species. Infect Immun. 1993; 61(4): 1421–1426. PubMed Abstract | Free Full Text\n\nBohnsack JF, Widjaja K, Ghazizadeh S, et al.: A role for C5 and C5a-ase in the acute neutrophil response to group B streptococcal infections. J Infect Dis. 1997; 175(4): 847–855. PubMed Abstract | Publisher Full Text\n\nOnderdonk AB, Kennedy RC: Use of Animals for Vaccine Development. In: Paoletti LC, McInnes PM, eds. Vaccines. From Concept to Clinic: A Guide to the Development and Clinical Testing of Vaccines for Human Use. Boca Raton, Florida: CRC Press; 1999. Publisher Full Text\n\nPaoletti LC, Kennedy RC, Chanh TC, et al.: Immunogenicity of group B Streptococcus type III polysaccharide-tetanus toxoid vaccine in baboons. Infect Immun. 1996; 64(2): 677–679. PubMed Abstract | Free Full Text\n\nBaker CJ, Paoletti LC, Wessels MR, et al.: Safety and immunogenicity of capsular polysaccharide-tetanus toxoid conjugate vaccines for group B streptococcal types Ia and Ib. J Infect Dis. 1999; 179(1): 142–150. 
PubMed Abstract | Publisher Full Text\n\nLancefield RC: Two Serological Types Of Group B Hemolytic Streptococci With Related, But Not Identical, Type-Specific Substances. J Exp Med. 1938; 67(1): 25–40. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBaker CJ, Edwards MS, Kasper DL: Immunogenicity of polysaccharides from type III, group B Streptococcus. J Clin Invest. 1978; 61(4): 1107–1110. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBaker CJ, Edwards MS: Group B streptococcal conjugate vaccines. Arch Dis Child. 2003; 88(5): 375–378. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBaker CJ, Kasper DL: Group B streptococcal vaccines. Rev Infect Dis. 1985; 7(4): 458–467. PubMed Abstract\n\nBaker CJ, Rench MA, Edwards MS, et al.: Immunization of pregnant women with a polysaccharide vaccine of group B streptococcus. N Engl J Med. 1988; 319(18): 1180–1185. PubMed Abstract | Publisher Full Text\n\nKasper DL, Paoletti LC, Wessels MR, et al.: Immune response to type III group B streptococcal polysaccharide-tetanus toxoid conjugate vaccine. J Clin Invest. 1996; 98(10): 2308–2314. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBaker CJ, Paoletti LC, Rench MA, et al.: Use of capsular polysaccharide-tetanus toxoid conjugate vaccine for type II group B Streptococcus in healthy women. J Infect Dis. 2000; 182(4): 1129–1138. PubMed Abstract | Publisher Full Text\n\nBaker CJ, Paoletti LC, Rench MA, et al.: Immune response of healthy women to 2 different group B streptococcal type V capsular polysaccharide-protein conjugate vaccines. J Infect Dis. 2004; 189(6): 1103–1112. PubMed Abstract | Publisher Full Text\n\nNuccitelli A, Rinaudo CD, Maione D: Group B Streptococcus vaccine: state of the art. Ther Adv Vaccines. 2015; 3(3): 76–90. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBaker CJ, Rench MA, Fernandez M, et al.: Safety and immunogenicity of a bivalent group B streptococcal conjugate vaccine for serotypes II and III. 
J Infect Dis. 2003; 188(1): 66–73. PubMed Abstract | Publisher Full Text\n\nHeyderman RS, Madhi SA, French N, et al.: Group B streptococcus vaccination in pregnant women with or without HIV in Africa: a non-randomised phase 2, open-label, multicentre trial. Lancet Infect Dis. 2016; 16(5): 546–55. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBröker M: Potential protective immunogenicity of tetanus toxoid, diphtheria toxoid and Cross Reacting Material 197 (CRM197) when used as carrier proteins in glycoconjugates. Hum Vaccin Immunother. 2016; 12(3): 664–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWorld Health Organization: Haemophilus influenzae type b (Hib) Vaccination Position Paper – July 2013. Wkly Epidemiol Rec. 2013; 88(39): 413–26. PubMed Abstract\n\nPichichero ME: Protein carriers of conjugate vaccines: characteristics, development, and clinical trials. Hum Vaccin Immunother. 2013; 9(12): 2505–2523. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWorld Health Organization: Information sheet. Observed rate of vaccine reactions. Pneumococcal vaccine. WHO vaccine reaction rates information sheets. 2012; Accessed March 1, 2016. Reference Source\n\nLarsson C, Stålhammar-Carlemalm M, Lindahl G: Protection against experimental infection with group B streptococcus by immunization with a bivalent protein vaccine. Vaccine. 1999; 17(5): 454–458. PubMed Abstract | Publisher Full Text\n\nLarsson C, Lindroth M, Nordin P, et al.: Association between low concentrations of antibodies to protein alpha and Rib and invasive neonatal group B streptococcal infection. Arch Dis Child Fetal Neonatal Ed. 2006; 91(6): F403–408. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMinervax initiates phase I clinical trial with innovative group B streptococcal (GBS) vaccine to prevent life-threatening infections in newborns. [Press release], June 2, 2015; 2015. 
Reference Source\n\nMargarit I, Rinaudo CD, Galeotti CL, et al.: Preventing bacterial infections with pilus-based vaccines: the group B streptococcus paradigm. J Infect Dis. 2009; 199(1): 108–115. PubMed Abstract | Publisher Full Text\n\nDangor Z, Kwatra G, Izu A, et al.: Association between maternal Group B Streptococcus surface-protein antibody concentrations and invasive disease in their infants. Expert Rev Vaccines. 2015; 14(12): 1651–1660. PubMed Abstract | Publisher Full Text\n\nLachenauer CS, Baker CJ, Baron MJ, et al.: Quantitative determination of immunoglobulin G specific for group B streptococcal beta C protein in human maternal serum. J Infect Dis. 2002; 185(3): 368–374. PubMed Abstract | Publisher Full Text\n\nManning L, Laman M, Mare T, et al.: Accuracy of cerebrospinal leucocyte count, protein and culture for the diagnosis of acute bacterial meningitis: a comparative study using Bayesian latent class analysis. Trop Med Int Health. 2014; 19(12): 1520–1524. PubMed Abstract | Publisher Full Text\n\nPannaraj PS, Kelly JK, Madoff LC, et al.: Group B Streptococcus bacteremia elicits beta C protein-specific IgM and IgG in humans. J Infect Dis. 2007; 195(3): 353–356. PubMed Abstract | Publisher Full Text\n\nPannaraj PS, Kelly JK, Rench MA, et al.: Alpha C protein-specific immunity in humans with group B streptococcal colonization and invasive disease. Vaccine. 2008; 26(4): 502–508. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLin FY, Weisman LE, Azimi PH, et al.: Level of maternal IgG anti-group B streptococcus type III antibody correlated with protection of neonates against early-onset disease caused by this pathogen. J Infect Dis. 2004; 190(5): 928–934. PubMed Abstract | Publisher Full Text\n\nBaker CJ, Carey VJ, Rench MA, et al.: Maternal antibody at delivery protects neonates from early onset group B streptococcal disease. J Infect Dis. 2014; 209(5): 781–788. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nDangor Z, Kwatra G, Izu A, et al.: Correlates of protection of serotype-specific capsular antibody and invasive Group B Streptococcus disease in South African infants. Vaccine. 2015; 33(48): 6793–6799. PubMed Abstract | Publisher Full Text\n\nLin FY, Philips JB 3rd, Azimi PH, et al.: Level of maternal antibody required to protect neonates against early-onset disease caused by group B Streptococcus type Ia: a multicenter, seroepidemiology study. J Infect Dis. 2001; 184(8): 1022–1028. PubMed Abstract | Publisher Full Text\n\nMatsubara K, Katayama K, Baba K, et al.: Seroepidemiologic studies of serotype VIII group B Streptococcus in Japan. J Infect Dis. 2002; 186(6): 855–858. PubMed Abstract | Publisher Full Text\n\nDangor Z, Lala SG, Kwatra G, et al.: Group B Streptococcus: developing a correlate of protection for a vaccine against neonatal infections. Curr Opin Infect Dis. 2016; 29(3): 262–7. PubMed Abstract | Publisher Full Text\n\nDangor Z, Kwatra G, Izu A, et al.: Review on the association of Group B Streptococcus capsular antibody and protection against invasive disease in infants. Expert Rev Vaccines. 2015; 14(1): 135–149. PubMed Abstract | Publisher Full Text\n\nPlotkin SA: Vaccines: correlates of vaccine-induced immunity. Clin Infect Dis. 2008; 47(3): 401–409. PubMed Abstract | Publisher Full Text\n\nWorld Health Organization: Correlates of vaccine-induced protection: methods and implications. Geneva, Switzerland: World Health Organization; 2013. WHO/IVB/13.01. Reference Source\n\nBaltimore RS, Kasper DL, Baker CJ, et al.: Antigenic specificity of opsonophagocytic antibodies in rabbit anti-sera to group B streptococci. J Immunol. 1977; 118(2): 673–678. PubMed Abstract\n\nKwatra G, Adrian PV, Shiri T, et al.: Natural acquired humoral immunity against serotype-specific group B Streptococcus rectovaginal colonization acquisition in pregnant women. Clin Microbiol Infect. 2015; 21(6): 568.e13–568.e21. 
PubMed Abstract | Publisher Full Text\n\nHerbert J, Thomas S, Brookes C, et al.: Antibody-mediated complement C3b/iC3b binding to group B Streptococcus in paired mother and baby serum samples in a refugee population on the Thailand-Myanmar border. Clin Vaccine Immunol. 2015; 22(3): 319–326. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGuttormsen HK, Baker CJ, Nahm MH, et al.: Type III group B streptococcal polysaccharide induces antibodies that cross-react with Streptococcus pneumoniae type 14. Infect Immun. 2002; 70(4): 1724–1738. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKasper DL, Wessels MR, Guttormsen HK, et al.: Measurement of human antibodies to type III group B Streptococcus. Infect Immun. 1999; 67(8): 4303–4305. PubMed Abstract | Free Full Text\n\nKlegerman ME, Boyer KM, Papierniak CK, et al.: Estimation of the protective level of human IgG antibody to the type-specific polysaccharide of group B Streptococcus type Ia. J Infect Dis. 1983; 148(4): 648–655. PubMed Abstract | Publisher Full Text\n\nGotoff SP, Papierniak CK, Klegerman ME, et al.: Quantitation of IgG antibody to the type-specific polysaccharide of group B streptococcus type 1b in pregnant women and infected infants. J Pediatr. 1984; 105(4): 628–630. PubMed Abstract | Publisher Full Text\n\nGray BM, Pritchard DG, Dillon HC Jr: Seroepidemiological studies of group B Streptococcus type II. J Infect Dis. 1985; 151(6): 1073–1080. PubMed Abstract | Publisher Full Text\n\nGotoff SP, Odell C, Papierniak CK, et al.: Human IgG antibody to group b Streptococcus type III: comparison of protective levels in a murine model with levels in infected human neonates. J Infect Dis. 1986; 153(3): 511–519. PubMed Abstract | Publisher Full Text\n\nFeldman RG, Ferrante A: Prevalence of anti-group B streptococcal type III capsular IgG antibodies in the United Kingdom and an analysis of their specific IgG subclasses. J Infect Dis. 1990; 162(4): 883–887. 
PubMed Abstract | Publisher Full Text\n\nWorld Health Organization: Tetanus vaccine. WHO position paper. Wkly Epidemiol Rec. 2006; 81(20): 198–208. PubMed Abstract\n\nWorld Health Organization: Pertussis vaccines: WHO position paper - August 2015. Wkly Epidemiol Rec. 2015; 90(35): 433–460. PubMed Abstract\n\nWorld Health Organization: Vaccines against influenza WHO position paper – November 2012. Wkly Epidemiol Rec. 2012; 87(47): 461–476. PubMed Abstract\n\nRoberts JN, Gruber MF: Regulatory considerations in the clinical development of vaccines indicated for use during pregnancy. Vaccine. 2015; 33(8): 966–972. PubMed Abstract | Publisher Full Text\n\nWorld Health Organization: Global Advisory Committee on Vaccine Safety, 12–13 June 2013. Wkly Epidemiol Rec. 2013; 88(29): 301–312. PubMed Abstract\n\nGlobal Advisory Committee on Vaccine Safety: Safety of Immunization during Pregnancy. A review of Evidence. Geneva, Switzerland: World Health Organization. 2014. Reference Source\n\nZheteyeva Y, Moro PL, Yue X, et al.: Safety of meningococcal polysaccharide-protein conjugate vaccine in pregnancy: a review of the Vaccine Adverse Event Reporting System. Am J Obstet Gynecol. 2013; 208(6): 478.e1–6. PubMed Abstract | Publisher Full Text\n\nWak G, Williams J, Oduro A, et al.: The Safety of PsA-TT in Pregnancy: An Assessment Performed Within the Navrongo Health and Demographic Surveillance Site in Ghana. Clin Infect Dis. 2015; 61(Suppl 5): S489–492. PubMed Abstract | Publisher Full Text | Free Full Text\n\nXu H, Hu C, Gong R, et al.: Evaluation of a novel chimeric B cell epitope-based vaccine against mastitis induced by either Streptococcus agalactiae or Staphylococcus aureus in mice. Clin Vaccine Immunol. 2011; 18(6): 893–900. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAvci FY, Li X, Tsuji M, et al.: Carbohydrates and T cells: a sweet twosome. Semin Immunol. 2013; 25(2): 146–151. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nLeroux-Roels G, Maes C, Willekens J, et al.: A randomized, observer-blind Phase Ib study to identify formulations and vaccine schedules of a trivalent Group B Streptococcus vaccine for use in non-pregnant and pregnant women. Vaccine. 2016; 34(15): 1786–91. PubMed Abstract | Publisher Full Text\n\nMadhi SA, Cutland CL, Jose L, et al.: Safety and immunogenicity of an investigational maternal trivalent group B streptococcus vaccine in healthy women and their infants: a randomised phase 1b/2 trial. Lancet Infect Dis. 2016; 16(8): 923–34. PubMed Abstract | Publisher Full Text\n\nMohle-Boetani JC, Schuchat A, Plikaytis BD, et al.: Comparison of prevention strategies for neonatal group B streptococcal infection. A population-based economic analysis. JAMA. 1993; 270(12): 1442–1448. PubMed Abstract | Publisher Full Text\n\nInstitute of Medicine (US) Committee to Study Priorities for Vaccine Development; Stratton KR, Durch JS, eds, : Vaccines for the 21st Century: A Tool for Decisionmaking. In: The National Academies Collection: Reports funded by National Institutes of Health. Washington (DC): National Academies Press (US). Copyright 2000 by the National Academy of Sciences. 2000. PubMed Abstract | Publisher Full Text\n\nSinha A, Lieu TA, Paoletti LC, et al.: The projected health benefits of maternal group B streptococcal vaccination in the era of chemoprophylaxis. Vaccine. 2005; 23(24): 3187–3195. PubMed Abstract | Publisher Full Text\n\nColbourn T, Asseburg C, Bojke L, et al.: Prenatal screening and treatment strategies to prevent group B streptococcal and other bacterial infections in early infancy: cost-effectiveness and expected value of information analyses. Health Technol Assess. 2007; 11(29): 1–226, iii. PubMed Abstract | Publisher Full Text\n\nKim SY, Russell LB, Park J, et al.: Cost-effectiveness of a potential group B streptococcal vaccine program for pregnant women in South Africa. Vaccine. 
2014; 32(17): 1954–1963. PubMed Abstract | Publisher Full Text\n\nOster G, Edelsberg J, Hennegan K, et al.: Prevention of group B streptococcal disease in the first 3 months of life: would routine maternal immunization during pregnancy be cost-effective? Vaccine. 2014; 32(37): 4778–4785. PubMed Abstract | Publisher Full Text\n\nAndrews N, Borrow R, Miller E: Validation of serological correlate of protection for meningococcal C conjugate vaccine by using efficacy estimates from postlicensure surveillance in England. Clin Diagn Lab Immunol. 2003; 10(5): 780–786. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWorld Health Organization: Pneumococcal vaccines WHO position paper--2012. Wkly Epidemiol Rec. 2012; 87(14): 129–144. PubMed Abstract\n\nU.S. Food and Drug Administration: Guidance for industry: Clinical Data Needed to Support the Licensure of Seasonal Inactivated Influenza Vaccines. Vaccines Guidances. 2007; Accessed February 25, 2016. Reference Source\n\nEuropean Medicines Agency: Guideline on influenza vaccines: non-clinical and clinical module. 2014. Reference Source\n\nQin L, Gilbert PB, Corey L, et al.: A framework for assessing immunological correlates of protection in vaccine trials. J Infect Dis. 2007; 196(9): 1304–1312. PubMed Abstract | Publisher Full Text\n\nGiménez M, Prat C, Vallés X, et al.: Evaluation of the VITAL (bioMérieux) automated blood culture system using blind subculture. Clin Microbiol Infect. 2002; 8(4): 222–228. PubMed Abstract | Publisher Full Text\n\nTrijbels-Smeulders M, Gerards LJ, M PC, et al.: Epidemiology of neonatal group B streptococcal disease in The Netherlands 1997–98. Paediatr Perinat Epidemiol. 2002; 16(4): 334–341. PubMed Abstract | Publisher Full Text\n\nLuck S, Torny M, d'Agapeyeff K, et al.: Estimated early-onset group B streptococcal neonatal disease. Lancet. 2003; 361(9373): 1953–1954. 
PubMed Abstract | Publisher Full Text\n\nCarbonell-Estrany X, Figueras-Aloy J, Salcedo-Abizanda S, et al.: Probable early-onset group B streptococcal neonatal sepsis: a serious clinical condition related to intrauterine infection. Arch Dis Child Fetal Neonatal Ed. 2008; 93(2): F85–89. PubMed Abstract | Publisher Full Text\n\nMeehan M, Cafferkey M, Corcoran S, et al.: Real-time polymerase chain reaction and culture in the diagnosis of invasive group B streptococcal disease in infants: a retrospective study. Eur J Clin Microbiol Infect Dis. 2015; 34(12): 2413–2420. PubMed Abstract | Publisher Full Text\n\nFulton TR, Narayanan D, Bonhoeffer J, et al.: A systematic review of adverse events following immunization during pregnancy and the newborn period. Vaccine. 2015; 33(47): 6453–6465. PubMed Abstract | Publisher Full Text\n\nWorld Health Organization: WHO Meeting to develop Brighton Collaboration definitions of key terms used for monitoring the safety of immunization in pregnancy in mothers and newborn children. Immunization, Vaccines and Biologicals. 2014; Accessed April 10, 2016. Reference Source\n\nGAIA: The GAIA consortium Website. 2016; Accessed April 6. Reference Source\n\nGAIA: Newsletter, September 2015. 2015. Reference Source\n\nAllesee L, Gallagher CM: Pregnancy and Protection: The Ethics of Limiting a Pregnant Woman's Participation in Clinical Trials. J Clin Res Bioeth. 2011; 2(108): pii: 1000108. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMacklin R: Enrolling pregnant women in biomedical research. Lancet. 2010; 375(9715): 632–633. PubMed Abstract | Publisher Full Text\n\nPellegrini C, McCabe ER: Maternal immunization at the crossroads. Vaccine. 2015; 33(47): 6501–6502. PubMed Abstract | Publisher Full Text\n\nCutland CL, Cunnington M, Olugbosi M, et al.: Lessons learnt from enrolment and follow up of pregnant women and their infants in clinical trials in South Africa, a low-middle income country. Vaccine. 2015; 33(47): 6406–6412. 
PubMed Abstract | Publisher Full Text\n\nWhite A, Madhi SA: Ethical considerations for designing GBS maternal vaccine efficacy trials in low-middle income countries. Vaccine. 2015; 33(47): 6396–6400. PubMed Abstract | Publisher Full Text\n\nDivala TH, Mungwira RG, Laufer MK: Moving targets: The challenges of studying infectious diseases among pregnant women in resource limited settings. Vaccine. 2015; 33(47): 6401–6405. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCouncil for International Organizations of Medical Sciences (CIOMS): International Ethical Guidelines for Biomedical Research Involving Human Subjects. 2002 ed. Geneva: CIOMS; 2003. Accessed February 26, 2016. Reference Source\n\nClinical Development and Requirements for Licensure of Vaccines Intended for Use During Pregnancy to Prevent Disease in the Infant. [presentation], 2015. Reference Source\n\nO'Brien KL, Binka F, Marsh K, et al.: Mind the gap: jumping from vaccine licensure to routine use. Lancet. 2016; 387(10031): 1887–1889. PubMed Abstract | Publisher Full Text\n\nLee BY, Burke DS: Constructing target product profiles (TPPs) to help vaccines overcome post-approval obstacles. Vaccine. 2010; 28(16): 2806–2809. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSmith NH, Gordon SV, de la Rua-Domenech R, et al.: Bottlenecks and broomsticks: the molecular evolution of Mycobacterium bovis. Nat Rev Microbiol. 2006; 4(9): 670–681. PubMed Abstract | Publisher Full Text\n\nPrentice RL: Surrogate endpoints in clinical trials: definition and operational criteria. Stat Med. 1989; 8(4): 431–440. PubMed Abstract | Publisher Full Text\n\nHeath PT: Status of vaccine research and development of vaccines for GBS. Vaccine. 2016; 34(26): 2876–9. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "16511",
"date": "10 Oct 2016",
"name": "Laura Elizabeth Riley",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThis is an excellent review article summarizing the status of GBS vaccine development for global use. This manuscript, in a logical and practical way, lays out the case for further development and testing of a GBS vaccine to be given to mothers primarily for the protection of their neonates.\nThe burden of maternal and neonatal disease is thoroughly discussed and highlights the paucity of epidemiological data in many resource limited settings. This is the critical data needed to determine the contribution of GBS to overall neonatal morbidity and mortality in low and middle income countries where a candidate vaccine is most likely to be beneficial.\nThere is an excellent list and discussion of previous GBS vaccine trials and outcomes. Even more useful is the discussion of challenges going forward with GBS vaccine trials, including a review of potential trial endpoints, challenges of any maternal vaccination program, and potential ethical and regulatory hurdles.",
"responses": []
},
{
"id": "16507",
"date": "13 Oct 2016",
"name": "Ronald Francis Lamont",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nHaving personally published earlier this year on GBS vaccination1, this is the most comprehensive review of GBS vaccination I have seen. Quite appropriately, the authors have extended the review beyond early onset GBS disease to include late onset GBS disease and GBS disease in non-pregnant adults.\nUnlike many reviews they have focused on low and middle-income countries (LMIC) where the burden of disease is at its greatest whereas most of the available data are from resource rich countries. In addition, the review covers other important topics often omitted from other overviews such as the increased risk of neonatal GBS disease in HIV-positive mothers, the use of protein based vaccines, reverse vaccinology and the necessary use of surrogate endpoints in vaccine trials.\nImportantly, they have addressed the problem that logistically, because of the estimated sample size for a phase III randomised controlled trial using a clinical endpoint of invasive neonatal disease, the different burden of disease across geographic regions dictates that trials may have to be performed in LMICs. This has its own problems, not the least of which is the acceptability of local standards of care in the context of such a trial.\nFinally, a minor criticism would be the omission of a discussion on the association between black race and GBS rectal colonisation, which is particularly pertinent bearing in mind the burden of disease in Africa and in African American women in the USA. 
In addition, while the use is mentioned, the role of rapid detection of GBS carriage using automated PCR techniques that might decrease the unnecessary use of intrapartum antibiotic prophylaxis during labour merits some emphasis.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2355
|
https://f1000research.com/articles/5-2354/v1
|
22 Sep 16
|
{
"type": "Research Article",
"title": "A pilot study: Can heart rate variability (HRV) be determined using short-term photoplethysmograms?",
"authors": [
"Mohamed Elgendi",
"Ian Norton",
"Matt Brearley",
"Socrates Dokos",
"Derek Abbott",
"Dale Schuurmans",
"Ian Norton",
"Matt Brearley",
"Socrates Dokos",
"Derek Abbott",
"Dale Schuurmans"
],
"abstract": "To date, there have been no studies that investigate the independent use of the photoplethysmogram (PPG) signal to determine heart rate variability (HRV). However, researchers have demonstrated that PPG signals offer an alternative way of measuring HRV when electrocardiogram (ECG) and PPG signals are collected simultaneously. Based on these findings, we take the use of PPGs to the next step and investigate a different approach to show the potential independent use of short 20-second PPG signals collected from healthy subjects after exercise in a hot environment to measure HRV. Our hypothesis is that if the PPG--HRV indices are negatively correlated with age, then short PPG signals are appropriate measurements for extracting HRV parameters. The PPGs of 27 healthy male volunteers at rest and after exercise were used to determine the HRV indices: standard deviation of heartbeat interval (SDNN) and the root-mean square of the difference of successive heartbeats (RMSSD). The results indicate that the use of the $aa$ interval, derived from the acceleration of PPG signals, is promising in determining the HRV statistical indices SDNN and RMSSD over 20-second PPG recordings. Moreover, the post-exercise SDNN index shows a negative correlation with age. There tends to be a decrease of the PPG--SDNN index with increasing age, whether at rest or after exercise. This new outcome validates the negative relationship between HRV and age, and consequently provides further evidence that short PPG signals have the potential to be used in heart rate analysis without the need to measure lengthy sequences of either ECG or PPG signals.",
"keywords": [
"Mobile health",
"Point-of-care device",
"Affordable healthcare",
"Moving averages"
],
"content": "Introduction\n\nThe most commonly accepted heart rate variability (HRV) metric to measure both instantaneous heart rate (HR) and RR intervals is derived from electrocardiogram (ECG) signals. Metrics used to describe HRV in the literature include heart period variability, cycle length variability, RR interval tachogram, and RR variability.\n\nHRV measurement captures variations of the HR about the mean HR, while providing information on the stability of the sympathetic–parasympathetic autonomic system. This in turn provides information on the risk of sudden cardiac death. An example can be seen when subjects are continuously exposed to dry heat. Dry heat exposure provokes a stress response marked by increased HR, which occurs due to a significant reduction in parasympathetic control of the HR and is signified by reduced HRV1.\n\nThe traditional method of identifying heartbeats in ECGs is by detecting R peaks. In almost every study, comparisons are made between HRV calculated from ECG signals and HRV calculated from photoplethysmogram (PPG) signals. In these studies, the feasibility of using PPGs as a simple, inexpensive, and convenient alternative diagnostic tool was explored in parallel with the use of ECGs. The overall results showed that PPG is potentially an ideal stand-alone alternative, but it has not been explored independently, possibly explaining why there are no known investigations into the alternate use of PPG signals to measure HR and HRV.\n\nIt is noted that accurately detecting inter-beat intervals from fingertip PPG signals is challenging2–4. Berntson et al.2 noted these challenges and recommended using RR intervals from ECG signals to determine inter-beat intervals. Even with a sophisticated peak detection algorithm, intra-arterial pressure pulses are likely acceptable, but indirect PPG signal measures need further validation. 
Giardino et al.4 demonstrated that it is sufficient to use the distal pulse pressure, under resting conditions, to determine the heart rate, as shown in Figure 1(a). The authors also recommended further investigation into test-retest reliability for different data collection techniques.\n\nTwo successive beats in (a) the fingertip photoplethysmogram (PPG) signal and (b) the second derivative wave of the photoplethysmogram (APG) signal.\n\nThese cautious evaluations may explain the limited investigation into the use of PPG signals (rather than ECG) to measure HR and HRV.\n\nThe contour of the PPG can be utilized to detect the heartbeat, and therefore HRV can be measured from the contour5. In Figure 1(a), the two circles with the smallest positive PPG amplitudes represent two consecutive heartbeats. Reliable detection of the heartbeats using the PPG contour is, however, challenging due to noise and interference from hemodynamic variables6. These challenges can be addressed by using the second derivative of the photoplethysmogram waveform, also known as the acceleration plethysmogram (APG). In Figure 1(b), the two circles with the largest positive amplitudes represent two consecutive heartbeats of the APG signal. The heart rate is more accurately detected using the APG signal due to the clearly defined peaks in the APG (more specifically the a wave), when compared to the peaks in the PPG contour, as shown in Figure 2(b).\n\n(a) Fingertip photoplethysmogram. (b) Second derivative wave of photoplethysmogram. 
The photoplethysmogram waveform consists of one systolic wave and one diastolic wave, while the second derivative photoplethysmogram waveform consists of four systolic waves (a, b, c, and d waves) and one diastolic wave (e wave).\n\nWe therefore sought to validate the usefulness of PPGs using an alternative method, without the collection of ECGs, based on the already established negative correlation between HRV and age8. We hypothesized that if the PPG–HRV indices calculated from short PPG signals are negatively correlated with age, then a short PPG signal is an appropriate measurement for extracting HRV parameters. In this study, only the standard deviation of the heartbeat interval (SDNN) and the root-mean square of the difference of successive heartbeats (RMSSD) indices are investigated, as they are suitable for short recordings based on the recommendation in 9. Herein we investigate whether the SDNN and RMSSD calculated from short PPG signals measured before and after exercise can potentially be correlated with age.\n\n\nMaterials and methods\n\nCharles Darwin University (CDU) has one annotated PPG database available, with data collected in 2006. The dataset includes participant recordings during rest (before exercise) and after one hour of exercise (walking) on a treadmill in the climate control chamber at the Northern Territory Institute of Sport (Darwin, Australia). PPG data were collected on three separate occasions during each rest period, as shown in Figure 3.\n\nThe duration of the whole experiment was approximately 4 hours; each exercise session took approximately 1 hour, while the PPG signals were collected during the 5-minute break after each session at a sampling rate of 200 Hz. The length of each PPG recording is 20 seconds.\n\nExercise was set to moderate intensity, with the treadmill speed set at 5 km/h with a 1% incline increment (which corresponds to the effort required to walk with 8 kg of webbing). Details on the project are found at 10. 
Written consent was obtained from all subjects before participation; the study was approved by the Charles Darwin University Ethics Committee and was conducted in accordance with applicable guidelines and regulations.\n\nA photoplethysmography device (Salus APG, Japan) was used to measure PPG recordings from 27 healthy male volunteers (mean ± SD age of 27 ± 6.9). Sensors were placed on the cuticle of the second digit of the left hand. Measurements were recorded for each subject at rest on a chair. Data were collected at a sampling rate of 200 Hz, with a recording duration of 20 seconds.\n\nWe used one PPG oximeter to collect the data from all subjects measured after each exercise. It was not feasible to collect data for a longer period of time, as we needed to measure other subjects waiting in line; the longer the waiting time, the more the heat-stress impact is lost as the body cools. Therefore, it was convenient to collect data for a consistent recording length of 20 seconds. Preliminary feasibility was also tested with this method, since the collection of a shorter recording length is preferred in a clinical setting.\n\nThe beat annotations were performed by a cardiologist, and each beat was labeled as an a wave after processing the unfiltered PPG signals with a second-order bandpass Butterworth filter (0.5–8 Hz), based on the recommendation in 11. There were a total of 584 heartbeats found in signals measured before exercise (during rest). A total of 885 heartbeats were found in the PPG signals collected after one hour of exercise; these recordings also contained fast-rhythm PPG signals.\n\nDetection of a wave. In this study, the a wave detection algorithm published in 12 is used. Elgendi proposed an algorithm in 201311 that was used to detect systolic waves in PPG signals, QRS complexes in ECG signals13, and c, d, and e waves in APG signals14. 
The algorithm is adapted for this paper, and the same approach is utilized to detect the a waves. It has four stages (bandpass filtering, squaring, generating potential blocks, and thresholding) and uses five parameters: F1, F2, MApeak, MAbeat, and β. Each stage is explained in more detail as follows.\n\n• Bandpass filter: A zero-phase second-order Butterworth filter, with a 0.5–15 Hz passband, was implemented to remove frequency components that do not contribute to the a wave, such as baseline wander and high frequencies. Applying the zero-phase Butterworth filter to the PPG signal (at rest and post exercise) produced a filtered signal S[n]. Therefore, F1 = 0.5 Hz and F2 = 15 Hz, based on a brute force search that is discussed in the parameter optimization section in 12.\n\n• Second derivative: The second derivative was applied to the filtered PPG to obtain and analyze the APG signals. These non-causal filters are represented by Equation 1 and Equation 2; the three-point centre derivative was used, introducing a delay of only two samples:\n\nZ1[n] = (S[n + 1] − S[n − 1]) / 2T, (1)\n\nZ[n] = (Z1[n + 1] − Z1[n − 1]) / 2T, (2)\n\nwhere T is the sampling interval (the reciprocal of the sampling frequency) and n is the sample index.\n\n• Cancellation of b wave: The APG a wave must be emphasized to make it distinct for detection, which can be accomplished by cutting out the negative parts of the APG signal (Z[n] = 0, if Z[n] < 0).\n\n• Squaring: Large differences that result from the a wave are emphasized by squaring. 
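The bandpass, second-derivative, b-wave cancellation, and squaring stages can be sketched as below. This is a minimal Python/NumPy illustration, not the authors' MATLAB implementation: the function name `preprocess_ppg` and the use of `np.gradient` as the three-point centre derivative are our own choices.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_ppg(ppg, fs=200.0, f1=0.5, f2=15.0):
    """Bandpass-filter a PPG, take its second derivative (the APG),
    cancel the negative b wave, and square the result."""
    # Zero-phase second-order Butterworth bandpass (F1 = 0.5 Hz, F2 = 15 Hz)
    b, a = butter(2, [f1, f2], btype="band", fs=fs)
    s = filtfilt(b, a, ppg)
    # Three-point centre derivative applied twice gives the APG signal Z[n]
    t = 1.0 / fs
    z = np.gradient(np.gradient(s, t), t)
    # Cancellation of the b wave: Z[n] = 0 if Z[n] < 0
    z = np.where(z < 0, 0.0, z)
    # Squaring emphasizes the large differences produced by the a wave
    return z ** 2
```

`filtfilt` applies the filter forward and backward, which is one common way to obtain the zero-phase response the text describes.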
Small differences from the diastolic wave and noise are suppressed, resulting in the output\n\ny[n] = (Z[n])², (3)\n\nThis is important for improving the accuracy in distinguishing the a wave segment in APG signals.\n\n• Generating blocks of interest: Blocks of interest that demarcate the a wave and heartbeat areas are created using two event-related moving averages.\n\nIn this procedure, the first moving average (MApeak) is used to emphasize the a wave area, and is given by\n\nMApeak[n] = (1/W1)(y[n − (W1 − 1)/2] + … + y[n + (W1 − 1)/2]), (4)\n\nwhere W1 represents the window size of the systolic-peak duration, rounded to the nearest odd integer. The exact value for W1 of 175 ms was determined based on a brute force search and is discussed further in the parameter optimization section in 12.\n\nThe beat area is emphasized by the second moving average (MAbeat), which is used as a threshold for the first moving average, and is given by\n\nMAbeat[n] = (1/W2)(y[n − (W2 − 1)/2] + … + y[n + (W2 − 1)/2]), (5)\n\nwhere W2 represents a window size of approximately one beat duration, rounded to the nearest odd integer. The exact value for W2 of 1000 ms was determined using a brute force search and is discussed further in the parameter optimization section in 12.\n\n• Thresholding: The offset level α is given by α = βz, where β = 0 based on a brute force search (discussed further in the parameter optimization section in 12) and z is the statistical mean of the squared filtered PPG signal. The first dynamic threshold value was calculated by shifting the MAbeat signal with the offset level α, as follows:\n\nTHR1 = MAbeat[n] + α. (6)\n\nBy comparing the MApeak signal with THR1, many blocks of interest were generated. Some contain the APG feature (the a wave), while others mainly contain noise. Thus, blocks that result from noise must be rejected next, based on the anticipated systolic-peak width. 
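As a compact sketch, the four stages described above (zero-phase band-pass filtering, the three-point second derivative, b wave cancellation and squaring, the two event-related moving averages, and thresholding) could be re-implemented as follows. This is an illustrative Python version under stated assumptions: the function name `detect_a_waves`, the use of `scipy`/`numpy`, and the block-scanning loop are ours, not the authors' MATLAB implementation.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def detect_a_waves(ppg, fs=200, w1_ms=175, w2_ms=1000, beta=0.0):
    """Sketch of the four-stage a wave detector (hypothetical helper name)."""
    # Stage 1: zero-phase second-order Butterworth band-pass, F1 = 0.5 Hz, F2 = 15 Hz.
    b, a = butter(2, [0.5, 15.0], btype="bandpass", fs=fs)
    s = filtfilt(b, a, ppg)

    # Stage 2: second derivative (APG) via two passes of the three-point
    # centre derivative; T is the sampling interval.
    T = 1.0 / fs
    z = np.gradient(np.gradient(s, T), T)

    # Cancellation of the b wave: clip negative parts, then square.
    z = np.where(z < 0, 0.0, z)
    y = z ** 2

    # Stage 3: two event-related moving averages with odd window sizes
    # W1 (a wave duration, 175 ms) and W2 (about one beat, 1000 ms).
    w1 = int(round(w1_ms * fs / 1000)) | 1
    w2 = int(round(w2_ms * fs / 1000)) | 1
    ma_peak = np.convolve(y, np.ones(w1) / w1, mode="same")
    ma_beat = np.convolve(y, np.ones(w2) / w2, mode="same")

    # Stage 4: dynamic threshold THR1 = MAbeat + alpha (alpha = beta * mean(y)),
    # then reject blocks narrower than THR2 = W1.
    thr1 = ma_beat + beta * y.mean()
    active = ma_peak > thr1

    peaks, n = [], 0
    while n < len(active):
        if active[n]:
            start = n
            while n < len(active) and active[n]:
                n += 1
            if n - start >= w1:  # THR2 width check: keep only a-wave-wide blocks
                peaks.append(start + int(np.argmax(y[start:n])))
        else:
            n += 1
    return np.array(peaks)
```

On a clean periodic signal this sketch returns one location per beat; real PPG recordings would of course exercise the noise-rejection logic far more than this idealized case.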
For the purposes of this paper, undesired blocks are rejected using THR2, a threshold that rejects the blocks containing the diastolic wave and noise. Applying the THR2 threshold results in accepting only blocks that contain a waves,\n\nTHR2 = W1. (7)\n\nAs previously mentioned, the threshold THR2 corresponds to the anticipated a wave duration. A block is classified as containing an a wave when it is wider than or equal to THR2; otherwise, it is classified as noise. Blocks smaller than the expected a wave duration are created by noise and are eliminated; accepted blocks are considered to hold an a wave. The final step in this process is to identify the maximum absolute value within each accepted block to detect the a wave. Detected a waves are compared against the annotated a waves to determine whether they were correctly detected.\n\nPerformance of the a wave detector. We used two statistical measures to evaluate the a wave detection performance: sensitivity (SE) and positive predictivity (+P), where SE = TP/(TP + FN) and +P = TP/(TP + FP). TP represents the total number of true positives (an a wave detected as an a wave), FN represents the total number of false negatives (an a wave that has not been detected), and FP represents the total number of false positives (a non-a wave detected as an a wave). SE reports the percentage of true a waves correctly detected by the algorithm, and +P reports the percentage of the detected a waves that were true a waves. MATLAB 2012a (The MathWorks, Inc., Natick, MA, USA) was used to implement the a wave detector.\n\nCalculation of HRV indices. 
The detected a waves (heartbeats) are used to calculate the duration of each consecutive aa interval, as follows: aa[i] = A[i + 1] − A[i], where A represents the annotated a waves in each PPG signal, and aa represents the aa intervals. Because the main interest is the duration of the aa intervals rather than their amplitude, no preprocessing is needed. It is known that HRV decreases with normal aging, based on the analysis of R peaks in ECG signals15–17. Therefore, if HRV calculated from a waves in PPG signals is negatively correlated with age, PPG signals can potentially be used to measure HRV. The correlation between age and HRV is found by calculating and comparing two time-domain HRV parameters that are typically used with ECG signals. The first parameter, SDNN, is the SD of the heartbeat durations; here, the RR intervals are replaced by aa intervals. The SDNN is calculated as follows:\n\nSDNN = √(Σ(aa[i] − mean(aa))²/(N − 1)), (8)\n\nwhere N is the number of aa intervals. The second parameter is RMSSD, which is calculated as follows:\n\nRMSSD = √(Σ(aa[i + 1] − aa[i])²/(N − 1)), (9)\n\nAnalysis of trend. As we have a small sample size, there is a need to examine both the correlation coefficient (r) and the slope. Note that it is common to obtain a small correlation with a small sample size, and therefore the slope can be used as an alternative to determine the relationship. Here, r is calculated as follows: r = Cov(u, v)/(σu σv), where Cov(u, v) is the covariance between data u and data v, σu is the SD of data u, and σv is the SD of data v. Here, u and v refer to the HRV indices.\n\nSignificance of trend. To test the significance of the slope, we performed a multilinear regression of the responses in y on the predictors in x for all lines in Figure 4 and Figure 5. The null hypothesis states that the slope is equal to zero, and the alternative hypothesis states that the slope is not equal to zero. 
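The index and trend calculations above can be sketched in a few lines of Python. The helper name `hrv_indices`, the `scipy` calls, and the age/SDNN numbers below are illustrative assumptions for demonstration, not the study's data or code.

```python
import numpy as np
from scipy.stats import linregress, pearsonr

def hrv_indices(a_locs, fs=200):
    """SDNN and RMSSD from a wave locations given in samples (hypothetical helper)."""
    aa = np.diff(a_locs) / fs                    # aa[i] = A[i+1] - A[i], in seconds
    sdnn = np.std(aa, ddof=1)                    # SD of the aa intervals
    rmssd = np.sqrt(np.mean(np.diff(aa) ** 2))   # RMS of successive differences
    return sdnn, rmssd

# Trend across subjects: r = Cov(u, v) / (sigma_u * sigma_v), plus a linear
# regression whose p-value tests the null hypothesis that the slope is zero.
# The ages and SDNN values below are made-up illustrative numbers.
ages = np.array([22.0, 24.0, 25.0, 27.0, 30.0, 35.0, 40.0])
sdnn_vals = np.array([0.090, 0.085, 0.080, 0.070, 0.060, 0.050, 0.040])
r, _ = pearsonr(ages, sdnn_vals)
fit = linregress(ages, sdnn_vals)  # fit.slope is the trend; fit.pvalue tests slope == 0
```

With such a small sample, reporting both `r` and `fit.slope` (with its p-value), as the text argues, guards against over-reading a weak correlation alone.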
MATLAB 2012a (The MathWorks, Inc., Natick, MA, USA) was used to calculate the significance of trend.\n\n\nResults and discussion\n\nTo date, several studies have evaluated the agreement between HRV calculated from PPG and ECG signals; however, some results remain controversial18,19. This disagreement is due to applying inappropriate methodologies or inefficient experimental settings. A recent study by Chen et al.20 showed that the accuracy of HRV obtained from PPG was mostly inadequate across 26 unhealthy subjects. However, it has been shown that HRV calculated from the PPG signal is sufficiently accurate only for healthy (and mostly younger) subjects at rest21. Therefore, only fit and healthy subjects were considered in our study. Because of this selection, it is expected that there will be no apparent differences between subjects. Moreover, we can focus on the age impact—without any dependencies such as arrhythmia, transvenous cardiac pacing, and heart transplant—as an indirect assessment of the usefulness of HRV calculated from PPG signals.\n\nIt is worth noting that the heat stress PPG data were collected for this study as part of the project funded by the Australian Department of Defence. As mentioned in the 'Database used' subsection, there was no exercise test with the same subjects in a cooler (or normal) environment included in the main project protocol. Therefore, we are unable to comment on, compare, or discuss the HRV calculation in two different conditions, hot and cool environments.\n\nThe designed exercise length for the heat stress test was four hours; however, only 16 out of 27 subjects completed the full duration. All subjects were able to complete the first 1-hr exercise, and therefore we analyzed only the PPG signals collected after the first hour of exercise.\n\nAfter collecting the PPG signals, the question became how to calculate the HRV from these PPG signals as accurately as possible. 
In the literature, we found that the second derivative of the PPG signal enhances its frequency content and improves waveform characteristics, especially in heat-stressed PPG signals22. Moreover, we found that detection of a waves in APG signals slightly increases the accuracy of detecting heartbeats compared to systolic peaks in PPG signals. For example, the overall a wave detection rate was 99.9%12, while the overall systolic wave detection rate was 99.8%11. Therefore, we applied the second derivative to the PPG signals to obtain APG signals and then applied the a wave detection algorithm described in the 'Methodology' section.\n\nThe a wave detection algorithm was evaluated using 27 records, containing 1,540 heartbeats (584 heartbeats measured at rest and 956 heartbeats measured after 1-hr exercise), with an overall SE of 99.8% and an overall +P of 100%. The overall accuracy was sufficient to calculate the HRV indices automatically.\n\nAfter the automatic detection of a waves, the SDNN and RMSSD indices were calculated for the 27 healthy subjects using PPG recordings of 20 seconds duration each, during rest and after exercise. Figure 4(a) & (b) show the relationship between age and the SDNN index at rest and after exercise, respectively. The SDNN index at rest is more negatively correlated with age (r = −0.271) and has a steeper negative slope (−0.004) than after exercise (r = −0.12 and slope = −0.001). The slope significance for SDNN before exercise was p = 0.001, and the slope significance for SDNN after exercise was p = 0.04.\n\n(a) Age and SDNN calculated from PPG signals for all subjects measured at rest, (b) age and SDNN calculated from PPG signals for all subjects measured after exercise. It is clear that the SDNN index is more negatively correlated with age for 20-second PPG signals measured at rest compared to after-exercise measurements. 
Here, the p-value shows the significance for testing the slope of the linear regression.\n\nFigure 5(a) shows the relationship between age and the RMSSD index at rest, and Figure 5(b) shows the relationship between age and the RMSSD index after exercise. The RMSSD index at rest is more negatively correlated with age (r = −0.217) and has a more negative slope (−0.004) than the RMSSD index after exercise (r = −0.091 and slope = −0.001). Slope significance was achieved for RMSSD before exercise (p = 0.04), but not for RMSSD after exercise (p = 0.18).\n\n(a) Age and RMSSD calculated from PPG signals for all subjects measured at rest, (b) age and RMSSD calculated from PPG signals for all subjects measured after exercise. It is clear that the RMSSD index is more negatively correlated with age for 20-second PPG signals measured at rest compared to after-exercise measurements. Here, the p-value shows the significance for testing the slope of the linear regression.\n\nIt is worth mentioning that the correlations between SDNN and age and between RMSSD and age are not strong. Nevertheless, the slope analysis demonstrates significant trends for HRV indices measured at rest and after exercise, and the combination of the correlation coefficient and the slope provides a more precise evaluation of the trend. Note that PPGs measured at rest have a greater negative slope compared to those measured after exercise. Although it is known that long-term exercise exerts significant effects on HRV23, this effect has not been investigated using short PPG signals. This study is important because it reports the effect of exercise on HRV calculated from short PPG signals and compares the results for subjects measured at rest and after exercise.\n\nThe results of various cross-sectional studies have shown a linear decrease in HRV during exercise with increasing age using only ECG signals8,15,24. 
Interestingly, our results confirm the inverse linear relationship between HRV measures (SDNN and RMSSD) and age. Moreover, SDNN is significantly associated with age for PPG measured both before and after exercise. This new outcome shows that HRV can potentially be measured using short PPG signals.\n\nHealthy subjects were the focus of this study, and the proposed method was implemented successfully only on these subjects. Subject physiology changes significantly depending on current health status, and this is typically reflected in the PPG signal. Thus, due to these changes, the robustness of the proposed algorithm needs to be tested and verified on recordings from unhealthy subjects. The next step, based on the results of this study, is to examine the HRV indices calculated from PPG signals in the diagnosis and monitoring of abnormalities, such as arrhythmia, diabetes, hypertension, and hyperlipidemia. The HRV indices are usually calculated over a period of five minutes from ECG signals; however, the PPG recordings in this study were very short (20 seconds). Additional studies demonstrating that 20-second recordings are as useful as the standard recording length (5 minutes) are needed to further validate our findings. Studies that evaluate the extracted HRV indices as a function of PPG duration are recommended. The PPG database contains signals collected from subjects of approximately the same age (about 22 years old), which creates an imbalance in the age distribution. Therefore, a larger and more diverse sample with a balanced age distribution is needed to generalize the findings of this study.\n\n\nConclusion\n\nThe findings of this preliminary study build on previous studies that discussed the potential use of only the PPG to measure HRV (i.e., without collecting ECG data). Suitable indices for short-duration signals include SDNN and RMSSD, which can be used with 20-second PPG recordings. 
A negative correlation with age at rest has been demonstrated for both indices. Interestingly, only SDNN shows significance for after-exercise measurements. Since long signal measurement can be challenging due to perspiration, calculating SDNN using short PPG measurements from subjects after exercise in hot and humid weather is an appealing approach that needs to be explored on a larger scale. The overall result of this study indicates that short PPG signals are a potential alternative modality for HRV analysis and the identification of individuals at risk.\n\n\nData availability\n\nIn agreement with the Australian Department of Defence, the raw dataset could not be made available because the data could not be sufficiently anonymised to protect subject confidentiality. Upon request and review, the dataset is available at CDU (http://www.cdu.edu.au/ehse). The PPGs are human subject data and will be made available upon request from appropriate investigators after approval by the Charles Darwin University Human Research Ethics Committee (http://www.cdu.edu.au/research/ori/human-ethics).",
"appendix": "Author contributions\n\n\n\nME and IN designed the experiment. ME, DA, SD, and DS performed the statistical analysis. ME, IN, MB, DA, SD, and DS conceived of the study, and participated in its design and coordination and helped to draft the manuscript. All authors read and approved the final manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nFunding for the research project was provided by the Government of the Commonwealth of Australia.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nAuthors gratefully acknowledge the Northern Territory Institute of Sport (Darwin, Australia) and Charles Darwin University for initiating and sharing this line of research.\n\n\nReferences\n\nBruce-Low SS, Cotterrell D, Jones GE: Heart rate variability during high ambient heat exposure. Aviat Space Environ Med. 2006; 77(9): 915–920. PubMed Abstract\n\nBerntson GG, Bigger JT Jr, Eckberg DL, et al.: Heart rate variability: origins, methods, and interpretive caveats. Psychophysiology. 1997; 34(6): 623–648. PubMed Abstract | Publisher Full Text\n\nConstant I, Laude D, Murat I, et al.: Pulse rate variability is not a surrogate for heart rate variability. Clin Sci (Lond). 1999; 97(4): 391–397. PubMed Abstract | Publisher Full Text\n\nGiardino ND, Lehrer PM, Edelberg R: Comparison of finger plethysmograph to ECG in the measurement of heart rate variability. Psychophysiology. 2002; 39(2): 246–253. PubMed Abstract\n\nLu S, Zhao H, Ju K, et al.: Can photoplethysmography variability serve as an alternative approach to obtain heart rate variability information? J Clin Monit Comput. 2008; 22(1): 23–9. PubMed Abstract | Publisher Full Text\n\nWeng J, Ye Z, Weng J: An Improved Pre-processing Approach for Photoplethysmographic Signal. Conf Proc IEEE Eng Med Biol Soc. 2005; 1: 41–44. 
PubMed Abstract | Publisher Full Text\n\nElgendi M: On the analysis of fingertip photoplethysmogram signals. Curr Cardiol Rev. 2012; 8(1): 14–25. PubMed Abstract | Publisher Full Text | Free Full Text\n\nO’Brien IA, O’Hare P, Corrall RJ: Heart rate variability in healthy subjects: effect of age and the derivation of normal ranges for tests of autonomic function. Br Heart J. 1986; 55(4): 348–354. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWang HM, Huang SC: SDNN/RMSSD as a surrogate for LF/HF: a revised investigation. Modelling and Simulation in Engineering. 2012; 2012: 931943. Publisher Full Text\n\nMatsuyama A: ECG and APG Signal Analysis during Exercise in a Hot Environment. PhD Thesis, Charles Darwin University, Darwin, Australia, 2009. Reference Source\n\nElgendi M, Norton I, Brearley M, et al.: Systolic peak detection in acceleration photoplethysmograms measured from emergency responders in tropical conditions. PLoS One. 2013; 8(10): e76585. PubMed Abstract | Publisher Full Text | Free Full Text\n\nElgendi M, Norton I, Brearley M, et al.: Detection of a and b waves in the acceleration photoplethysmogram. Biomed Eng Online. 2014; 13(1): 139. PubMed Abstract | Publisher Full Text | Free Full Text\n\nElgendi M: Fast QRS detection with an optimized knowledge-based method: evaluation on 11 standard ECG databases. PLoS One. 2013; 8(9): e73557. PubMed Abstract | Publisher Full Text | Free Full Text\n\nElgendi M: Detection of c, d, and e waves in the acceleration photoplethysmogram. Comput Methods Programs Biomed. 2014; 117(2): 125–136. PubMed Abstract | Publisher Full Text\n\nUmetani K, Singer DH, McCraty R, et al.: Twenty-four hour time domain heart rate variability and heart rate: relations to age and gender over nine decades. J Am Coll Cardiol. 1998; 31(3): 593–601. PubMed Abstract | Publisher Full Text\n\nBansal D, Khan M, Salhan AK: A review of measurement and analysis of heart rate variability. 
In Proc International Conference on Computer and Automation Engineering. Bangkok, Thailand, 2009; 243–246. Publisher Full Text\n\nLaguna P, Caminal P, Jané R, et al.: Evaluation of HRV by PP and RR interval analysis using a new time delay estimate. In Proc IEEE Computers in Cardiology. Chicago, USA. 1990; 63–66. Publisher Full Text\n\nGil E, Orini M, Bailón R, et al.: Photoplethysmography pulse rate variability as a surrogate measurement of heart rate variability during non-stationary conditions. Physiol Meas. 2010; 31(9): 1271–1290. PubMed Abstract | Publisher Full Text\n\nWong JS, Lu WA, Wu KT, et al.: A comparative study of pulse rate variability and heart rate variability in healthy subjects. J Clin Monit Comput. 2012; 26(2): 107–114. PubMed Abstract | Publisher Full Text\n\nChen X, Huang YY, Yun F, et al.: Effect of changes in sympathovagal balance on the accuracy of heart rate variability obtained from photoplethysmography. Exp Ther Med. 2015; 10(6): 2311–2318. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchäfer A, Vagedes J: How accurate is pulse rate variability as an estimate of heart rate variability? A review on studies comparing photoplethysmographic technology with an electrocardiogram. Int J Cardiol. 2013; 166(1): 15–29. PubMed Abstract | Publisher Full Text\n\nElgendi M, Fletcher RR, Norton I, et al.: Frequency analysis of photoplethysmogram and its derivatives. Comput Methods Programs Biomed. 2015; 122(3): 503–512. PubMed Abstract | Publisher Full Text\n\nZhang J: Effect of age and sex on heart rate variability in healthy subjects. J Manipulative Physiol Ther. 2007; 30(5): 374–379. PubMed Abstract | Publisher Full Text\n\nYeragani VK, Sobolewski E, Kay J, et al.: Effect of age on long-term heart rate variability. Cardiovasc Res. 1997; 35(1): 35–42. ISSN 0008-6363. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "16497",
"date": "03 Oct 2016",
"name": "Mariusz Oszust",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nIn the paper, heart rate variability (HRV) is determined using the photoplethysmogram (PPG) signal. The demonstrated findings are based on the calculation of the standard deviation of heartbeat interval (SDNN) and the root-mean square of the difference of successive heartbeats (RMSSD). These indices were determined using the second derivative wave of photoplethysmogram (APG) signal. The presented methodology seems valid. However, there are some issues that should be addressed.\n\nSince authors in their previous works used 20-second PPGs for calculation of HRV indices (SDNN and RMSDD), it cannot be said that “To date, there have been no studies that investigate the independent use of the photoplethysmogram (PPG) signal to determine heart rate variability (HRV)”. Please refer e.g., to: M Elgendi, M Jonkman, F De Boer.: Applying the APG to measure Heart Rate Variability, ICCAE 2010, DOI: 10.1109/ICCAE.2010.5452042.\n\nAuthors claim that “if the PPG--HRV indices are negatively correlated with age, then short PPG signals are appropriate measurements for extracting HRV parameters”. 
The work “Applying the APG to measure Heart Rate Variability” (in the comment 1), as well as the paper: M Elgendi, M Jonkman, F De Boer.: Heart Rate Variability and the Acceleration Plethysmogram Signals Measured at Rest, Third International Joint Conference, BIOSTEC 2010, Valencia, Spain, January 20-23, 2010, DOI: 10.1007/978-3-642-18472-7_21, have already proven that short PPG signals are appropriate for extracting HRV parameters. In my opinion, the main novelty here is in showing the correlation of HRV indices measured before and after exercise with age, and such correlation with age should be emphasized, referring to these works, since it strengthens their findings. The title of the paper should also reflect this – the question in the title (“Can heart rate variability (HRV) be determined using short-term photoplethysmograms?”) was answered in 2010. The works above should be added to the literature. Figure 1 can be also found in these papers.\n\nData were collected at a sampling rate of 200 Hz, with recording duration of 20 seconds, and the duration was limited since subjects were waiting after exercise for the measurement. This suggests a need of different organisation of the data collection, in which a line of participants is not formed and does not affect the experiment. However, since “short” is a relative term, it would be desirable to assess the shortest duration that can be used for meaningful calculation of the correlation between HRV indices measured at rest and after exercise with age. Here, for example, p-value for different lengths of PPG signals can be shown (in e.g., range 5-20s). Please add to the investigation the correlation of the HRV indices with the heart rate in order to make the conclusions more general (see your works in my second comment).\n\nThe PPG database contains samples with imbalanced age distribution. This certainly influences the results and drawn conclusions on the age correlation of HRV indices, i.e., they can be wrong. 
Since new samples cannot be simply added to the database, the imbalanced data should be handled appropriately in order to obtain reliable results. There are many approaches to handle imbalanced datasets, e.g., undersampling (see Longadge, Rushi, and Snehalata Dongre. \"Class imbalance problem in data mining review.\" arXiv preprint arXiv:1305.1707 (2013). http://arxiv.org/pdf/1305.1707). I strongly advise applying at least one such technique.",
"responses": []
},
{
"id": "16498",
"date": "10 Oct 2016",
"name": "Albert Avolio",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe authors propose a means of detecting heart rate variability (HRV) from the photoplethysmographic (PPG) signal. As presented, the study is only marginally informative and the following issues require consideration.\n\nFrom the references cited in relation to the available database of PPG signals, it appears that the ECG was also recorded in the same subjects. If so, it begs the question as to why the HRV obtained from the ECG signal was not compared to the HRV obtained from the PPG signal. It is not clear why secondary associations with age needed to be invoked to provide a rather indirect justification of the fact that HRV can be obtained from the PPG signals recorded for 20 seconds.\n\nIf there were no ECG signals available, there is still a significant limitation in the study due to the very uneven distribution of age in the cohort. There are very few subjects in the age range 30-40 compared to 20-30. Given the low correlation, it is not clear if this could even be an indirect demonstration.\n\nAbstract: correct $aa$\n\nPage 4. Right column. 1st paragraph. Correct: “..since the collected of a shorter…”\n\nPage 6. L column Lines 1-3. Sentence unclear.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2354
|
https://f1000research.com/articles/5-632/v1
|
11 Apr 16
|
{
"type": "Review",
"title": "The academic, economic and societal impacts of Open Access: an evidence-based review",
"authors": [
"Jonathan P. Tennant",
"François Waldner",
"Damien C. Jacques",
"Paola Masuzzo",
"Lauren B. Collister",
"Chris. H. J. Hartgerink",
"François Waldner",
"Damien C. Jacques",
"Paola Masuzzo",
"Lauren B. Collister",
"Chris. H. J. Hartgerink"
],
"abstract": "Ongoing debates surrounding Open Access to the scholarly literature are multifaceted and complicated by disparate and often polarised viewpoints from engaged stakeholders. At the current stage, Open Access has become such a global issue that it is critical for all involved in scholarly publishing, including policymakers, publishers, research funders, governments, learned societies, librarians, and academic communities, to be well-informed on the history, benefits, and pitfalls of Open Access. In spite of this, there is a general lack of consensus regarding the advantages or disadvantages of Open Access at multiple levels. This review aims to be a resource for current knowledge on the impacts of Open Access by synthesizing important research in three major areas of impact: academic, economic and societal. While there is clearly much scope for additional research, several key trends are identified, including a broad citation advantage for researchers who publish openly, as well as additional benefits to the non-academic dissemination of their work. The economic case for Open Access is less well-understood, although it is clear that access to the research literature is key for innovative enterprises, and a range of governmental and non-governmental services. Furthermore, Open Access has the potential to save publishers and research funders considerable amounts of financial resources. The social case for Open Access is strong, in particular for advancing citizen science initiatives, and leveling the playing field for researchers in developing countries. Open Access supersedes all potential alternative modes of access to the scholarly literature through enabling unrestricted re-use, and long-term stability independent of financial constraints of traditional publishers that impede knowledge sharing. Open Access remains only one of the multiple challenges that the scholarly publishing system is currently facing. 
Yet, it provides one foundation for increasing engagement with researchers regarding ethical standards of publishing. We recommend that Open Access supporters focus their efforts on working to establish viable new models and systems of scholarly communication, rather than trying to undermine the existing ones as part of the natural evolution of the scholarly ecosystem. Based on this, future research should investigate the wider impacts of an ecosystem-wide transformation to a system of Open Research.",
"keywords": [
"Open Citation Advantage",
"Altmetrics",
"Open Access",
"Publishing",
"Copyright",
"Incentive System",
"Citations"
],
"content": "Introduction\n\nOpen Access (OA) refers to the removal of major obstacles to accessing, sharing and re-using the outputs of scholarly research. The rationale is that the research process is facilitated by ensuring rapid and widespread access to research findings such that all communities have the opportunity to build upon them. Reflecting this ambition, there are currently over 700 OA policies and mandates recorded worldwide from a range of research institutes and funding bodies (roarmap.eprints.org). OA pertains only to documents made available via two main pathways: the ‘Gold’ route and the ‘Green’ route (Harnad et al., 2008). The Gold route refers to freely accessible research articles at the point of publication. This route is often, but not always, accompanied by article processing charges (APC). The Green route refers to author self-archiving, in which a version of the peer-reviewed article is posted online to a repository or website. This route is dependent on journal or publisher policies on self-archiving (sherpa.ac.uk/romeo). Some publishers require an embargo period before deposit in public repositories is allowed. These embargoes are applied in order to avoid putative reductions in subscription income due to such self-archiving. Through these dual pathways, almost 25% of all scholarly documents archived on the Web are now obtainable via OA somewhere on the Internet (Khabsa & Giles, 2014).\n\nA core issue remains: universal or even marginal access to ~75% of articles is not directly possible unless one is in a privileged position to work at an institute which has subscription access to a portion of these articles. A subscription to all peer-reviewed journals is not affordable for any single individual, research institute or university (Odlyzko, 2006). 
Consequently, the potential impact of research articles is never fully realized, impeding scientific progress by a lack of use, while simultaneously negatively affecting the recognition of individual researchers (Hitchcock, 2013) and the funders who support their work.\n\nBased on these problems, free and unrestricted access to primary research literature has become a global goal of the OA movement. The steady increase in OA over the past two decades has required negotiations with a range of stakeholders (e.g., librarians, funders, academics). Much of the driving force behind this global change has been through a combination of direct, grassroots advocacy initiatives in conjunction with policy changes from funders and governments. The debates regarding the benefits of OA over subscription-based access often hinge on the increased value to academics. However, increased access has broader benefits to research through enhanced visibility, facilitating innovation by businesses, decreasing financial pressure on academic/research libraries (known more broadly as the ‘serials crisis’ (Russel, 2008)), and fostering a culture of greater scientific literacy. OA also includes a moral aspect, where access to scientific knowledge and information is regarded as a fundamental aspect of global human equality. For example, Article 27 of the United Nations Declaration of Human Rights states that “Everyone has the right to freely participate in the cultural life of the community, to enjoy the arts and to share in scientific advancement and its benefits.” (United Nations, 1948).\n\nThis review paper aims to provide information on the various benefits and drawbacks of Open Access to scholarly research. We consider the case for OA from the academic, economic, and societal perspectives. In addition, we briefly consider the broader implications of OA on Open Data, a closely related issue, united under a general theme of ‘Open Research’. 
By aggregating evidence from primary sources, this review should be useful to those more broadly interested in the impact of scholarly research, as well as policymakers and others interested in implementing OA policies and strategies.\n\n\nA brief history of Open Access\n\nThe Open Access movement is intrinsically tied to the development of the Internet and how it redefined communication and publishing (Laakso et al., 2011). With increased availability of Internet bandwidth, print articles have become virtually redundant, and the costs per article potentially decrease as a result of not investing material resources in print publications. Widespread dissatisfaction with the expensive traditional publishing model grew as a result, giving rise to the OA movement and concomitant innovations in scholarly publishing.\n\nInterest in using the Internet for opening access to scientific research coalesced throughout the 1990s, culminating with the 2001 conference on “Free Online Scholarship” by the Open Society Institute in Budapest. The result of this conference was the release of the Budapest Open Access Initiative (BOAI), which is recognized as one of the defining points of the OA movement. The BOAI was the first initiative to use the term “Open Access” and articulated the following public definition:\n\nBy “open access” to [peer-reviewed research literature], we mean its free availability on the public internet, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of these articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself. 
The only constraint on reproduction and distribution, and the only role for copyright in this domain, should be to give authors control over the integrity of their work and the right to be properly acknowledged and cited.\n\nOne result of the growing OA movement is the rise of OA-only publishers, who publish exclusively digital content and have demonstrated that such a business model is financially sustainable. For example, pioneer OA publishers BioMed Central and the Public Library of Science (PLOS) were founded in the early 2000s and remain successful businesses to date. More recently, OA publishing has gained increasing momentum among researchers, funders, and governments. This has led to a proliferation of innovative approaches to publishing (e.g., PeerJ (peerj.com), F1000Research (f1000research.com), Open Library of Humanities (openlibhums.org)) and a range of different policies from research funders and institutes mandating OA. All of these different policies and new business models, combined with traditional publishers innovating by launching their own OA titles and programs, have made the overall OA ecosystem quite complex.\n\nEven with this growing prevalence of publishers that facilitate Open Access to the scholarly literature, OA is far from ubiquitous. Bjork et al. (2009) estimated that approximately 1,350,000 articles were published in 2006. Of these, 4.6% became immediately accessible and an additional 3.5% became accessible after an embargo period, typically of one year. Furthermore, usable copies of 11.3% could be found in repositories or on the home pages of the authors. Since the National Institutes of Health mandated archival of articles in the public PubMed Central repository in 2008, the cumulative number of OA articles in PMC has increased more than the number of non-OA articles (see Figure 1). 
In 2013, the total percentage of OA articles available was estimated at 24% of English-language scholarly documents accessible on the Web (Khabsa & Giles, 2014).\n\nTable 1 provides a non-exhaustive summary of developments in scholarly publishing and the Open Access movement. Included are the founding of major institutions in the movement as well as policy and legal developments. Several controversial moments are included, because they have spurred action or generated awareness for the movement. An example is the suicide of Aaron Swartz, who was arrested for downloading JSTOR articles on the grounds that he allegedly intended to make these publicly available. Another ongoing controversy is scholarly piracy; this includes the Sci-Hub and LibGen projects, which have created an online repository of pirated scholarly papers. Both gained increased attention after becoming the target of a lawsuit by the publisher Elsevier. Responses to these activities have been mixed and polarised: some view such illegal acts as weakening the case for OA, while others hail the development as the ‘Napster moment’ for the OA movement, one which will force the established industry to change.\n\n\nThe academic case for Open Access\n\nThe academic case for OA is twofold: (i) it is associated with a higher documented impact of scholarly articles through availability and re-use; and (ii) its permissive licensing allows researchers to use automated tools to mine the scholarly literature. For the former, major arguments in favor of OA include the evidence that openly available work not only generates more academic citations but also has more societal impact. The latter argument involves non-restrictive access to the scholarly literature through appropriate licensing, making it possible to use automated tools to collect and analyze the entire body of scholarly literature within a legally sound framework, unimpeded by copyright restrictions. 
The following sections cover both these effects of OA: its impact on scholarly research and the mining of the scholarly literature.\n\nAcademic impact. Academic impact is frequently measured through citation counts, and these remain fundamental as the ‘currency units’ for researchers, research groups, institutes and universities. Lawrence (2001) was the first to propose that OA would have a citation advantage. The utility and consistency of the citation advantage across different research fields have been intensively debated because its magnitude varies substantially by discipline (Table 2). However, the general tendency from studies to date indicates that there is at least some association between OA publishing and increased citation counts across most disciplines (Antelman, 2004; Hajjem et al., 2006) (Figure 2 and Table 2).\n\nEstimates for the open citation advantage range from +36% (Biology) to +600% (Agricultural Sciences) (Swan, 2010). A longitudinal study by Eysenbach (2006) compared the bibliometric impact of a cohort of articles from a multi-disciplinary journal (Proceedings of the National Academy of Sciences) that offers both OA and non-OA publishing options. After adjusting for potentially confounding variables, the results indicated that non-OA papers were twice as likely to remain uncited six months after publication when compared to OA articles. Additionally, the average number of citations for OA articles was more than double that of the non-OA articles. The study also differentiated between types of OA article, namely the self-archived version (i.e., Green OA) and the publisher version of record (VOR) that is freely available (i.e., Gold OA). Gold OA was found to have a higher overall academic impact than Green OA.\n\nDespite strong evidence for a citation advantage, the magnitude of this advantage remains variable. 
The substantial heterogeneity in observed citation advantages may be due to differing academic cultures, or could simply be spurious. For example, self-archiving prior to publication is a community standard in fields such as high energy physics or mathematics, but has yet to be widely adopted in the life sciences. Such ‘pre-prints’ have also been associated with an overall increase in the average number of citations, the total number of citations, and the speed of citation accumulation (Aman, 2014; Gentil-Beccot et al., 2010). Other studies could only replicate immense citation advantages (+600%) if relevant predictors were omitted (McCabe & Snyder, 2014), which points to a potentially spurious effect. When these predictors were taken into account, the citation advantage became much smaller (i.e., +8%).\n\nOne alternative explanation for the citation advantage could be that researchers choose to publish OA when a finding is more impactful, but empirical evidence contradicts this selection effect. Gargouri et al. (2010) compared citation counts within a cohort of OA articles that had either been self-selected as OA or mandated as OA (e.g., by funders). The study concluded that both were cited significantly more than non-OA articles. As such, these findings rule out a selection bias from authors as the cause for the citation advantage (Gargouri et al., 2010).\n\nIn sum, evidence indicates that OA is broadly related to increased academic impact as assessed through citations (Figure 2; see also McKiernan et al. (2016)), but given the large variability in results, further research should aim to synthesize these results in a meta-analysis and try to explain the cause of this variability.\n\nSocietal impact. Scholarly articles also have a societal impact, such as when they are covered in news media or are discussed in social media outlets; alternative metrics, or altmetrics, can be used as a guide to measure this impact. 
Article-level information such as social media usage, Mendeley readership, and media attention (Piwowar, 2013) can be tracked by altmetrics providers. As such, when an article generates discussion outside of the academic literature, altmetrics track this usage. Despite limitations such as academics discussing their own research on Twitter, altmetrics provide a general view of the wider societal impact of research articles. Considering the increased pressure on researchers and research institutes to communicate research findings to the public, altmetrics can provide additional insight into which research enjoys public interest. A working group established by NISO is investigating the future role of altmetrics in research communication and assessment (www.niso.org/topics/tl/altmetrics_initiative/).\n\nFigure 2. The majority of studies concluded that there is a significant citation advantage for Open Access articles. Source: Data from The Open Access Citation Advantage Service, SPARC Europe, accessed March 2016.\n\nOpen Access articles would be expected to have an altmetrics advantage compared to the non-OA literature; if an article imposes fewer access restrictions on journalists, citizens, businesses, and policy-makers, it seems logical that this would enable the research to be re-used publicly. Furthermore, those parties may be more likely to ‘push’ articles which are publicly accessible into different communication channels. In other words, increased access removes barriers to widespread societal engagement, whereas a relative lack of article access discourages engagement by making it monetarily difficult to access the research.\n\nThere is research showing evidence for an altmetrics advantage for Open Access articles, but this advantage is not reflected among the most impactful articles. Wang et al. (2015) found evidence that OA articles receive more attention through social media. 
The authors compared social media attention (Twitter and Facebook) between OA and non-OA articles at Nature Communications and found that OA articles receive 1.2–1.48 times as much social media attention as non-OA articles [see also Adie, 2014]. Nonetheless, of the top 100 articles of 2015 as presented by Altmetric.com, only 42 articles were OA (www.altmetric.com/top100/2015/). This 42% is larger than the overall proportion of OA articles in the literature, which indicates that OA contributes relatively more impact per paper. However, it also indicates that the open impact advantage can be overshadowed by the intrinsic nature of the research published.\n\nAllen et al. (2013) found that a social media announcement of the release of a research article increases the number of users who view or download that article, but does not translate to increases in the citation count in the field of clinical pain research. Costas et al. (2015) found a relatively weak correlation between social media activity and citation counts for the articles in their sample (over 1.5 million article records), while Mohammadi et al. (2015) found that the number of Mendeley readers with a status of graduate student or faculty correlated with citation counts. When OA to the articles is factored into an analysis, there is a potential recursive relation between citation counts and altmetrics due to OA. Eysenbach (2011) indicated that there is a moderate correlation (0.42–0.72) between the tweets and citations of articles from an OA journal (Journal of Medical Internet Research). Highly tweeted articles were eleven times more likely to be highly cited than less-tweeted articles (75% of highly tweeted articles were highly cited, versus 7% of less-tweeted articles). 
However, it is difficult to assess causality in these cases: do research papers that have more academic impact make their way more frequently into societal discussions, or does increased online discourse increase their potential citation rates? Overall, this evidence implies that there is a general media advantage with OA (see also McKiernan et al. (2016)), which can be used as a proxy or pathway to indicate greater societal impact.\n\nAltmetrics themselves should not be conflated with citations when it comes to assessing impact, even though some providers such as Altmetric.com provide a single score that can be used to rank an article in a similar way to a journal’s Impact Factor. Each measure of altmetrics tells a different story about the impact of research, and a careful understanding of the altmetrics landscape can lead to a clearer picture of article-level impact.\n\n\nText- and data mining\n\nTraditionally, researchers hand over their copyright via a Copyright Transfer Agreement in order to publish a paper. Copyright transfer as the default has far-reaching consequences for the ability of both the authors and others to re-use that research, and many authors are not aware of the impact of these transfers on their ownership of the work. Academics frequently sign copyright over to publishers in exchange for the ‘prestige’ of publishing in one of their venues (e.g., Mueller-Langer & Watt, 2010). Essentially, copyright is a tool wielded by traditional publishers for financial gain rather than fostering creativity, innovation, or protecting authors (Okerson, 1991; Willinsky, 2002). However, in the digital age copying is essential to perform necessary research tasks. These activities range from viewing the article (i.e., downloading requires copying) to re-using figures from an article in a book. The interaction of OA and copyright is complex and merits papers of its own (e.g., Scheufen (2015)). 
We will highlight how OA views copyright and relate this to its effects on text- and data-mining (TDM).\n\nThe majority of ‘born Open Access’ journals and publishers do not request or receive copyright from authors. Instead, publishers are granted non-exclusive rights to publish, and copyright is retained by authors through a Creative Commons license (typically CC-BY, as befitting the definition of OA from the BOAI). Importantly, this represents a power shift from publisher-owned to author-owned rights to research. This model of author-retained copyright appears to be favoured by the majority (71%) of the research community (Hoorn & van der Graaf, 2006). Keeping copyright with the author allows for wider re-use, including TDM, and forms the basis for a robust and developing public domain.\n\nAs such, copyright in OA publications is non-restrictive and also allows machines to freely access the content. In traditional publishing, human reading and computer reading are seen as two separate things which require different agreements, whereas OA publishing views them both in the same, non-restrictive manner. In other words, in order to mine OA journals, one only needs the technical skills to do so. In order to mine traditional closed access journals, one needs to sign or negotiate access conditions, even if legitimate access to the articles has already been bought (Bloudoff-Indelicato, 2015).\n\nAutomated extraction of information from scholarly research via TDM is a methodology that can be applied to investigate the scholarly literature at an enormous scale, creating new knowledge by combining individual findings. This has already proven to be useful for a large variety of applications (Glenisson et al., 2005). Moreover, OA publishers facilitate TDM on a massive scale by allowing multiple options for collecting the literature needed. For example, PLOS is non-restrictive and allows users to scrape articles directly from the website or via its API. 
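As an illustration of how little is needed to mine an openly licensed corpus, the PLOS search API mentioned above is a public Solr endpoint; a minimal sketch of building a query against it might look as follows. The parameter names (q, fl, rows, wt) are standard Solr query parameters; only URL construction is shown, since actually fetching results requires network access.

```python
from urllib.parse import urlencode

# Base URL of the public PLOS search endpoint (api.plos.org).
PLOS_SEARCH = "https://api.plos.org/search"

def build_plos_query(term, fields=("id", "title"), rows=10):
    """Build a PLOS full-text search URL for the given Solr query term."""
    params = {
        "q": term,               # Solr query, e.g. 'abstract:"text mining"'
        "fl": ",".join(fields),  # fields to return for each hit
        "rows": rows,            # number of results to return
        "wt": "json",            # response format
    }
    return PLOS_SEARCH + "?" + urlencode(params)

url = build_plos_query('abstract:"text mining"')
# The resulting URL can then be fetched with urllib.request or a similar
# HTTP client; for closed access corpora, the equivalent step would first
# require negotiating access conditions with the publisher.
```

In practice, tools such as rplos wrap exactly this kind of query construction so researchers do not have to build requests by hand.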
As a result, users can employ generic scraping tools or rplos, an R package developed to search and download full-text scholarly papers (Chamberlain et al., 2015).\n\nTDM is not only a knowledge-generation tool; it also allows for automated screening for errors and automated literature searches that accelerate scientific discovery. With TDM it becomes possible to easily compare one’s results with those of the published literature, identify convergence of evidence and enable knowledge discovery (Natarajan et al., 2006) or surface tentative hypotheses that can be used for new research (Malhotra et al., 2013). It has already been used to make major advances in fields such as biomedicine (Gonzalez et al., 2016). TDM also allows for computer applications that can download all scholarly literature given certain search terms (e.g., ContentMine’s ‘getpapers’ tool (github.com/ContentMine/getpapers)), simplifying and shortening the tedious literature search. TDM can also serve a screening purpose similar to plagiarism scanners, helping to detect statistical errors in the scholarly literature (e.g., Nuijten et al. (2015)). TDM can be used in various innovative ways and is an emerging and rapidly advancing field; non-restrictive licensing through OA promotes its wider application.\n\nGiven the exponential increase in the number of scholarly publications, (semi-) automated methods to synthesize results have become increasingly important. TDM decreases the time dedicated to the search for relevant information in scholarly literature by categorizing information (Leitner & Valencia, 2008), highlighting and annotating relevant results to specific users (Shatkay et al., 2008), and profiling research (Porter et al., 2002). 
Furthermore, TDM also prevents researchers and readers from wasting time on reinventing the wheel simply because one can no longer keep up with the published literature (Harmston et al., 2010).\n\nBecause of traditional copyright transfers, TDM has often been stymied by traditional, closed access publishers who frequently see it as a copyright infringement. Researchers using software that harvests data from online publications have been cut off, or threatened with being cut off, from accessing the articles. These researchers found themselves trapped in negotiations to resume their research, even though their universities had paid subscription fees for access (e.g., Bloudoff-Indelicato, 2015; Van Noorden, 2012). Standard subscriptions do not permit systematic downloads because publishers fear that their content might be stolen and revenue lost (Van Noorden, 2012). In 2014, Elsevier opened its papers for TDM via a proprietary API (Van Noorden, 2014), but placed restrictions on researchers using the API; however, researchers are not legally required to comply with these restrictions in some countries [e.g., U.K., U.S.A., Handke et al., 2015].\n\nRetrospectively making the enormous corpus of closed access papers publicly available might be possible through legal action at an institutional or governmental level. The position of these institutes regarding copyright transfer remains generally unclear. While academics themselves may have little power in debates regarding copyright, institutes could assert the ownership rights they likely already hold under the work made-for-hire doctrine (Denicola, 2006). This is further supported by Shavell (2010) and Eger & Scheufen (2012), who argued that a transition towards an OA model cannot proceed smoothly without the necessary legislative steps being taken first. In light of the potential copyright problems for closed access articles, TDM will be easier and legally safer for OA journals. 
As a consequence, TDM is likely to be more readily applied to OA literature when compared to closed access literature.\n\n\nThe economic case for Open Access\n\nAny publisher has to cover operating costs, which fall into three categories: (i) article processing; (ii) management and investment; and (iii) other costs. Article processing includes editing, proofreading and typesetting, among other things. Management and investment are the marginal costs to establish and keep the journal running. Other costs include PR, hosting and infrastructural services, conference sponsoring, and other services that are extrinsic to research articles themselves. The average production cost for one paper is estimated to be around $3500–$4000 (Van Noorden, 2013) but is highly variable depending on the publisher. For example, Philip Campbell (Editor-in-Chief of Nature) stated that his journal’s internal costs were $20,000–$30,000 per paper (Van Noorden, 2013). One possible reason for such variation is that it is unclear whether proposed costs relate to those directly involved in article processing or those required in order for a publisher to ‘break even’ if they receive zero subscription income for an article made OA.\n\nIn order to cover those costs and make a profit or surplus, closed access publishers charge for access via subscriptions, whereas many OA publishers or journals charge to publish. Due to increased subscription costs, closed access publishing is becoming an increasingly unsustainable business model (Odlyzko, 2013), with prices estimated to have risen at 250% of the rate of inflation (www.eff.org/issues/open-access), which will slowly but surely diminish the scope of access to the scholarly literature as fewer organisations are able to pay such high costs. 
Only recently has any transparency into the detailed costs of subscriptions been gained by using Freedom of Information Requests to bypass non-disclosure agreements between libraries and publishers (Lawson & Meghreblian, 2015), providing the basis for understanding the economics of scholarly communication. Two potential ways to avoid retaining an unsustainable model are to decrease subscription prices, thereby lowering publishers’ profit margins and the financial burden on subscribers, or to switch to new OA-oriented business models that create new value.\n\nOA publishing is most prevalent in the form of ‘pay-to-publish’, which can be viewed as a fundamental conflict of interest for researchers. However, this payment model has proven to function properly when editorial decisions are separated from the business side of the publisher (i.e., editorial independence), and many publishers (e.g., PLOS, PeerJ, and many learned societies) operate fee waiver schemes for researchers unable to obtain funds for publication fees.\n\nSubscription-based publishers frequently still produce print versions of journals, which increases their costs and is potentially done in order to justify charging for readership, or to satisfy an increasingly small demographic who prefer this mode of reading. After all, subscriptions to print journals made sense and, while large-scale printing remains in place, simply transferring this model to digital versions creates continuity. Print versions are accompanied by logistical costs to print and ship each issue, but these are partially offset with reprint orders, additional charges for colour figures, and print-based advertising. 
For some of the largest subscription-oriented publishers the annual net profit on investment reaches up to 40 percent, which makes academic journal publishing highly lucrative for investors (Satyanarayana, 2013), further increases investment to sustain this type of publishing model, and allows maintenance of a system which has been described as an ‘oligopoly’ (Larivière et al., 2015).\n\nOpen Access publishers publish only digitally and have opened up avenues for innovation. For example, PeerJ has introduced a wholly different OA business model, where readers pay nothing to access articles, but authors pay a membership fee once to publish for a lifetime. The Open Library of Humanities (OLH) is another innovative business model in which libraries pay a small fee to support OLH and scholars are able to publish for free; this support also enables the OLH to help journals ‘flip’ from a subscription model to OA; for example, the recent case of Lingua (https://www.timeshighereducation.com/research-intelligence/open-library-humanities-aims-flip-journals-open-access). Library publishing has also developed in response to the OA movement; in this model, academic libraries begin publishing operations in the interest of providing added value to their patrons and contributing to the growth of knowledge (librarypublishing.org). In terms of innovating in the publishing platform itself, eLife have introduced the Lens as a novel way of viewing research articles online (lens.elifesciences.org/about/), and F1000Research has introduced so-called ’living figures’ to enable researchers to interact with data underlying research findings [e.g., Colomb & Brembs, 2015]. Such innovations add increased value to the research communication process, in contrast to services such as paying to print colour figures, and represent just a few cases of a recent explosion in innovation across the publishing ecosystem. 
One can imagine that publishing costs in OA journals become dependent on the value added on a per-article basis, which can help reshape and improve scholarly communication. As such, making publication costs dependent on the value added aligns the interests of publishers with those of scholars, where improving the quality of the process of scholarly communication is the end goal. Motivation for this comes from currently available data suggesting that hybrid publishing options offered by traditional publishers, while costing more, provide a much lower-quality publishing process overall (blog.wellcome.ac.uk/2016/03/23/wellcome-trust-and-coaf-openaccess-spend-2014-15/).\n\nThe implementation of OA models has economic implications beyond the publishing industry. Research funding comes from multiple sources, including national funding agencies and industry, as well as private funders. Much primary research actually takes place outside of academia, within research & development departments. A report from 2004 by Arzberger and colleagues into the scientific, social and economic development of access to research results concluded that access should be promoted to the largest extent possible. According to this report, access to research results can only be responsibly restricted in cases involving national security, privacy, or the IP rights of the authors (Arzberger et al., 2004). A major principle underlying this is the ownership of research results: publicly funded research and data are public goods and because they have been produced in the public interest they should be considered and maintained as such. Indeed, such a principle has become one of the focal rallying points of the global OA movement. Appropriate licensing and accessibility can influence re-use through commercialization, and can empower citizens and industry to realize substantial economic benefits. 
This apparently resonates with many organisations, as indicated by the increased numbers of OA policies on a global basis (see Figure 3).\n\nFigure 3. Figures are given at the beginning of each year. Source: ROARMAP, accessed March 2016.\n\nWith access to scholarly articles, entrepreneurs and small businesses can accelerate innovation and discovery, which is advantageous for advancing the ‘entrepreneurial state’ (Mazzucato, 2011). Access to research results has clear advantages for a range of industries and can help stimulate regional and global economies. Increased access to research results has been associated with considerable increases in return on financial investment (Beagrie & Houghton, 2014). Furthermore, OA facilitates collaborations between publishers and industrial partners to leverage the potential of structured information networks for advanced data mining projects, such as that recently announced between IBM Watson and PLOS (Denker, 2016). One of the major driving forces behind the development of OA in the UK on a national level, the ‘Finch Report’, also concluded that OA was an essential source of information and innovation for the civil service, commercial sectors, small- and medium-sized enterprises (SMEs), and the general public (www.researchinfonet.org/publish/finch/).\n\nTaking cancer research as one high impact case study, there is substantial evidence for the economic benefit of OA. In 2011–12 prices, the total expenditure on research relating to cancer in the period of 1970–2009 was £15 billion (Glover et al., 2014). Some 5.9 million quality-adjusted life years were gained from the prioritized interventions in 1991–2010, with a net monetary benefit of an estimated £124 billion (i.e., an eight-fold return on investment). 
However, only 17% of the annual net monetary benefit was estimated to be attributable to research performed in the UK (Glover et al., 2014), suggesting that 83% of the economic return on cancer research is drawn from research from non-UK sources. As such, opening up research for global access rather than localized and restricted use has the potential to increase the economic return, as demonstrated by the case of cancer research.\n\nThe question of current publication costs is difficult, and confounded by divergent estimates of total global publishing costs and revenue. Data provided by Outsell, a consultant in Burlingame, California, suggest that the science-publishing industry generated $9.4 billion in revenue in 2011 and published around 1.8 million English-language articles. This equates to an approximate average revenue per article of $5,000. A white paper produced by the Max Planck Society estimated costs at €3,800–€5,000 per paper through subscription spending, based on total global spending of €7.6 billion across 1.5–2 million articles per year (Schimmer et al., 2014). Other estimates suggest that the total spend on publishing, distribution and access to research is around £25 billion per year, with an additional £34 billion spent on reading those outputs, a sum which equates to around one third of the total annual global spend on research (£175 billion; Research Information Network (2008)).\n\nSuch high costs are at odds with estimates of the cost of OA publishing. For example, the Scientific Electronic Library Online (SciELO) is a pan-Latin American bibliographic database, digital library, and co-operative electronic publishing model of Open Access journals. It is estimated that their costs are between $70 and $600 per OA article depending on the services provided (Brembs, 2015). 
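The headline per-article figures quoted above are simple quotients of total spend over annual article volume; a quick sketch, using only the Outsell and Max Planck numbers cited here, reproduces them:

```python
def per_article(total_spend, n_articles):
    """Average spend (or revenue) per article: total divided by volume."""
    return total_spend / n_articles

# Outsell data: $9.4 billion revenue over ~1.8 million articles (2011)
outsell = per_article(9.4e9, 1.8e6)     # ~5222, i.e. roughly $5,000 per article

# Max Planck white paper: €7.6 billion over 1.5-2 million articles per year
mpg_low = per_article(7.6e9, 2.0e6)     # 3800.0, the €3,800 lower bound
mpg_high = per_article(7.6e9, 1.5e6)    # ~5067, close to the €5,000 upper bound
```

Seen this way, the gap between these ~$5,000 averages and the $70–$600 SciELO figures is an order of magnitude, which is the crux of the cost argument for OA.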
OA now dominates the Latin American publishing landscape, with the full text of an estimated 72–85% of articles now publicly available (www.sparc.arl.org/news/open-access-latin-americaembraced-key-visibility-research-outputs). Even more extreme estimates of the cost of OA come from Standard Analytics, who suggested the absolute minimum per-article costs of publishing could fall to between $1.36 and $1.61 with sufficient cloud-based infrastructure (Bogich et al., 2016). However, it is likely that this estimate underestimates marginal costs that fall beyond a per-article basis. What is clear from these analyses is that OA has the opportunity to become a cost-reducing mechanism for scholarly publishing. Open Journal Systems (OJS), open-source software available to anyone to use and download without charge, is another example of this. Additionally, researcher-led initiatives such as the recently launched Discrete Analysis have costs that average around $30 per article, with no cost to authors or readers, and utilise the infrastructure offered by the arXiv to keep costs low (discreteanalysisjournal.com).\n\nIn his article, Sutton (2011) argued that current scholarly journals are digital products and that as such they are driven by very different economic principles and social forces than their print ancestors. Based on Anderson (2013), the author made the case that changes in both the delivery of scientific content and in publishers’ business models were inevitable when journals moved online. Sutton (2011) considered that scientific literature is no different from other digital products with respect to distribution costs and as such it is no exception to the ‘zero is inevitable’ rule of pricing.\n\n\nThe societal case for Open Access\n\nOA to the scholarly literature does not just benefit academics, but also has wider impacts on other domains in society. 
It makes research available to anyone with an Internet connection who has the ability to search for and read the material. It therefore transcends academic affiliation and supports sustainable lifelong learning. Examples of groups who might benefit most from OA include citizen scientists, medical patients and their supporting networks, health advocates, NGOs, and those who work in linguistics and translation. In theory, OA affects anyone who uses information, and opens up possibilities for knowledge to be used in unexpected, creative and innovative ways beyond mainstream professional research.

Access to knowledge has been called a human rights issue, considering that it is included in Article 27 of the United Nations Declaration of Human Rights. Willinsky (2006) has argued that “Access to knowledge is a human right that is closely associated with the ability to defend, as well as to advocate for, other rights”. This is not only true for access to knowledge from research that could save human lives, but also, as argued by Jacques Derrida, for the right of access to philosophy and the humanities disciplines that stem from it. Derrida writes of the field of philosophy: “No one can forbid access to it. The moment one has the desire or will for it, one has the right to it. The right is inscribed in philosophy itself” (Derrida, 2002).

Society’s ability to make research publicly accessible supports long-term interest and investment in research. Citizens support research through taxes, and one could therefore argue that efforts to support public access should be a fundamental part of the research process.
While OA is not a solution to all aspects of research accessibility (e.g., language barriers and disability access remain continuing issues to be addressed), it greatly increases accessibility and simultaneously enables innovations that remove other barriers (e.g., OA articles can be freely translated to address language barriers, and can be converted to different formats to accommodate screen readers). Anecdotal evidence suggests that public access to research is sought from a range of public spheres (whoneedsaccess.org/). Nonetheless, the fact that access to knowledge is actively prohibited in fields like public health should be of major concern to all stakeholders engaged in academic publishing.

In addition to professional research by, for example, academics, there is the dimension of citizen science, in which the broader public participates in the research process itself. Numerous projects such as Galaxy Zoo, Zooniverse, Old Weather, Fold It, Whale FM, and Bat Detective are initiatives in which citizens publicly and openly engage with research. These initiatives introduce new ways of knowledge creation, and these groups also require thorough access in order to do non-redundant research. Citizen science forms part of the societal case for Open Access because it shows that anyone can engage with research, not only professional researchers.

Some traditional publishers, and some academics, have argued that public access to research is not required because research papers cannot be understood by non-specialists. However, citizen science initiatives already indicate that the general public is interested in, and understands, the research. While this is highly variable and strongly dependent on a range of factors, it is the existence of any public interest in science that is of importance.
These publishers and academics argue that specialization is sufficient reason for confining access to professional research bodies through subscriptions. Such statements conflate a lack of desire or need for access with the denial of opportunity to access research. Isolated incidents, such as the crashing of the servers of Physical Review Letters upon the gravitational waves announcement and OA publication (February 2016; Abbott et al., 2016), indicate that there are cases of extreme public interest in science that closed access would only impede. Furthermore, the claim that only experts can and should read research articles does little to break down the ‘ivory tower’ perception that still pervades academia, and undermines the enormous resources invested in science communication and public engagement activities. Such perceptions run counter to the idea of access to knowledge as a right, retaining it instead as a privilege based on financial or academic status.

The arguments outlined above form the basis for democratic and equal access to research, and they come to light even more strongly in the developing world. For low- and middle-income countries (LMIC), OA publishing breaks traditional financial barriers and allows unrestricted, equal access to scholarly information to people all over the globe. Due to the high prices of journal subscriptions, developing countries struggle with access just as developed countries do, but to a greater extent and consequently with greater negative repercussions. For example, a research paper from 1982 indicating that Liberia should be included in the Ebola endemic zone was paywalled, and remained unknown to Liberian officials during the 2014 Ebola outbreak (Knobloch et al., 1982).
In general, lack of access can have major deleterious consequences for students and researchers, in that they do not have sufficient material to conduct their own primary research or education.

OA provides a mechanism to level the playing field between developed and developing countries by providing equal access. This increases fair competition and raises the scientific potential of the developing world (Chan et al., 2005). This is linked to the wider issue of open licensing, which is essential for the effective marketing of medicines and medical research in developing countries (Flynn et al., 2009), and justifies the necessity of Open Access in the wider context of social welfare. Developing countries clearly acknowledge the need for access, and as such have launched many repositories to increase access through self-archiving of research articles. In 2014, over 100 institutions in Africa launched a network of over 25 fully-operational OA repositories in Kenya, Tanzania and Uganda (www.ubuntunet.net/april2014#researchrelevant). Such developments suggest that African nations are leaning more towards a ‘green’ model of OA adoption.

The shift from a ‘reader pays’ to an ‘author pays’ model with OA potentially limits its adoption in developing countries. The pay-to-publish system is a potentially greater burden for authors in developing countries, considering that they are not used to paying publication costs, and funding systems for OA are not as well-established as those in the Western world. Publication fees present an even greater relative burden (Matheka et al., 2014), given that they can often exceed a monthly salary. This has been at least partially mitigated by fee waivers for authors from developing countries and by additional provisions in research grants.
In November 2015, Research4Life (www.research4life.org) and DOAJ announced a working partnership to help ensure that Research4Life users have access to the largest possible array of OA journals from publishers meeting a certain quality standard. While Research4Life does not directly cover OA publication costs, many publishers offer full or partial waivers to authors based in Research4Life-eligible countries. However, determining which countries qualify for access to scientific journals through these programs, and which journals they are provided access to, is a fairly closed process. These programs are also not entirely stable, as publishers can opt out of an initiative, or be selective about which countries they choose to serve. In 2011, publishers withdrew free access to 2,500 health and biomedicine journals for Bangladesh through the HINARI programme (Kmietowicz, 2011). While access was subsequently reinstated, this demonstrates that such initiatives are not an adequate replacement for full OA (Chatterjee et al., 2013). Despite purporting to provide essential articles to researchers in poor nations, these programs exclude some developing countries (e.g., India) and limit access to researchers who work in registered institutions.

Initiatives such as the Journals Online Project developed by INASP (International Network for the Availability of Scientific Publications; www.inasp.info/en/) have helped to develop a number of online OA platforms in the Global South. These were launched in 1998 with the African Journals Online (AJOL) platform, a project currently managed in South Africa. More recently, INASP has set up Latin American Journals Online (LAMJOL), which hosts journals in El Salvador, Honduras, and Nicaragua.
In Asia, Bangladesh Journals Online (BanglaJOL), Nepal Journals Online (NepJOL), and Sri Lankan Journals Online (SLJOL), all facilitated through INASP, continue to develop, and around 95% of their articles are now full-text Open Access. As mentioned previously, improved access should not be limited to professional researchers only, considering that there is also global interest from the broader public, including health professionals.

One negative effect on OA comes from entities that attempt to profit by exploiting the pay-to-publish system that OA publishers use. These publishers operate a sub-category of OA journals known as vanity presses, predatory publishers (Beall, 2012) or pseudo-journals (McGlynn, 2013). Such journals appear to be in the scholarly publishing business primarily to collect publication fees (i.e., APCs) in exchange for rapid publication without formal peer review. Beall (2015) has defined a list of criteria for identifying predatory journals, and an index of publishers and individual journals that meet these criteria is continuously updated (scholarlyoa.com).

While not all scholars and advocates agree with the criteria proposed by Jeffrey Beall, there are several factors that many agree identify a predatory publisher. Predatory publishers tend to charge low publication fees (Xia, 2015), most below $100 and few above $200. By contrast, the average publication fee of journals indexed in the Directory of Open Access Journals (DOAJ) is around $900–$1,000 (Solomon & Björk, 2012), and leading universities in the UK and Germany pay on average $1,200–$1,300 per article (Schimmer et al., 2015). The editorial and peer-review practices of predatory publishers are either non-existent or suspect; they also falsely claim to have ratings such as a Journal Impact Factor and to be indexed in major databases such as Scopus (Djuric, 2015).
Editors of these journals solicit articles that have no relation to the topic of their journal and do not send submissions out to be properly peer-reviewed (Bowman, 2014).

The problem of predatory OA appears to most affect countries where academic evaluation strongly favors international publication without further quality checks (Shen & Björk, 2015). Xia et al. (2015) collected and analyzed the publication record, citation count, and geographic location of authors from various groups of journals. Statistical analyses verified that predatory and non-predatory journals have distinct author populations: authors who publish in predatory journals tend to be inexperienced early-career researchers from developing countries. The spatial distribution of both the predatory publishers and the authors who submit to pseudo-journals is highly skewed: Asia and Africa contribute three quarters of authors (Xia et al., 2015), and Indian journals form the overwhelming proportion of predatory publishers (Xia, 2015). An interesting finding is the very low involvement of South America, both among predatory publishers (0.5%) and among corresponding authors in predatory journals (2.2%). The OA infrastructure in Latin America differs from that of other developing regions, which suggests a possible reason for this asymmetry: Latin American journals and universities are engaged in OA publication models to a higher degree than other regions (Alperin et al., 2011). As a result, scholars from this region are not only more aware of OA issues, but also have more options for publishing OA than those from other regions (Alperin et al., 2011).
Moreover, SciELO (Packer, 2009) and the creation of Latin American databases (Octavio Alonso-Gamboa & Russell, 2012) have played a tremendous part in this process by bringing recognition and a good reputation to publishing outlets in Latin America.

Considerable attention is given to the subject of predatory publishers, who have become conflated with the OA movement in general, to the detriment of genuine OA publishers. For example, a ‘sting’ operation that exposed failures of peer review was widely misinterpreted as evidence of bad peer review in OA journals specifically (Bohannon, 2013), whereas it was probably more indicative of problems with the traditional, closed and over-burdened system of peer review (www.scilogs.com/communication_breakdown/jon-tennant-oa/). Overall, the predatory publisher phenomenon is one major negative aspect that spawns many misconceptions and misgivings about publishing OA. Recently launched industry-led initiatives such as “Think, Check, Submit” (thinkchecksubmit.org) provide a checklist to help researchers identify trustworthy journals, and will likely be a pivotal tool in combating predatory publishers.


Open Access and ‘open research’

OA exists in a constantly evolving scholarly research ecosystem. As such, it is important to note how it is interconnected with other facets of the scholarly communication system. Here, we discuss the implications that the transition to OA has for developments in the broader context of ‘Open Data’ and ‘Open Science’, or ‘Open Research’.

The overall movement of OA has become conjoined with the push for Open Data. Data sharing is fundamental to scientific progress, because data underpin the knowledge generated in research articles. Furthermore, data sharing has recently become a common requirement, together with OA, for both research funding and publication.
The data sharing policy from PLOS illustrates the high degree of overlap between OA and Open Data; authors of articles published in PLOS are required to share their data unless they have valid reasons not to (i.e., an opt-out system; journals.plos.org/plosone/s/data-availability).

The benefits of Open Data are diverse, and include a citation advantage. Combined with the citation advantage for OA articles, providing data alongside publications can increase citations on average by 30% (Piwowar & Vision, 2013) and by up to 69% (Piwowar et al., 2007), although this evidence is entirely field-dependent (e.g., Dorch et al., 2015). Below we cover six additional benefits of Open Data.

First, data sharing enhances reproducibility, a crucial aspect at a time when some scientific domains appear to have problems with reproducibility (e.g., Open Science Collaboration, 2015). Several factors could underlie this ‘crisis’, such as an overemphasis on novelty instead of rigour, selective reporting of results, an overemphasis on statistical significance, and insufficient documentation of research methods. Publicly sharing data, code, and materials can alleviate issues with reproducibility. This is especially pertinent in the modern sciences, where a substantial proportion of published results draw on quantitative experiments and computer simulations. Without such sharing, it becomes largely impossible to reproduce these experiments as they, and their associated datasets, grow in complexity. When full access to the data, metadata, and the code used to produce the final results is provided alongside publication, reproducibility is greatly improved.

Second, publicly available data can be used to stimulate innovations, such as new analytical methods.
An excellent example of this is provided by the neuroimaging OpenfMRI project, where shared data have been used to examine the effects of different processing pipelines on analysis outcomes (Carp, 2012) and to test new methods for characterizing different cognitive tasks (Turner & Laird, 2012). Another good example is the Protein Data Bank (PDB) (Berman et al., 2000), a project which has enabled the re-use of primary structural data and opened up new avenues of research, even though such re-use was not originally anticipated.

Third, data sharing enables new research questions that can only be answered by combining datasets that currently remain separate. Analyzing vast volumes of data can yield novel and perhaps surprising findings, and allows for integrated research hypotheses on the underlying processes behind the original data and observations. Exploratory approaches to large datasets can be seen as hypothesis-generating tools, which later fuel hypothesis testing to confirm or disprove those hypotheses (Wagenmakers et al., 2012).

Fourth, the realization that data will ultimately be shared and visible to the community provides a strong incentive for researchers to engage in better data documentation and, by extension, better research methods. For example, the willingness to publicly share data has been associated with fewer statistical errors in the final research article (Wicherts et al., 2011).

Fifth, public data sharing provides a digital backup for datasets, protecting valuable scientific resources. Moreover, a considerable amount of the data produced every day does not ultimately lead to publication and often remains hidden. Such data might languish in a file-drawer despite being valid, creating a systematic bias in the information available.
Public data sharing opens this file-drawer and, consequently, allows independent assessments of whether the data are valid or not.

Finally, and tightly connected with the fifth point, sharing data can certainly reduce the cost of performing research. The file-drawer effect has been shown to greatly reduce the efficiency of research in detecting effects (van Assen et al., 2014). Open Data thus discourages redundant data collection (i.e., re-collecting data that have already been gathered but never made publicly accessible) and simultaneously allows researchers to better approximate what is happening in their fields. This will have a large effect on research costs, resulting in savings that can then be used for more productive research goals.

Beyond OA and Open Data lies a more integrated approach to research, referred to more broadly as Open Science (i.e., Science 2.0, Open Scholarship). According to the European Commission’s Horizon 2020 programme, Open Science is defined as “The transformation, opening up and democratisation of science and research through ICT, with the objectives of making science more efficient, transparent and interdisciplinary, of changing the interaction between science and society, and of enabling broader societal impact and innovation”. Consequently, we see OA as only one of the multiple challenges currently facing the ‘open transformation’ of the scholarly publishing system (Watson, 2015), and it should be considered in the wider and complementary contexts of research transparency and open source.

As Kriegeskorte et al. (2012) pointed out, OA is now widely accepted as desirable and is becoming a reality in many academic spheres.
However, the second essential complementary element of research – evaluation – has received less attention, despite the large amount of research documenting its current limitations (Benos et al., 2007; Birukou et al., 2011; Ioannidis, 2005; Ioannidis, 2012a; Ioannidis, 2012b; John et al., 2012; Nosek & Bar-Anan, 2012; Simmons et al., 2011). Open evaluation, an ongoing post-publication process of transparent peer review and rating of papers, promises to address the problems of the current assessment systems (Kriegeskorte et al., 2012). Future research into better ways of improving scholarly communication will be instrumental in providing evidence to support the transformation of the publishing system and in designing new alternatives (Buttliere, 2014; Ghosh et al., 2012; Kriegeskorte et al., 2012; Pöschl, 2012), which will draw heavily upon an open publishing framework driven by developments and newly emerging models in OA.


Conclusions

This review presents published evidence of the impact of Open Access on the academy, economy and society. Overall, the evidence points to a favorable impact of OA on the scholarly literature through increased dissemination and reuse. OA has the potential to be a sustainable business venture for new and established publishers, and can provide substantial benefits to research- and development-intensive businesses, including health organisations, volunteer sectors, and technology. OA is a global issue, highlighted by inequalities at all levels between developing and developed nations, largely fueled by financial inequality. Current levels of access in the developing world are insufficient and unstable, and only OA has the potential to foster the development of stable research ecosystems.
While predatory publishing remains an ongoing issue, particularly in the developing world, increasing public engagement, the development of OA policies, and discussion of sustainable and ethical publishing practices can remove this potential threat to OA.

For libraries, universities, governments, and research institutions, one major benefit of lowering the cost of knowledge is a budget that allows them to spend their resources more wisely. For researchers themselves, the goal is to increase their audience and impact by delivering wider and easier access for readers. For publishers, promoting OA is a reaction to the wants and needs of their community. Furthermore, subscription-based publishers have (partly) answered the call of an increasing global demand for OA by giving their green light to author self-archiving (Harnad et al., 2008), as well as by establishing numerous ‘hybrid’ OA options. In an author survey, Swan & Brown (2004) reported that the vast majority of their sample would self-archive willingly if their employer (or funding body) required them to do so. Similarly, in a study by Swan & Brown (2005), the vast majority of researchers (81%) indicated that they would comply with mandates that made OA a condition of funding or employment. Fortunately, it seems that funders and research organisations are moving in that direction: since 2005, the number of policies supporting OA publishing has increased steadily. Consequently, it is now the responsibility of researchers to ensure OA to their publications, by choosing either the green or the gold road.

The fact that OA impacts such a diverse range of stakeholders, often with highly polarised and emotional viewpoints, highlights the ongoing need for evidence-informed discussion and engagement at all levels. As Peter Suber, a leading voice in the OA movement, stated: “As long as they do not have the power to stop Open Access, the toll-access publishers are not the enemy”.
Therefore, Open Access supporters should focus their efforts on working towards new models and systems rather than trying to undermine or punish the existing ones. OA remains only one of the multiple challenges that the scholarly publishing system currently faces. As highlighted in this review, the empirical evidence for OA is overwhelmingly positive. Further research now needs to move from investigating the effects of OA to researching the broader effects of Open Science.
Author contributions

All authors contributed equally to the writing of this manuscript using the Overleaf collaborative writing platform.


Competing interests

JPT currently blogs for the PLOS Paleo Community, and works for ScienceOpen. CHJH is a Center for Open Science ambassador. DCJ and FW are members of the Open Access Working Group of EURODOC. PM is a Research Data Alliance member. LBC works for the University of Pittsburgh, which has an Open Access library publishing department. All views presented here are strictly personal.


Grant information

This research was partly funded by the Belgian National Fund for Scientific Research through a FRIA grant. PM acknowledges support from the European Commission Horizon 2020 Programme under Grant Agreement 634107 (PHC32-2014) ‘MULTIMOT’.

The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.


Acknowledgments

We would like to collectively acknowledge the OpenCon community for inspiring this paper, and for providing continuous discussion about the various aspects of Open Access. In particular, we are grateful to Brett Buttliere, Audrey Risser, Sarah Barkla, and April Clyburne-Sherin for contributing resources to the development of this paper, and Tracey Depellegrin Connelly, Matt Menzenski, and Joseph McArthur for helpful comments on an earlier draft. We also thank Neil Saunders, who provided the base code to extract data from PubMed Central. We would also like to thank Andy Nobes for drawing our attention to the work of INASP. PM would like to thank Lennart Martens for insightful discussions on Open Science.


References

Abbott BP, Abbott R, Abbott TD, et al.: Observation of gravitational waves from a binary black hole merger. Phys Rev Lett. 2016; 116(6): 061102.
Adie E: Attention! A study of open access vs non-open access articles. Figshare. 2014.
Allen HG, Stanton TR, Di Pietro F, et al.: Social media release increases dissemination of original articles in the clinical pain sciences. PLoS One. 2013; 8(7): e68914.
Alonso-Gamboa JO, Russell JM: Latin American scholarly journal databases: a look back to the way forward. Aslib Proceedings. Emerald Group Publishing Limited, 2012; 64(1): 32–45.
Alperin JP, Fischman GE, Willinsky J: Scholarly communication strategies in Latin America’s research-intensive universities. Educación superior y sociedad. 2011; 16(2).
Aman V: Is there any measurable benefit in publishing preprints in the arXiv section Quantitative Biology? CoRR. 2014; abs/1411.1955.
Anderson C: Free: How today’s smartest businesses profit by giving something for nothing. Random House, 2013.
Antelman K: Do open-access articles have a greater research impact? Coll Res Libr. 2004; 65(5): 372–382.
Arzberger P, Schroeder P, Beaulieu A, et al.: Promoting access to public research data for scientific, economic, and social development. Data Sci J. 2004; 3: 135–152.
Atchison A, Bull J: Will open access get me cited? An analysis of the efficacy of open access publishing in political science. PS Polit Sci Polit. 2015; 48(1): 129–137.
Beagrie N, Houghton JW: The value and impact of data sharing and curation: A synthesis of three recent studies of UK research data centres. 2014.
Beall J: Criteria for determining predatory open-access publishers. Scholarly Open Access. 2015.
Beall J: Predatory publishers are corrupting open access. Nature. 2012; 489(7415): 179.
Benos DJ, Bashari E, Chaves JM, et al.: The ups and downs of peer review. Adv Physiol Educ. 2007; 31(2): 145–152.
Berman HM, Westbrook J, Feng Z, et al.: The Protein Data Bank. Nucleic Acids Res. 2000; 28(1): 235–242.
Birukou A, Wakeling JR, Bartolini C, et al.: Alternatives to peer review: novel approaches for research evaluation. Front Comput Neurosci. 2011; 5: 56.
Björk B, Roos A, Lauri M: Scientific journal publishing: yearly volume and open access availability. Inform Res. 2009; 14(1).
Bloudoff-Indelicato M: Text-mining block prompts online response. Nature News. 2015; 527(7579): 413.
Bogich T, Ballesteros S, Berjon R, et al.: On the marginal cost of scholarly communication. 2016.
Bohannon J: Who’s afraid of peer review? Science. 2013; 342(6154): 60–65.
Bowman JD: Predatory publishing, questionable peer review, and fraudulent conferences. Am J Pharm Educ. 2014; 78(10): 176.
Brembs B: What goes into making a scientific manuscript public? 2015; accessed 2016-03-24.
Buttliere BT: Using science and psychology to improve the dissemination and evaluation of scientific work. Front Comput Neurosci. 2014; 8: 82.
Carp J: On the plurality of (methodological) worlds: estimating the analytic flexibility of fMRI experiments. Front Neurosci. 2012; 6: 149.
Chamberlain S, Boettiger C, Ram K: rplos: Interface to the Search ‘API’ for ‘PLoS’ Journals. 2015.
Chan L, Kirsop B, Arunachalam S: Open access archiving: the fast track to building research capacity in developing countries. 2005.
Chatterjee P, Biswas T, Mishra V: Open access: the changing face of scientific publishing. J Family Med Prim Care.
2013; 2(2): 128–130.
Cheng W, Ren S: Evolution of open access publishing in Chinese scientific journals. Learn Publ. 2008; 21(2): 140–152.
Colomb J, Brembs B: Sub-strains of Drosophila Canton-S differ markedly in their locomotor behavior. F1000Res. 2015; 3: 176.
Costas R, Zahedi Z, Wouters P: Do “altmetrics” correlate with citations? Extensive comparison of altmetric indicators with citations from a multidisciplinary perspective. J Assoc Inf Sci Technol. 2015; 66: 2003–2019.
Davis PM, Fromerth MJ: Does the arXiv lead to higher citations and reduced publisher downloads for mathematics articles? Scientometrics. 2007; 71(2): 203–215.
Davis PM, Lewenstein BV, Simon DH, et al.: Open access publishing, article downloads, and citations: randomised controlled trial. BMJ. 2008; 337: a568.
Davis PM: Open access, readership, citations: a randomized controlled trial of scientific journal publishing. FASEB J. 2011; 25(7): 2129–2134.
Denicola R: Copyright and open access: reconsidering university ownership of faculty research. Nebraska Law Review. 2006; 85(2).
Denker SP: Collaboration with IBM Watson supports the value add of open access. 2016.
Derrida J: Who’s afraid of philosophy? Right to philosophy 1. Stanford University Press, 2002; 1.
Djuric D: Penetrating the omerta of predatory publishing: the Romanian connection. Sci Eng Ethics. 2015; 21(1): 183–202.
Dorch SBF, Drachen TM, Ellegaard O: The data sharing advantage in astrophysics. arXiv. 2015.
Eger T, Scheufen M: The past and the future of copyright law: Technological change and beyond. Liber Amicorum Boudewijn Bouckaert, forthcoming. 2012; 37–64.
Evans JA, Reimer J: Open access and global participation in science. Science. 2009; 323(5917): 1025.
Eysenbach G: Citation advantage of open access articles. PLoS Biol. 2006; 4(5): e157.
Eysenbach G: Can tweets predict citations? Metrics of social impact based on Twitter and correlation with traditional metrics of scientific impact. J Med Internet Res. 2011; 13(4): e123.
Flynn S, Hollis A, Palmedo M: An economic justification for open access to essential medicine patents in developing countries. J Law Med Ethics. 2009; 37(2): 184–208.
Frandsen TF: The integration of open access journals in the scholarly communication system: Three science fields. Inf Process Manag. 2009; 45(1): 131–141.
Gargouri Y, Hajjem C, Larivière V, et al.: Self-selected or mandated, open access increases citation impact for higher quality research. PLoS One. 2010; 5(10): e13636.
Gaule P, Maystre N: Getting cited: does open access help? Research Policy. 2011; 40(10): 1332–1338.
Gentil-Beccot A, Mele S, Brooks T: Citing and reading behaviours in high-energy physics. Scientometrics. 2010; 84(2): 345–355.
Ghosh SS, Klein A, Avants B, et al.: Learning from open source software projects to improve scientific review. Front Comput Neurosci. 2012; 6: 18.
Glenisson P, Glänzel W, Janssens F, et al.: Combining full text and bibliometric information in mapping scientific disciplines. Inf Process Manag. 2005; 41(6): 1548–1572.
Glover M, Buxton M, Guthrie S, et al.: Estimating the returns to UK publicly funded cancer-related research in terms of the net value of improved health outcomes.
BMC Med. 2014; 12(1): 99. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGonzalez GH, Tahsin T, Goodale BC, et al.: Recent Advances and Emerging Applications in Text and Data Mining for Biomedical Discovery. Brief Bioinform. 2016; 17(1): 33–42. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHajjem C, Harnad S, Gingras Y: Ten-year cross-disciplinary comparison of the growth of open access and how it increases research citation impact. 2006; arXiv preprint cs/0606079. Reference Source\n\nHandke C, Guibault L, Vallbé JJ: Is Europe falling behind in data mining? Copyright’s impact on data mining in academic research. 2015. Publisher Full Text\n\nHarmston N, Filsell W, Stumpf MP: What the papers say: text mining for genomics and systems biology. Hum Genomics. 2010; 5(1): 17–29. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHarnad S, Brody T: Comparing the impact of open access (OA) vs. non-OA articles in the same journals. D-lib Magazine. 2004; 10(6). Publisher Full Text\n\nHarnad S, Brody T, Vallières F, et al.: The access/impact problem and the green and gold roads to open access: An update. Serials Review. 2008; 34(1): 36–40. Publisher Full Text\n\nHenneken EA, Kurtz MJ, Eichhorn G, et al.: Effect of e-printing on citation rates in astronomy and physics. arXiv. 2006. Reference Source\n\nHitchcock S: The effect of open access and downloads (‘hits’) on citation impact: a bibliography of studies. 2013. Reference Source\n\nHoorn E, van der Graaf M: Copyright issues in open access research journals: The authors’ perspective. D-Lib Magazine. 2006; 12(2): 6. Publisher Full Text\n\nIoannidis JP: Why most published research findings are false. PLoS Med. 2005; 2(8): e124. PubMed Abstract | Publisher Full Text | Free Full Text\n\nIoannidis JP: Why Science Is Not Necessarily Self-Correcting. Perspect Psychol Sci. 2012a; 7(6): 645–654. 
PubMed Abstract | Publisher Full Text\n\nIoannidis JP: Scientific communication is down at the moment, please check again later. Psychol Inq. 2012b; 23(3): 267–270. Publisher Full Text\n\nJohn LK, Loewenstein G, Prelec D: Measuring the prevalence of questionable research practices with incentives for truth telling. Psychol Sci. 2012; 23(5): 524–32. PubMed Abstract | Publisher Full Text\n\nKhabsa M, Giles CL: The number of scholarly documents on the public web. PLoS One. 2014; 9(5): e93949. PubMed Abstract | Publisher Full Text\n\nKmietowicz Z: Publishers withdraw 2500 journals from free access scheme in Bangladesh. BMJ. 2011; 342: ISSN 0959-8138. Reference Source\n\nKnobloch J, Albiez EJ, Schmitz H: A serological survey on viral haemorrhagic fevers in Liberia. In Annales de l’Institut Pasteur/Virologie. Elsevier. 1982; 133: 125–128. Publisher Full Text\n\nKousha K, Abdoli M: The citation impact of open access agricultural research: A comparison between OA and non-OA publications. Online Information Review. 2010; 34(5): 772–785. Publisher Full Text\n\nKriegeskorte N, Walther A, Deca D: An emerging consensus for open evaluation: 18 visions for the future of scientific publishing. Front Comput Neurosci. 2012; 6: 94. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKurtz MJ, Henneken EA: Open access does not increase citations for research articles from the astrophysical journal. arXiv. 2007. Reference Source\n\nKurtz MJ, Eichhorn G, Accomazzi A, et al.: The effect of use and access on citations. Inf Process Manag. 2005; 41(6): 1395–1402. Publisher Full Text\n\nLaakso M, Welling P, Bukvova H, et al.: The development of open access journal publishing from 1993 to 2009. PLoS One. 2011; 6(6): e20961. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLansingh VC, Carter MJ: Does open access in ophthalmology affect how articles are subsequently cited in research? Ophthalmology. 2009; 116(8): 1425–1431. 
PubMed Abstract | Publisher Full Text\n\nLarivière V, Haustein S, Mongeon P: The Oligopoly of Academic Publishers in the Digital Era. PLoS One. 2015; 10(6): e0127502. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLawrence S: Online or invisible? Nature. 2001; 411(6837): 521. Reference Source\n\nLawson S, Meghreblian B: Journal subscription expenditure of UK higher education institutions [version 3; referees: 4 approved]. F1000Res. 2015; 3(274). PubMed Abstract | Publisher Full Text\n\nLeitner F, Valencia A: A text-mining perspective on the requirements for electronically annotated abstracts. FEBS Lett. 2008; 582(8): 1178–1181. PubMed Abstract | Publisher Full Text\n\nMalhotra A, Younesi E, Gurulingappa H, et al.: ’HypothesisFinder:’ a strategy for the detection of speculative statements in scientific text. PLoS Comput Biol. 2013; 9(7): e1003117. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMatheka DM, Nderitu J, Mutonga D, et al.: Open access: academic publishing and its implications for knowledge equity in Kenya. Global Health. 2014; 10(1): 26. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMazzucato M: The entrepreneurial state. Soundings. 2011; 49(49): 131–142. Publisher Full Text\n\nMcCabe MJ, Snyder CM: Identifying the effect of open access on citations using a panel of science journals. Econ Inq. 2014; 52(4): 1284–1300. Publisher Full Text\n\nMcGlynn T: The evolution of pseudojournals. Small Pond Science. Dominguez Hills, CA, 2013. Reference Source\n\nMcKiernan E, Bourne PE, Brown CT, et al.: The open research value proposition: How sharing can help researchers succeed. Figshare. 2016. Publisher Full Text\n\nMcVeigh ME: Open access journals in the ISI citation databases: analysis of impact factors and citation patterns: a citation study from Thomson Scientific. Thomson Scientific. 2004. Reference Source\n\nMetcalfe TS: The rise and citation impact of astroph in major journals. arXiv preprint astro-ph/0503519, 2005. 
Reference Source\n\nMetcalfe TS: The citation impact of digital preprint archives for solar physics papers. Solar Phys. 2006; 239(1–2): 549–553. Reference Source\n\nMoed H: The effect of “open access” upon citation impact: an analysis of arXiv’s condensed matter section. 2006. Reference Source\n\nMohammadi E, Thelwall M, Haustein S, et al.: Who reads research articles? An altmetrics analysis of Mendeley user categories. J Assoc Inf Sci Technol. 2015; 66(9): 1832–1846. Publisher Full Text\n\nMüller-Langer F, Watt R: Copyright and open access for academic works. Review of Economic Research on Copyright Issues. 2010; 7(1): 45–65. Reference Source\n\nNatarajan J, Berrar D, Dubitzky W, et al.: Text mining of full-text journal articles combined with gene expression analysis reveals a relationship between sphingosine-1-phosphate and invasiveness of a glioblastoma cell line. BMC Bioinformatics. 2006; 7(1): 373. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNorris M, Oppenheim C, Rowland F: Open access citation rates and developing countries. In ELPUB. 2008; 335–342. Reference Source\n\nNosek BA, Bar-Anan Y: Scientific utopia: I. Opening scientific communication. Psychological Inquiry. 2012; 23(3): 217–243. Publisher Full Text\n\nNuijten MB, Hartgerink CH, van Assen MA, et al.: The prevalence of statistical reporting errors in psychology (1985-2013). Behav Res Methods. 2015; 1–22. ISSN 1554-351X, 1554-3528. PubMed Abstract | Publisher Full Text\n\nOdlyzko A: Economic costs of toll access. Open Access: Key Strategic, Technical and Economic Aspects. 2006; 4: 39–43. Publisher Full Text\n\nOdlyzko AM: Open access, library and publisher competition, and the evolution of general commerce. CoRR. 2013. Reference Source\n\nOkerson A: With feathers: Effects of copyright and ownership on scholarly publishing. College and Research Libraries. 1991; 52(5): 425–38. Publisher Full Text\n\nOpen Science Collaboration: 
Estimating the reproducibility of psychological science. Science. 2015; 349(6251): aac4716. PubMed Abstract | Publisher Full Text\n\nPacker AL: The SciELO open access: a gold way from the south. The Canadian Journal of Higher Education. 2009; 39(3): 111–126. Reference Source\n\nPiwowar H: Altmetrics: Value all research products. Nature. 2013; 493(7431): 159. PubMed Abstract | Publisher Full Text\n\nPiwowar HA, Vision TJ: Data reuse and the open data citation advantage. PeerJ. 2013; 1: e175. ISSN 2167-8359. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPiwowar HA, Day RS, Fridsma DB: Sharing detailed research data is associated with increased citation rate. PLoS One. 2007; 2(3): e308. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPorter AL, Kongthon A, Lu JC: Research profiling: Improving the literature review. Scientometrics. 2002; 53(3): 351–370. Publisher Full Text\n\nPöschl U: Multi-stage open peer review: scientific evaluation integrating the strengths of traditional peer review with the virtues of transparency and self-regulation. Front Comput Neurosci. 2012; 6: 33. PubMed Abstract | Publisher Full Text | Free Full Text\n\nResearch Information Network: Activities, costs and funding flows in the scholarly communications system in the UK. Technical report, 2008. Reference Source\n\nRussel RD: The business of academic publishing: A strategic analysis of the academic journal publishing industry and its impact on the future of scholarly publishing. Electron Journal of Academic and Special Librarianship. 2008; 9(3).\n\nSahu DK, Gogtay NJ, Bavdekar SB: Effect of open access on citation rates for a small biomedical journal. 2005. Reference Source\n\nSatyanarayana K: Journal publishing: the changing landscape. Indian J Med Res. 2013; 138(1): 4–7. PubMed Abstract | Free Full Text\n\nScheufen M: Copyright Versus Open Access: On the Organisation and International Political Economy of Access to Scientific Knowledge. Springer, 2015. 
Publisher Full Text\n\nSchimmer R, Geschuhn KK, Vogler A: Disrupting the subscription journals’ business model for the necessary large-scale transformation to open access. 2014. Reference Source\n\nSchimmer R, Geschuhn KK, Vogler A: Disrupting the subscription journals’ business model for the necessary large-scale transformation to open access. 2015. Publisher Full Text\n\nSchwarz GJ, Kennicutt RC Jr: Demographic and citation trends in astrophysical journal papers and preprints. 2004; 36(5). arXiv preprint astro-ph/0411275. Reference Source\n\nShatkay H, Pan F, Rzhetsky A, et al.: Multi-dimensional classification of biomedical text: toward automated, practical provision of high-utility text to diverse users. Bioinformatics. 2008; 24(18): 2086–2093. PubMed Abstract | Publisher Full Text | Free Full Text\n\nShavell S: Should copyright of academic works be abolished? Journal of Legal Analysis. 2010; 2(1): 301–358. Publisher Full Text\n\nShen C, Björk BC: ‘predatory’ open access: a longitudinal study of article volumes and market characteristics. BMC Med. 2015; 13(1): 230. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSimmons JP, Nelson LD, Simonsohn U: False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol Sci. 2011; 22(11): 1359–66. PubMed Abstract | Publisher Full Text\n\nSolomon DJ, Björk BC: A study of open access journals using article processing charges. J Am Soc Inf Sci Technol. 2012; 63(8): 1485–1495. Publisher Full Text\n\nSutton C: Is free inevitable in scholarly communication? the economics of open access. College & Research Libraries News. 2011; 72(11): 642–645. Reference Source\n\nSwan A: The open access citation advantage: Studies and results to date. 2010. Reference Source\n\nSwan A, Brown S: Authors and open access publishing. Learn Publ. 2004; 17(3): 219–224. Reference Source\n\nSwan A, Brown S: Open access self-archiving: An author study. 2005. 
Reference Source\n\nTurner JA, Laird AR: The cognitive paradigm ontology: design and application. Neuroinformatics. 2012; 10(1): 57–66. PubMed Abstract | Publisher Full Text | Free Full Text\n\nUnited Nations: Universal declaration of human rights. 1948. Reference Source\n\nvan Assen MA, van Aert RC, Nuijten MB, et al.: Why publishing everything is more effective than selective publishing of statistically significant results. PLoS One. 2014; 9(1): e84896. ISSN 1932-6203. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVan Noorden R: Trouble at the text mine. Nature. 2012; 483(7388): 134–135. PubMed Abstract | Publisher Full Text\n\nVan Noorden R: Open access: The true cost of science publishing. Nature. 2013; 495(7442): 426–429. PubMed Abstract | Publisher Full Text\n\nVan Noorden R: Elsevier opens its papers to text-mining. Nature. 2014; 506(7486): 17. PubMed Abstract | Publisher Full Text\n\nVanclay JK: Factors affecting citation rates in environmental science. J Informetr. 2013; 7(2): 265– 271. Publisher Full Text\n\nWagenmakers EJ, Wetzels R, Borsboom D, et al.: An agenda for purely confirmatory research. Perspect Psychol Sci. 2012; 7(6): 632–638. PubMed Abstract | Publisher Full Text\n\nWang X, Liu C, Mao W, et al.: The open access advantage considering citation, article usage and social media attention. Scientometrics. 2015; 103(2): 555–564. Publisher Full Text\n\nWatson M: When will ‘open science’ become simply ‘science’? Genome Biol. 2015; 16(1): 101. ISSN 1465-6906. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWicherts JM, Bakker M, Molenaar D: Willingness to share research data is related to the strength of the evidence and the quality of reporting of statistical results. PLoS One. 2011; 6(11): e26828. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWillinsky J: Copyright contradictions in scholarly publishing. First Monday. 2002; 7(11). 
Reference Source\n\nWillinsky J: The Access Principle: The Case for Open Access to Research and Scholarship. MIT Press, Cambridge, Mass. 2006. Reference Source\n\nWohlrabe K, Birkmeier D: Do open access articles in economics have a citation advantage? 2014. Reference Source\n\nXia J: Predatory journals and their article publishing charges. Learn Publ. 2015; 28(1): 69–74. Publisher Full Text\n\nXia J, Harmon JL, Connolly KG, et al.: Who publishes in “predatory” journals? J Assoc Inf Sci Technol. 2015; 66(7): 1406–1417. Publisher Full Text\n\nXu L, Liu J, Fang Q: Analysis on open access citation advantage: an empirical study based on oxford open journals. In Proceedings of the 2011 iConference. ACM. 2011; 426–432. Publisher Full Text\n\nZhang Y: The effect of open access on citation impact: a comparison study based on web citation analysis. Libri. 2006; 56(3): 145–156. Publisher Full Text"
}
|
[
{
"id": "13293",
"date": "15 Apr 2016",
"name": "Gwilym Lockwood",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is the first time I've written an open peer review, although I always sign private peer reviews. Normally I'd make comments directed to the authors and the authors alone, but since this is open, I've also included a section for other readers of this paper. This may sound a bit like an Amazon or Airbnb review or something.Short summary for readersThis is an excellent paper about the academic, economic, and societal benefits and impacts of Open Access. It's a good introductory text for people who don't know much about OA and would like to know more. It's also a good persuasive text for stakeholders in policy, universities, publishing, funding, etc. positions who may be interested in including OA in their decision making.In addition to its attention to detail, its main strengths are its focus, its brevity, and its relative impartiality.One of the difficulties with writing about OA is that there are so many overlapping issues; this paper is very good at giving a brief overview or description of the other issues, pointing the reader in the direction of somewhere with more information, and then getting back onto the topic.Another thing about OA is that its advocates are very passionate about it. As with any cause, that's a good thing for its supporters, but overwhelmingly pro-OA resources can seem potentially off-putting to neutrals. 
This paper does an excellent job of presenting an evidence-based pro-OA viewpoint in a measured tone and without coming across as ideological.One possible caveat is that the paper presents extensive evidence of what OA does, but it doesn't tackle the meatier issue of how to implement it successfully. However, I feel that's a separate issue which is beyond the scope and purpose of this paper.Suggestions and comments for authorsFirst of all, great article! Well done and thank you for pulling together what is a disparate collection of links and literature into a one-stop shop which is both useful and coherent. I like this article a lot... but my role here is to criticise and make it better, so the rest of this review will focus on that.This article is well-written and well-structured. That's made it much easier as a reviewer to simply go through the article and highlight my issues with it paragraph by paragraph, rather than having to make it coherent first and then sort out the smaller things.The vast majority of the issues I have with this paper are minor ones, so it didn't make sense to have separate major/minor sections; rather, I'll just go through them in order in the text.(I printed this out to underline/comment on, so for me, tables 1 and 2 came during the academic case for OA section. Online, they're supplementary materials, and I think it's best that way, but this is why I'm commenting on the tables during that section)Abstract\"We recommend that OA supporters focus their efforts on working to establish viable new models and systems of scholarly communication, rather than trying to undermine the existing ones...\"In general, I agree with this sentiment. However, I feel that its inclusion in the abstract is a bit jarring as the text of the article doesn't really cover recommendations to OA supporters at all, other than in the very last paragraph. 
I think that's good, as I feel this paper is best suited as relatively neutral source of information rather than a preaching to the converted or ideology discussion kind of purpose. So, I think this part can be left out of the abstract; it doesn't refer to any particular \"recommendations to OA supporters\" bit in the text and potentially clouds the strength of the relative impartiality of the paper.A brief history of OA\"BioMed Central and ... PLOS were founded in the early 2000s and remain successful businesses to date.\" (p3, col2)Technically, PLOS is a non-profit. I suggest changing successful businesses to successful business models. This both highlights the financial sustainability of OA (increased APCs at PLOS notwithstanding) and also sets it apart from traditional publishers, which are definitely successful businesses.The academic case for OA figure 1 (p4, top)I have difficulty interpreting the y-axis on figure 1. It's labelled as cumulative number of PubMed articles relative to 2000, but I'm not sure how to read it. Reading off 2014, non-OA is c.22 on the y-axis, and OA is c.33 on the y-axis. Based on the figure 1 caption about the ratio, I'm interpreting this as meaning that, in 2014, the ratio of cumulative PubMed articles was approx 33:22 OA to non-OA, or in other words, 60% of PubMed articles in 2000-2014 were OA. However, I'm not sure if this is how it's meant to be interpreted. I think that it's well visualised, and really makes it clear how OA has taken off, but exactly what the numbers represent on the y-axis is unclear to me: number of articles? number of times more articles? It could use some relabelling.\"Napster moment\" (p4, col1)I like the comparison, but it could use a citation (even just the Napster wikipedia article) and/or a little more explanation to clarify what that means.Table 1:\"1991 ... 
by the American physician Paul Ginsparg\"He's a physicist, not a physician.Table 2:Xu et al (2011)I don't think this reference was very well cited. Firstly, Oxford Open Journals are listed as a discipline, when they're the source of papers across disciplines. The actual disciplines were Medicine, Social Sciences, Mathematics & Physical Sciences, Life Sciences, and Humanities. Secondly, you list the citation advantage as 138.87%. However, one of the main findings of this paper was the disparity in citation advantages; it ranged from 163.16% for OA articles in Mathematics & Physical Sciences to an actual citation disadvantage of -49.24% for OA articles in Humanities. Given the pro-OA nature of the paper, I feel like you have an extra responsibility to report the few anti-OA pieces of evidence.Gargouri et al (2010) (page 8, col1)This paragraph is about a possible confound for the OA citation advantage, where it could be that researchers choose to publish OA for extra cool findings, and you use the Gargouri et al. study to counter this... which is totally correct. You write:\"Gargouri et al. (2010) compared citation counts [for articles which were] self-selected as OA or mandated as OA. The study concluded that both were cited significantly more than non-OA articles. As such, these findings rule out a selection bias\"This is true that both OA types were cited more than non-OA. However, it's also missing the crucial point that there was no difference in citation between self-selected OA articles and mandatory OA articles. Including this would strengthen your point to show that it's OA itself which leads to the citation advantage.The whole section about altmetrics (subhead societal impact of the academic case for OA, p8, col2) could use some attention. It's not clear until much later what the difference is between alternative metrics (i.e. altmetrics), i.e. the various types of metrics which are alternative to journal impact factors, and Altmetric, i.e. 
the company which is often confusingly referred to as Altmetrics (not in this paper, to be fair, but elsewhere). A quick disambiguating sentence or two would be really useful here.In the following paragraph (page 8, col2), you write about OA altmetrics advantage, and say that there's a logical assumption that OA articles should have one. However, this doesn't consider the fact that the prestige of some journals is advertising in and of itself. You can, and do, get a lot of closed-access papers which generate high altmetrics (social media attention, Mendeley readhership) from academics who do have access. And sure enough, in the next paragraph, (page 8 and 9), the Wang et al. 2015 article finds that the OA altmetric advantage doesn't extend to the most impactful articles. I think this section can be made more nuanced and informative by quickly discussing the role of journal prestige. Nothing in depth, just as something that exists and needs to change (for example, you could point people to Brembs et al. and the Deep Impact paper in Frontiers).\"Essentially, copyright is a tool wielded by traditional publishers for financial gain rather than fostering creativity...\"I don't disagree with this. However, I feel it comes on too strong. I think it's fair to say that most people's immediate opinion of copyright is \"well, I'd like my stuff to be copyrighted, as that means people can't steal it and pass it off as their own\". I think that you need a little more detail here, even just two or three sentences to explain how and why copyright is used for financial gain rather than author protection. Otherwise, it just sounds political/ideological, and counterintuitive for people who haven't read much about copyright.Glenisson et al. (2005) citation (page 9, col2).You write that TDM has \"proven to be useful for a large variety of applications\", and use the Glenisson citation to back this up. 
I have to say here that I know very little about TDM; however, following through to the Glenisson paper, I don't see how it supports that conclusion. I read it and it seems to show a proof-of-concept kind of study: that TDM can group a set of papers into themes in the same way that an expert can. This is really cool and everything, but I don't think that that substantiates your point that TDM is useful for a large variety of applications. Rather, I'd like to see a couple of specific examples, which you then describe more fully in the next paragraph. One good one is Swanson 1987 (I think - taken from here: http://people.ischool.berkeley.edu/~hearst/papers/acl99/acl99-tdm.html), who used TDM to make the link between migraines and magnesium deficiency.\"...simply because one can no longer keep up with the published literature\".Small point, but I think it's worth stressing that this is due to the amount of literature that there is.The economic case for OAthe pay-to-publish part (p10, col2)I feel this glosses over problems with pay-to-publish. You come back to predatory OA later, but this isn't quite the same: I think it could use a couple of extra sentences describing what the conflict of interest for researchers is, and also stress that pay-to-publish makes it potentially in a journal's interest to accept more papers than they necessarily should. One of the most common anti-OA arguments I see in non-scientific media is that OA is pay-to-publish, which is often misrepresented as \"pay-to-publish is publication bribery\". I think this section needs a little more substance to it to acknowledge/address this.\"making publication costs dependent on the value added...\" (page 11, col1)When talking about the value added by journals, this paragraph ignores the elephant in the room: journal prestige. 
Again, I know that this isn't the purpose of this article, but I think it could really be strengthened by mentioning it before moving on.\"Much primary research actually takes place outside of academia inside research and development departments\" (page 11, col2)The part following this sentence is muddy. First, you talk about R&D outside academia (i.e. presumably private research), and then you talk about access to research results because they're publicly financed public goods. So, what does that mean, that R&D from private businesses who've invested their own capital in it should be made available to all? (maybe I agree with that, in some cases, but a lot of people sure won't)I think this paragraph could be honed a bit; otherwise, it's straying into the ideological territory of saying that all private research should be made public for the public good. That transcends OA in scholarly publishing, and makes OA in scholarly publishing too easy to dismiss.The cancer research paragraph (page 12, col1) is also unclear. It took me a while to figure out it's talking about UK expenditure - my first assumption of \"total expenditure\" meant worldwide. It's also not totally clear what the point is - the geographical origin of research is unrelated to its open status. I think that it's quite a leap to write (apologies for paraphrasing) \"83% of UK economic benefit from cancer research comes from research outside UK, therefore open access is good\", because I think it conflates two different things.Also, small point, \"17% of the annual net-monetary was estimated\" is missing the word benefit after net-monetary.The societal case for OASmall point: as somebody who wears a linguistics hat quite often, it rankles to read on page 11 \"Examples of [non-academic] groups who might benefit include... those who work in linguistics and translation\". 
Translation, for sure, but linguistics is an academic field - you even mention the Lingua to Glossa movement organised by academic linguists later in the manuscript! To me, this is like writing \"...those who work in biology and vets\", lumping the academic field and a practical use of that field together. Just referring to translation is fine.Citizen engagement (page 13, col 1)I agree that these are great examples of citizen engagement with science, but at the risk of sounding like an Elsevier representative, interest in projects like Galaxy Zoo does not entail desire to download and read papers. In fact, you could even make the (spurious) argument that those projects come into existence precisely because citizens aren't interested in downloading and reading papers. I don't actually agree with that, I agree with your general point... but I think that citizen science project interest and citizen science paper interest. Obviously I think it is in the public interest to have science journals OA, but this isn't the right argument (and I think the sentence \"Such statements conflate a lack of desire or need for access with the denial of opportunity to access research\" is perfect). I think a stronger argument would be to look at existing OA journals, such as PLOS and Frontiers, and see how many views and downloads come from people who aren't academics. 
If you can point to, say, some of the most viewed/downloaded PLOS papers and say \"look, 30% (or whatever, that's a random number) of these readers aren't academics, they're real people who are interested in it\", that would make for a stronger argument.Quibble about the \"yes, we were warned about Ebola\" example: the finding from that paper (that Liberians have Ebola antibodies in their blood, suggesting the endemic presence of Ebola) is actually written on the first page preview of the paper (http://www.sciencedirect.com/science/article/pii/S0769261782800282/part/first-page-pdf, accessed from my laptop outside my institution). It could be argued that anybody could see this finding anywhere in the world, meaning that it's not a problem of OA, it's a problem about searching and indexing. A good counterargument to that is obviously that this paper would have been unsearchable with TDM at the start of the outbreak when people were combing through all West African Ebola literature.\" 'green' model of OA adoption\" (page 13, col2)You generally refer to Green and Gold routes, with the colours capitalised. Just a small terminology thing to keep consistent.A much more important thing is also on page 13, col2:\"The pay-to-publish system is a potentially greater burden for authors in developed countries, considering that they are not used to paying publication costs, and funding systems for OA are not as well-established as those in the Western world.\"--> developing countries, not developed countries!Predatory publishers (page 14, col1)I agree with Ross Mounce's comment on the paper: you give Beall too much importance. 
I think it can be a useful list and should be mentioned, but definitely include some caveats like the ones Ross writes, or the fact that he added Frontiers to the list because of a couple of editorial mistakes.ConclusionsPeter Suber (page 16, col1)You describe him as \"a leading voice in the OA movement\", but I think you should write what his positions are (see http://cyber.law.harvard.edu/~psuber/wiki/Peter_Suber) in order to justify his importance.Other general thingsThere are no proposed solutions in this paper, which is totally fine, because it's beyond the scope of the paper. I feel it could benefit by putting in a couple of sentences here and there about who is needed for driving this change: academics, funders, governments, etc.I was disappointed not to see anything about the Dutch government and university library organisations' collective drive towards OA. They've changed the national law on copyright, they've reached agreements with most major publishing groups, they may well introduce mandatory OA publishing in the Netherlands in 2016, and they've made it one of the main priorities of their EU presidency this year. It's like the best example of how a whole country can take the lead and sort it out. I think including a quick reference to the Netherlands as an example of excellent OA policy (in the same way that you mention sciELO in Latin America) would go a long way towards convincing the people who are reading this thinking, \"ah, yes, I guess OA makes sense in the developing world, but we're doing fine here in the West and it would be too difficult to change things\". A good summary of that is here: http://openaccess.nl/en/in-the-netherlands/current-situationFinal remarksThat's the end of my 2800-odd word review. I really enjoyed reading this paper, going through it, and trying to find ways to improve it. Thanks to the authors for writing an excellent paper.",
"responses": []
},
{
"id": "13430",
"date": "18 Apr 2016",
"name": "Peter Suber",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe article is very well-done, unusually thorough and detailed. Here are a few ways to improve it. When I refer to page numbers, I mean the page numbers in the PDF of v1, April 11, 2016.http://f1000research.com/articles/5-632/v1 \"You\" refers to the authors. Apologies in advance if I sometimes cite my own work in these comments. p. 1. In the abstract you say, \"The economic case for Open Access is less well-understood, although it is clear that access to the research literature is key for innovative enterprises, and a range of governmental and non-governmental services.\" This understates the economic case. For example, some subscription journals convert to OA precisely for economic benefits. See the preliminary version of David Solomon, Bo-Christer Björk, and Mikael Laakso, \"Converting Scholarly Journals to Open Access: A Review of Approaches and Experiences\" now open for public comment. (The final version will be published this summer.)https://osc.hul.harvard.edu/programs/journal-flipping/public-consultation/ See especially section 4.6, \"Increased revenue and financial viability.\"https://osc.hul.harvard.edu/programs/journal-flipping/public-consultation/4/6/ p. 3. You say, \"The Green route refers to author self-archiving, in which a version of the peer-reviewed article is posted online to a repository or website.\" Green OA also applies to preprints, which are not peer-reviewed. p. 3. 
You say, \"This [green] route is dependent on journal or publisher policies on self-archiving (sherpa.ac.uk/romeo).\" This is importantly incomplete. It overlooks rights retention. Some individual authors retain enough rights to authorize green OA on their own. While this may be fairly rare, rights-retention OA policies at universities are increasingly common. More than 80 institutions in North America, Europe, Africa, and Asia have now adopted rights-retention OA policies. Under these policies, the faculty grant non-exclusive rights to their institution before they sign future publishing contracts. The institution then has permission to make those future articles OA without having to seek permission from publishers. (The institutions also grant the same non-exclusive rights back to authors.) For more detail on rights-retention OA policies, see Stuart Shieber and Peter Suber, \"Good Practices for University OA Policies.\" http://bit.ly/goodoa p. 3. You say, \"A subscription to all peer-reviewed journals is not affordable for any single individual, research institute or university (Odlyzko, 2006).\" This is true and important, but it's a pity you don't cite more recent evidence than 2006. An important kind of evidence for this proposition is that not even Harvard University can afford all the journals needed by its faculty and students, and must cancel journals every year for budgetary reasons alone. I've collected seven public statements from Harvard to this effect (2008-2012) in the supplements to p. 30 of my 2012 book (Open Access, MIT Press, 2012).http://bit.ly/oa-book#p30.2 p. 3. You say, \"Much of the driving force behind this global change has been through a combination of direct, grassroots advocacy initiatives in conjunction with policy changes from funders and governments.\" Please add *university policies* to this list. They're on a par with funder policies in importance, and they're far more numerous. 
ROARMAP shows that 7+ times more universities have OA policies than funders. p. 3. You say, \"The Open Access movement is intrinsically tied to the development of the Internet and how it redefined communication and publishing (Laakso et al., 2011).\" For more documentation on how the OA movement arose as soon as the internet arose, see my Timeline of the Open Access Movement.http://legacy.earlham.edu/~peters/fos/timeline.htm In 2009, I moved the timeline to the Open Access Directory wiki, and you should probably cite that version: Timeline of the open access movement...http://oad.simmons.edu/oadwiki/Timeline ...especially subsection on developments before 2000.http://oad.simmons.edu/oadwiki/Timeline_before_2000 (You might cite this timeline again on p. 5, when you introduce your own timeline.) p. 3. You say, \"One result of the growing OA movement is the rise of OA-only publishers....\" Somewhere in this paragraph, I'd mention that some OA publishers are for-profit (e.g. BMC) and some are non-profit (e.g. PLoS). p. 4. Your section on the impact advantage is very well-done. Most treatments are much briefer, less careful, and less detailed than yours. I have just these suggestions. You cite authors of individual studies, and Alma Swan's 2010 literature review. But you don't cite the mother lode of literature on this topic: Steve Hitchcock's annotated bibliography, \"The effect of open access and downloads ('hits') on citation impact: a bibliography of studies.\"http://eprints.soton.ac.uk/354006/1/oacitation-biblio-snapshot0613.html Or more precisely, you cite it once, 10 paragraphs before the section on the impact advantage begins. You should cite it again within the section on the impact advantage. You should mention that it's comprehensive and annotated. Hitchcock stopped updating it in 2013. 
But you should mention that SPARC Europe has committed to update it through its Open Access Citation Advantage Service.http://sparceurope.org/oaca/ Finally, in the same place where you cite Swan's literature review, you should cite Ben Wagner's literature review, \"Open Access Citation Advantage: An Annotated Bibliography,\" Issues in Science and Technology Librarianship, Winter 2010.http://www.istl.org/10-winter/article2.html p. 5. In the timeline entry for 2002, the BOAI was released on February 14, not January 14. p. 6. In the timeline entry for 2013, I'd say that the suicide of Aaron Swartz \"increases\" (not \"gains\") international attention for the OA movement, or \"draws new attention\" to the OA movement. The current language suggests that the OA movement didn't have international attention before that, which is very far from the truth. p. 9. You say, \"Shifting copyright to stay with the author allows for wider re-use, including TDM, and forms the basis for a robust and developing public domain.\" You shouldn't use \"public domain\" here. In copyright law, the term has a specific meaning which you don't mean here. p. 10. You say, \"Only recently has any transparency into the detailed costs of subscriptions been gained by using Freedom of Information Requests to bypass non-disclosure agreements between libraries and publishers (Lawson & Meghreblian, 2015).\" Here you overlook the earlier Big Deal Contract Project in the US, from Ted Bergstrom, Paul Courant, and Preston McAfee. It too used public records laws and Freedom of Information requests. I'm not sure when it launched, but it was before 2009.http://www.econ.ucsb.edu/~tedb/Journals/BundleContracts.html p. 10. You say, \"The average production cost for one paper is estimated to be around $3500–$4000 (Van Noorden, 2013).\" I've seen dozens of widely varying estimates of this cost, most of them much lower than Van Noorden's. Unfortunately I don't have time to hunt them down. 
I hope you can introduce at least a few more, if only to show that estimates differ widely here. p. 10. You say, \"Philip Campbell (Editor-in-Chief of Nature) stated that his journal’s internal costs were at $20,000–$30,000 per paper....\" To clarify, I think he meant that this was the cost per published paper. If Nature rejects x articles for every one it publishes, then this includes the cost of peer reviewing x rejected articles. Since Nature is very selective, x is high. But this \"cost per published paper\" should not be compared to costs for peer-reviewing a single paper or the production costs of publishing an accepted paper. p. 10. You say, \"OA publishing is most prevalent in the form of ‘pay-to-publish’....\" This is either false or misleading. About 70% of peer-reviewed OA journals charge no APCs at all. In that sense, the fee-based model is not the most prevalent. It's a minority model. On the other hand, about 50% of the articles published in peer-reviewed OA journals are published in the fee-based variety. On my claim that most OA journals charge no APCs: See my article, \"Good facts, bad predictions,\" SPARC Open Access Newsletter, June 2006.https://dash.harvard.edu/handle/1/4391309 And my article, \"No-fee open-access journals,\" SPARC Open Access Newsletter, November 2, 2006. https://dash.harvard.edu/handle/1/4552050 The DOAJ used to make it easy to see what percentage of listed journals were fee-based and what percentage were no-fee. But it has temporarily made that difficult by combining the categories of \"no-fee journals\" and \"journals for which we don't have enough information to say.\" On my claim that about half the articles published in peer-reviewed OA journals are published in the fee-based variety, see the updates to p. 170 of my 2012 book. There I cite three studies and quote the relevant excerpts.http://bit.ly/oa-book#p170 p. 10. 
When you describe ways in which fee-based OA journals mitigate some problems arising from the model, you mention the firewall between the editorial and business side of the journal (good), and you mention fee waivers (good). You should also mention fee discounts, which many journals give in lieu of fee waivers. You should also mention that most fees charged by fee-based journals are paid by funders (59%), or the author's employer (24%), and only 12% are paid by authors themselves. These numbers are from Suenje Dallmeier-Tiessen et al., \"Highlights from the SOAP project survey. What Scientists Think about Open Access Publishing,\" arXiv, January 29, 2011, Table 4.http://arxiv.org/abs/1101.5260 p. 12. You say, \"In his article Sutton (2011)...\" Please change \"his\" to \"her\". The article is by Caroline Sutton. p. 12. You say, \"While OA is not a solution to all aspects of research accessibility (e.g., language barriers and disability access remain continuing issues to be addressed)....\" See my 2012 book (Open Access, MIT Press, 2012, http://bit.ly/oa-book), at pp. 26-27, where I make much the same point. \"OA isn't universal access\" and by itself doesn't overcome \"filtering and censorship barriers\", \"language barriers\", \"handicap access barriers\", or \"connectivity barriers\". p. 13. You refer to \"the fact that access to knowledge is actively prohibited in fields like public health....\" I don't know what you mean here by \"actively prohibited\". p. 13. You say, \"Some traditional publishers, and some academics, have argued that public access to research is not required because research papers cannot be understood by non-specialists....\" Here you might want to cite Section 5.5.1 (\"OA for Lay Readers,\" pp. 115-119) of my 2012 book. p. 13. You say, \"The shift from a 'reader pays' to an 'author pays' mode....\" I recommend avoiding the term \"author pays\" for the reasons I gave in my fourth comment to p. 10 above. 
Most OA journals don't charge author-side fees, and among those who do, most fees are not paid by authors. p. 13. You say, \"This has been at least partially mitigated with waiver fees for authors from developing countries and additional provisions in research grants....\" Yes. But again, don't forget that the majority of peer-reviewed journals are no-fee journals. See my fourth comment to p. 10 above. pp. 15-15. You say, \"Fortunately, it seems that funders and research organisations are moving in that direction. Since 2005, the number policies supporting OA publishing increased steadily. Consequently, it is now the responsibility of researchers to ensure OA to their publications either by choosing the green or the gold road.\" Since you're recapitulating some grounds for optimism here, I'd also reiterate the growth of rights-retention OA policies. See my second comment on p. 3 above. p. 16. You say, \"As Peter Suber, a leading voice in the OA movement, stated: 'As long as they do not have the power to stop Open Access, the toll-access publishers are not the enemy'.\" Thanks for quoting me. I wanted to give you the source to cite. Unfortunately, I don't think you're using an exact quote. Here's the closest one I can find: \"TA [toll-access] publishers are not the enemy. They are only unpersuaded. Even when they are opposed, and not merely unpersuaded, they are only enemies if they have the power to stop OA. No publisher has this power, or at least not by virtue of publishing under a TA business model. If we have enemies, they are those who can obstruct progress to OA. The only people who fit this description are friends of OA who are distracted from providing OA by other work or other priorities.\" It's from \"Two distractions,\" SPARC Open Access Newsletter, May 3, 2004.https://dash.harvard.edu/handle/1/4391169",
"responses": [
{
"c_id": "1948",
"date": "03 May 2016",
"name": "Alexander Grossmann",
"role": "Reader Comment",
"response": "This is a very useful reviewer's report which may assist the authors to modify and further improve their paper in its next version.\n\nA question to Peter about his claim that most OA journals charge no APC: You say \"About 70% of peer-reviewed OA journals charge no APCs at all. In that sense, the fee-based model is not the most prevalent. It's a minority model.\" Would you give a more recent reference for this statement, please? The cited publications mentioned in your report seem not to be up-to-date, not least because they were published 10 years ago. Meanwhile there are many more OA journals on the market, both hybrid and OA-only, in comparison with 2006. Moreover, we should consider the fact that new OA journals are mostly free of charge for the first two years, in their market introduction phase. After this period they are likely to become author-paid journals with APCs. I am not sure whether the latter observation has been reflected in those statistics, too.\n\nThanks for clarifying this point."
},
{
"c_id": "1971",
"date": "10 May 2016",
"name": "Jon Tennant",
"role": "Author Response",
"response": "Hi Alexander,\n\nI've been looking into this, and the most up-to-date statistics for this based on the DOAJ come from this source: http://citesandinsights.info/civ16i4.pdf. Figures here seem to suggest that 71% of journals in the DOAJ do not levy an APC. This is likely to change slightly with the updated 'crackdown' from the DOAJ (http://www.nature.com/news/open-access-index-delists-thousands-of-journals-1.19871), but I'll add a reference to this in.\n\nJon"
},
{
"c_id": "1997",
"date": "25 May 2016",
"name": "Peter Suber",
"role": "Reviewer Response",
"response": "Here are the latest stats from the DOAJ (May 24, 2016). https://goo.gl/LejTAw"
},
{
"c_id": "1998",
"date": "25 May 2016",
"name": "Jon Tennant",
"role": "Author Response",
"response": "Dear Alexander,\n\nI'm just posting the link that Peter was kind enough to send us yesterday, with updated statistics on this matter: https://plus.google.com/+PeterSuber/posts/HjrRDcrZS8p\n\nImportant points: here are the numbers as of May 24, 2016:\n\nTotal number of journals listed in DOAJ = 8,858\nYes (fee-based) = 1,424 = 16%\nNo (no-fee) = 2,601 = 29%\nNo info = 4,833 = 55%\n\nFurther references and information are provided in that post, and will be integrated into the next version of this manuscript.\n\nBest, Jon"
}
]
},
{
"id": "13291",
"date": "22 Apr 2016",
"name": "Paige Brown Jarreau",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThis is an interesting and timely review of the issue of open access to scientific literature. The two other reviewers have highlighted specific issues that should be addressed in the revision of this article, and I agree with these issues. I've added other notes below. I think this review article would benefit from a re-write to correct potentially biased language in support of open access and to round out the review with further evidence of open access impacts on citation rates, altmetrics, scientific literacy / public engagement and research quality.\n\nNotes: The authors cite “fostering a culture of greater scientific literacy” as a benefit of open access. While this is theoretically a benefit, has more or less open access in particular scientific fields been tied to greater or lesser scientific literacy in those areas? Is this potential benefit supported by research literature? The authors should be clear on what the evidence-based benefits of open access are, and also what the potential drawbacks are. References to related research should be provided on this topic. The structure the authors use for laying out their evidence and the language they use (e.g. “[the] case for Open Access”) appear to lean more toward the positive impacts / benefits of open access from the outset. 
The authors should be very careful to review the evidence first before making value-based statements or arguments about open access, even if the evidence-based benefits outweigh any potential drawbacks, or lack of significant benefits, in the end. Correct the typo in the following sentence: “[In] A longitudinal study Eysenbach (2006) compared…” The authors write: “One alternative explanation for the citation advantage could be that researchers choose to publish OA when a finding is more impactful, but empirical evidence contradicts this selection effect. Gargouri et al. (2010) compared citation counts within a cohort of OA articles that had either been self-selected as OA or mandated as OA (e.g., by funders). The study concluded that both were cited significantly more than non-OA articles. As such, these findings rule out a selection bias from authors as the cause for the citation advantage (Gargouri et al., 2010).” However, couldn’t funded research also have a tendency to be considered “more impactful,” because it was chosen in the first place to be funded and mandated as OA? The authors should discuss this, and whether there is any research that experimentally investigates whether open access provides a citation advantage. This could perhaps be suggested as future research. The authors should also discuss how/why some studies have found no citation advantage for OA papers. Related to social media mentions of research papers and citation counts, the authors might also consider citing Liang, X., Su, L. Y. F., Yeo, S. K., Scheufele, D. A., Brossard, D., Xenos, M., ... & Corley, E. A. (2014). Building Buzz (Scientists) Communicating Science in New Media Environments. Journalism & Mass Communication Quarterly, 1077699014550092. 
In discussing the economics of OA, the authors should also discuss any evidence of potential drawbacks for various stakeholders, such as where funds for pay-to-publish fees will come from and how these fees may affect individual researchers. Pay-to-publish models of OA may also burden early career researchers and researchers working in fields where research grants are more difficult to obtain. The authors do not discuss the potential impact of OA on research quality or reproducibility (reproducibility is only mentioned in the context of open data). As this has been a controversial issue in the past (e.g. the mentioned 'sting' operations) the authors should discuss any research that has investigated the impact of open access on the rigor of peer reviews, research quality, presence of replication studies / reproducibility, etc. There has also been some discussion of whether open peer review (like that used by F1000Research) affects the quality of reviews, e.g. (Rooyen et al. 1999). The authors should mention this and/or subsequent literature when addressing open peer review. There have also been studies on scientists' / journal article authors' perceptions and attitudes toward open access, e.g. Schroter and Tite (2005; 2006). The authors might consider summarizing some of this research, as it gives context to some of the existing barriers to open access and perceived drawbacks among researchers.\n\nSummary: In summary, the topic of this review is important and timely. However, this paper falls short of what I would expect from a systematic review in terms of systematically summarizing previous research findings related to the impact of open access on scientific publishing, public engagement with science, science literacy and altmetrics. The authors should be careful to hold back value judgements / arguments related to the case for open access until having systematically reviewed the evidence-based benefits, drawbacks, and/or lack of significant benefits. 
The authors should also provide some discussion of how we might objectively weigh any evidence-based benefits with potential drawbacks for various stakeholders including researchers and especially early career researchers. The authors should avoid summarizing mostly the findings of previous studies that find positive impacts of open access on the various domains of potential impact they consider in their paper. The evidence already presented in the paper is rigorous and detailed. However, I would recommend a revision that rounds this review out with more systematic evidence.",
"responses": []
},
{
"id": "13292",
"date": "25 Apr 2016",
"name": "Anne Tierney",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThis paper is a comprehensive review of the complexities of OA. I have come late to the discussion on this paper, and I find that the previous reviewers have been meticulous in their critique of the paper, to the point I have very little to add. However, there are a couple of points for consideration. What is the effect (if any) of the UK Research Excellence Framework on Open Access? To what extent is disciplinarity a factor in Open Access? I ask this question because of the high impact of the sciences and biomedical research, but wonder about Arts and Humanities (and other areas) as a comparison. As one of the other reviewers said, \"this doesn't consider the fact that the prestige of some journals is advertising in and of itself.\" While this is true of scientific journals, the same can't be said, for example, of education journals, so there is a lack of parity between disciplines. There was also an assumption of the willingness of reviewers to continue to offer their services freely. This aspect of OA (and subscription-based) publishing is hardly ever critiqued, but it is assumed to be part of the process.\n\nAll in all, this paper gives a lot of food for thought. I don't expect a rewrite of the paper, based on my comments, but I would welcome further discussion on where the authors (and readers) see OA going in the future.",
"responses": []
},
{
"id": "13295",
"date": "28 Apr 2016",
"name": "Christopher Chambers",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nTennant et al offer a timely and insightful review of the various effects of open access publishing on science and society. The paper is well structured and enjoyable to read. Although I am not an expert on open access publishing, I also found the discussion of the literature quite balanced and evidence-based.\n\nI have just three recommendations for revisions:\n\nIn the discussion of the OA citation advantage (which is excellent), the authors are very careful to avoid asserting a causal link between the OA status of a paper and the number of citations it generates. However, in my view, the conclusion of the Gargouri (2010) paper should be caveated. While the results of Gargouri are consistent with the absence of a self-selection bias in producing the OA advantage, they cannot rule it out. It might be the case that funders or institutions that mandate OA are also those that are more likely to support/host higher impact research. Furthermore, given that OA mandates are never 100% effective, perhaps authors are more likely to comply with a mandate for work they perceive to be of high impact. A useful addition to this section would be to specify the conditions under which we could determine whether or not OA causally influences citations. This would require a randomised controlled trial in which articles are randomly assigned to OA and non-OA routes. 
To my knowledge, no such trial has yet been undertaken, although the authors will be in a more informed position to know whether this is the case. There is not much discussion in the paper of the distinction between full OA and hybrid OA. Given that the APCs for hybrid OA articles tend to be substantially higher than those for full OA articles, this may warrant more prominent coverage in the economic case for OA. It is also relevant to the brief mention of the Finch Report, which (controversially) favours gold OA (including the hybrid route) over green OA. I was glad to see the link formed between OA and open science more generally, as many researchers and advocates draw a distinct (and somewhat arbitrary) line between these initiatives. In this context, it would perhaps be relevant to mention the TOP guidelines (https://cos.io/top/). TOP is very much an evolving entity so it would be interesting to consider the inclusion of an OA standard in future revisions of TOP.\n\nMinor: p13. This sentence is difficult to parse: \"Whereas this is hyper-variable, and strongly dependent on a range of factors, it is the fact that any public interest in science that is of importance.\"",
"responses": [
{
"c_id": "1952",
"date": "03 May 2016",
"name": "David Wojick",
"role": "Reader Comment",
"response": "Chris, given my comments below about how one-sided this article is, in favor of OA, I am curious as to how you think it is balanced? Balanced in what respect?"
},
{
"c_id": "2121",
"date": "01 Aug 2016",
"name": "Charles Oppenheim",
"role": "Reader Comment",
"response": "David, in view of your well-known antipathy to OA, don't you think you should have declared your competing interest?"
}
]
}
] | 1
|
https://f1000research.com/articles/5-632
|
https://f1000research.com/articles/5-2351/v1
|
20 Sep 16
|
{
"type": "Method Article",
"title": "Saccharomyces cerevisiae single-copy plasmids for auxotrophy compensation, multiple marker selection, and for designing metabolically cooperating communities",
"authors": [
"Michael Mülleder",
"Kate Campbell",
"Olga Matsarskaia",
"Florian Eckerstorfer",
"Markus Ralser",
"Michael Mülleder",
"Kate Campbell",
"Olga Matsarskaia",
"Florian Eckerstorfer"
],
"abstract": "Auxotrophic markers are useful tools in cloning and genome editing, enable a large spectrum of genetic techniques, as well as facilitate the study of metabolite exchange interactions in microbial communities. If unused background auxotrophies are left uncomplemented however, yeast cells need to be grown in nutrient supplemented or rich growth media compositions, which precludes the analysis of biosynthetic metabolism, and which leads to a profound impact on physiology and gene expression. Here we present a series of 23 centromeric plasmids designed to restore prototrophy in typical Saccharomyces cerevisiae laboratory strains. The 23 single-copy plasmids complement for deficiencies in HIS3, LEU2, URA3, MET17 or LYS2 genes and in their combinations, to match the auxotrophic background of the popular functional-genomic yeast libraries that are based on the S288c strain. The plasmids are further suitable for designing self-establishing metabolically cooperating (SeMeCo) communities, and possess a uniform multiple cloning site to exploit multiple parallel selection markers in protein expression experiments.",
"keywords": [
"Saccharomyces cerevisiae",
"centromeric plasmid",
"auxotrophic markers",
"self-establishing metabolically cooperating (SeMeCo) communities",
"metabolism"
],
"content": "Introduction\n\nAuxotrophic markers are single gene perturbations of essential metabolic pathways, that are exploited in the efficient selection of strains, plasmids and genome editing. Further, they are used in a diverse spectrum of genetic technologies, as their selection is efficient, their use economic, and in contrast to antibiotic selection markers, they do not revert by mutation1–3. In budding yeast, auxotrophic marker alleles important for histidine, leucine, uracil, methionine, lysine, adenine and tryptophan metabolism have been crossed or cloned into the popular S. cerevisiae laboratory strains. Harbouring 5 auxotrophic marker mutations, his3Δ1, leu2Δ0, ura3Δ0, lys2Δ0 or met17Δ04–9, strains derived from the S288c background served as the parents of the yeast gene-deletion collection5, and subsequent genetic libraries that are based on this principle. These libraries include gene deletion mutants5,10,11, genetically introduced GFP, GST, and TAP fusions12–14, transposon insertion mutants15, decreased abundance by mRNA perturbation (DAmP) mutants16, Tet-promoter controlled expression17 and the ts-alleles for essential genes18,19. Furthermore, systematic strain collections of other fungal species including Schizosaccharomyces pombe20,21 and the pathogens Candida glabrata22, Candida albicans23 or Neurospora crassa24, all involve use of auxotrophic markers as well. As a result, auxotrophic backgrounds are omnipresent in a large number of functional genomic experiments, and have been used in a countless number of small-scale experiments, resulting in their ubiquity across yeast molecular biology literature.\n\nIn order for a metabolic gene to function as an auxotrophic marker, it needs to be part of a metabolic pathway for which the cells possess an extracellular uptake and a sensing mechanism for the product of the interrupted pathway. Auxotrophic marker mutations are hence associated with metabolites that are readily taken up from the environment. 
This includes the metabolites exchanged between cells in communal growth, in particular amino acids25. The biosynthesis of amino acids accounts for up to half of the metabolic flux towards biomass, with amino acids making up to 2/3rds of the total mass of polar metabolites26,27. As a consequence, a shift from self-synthesis to uptake, as enforced by auxotrophy, is not without biological consequence. In fact, most of the genome-wide gene expression is sensitive to epistatic interaction within the Saccharomyces metabolic-genetic background28. The physiological effects arising from auxotrophy and complemented marker genes have been highlighted by several yeast labs for more than a decade2,29–33. Most importantly, to grow auxotrophic strains, amino acids and nucleotides need to be added to the growth medium. Nutrient supplementation affects not only the interrupted pathway itself, but the biosynthesis of other essential compounds, in particular the enzymatic cofactors, due to the metabolic network responding to perturbation at the systems level, and hence affecting multiple metabolic pathways in parallel2,34,35. Cell growth has consistently been shown to be affected by nutrient supplementation, reflecting the variation in energy costs between biosynthesis and uptake/incorporation of the provided nutrients2,34. In batch cultures, supplements are also consumed at different rates33. As a consequence, nutrient availability changes during batch culture growth, rendering cells physiologically different between growth phases. In classic molecular biology, the use of a matched auxotrophic background as a wild-type control has been considered sufficient to account for the effects of auxotrophy2. Transcriptomic, proteomic and metabolomic analyses of complemented auxotrophs show, however, that this is not the case; the metabolic background deficiencies interact epistatically with the majority of the coding genome and in a context dependent manner. 
The biological explanation for this phenomenon is that metabolism is intrinsically intertwined with the gene expression machinery and is dependent on the metabolic flux distribution. The same gene deletion introduced in a different auxotrophic background can hence cause an entirely different transcriptional response, so that a matched parent background is not able to compensate for these effects28.\n\nWe here present a series of single-copy plasmids derived from the pHLUM minichromosome, which can be used for restoring prototrophy as well as for testing the metabolic capacity of budding yeast, by compensating for the possible combinations of his3, leu2, ura3 and met17 (or lys2) deficiencies. For their use in S. cerevisiae, the plasmids contain a centromeric origin for single-copy propagation and express the marker genes under native S. cerevisiae promoter sequences39. To exploit multiple markers to reduce the plasmid segregation problem in protein expression experiments, we further introduced the uniform multiple cloning site of the pRS300 vector series. For cloning and manipulation in E. coli, the shuttle vectors contain a bacterial high-copy replication origin (pUC) and an ampicillin resistance marker. Finally, the pHLUM plasmid series contains an N-terminal fragment (α-peptide) of the E. coli beta-galactosidase (lacZ) for blue-white selection in appropriate cloning strains40, and an F1 origin for use in phage libraries. These plasmids can be used for complementing unused auxotrophies in laboratory yeast strains, to express proteins exploiting multiple parallel selection markers, and to study metabolite exchange interactions in synthetic yeast communities.\n\n\nMaterials and methods\n\nEscherichia coli strain DH5α was used as the plasmid host, and strains containing the recombinant plasmids were selected on LB medium with ampicillin (100 µg/ml) and grown at 37°C. Two commonly used S. 
cerevisiae strains in the S288c background, BY4741 (MATa his3Δ1 leu2Δ0 met17Δ0 ura3Δ0) and BY4742 (MATα his3Δ1 leu2Δ0 lys2Δ0 ura3Δ0)4, were used to test for genetic complementation of auxotrophic requirements by the plasmids created. The strains were grown in YPD (2% glucose, 20 g/l peptone (Bacto™), 10 g/l yeast extract (Bacto™)) or synthetic minimal (SM) medium (2% glucose, 6.8 g/l yeast nitrogen base), as indicated. To enable growth of auxotrophic strains, the SM medium was supplemented with 20 mg/l histidine, 60 mg/l leucine, 20 mg/l uracil, 20 mg/l methionine and/or 50 mg/l lysine, as indicated.\n\nFor site-directed mutagenesis, the QuikChange Lightning kit (Agilent) was used according to the manufacturer's guidelines, taking 50 ng of plasmid DNA as a template and 6.3 µM of each oligonucleotide (primers O09-O12, Table 1) in a total volume of 25 µl. The manufacturer's recommended cycling parameters, with a 2.5 min extension time, were followed.\n\nAll other enzymes for molecular cloning were purchased from New England Biolabs (NEB) and used as instructed. Genomic DNA was extracted from yeast by repeated freeze-thawing of cells in a lysis buffer, as described previously36. DNA from genomic and plasmid templates was amplified with Phusion High-Fidelity DNA Polymerase (Finnzymes) supplemented with the CES combinatorial enhancer solution to increase primer specificity, as described previously37.\n\nPlasmids were isolated both from E. coli and S. cerevisiae with the QIAprep Spin Miniprep Kit (Qiagen). For the latter, a Qiagen protocol (Michael Jones, Chugai Institute for Molecular Medicine, Ibaraki, Japan, https://www.qiagen.com/gb/resources/resourcedetail?id=5b59b6b3-f11d-4215-b3f7-995a95875fc0&lang=en) was used. The protocol employs 425–600 μm acid-washed glass beads (Sigma) for mechanical lysis (30 sec, 6.5 m/s in a FastPrep®-24 Instrument (MP Biomedicals)). 
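The supplementation scheme above can be captured in a short helper for scaling media recipes (a hypothetical sketch, not part of the published protocol; the concentrations are those stated in the text):

```python
# Supplement concentrations (mg/l) used with synthetic minimal (SM) medium,
# as stated in the text; only the supplements a given strain requires are added.
SUPPLEMENTS_MG_PER_L = {
    "histidine": 20,
    "leucine": 60,
    "uracil": 20,
    "methionine": 20,
    "lysine": 50,
}

def supplement_mass_mg(supplement: str, volume_l: float) -> float:
    """Mass (mg) of a supplement needed for a given volume of SM medium."""
    return SUPPLEMENTS_MG_PER_L[supplement] * volume_l

print(supplement_mass_mg("leucine", 0.5))  # 30.0 mg for 500 ml
```

The helper name and structure are illustrative only; the underlying numbers come directly from the Materials and methods.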
For homologous recombination to construct pLUK, two complementary primers were designed with 5’ and 3’ leader sequences homologous to the plasmid regions 5’ and 3’ of the HIS3 marker gene (primers O15-O16, Table 1). The oligonucleotides were annealed by heating to 95°C and gradual cooling to RT. The plasmid pHLUK was linearized by cutting inside the HIS3 sequence with MscI. The yeast strain BY4741 was transformed with 100 ng of cut vector and 500 ng of annealed primers, and transformants were selected on SM medium supplemented with histidine. Clones with successful homologous recombination events were identified by their failure to grow on SM medium without histidine.\n\nYeast strains were transformed with a high-efficiency lithium acetate/PEG/salmon sperm DNA protocol, using 300 ng of plasmid per reaction38.\n\n\nResults\n\nFor the genetic complementation of the commonly used auxotrophic lesions in HIS3, LEU2, URA3 and MET17 we have previously constructed the pHLUM minichromosome (Addgene ID 40276)33. The plasmid is based on pRS31339 and contains a centromere, an autonomous replication sequence and the HIS3 marker, all derived from the pRS313 backbone. The additional marker genes LEU2, URA3 and MET17 were cloned from other popular yeast plasmids (pRS42539, p426GPD41 and pRS4114) and were placed between unique restriction sites, so that they can be individually excised33 (primers O01 - O06, Table 1). The three-gene insert is flanked by BamHI and XhoI sites, and unique restriction sites AscI and SphI were designed between LEU2/URA3 and URA3/MET17, respectively, to allow for selective excision of the individual markers. However, in this original version of pHLUM, the HIS3 marker cannot be removed in a straightforward manner.\n\nIn order to improve the usefulness of the minichromosome, we decided to redesign the plasmid backbone, replacing all 4 marker genes but leaving the multiple cloning site of pRS313 intact. 
With a site-directed mutagenesis strategy, we added an AatII restriction site 5’ and XhoI and BamHI sites 3’ of HIS3 (primers O09-O10, Table 1). In the same reaction we eliminated the BamHI and XhoI recognition sites from the multiple cloning site by exchanging two bases while preserving the sequence of the lacZ ɑ-peptide (primers O11-O12, Table 1). With the new restriction sites available and XhoI and BamHI absent from the multiple cloning site, the DNA fragment containing LEU2, URA3 and MET17 could be excised from pHLUM with XhoI and BamHI and integrated 3’ of the HIS3 gene on the modified pRS313 vector. The resulting plasmid was named pHLUM (plasmid HIS3 LEU2 URA3 MET15) (version 2). It maintains 8 unique endonuclease recognition sites in the multiple cloning site and the capacity for colorimetric lacZ complementation assays (Figure 1A).\n\nPhysical maps of pHLUM (version 2) and pHLUK minichromosomes, the centromeric parents for the generated S. cerevisiae vector series. pHLUM (version 2) expresses HIS3, LEU2, URA3 and MET17 to complement auxotrophies in BY4741/MATa strains of the knock-out collection, while pHLUK expresses HIS3, LEU2, URA3 and LYS2, for the BY4742/MATα series.\n\nIn the typical MATα derivatives of the S. cerevisiae gene deletion collection (i.e. BY4742), LYS2 is deleted while the MET17 marker is wild-type. We used pHLUM (version 2) as a template and exchanged the MET17 marker for LYS2 to create an analogous vector series (pHLUK). The LYS2 coding sequence contains recognition sites for both XhoI and BamHI. We therefore removed the BamHI site from pHLUM (version 2) and introduced at the same position a recognition sequence for PmeI. To this end we synthesised two complementary oligonucleotides (O13-O14, Table 1) and annealed them by heating to 95°C and gradual cooling to RT, to yield a small double-stranded DNA segment containing a PmeI site and cohesive ends compatible with the BamHI-digested pHLUM (version 2). 
The digested vector was dephosphorylated with Antarctic phosphatase (NEB), the annealed primers were phosphorylated with polynucleotide kinase (NEB), and the two were then ligated with T4 DNA ligase, abolishing the recognition site for BamHI. Next, we amplified the LYS2 gene from BY4741 genomic DNA (O07-O08, Table 1), including the promoter and terminator regions according to the yeast promoter atlas42. Primer O07 contained recognition sites for SalI and MluI, and primer O08 for SphI (Table 1). The modified plasmid was digested with XhoI and SphI, and the MET17 marker was replaced with the LYS2 PCR product digested with SalI/SphI. The cohesive ends of SalI and XhoI DNA fragments are compatible, and their recognition sites are abolished upon ligation. The MluI site allows digestion of the vector between LYS2 and HIS3 and excision of either marker (Figure 1B).\n\nThe unique endonuclease recognition sites between each of the marker genes facilitated the creation of 21 derivatives of pHLUM (version 2) and pHLUK containing between 1 and 3 marker genes, in all possible combinations. The marker genes were excised by digestion with the appropriate endonucleases, the 3’ and 5’ overhangs were removed or filled in with T4 DNA polymerase, and the plasmid was re-ligated with T4 DNA ligase (Table 2, Figure 2). The plasmid pLUK was generated using homologous recombination in yeast.\n\nSD, site-directed mutagenesis; RE, restriction endonuclease; BE, blunt end ligation; HR, homologous recombination; AP, annealed primer cloning\n\n(A) The plasmids are generated from pHLUM (version 2) containing HIS3, LEU2, URA3 and MET17, and (B) pHLUK containing HIS3, LEU2, URA3 and LYS2, expressed under the control of S. cerevisiae promoter and terminator sequences. Unique restriction sites between the marker genes and in the multiple cloning site (M) are indicated in the parent pHLUM (version 2) and pHLUK (Figure 1). 
Loss or acquisition of unique restriction sites is highlighted in the individual vector maps.\n\nThe completed plasmids were re-sequenced, which led to some corrections compared to the GenBank-deposited version of pRS313 (GenBank: U03439.1) (Supplementary material). Successful genetic complementation of the auxotrophic markers was demonstrated by transforming BY4741 and BY4742 with the generated plasmids and scoring their growth on media selective for histidine, leucine, uracil and methionine or lysine, respectively. The plasmids restored all auxotrophies in BY4741 and BY4742 in the desired combinations (Figure 3). Further, we tested the functionality of the lacZ α-peptide sequence retained in the pHLUM (version 2) series for blue-white selection by transforming the plasmids into DH5α (Figure 3C). On X-Gal-containing medium, a blue colour shift was observed.\n\n(A) S. cerevisiae haploid strains BY4741 (MATa his3Δ1 leu2Δ0 met17Δ0 ura3Δ0) and (B) BY4742 (MATα his3Δ1 leu2Δ0 lys2Δ0 ura3Δ0)4 were transformed with each of the 23 centromeric plasmids from the pHLUM (version 2) and pHLUK series and spotted onto synthetic minimal medium containing four or all five of the supplements 20 mg/l histidine, 60 mg/l leucine, 20 mg/l uracil, 20 mg/l methionine or 50 mg/l lysine, as indicated in black in the left-hand key. (C) Use of the plasmid series for colorimetric lacZ assays: DH5α transformed with the lacZ-containing plasmid pUC19, the lacZ-lacking plasmid pET-20b(+) and pHLUM (version 2) (pHLUMv2), parent of the 23-plasmid series. A shift from white to blue on X-Gal-containing LB medium indicates the presence of the partial lacZ sequence.\n\n\nDiscussion\n\nDue to the physiological impact of auxotrophy, one would in an ideal world conduct all yeast experiments in prototrophic backgrounds and, if the objective of the experiment is to study a physiological process, use cells grown in minimal nutrient medium. 
However, most existing Saccharomyces lab strain resources are auxotrophic, and a majority of genetic techniques depend on the ability to select with genetic markers. The switch to antibiotic resistance markers is not a viable alternative to auxotrophies in many cases, as antibiotics can be expensive, are prone to persistence of sensitive cells, and, by interfering with translation or transcription, have strong biological effects on their own43,44. We have noticed in our previous work that a useful workaround, or compromise, for many applications is to complement the unused auxotrophic marker mutations with a multi-gene-containing, single-copy, centromeric plasmid (minichromosome) that compensates for the metabolic deficiencies present in the cell28,33. By nature, introducing an episome adds a new constraint due to its segregation. However, we found that the four metabolic genes on the pHLUM minichromosome provide a selective advantage also under nutrient-rich growth conditions, so that cells retain the vector even in the absence of selection pressure33. Further, we tested for copy number effects and found that expression of HIS3, LEU2, MET15 and URA3 from the minichromosome fully satisfies the biosynthetic needs33. A situation in which all cells are provided with a high concentration of nutrients, as would occur with media supplementation, may also be less native to cell communities, where usually a certain fraction of cells is dependent on metabolite exchange25,45. For the typical experiment, the constraints arising from segregation of a single-copy minichromosome that restores prototrophy are hence much smaller than the problems caused by the use of nutrient-supplemented media and auxotrophic strain backgrounds.\n\nTo support the work with prototrophic yeasts, we present here 23 minichromosomal vectors for restoring prototrophy in popular laboratory strains of budding yeast. 
These plasmids compensate for histidine (HIS3), leucine (LEU2), uracil (URA3), methionine (MET17) and lysine (LYS2) deficiencies, and combinations thereof, which have been introduced into many yeast strains derived from the S288c background. Furthermore, the multiple cloning array is compatible with the widely used pRS300 plasmid series and provides unique restriction sites to facilitate cloning of genes of interest. The different marker genes of these vectors also enable expression analysis in various genetic backgrounds. The main intended application of these plasmids is to restore prototrophy in laboratory strains, so that experiments can be conducted in minimal medium without nutrient supplementation. In this way, the effect of amino acid and nucleotide biosynthetic metabolism, which is responsible for a major fraction of the metabolic flux of a cell and has a profound impact on gene expression and physiology28, can be studied.\n\nAnother application for these plasmids is to study metabolic cooperation in self-establishing yeast communities (SeMeCo). It has been known for a long time that a subpopulation of plasmid-free cells can co-grow alongside plasmid-containing cells, despite nutrient selection2,46–48. In our lab we have exploited this property to study metabolite exchange interactions between cells, and developed a system of self-establishing metabolically cooperating communities (SeMeCo) in which a series of auxotrophs cooperate to enable the growth of a yeast community25,45. This system exploits plasmid segregation: starting from an initially self-supporting cell, the culture grows progressively into an increasingly heterogeneous population that is able to proliferate on the basis of nutrient exchange occurring between yeast cells. The progressive self-establishment overcomes a failure that is typically observed when yeast auxotrophs are forced to establish a bilateral cooperation. 
Other than through self-establishment, this failure can be overcome by genetically modifying yeast cells to artificially overproduce the metabolites that need to be exchanged. The synthetic communities generated in this way have been intensively studied and serve as a model for ecological metabolite exchange interactions49–52. The new vector series, having multiple auxotrophic markers on single centromeric plasmids, can support the design of such communities, as it avoids the recombination that can occur when multiple plasmids are used in parallel to obtain the desired auxotrophic background.\n\nFinally, the uniform multiple cloning site (MCS) in the plasmid series allows for the inclusion of marker proteins, such as GFP or beta-galactosidase, to track individual cell types in SeMeCo communities, which reveal phenotypic heterogeneity at the single-cell level45. This MCS further allows the use of these plasmids for the recombinant expression of proteins. Here, one can profit from multiple auxotrophic markers on one plasmid to improve selection and reduce the plasmid segregation rate, so that (as long as no disadvantageous protein is expressed) the plasmids can be maintained in rich medium in the absence of selection pressure33. This strategy of using multiple markers in parallel can further be exploited to increase selection pressure, to counteract the well-known issue of clonal selection phenotypes that emerge when overexpressing recombinant proteins.\n\nIn summary, to test both the effects of prototrophy and the metabolic capacity of budding yeast, to design self-establishing metabolically cooperating communities, and to profit from multiple selection markers when expressing proteins, we present a series of centromeric plasmids that can compensate for histidine (HIS3), leucine (LEU2), uracil (URA3), methionine (MET17) or lysine (LYS2) deficiencies in 23 possible combinations. 
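The count of 23 plasmids (the two four-marker parents pHLUM and pHLUK plus the 21 derivatives carrying 1-3 markers) can be verified by enumerating marker subsets, since combinations containing only HIS3, LEU2 and/or URA3 are shared between the two parent backbones. A short illustrative sketch (not part of the published protocol):

```python
from itertools import combinations

# Shared core markers plus the strain-specific fourth marker
core = ["HIS3", "LEU2", "URA3"]
pHLUM_markers = core + ["MET17"]  # for BY4741 (MATa) derivatives
pHLUK_markers = core + ["LYS2"]   # for BY4742 (MATalpha) derivatives

plasmids = set()
for markers in (pHLUM_markers, pHLUK_markers):
    # the parent (all four markers) plus derivatives with 1-3 markers
    for r in range(1, len(markers) + 1):
        for combo in combinations(markers, r):
            plasmids.add(frozenset(combo))

print(len(plasmids))  # 23
```

Each backbone yields 15 non-empty marker subsets; the 7 MET17/LYS2-free subsets coincide, giving 15 + 15 - 7 = 23 distinct plasmids.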
These vectors are accessible individually (Table 2) or as a kit from Addgene (www.addgene.org/kits/prototrophy/). We hope they benefit the community when analysing the importance of biosynthetic metabolism for gene function, gene expression, physiology and metabolite exchange.\n\n\nData availability\n\nThe full sequences of the 23 plasmids are deposited in Addgene under the ID numbers listed in Table 2.",
"appendix": "Author contributions\n\n\n\nMM designed the plasmids, MM, KC, OM, and FE constructed the plasmids, MM, KC, tested the plasmids, MR designed the study, MM, KC and MR wrote the paper.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by the Francis Crick Institute which receives its core funding from Cancer Research UK (FC001134), the UK Medical Research Council (FC001134), and the Wellcome Trust (FC001134), as well as grant funding from the Wellcome Trust (RG 093735/Z/10/Z, 200829/Z/16/Z) and the ERC (Starting Grant 260809) to MR.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe thank Eric Perkins (Addgene) for supporting this project by sequencing of the 23 plasmids.\n\n\nSupplementary material\n\nPlasmid sequences used in Figure 1 and Figure 2. Plasmid sequences for the 23 vectors have been assembled from the pHLUM and pRS300 vector sequences, respectively, and verified and corrected upon resequencing.\n\nClick here to access the data.\n\n\nReferences\n\nTong AH, Evangelista M, Parsons AB, et al.: Systematic genetic analysis with ordered arrays of yeast deletion mutants. Science. 2001; 294(5550): 2364–8. PubMed Abstract | Publisher Full Text\n\nPronk JT: Auxotrophic yeast strains in fundamental and applied research. Appl Environ Microbiol. 2002; 68(5): 2095–100. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRothstein RJ: One-step gene disruption in yeast. Methods Enzymol. 1983; 101: 202–11. PubMed Abstract | Publisher Full Text\n\nBrachmann CB, Davies A, Cost GJ, et al.: Designer deletion strains derived from Saccharomyces cerevisiae S288C: a useful set of strains and plasmids for PCR-mediated gene disruption and other applications. Yeast. 1998; 14(2): 115–32. 
PubMed Abstract | Publisher Full Text\n\nWinzeler EA, Shoemaker DD, Astromoff A, et al.: Functional characterization of the S. cerevisiae genome by gene deletion and parallel analysis. Science. 1999; 285(5429): 901–6. PubMed Abstract | Publisher Full Text\n\nFink GR: GENE-ENZYME RELATIONS IN HISTIDINE BIOSYNTHESIS IN YEAST. Science. 1964; 146(3643): 525–7. PubMed Abstract | Publisher Full Text\n\nLacroute F: Regulation of pyrimidine biosynthesis in Saccharomyces cerevisiae. J Bacteriol. 1968; 95(3): 824–32. PubMed Abstract | Free Full Text\n\nMasselot M, De Robichon-Szulmajster H: Methionine biosynthesis in Saccharomyces cerevisiae. I. Genetical analysis of auxotrophic mutants. Mol Gen Genet. 1975; 139(2): 121–32. PubMed Abstract | Publisher Full Text\n\nSatyanarayana T, Umbarger HE, Lindegren G: Biosynthesis of branched-chain amino acids in yeast: regulation of leucine biosynthesis in prototrophic and leucine auxotrophic strains. J Bacteriol. 1968; 96(6): 2018–24. PubMed Abstract | Free Full Text\n\nGiaever G, Chu AM, Ni L, et al.: Functional profiling of the Saccharomyces cerevisiae genome. Nature. 2002; 418(6896): 387–91. PubMed Abstract | Publisher Full Text\n\nRyan O, Shapiro RS, Kurat CF, et al.: Global gene deletion analysis exploring yeast filamentous growth. Science. 2012; 337(6100): 1353–6. PubMed Abstract | Publisher Full Text\n\nHuh W-K, Falvo JV, Gerke LC, et al.: Global analysis of protein localization in budding yeast. Nature. 2003; 425(6959): 686–91. PubMed Abstract | Publisher Full Text\n\nSopko R, Huang D, Preston N, et al.: Mapping pathways and phenotypes by systematic gene overexpression. Mol Cell. 2006; 21(3): 319–30. PubMed Abstract | Publisher Full Text\n\nGhaemmaghami S, Huh W-K, Bower K, et al.: Global analysis of protein expression in yeast. Nature. 2003; 425(6959): 737–41. 
PubMed Abstract | Publisher Full Text\n\nRoss-Macdonald P, Coelho PS, Roemer T, et al.: Large-scale analysis of the yeast genome by transposon tagging and gene disruption. Nature. 1999; 402(6760): 413–8. PubMed Abstract | Publisher Full Text\n\nBreslow DK, Cameron DM, Collins SR, et al.: A comprehensive strategy enabling high-resolution functional analysis of the yeast genome. Nat Methods. 2008; 5(8): 711–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMnaimneh S, Davierwala AP, Haynes J, et al.: Exploration of essential gene functions via titratable promoter alleles. Cell. 2004; 118(1): 31–44. PubMed Abstract | Publisher Full Text\n\nKofoed M, Milbury KL, Chiang JH, et al.: An Updated Collection of Sequence Barcoded Temperature-Sensitive Alleles of Yeast Essential Genes. G3 (Bethesda). 2015; 5(9): 1879–87. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBen-Aroya S, Coombes C, Kwok T, et al.: Toward a comprehensive temperature-sensitive mutant repository of the essential genes of Saccharomyces cerevisiae. Mol Cell. 2008; 30(2): 248–58. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDecottignies A, Sanchez-Perez I, Nurse P: Schizosaccharomyces pombe essential genes: a pilot study. Genome Res. 2003; 13(3): 399–406. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKim DU, Hayles J, Kim D, et al.: Analysis of a genome-wide set of gene deletions in the fission yeast Schizosaccharomyces pombe. Nat Biotechnol. 2010; 28(6): 617–23. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchwarzmüller T, Ma B, Hiller E, et al.: Systematic phenotyping of a large-scale Candida glabrata deletion collection reveals novel antifungal tolerance genes. PLoS Pathog. 2014; 10(6): e1004211. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNoble SM, French S, Kohn LA, et al.: Systematic screens of a Candida albicans homozygous deletion library decouple morphogenetic switching and pathogenicity. Nat Genet. 2010; 42(7): 590–8. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nColot HV, Park G, Turner GE, et al.: A high-throughput gene knockout procedure for Neurospora reveals functions for multiple transcription factors. Proc Natl Acad Sci USA. 2006; 103(27): 10352–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCampbell K, Vowinckel J, Mülleder M, et al.: Self-establishing communities enable cooperative metabolite exchange in a eukaryote. eLife. 2015; 4: pii: e09943. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNissen TL, Schulze U, Nielsen J, et al.: Flux distributions in anaerobic, glucose-limited continuous cultures of Saccharomyces cerevisiae. Microbiology. 1997; 143(Pt 1): 203–18. PubMed Abstract | Publisher Full Text\n\nPark JO, Rubin SA, Xu YF, et al.: Metabolite concentrations, fluxes and free energies imply efficient enzyme usage. Nat Chem Biol. 2016; 12(7): 482–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAlam MT, Zelezniak A, Mülleder M, et al.: The metabolic background is a global player in Saccharomyces gene expression epistasis. Nat Microbiol. 2016; 1: 15030. PubMed Abstract | Publisher Full Text\n\nPetek Çakar Z, Sauer U, Bailey JE: Metabolic engineering of yeast: the perils of auxotrophic hosts. Biotechnol Lett. 1999; 21(7): 611–6. Publisher Full Text\n\nCanelas AB, Harrison N, Fazio A, et al.: Integrated multilaboratory systems biology reveals differences in protein metabolism between two reference yeast strains. Nat Commun. 2010; 1: 145. PubMed Abstract | Publisher Full Text\n\nGupta JC, Mukherjee KJ: Stability studies of recombinant Saccharomyces cerevisiae in the presence of varying selection pressure. Biotechnol Bioeng. 2002; 78(5): 475–88. PubMed Abstract | Publisher Full Text\n\nKokina A, Kibilds J, Liepins J: Adenine auxotrophy--be aware: some effects of adenine auxotrophy in Saccharomyces cerevisiae strain W303-1A. FEMS Yeast Res. 2014; 14(5): 697–707. 
PubMed Abstract | Publisher Full Text\n\nMülleder M, Capuano F, Pir P, et al.: A prototrophic deletion mutant collection for yeast metabolomics and systems biology. Nat Biotechnol. 2012; 30(12): 1176–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNiederberger P, Miozzari G, Hütter R: Biological role of the general control of amino acid biosynthesis in Saccharomyces cerevisiae. Mol Cell Biol. 1981; 1(7): 584–93. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWagner A, Fell DA: The small world inside large metabolic networks. Proc Biol Sci. 2001; 268(1478): 1803–10. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHarju S, Fedosyuk H, Peterson KR: Rapid isolation of yeast genomic DNA: Bust n’ Grab. BMC Biotechnol. 2004; 4(1): 8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRalser M, Querfurth R, Warnatz HJ, et al.: An efficient and economic enhancer mix for PCR. Biochem Biophys Res Commun. 2006; 347(3): 747–51. PubMed Abstract | Publisher Full Text\n\nGietz RD, Schiestl RH: High-efficiency yeast transformation using the LiAc/SS carrier DNA/PEG method. Nat Protoc. 2007; 2(1): 31–4. PubMed Abstract | Publisher Full Text\n\nSikorski RS, Hieter P: A system of shuttle vectors and yeast host strains designed for efficient manipulation of DNA in Saccharomyces cerevisiae. Genetics. 1989; 122(1): 19–27. PubMed Abstract | Free Full Text\n\nCronan JE Jr, Narasimhan ML, Rawlings M: Insertional restoration of beta-galactosidase alpha-complementation (white-to-blue colony screening) facilitates assembly of synthetic genes. Gene. 1988; 70(1): 161–70. PubMed Abstract | Publisher Full Text\n\nMumberg D, Müller R, Funk M: Yeast vectors for the controlled expression of heterologous proteins in different genetic backgrounds. Gene. 1995; 156(1): 119–22. PubMed Abstract | Publisher Full Text\n\nChang DT, Huang CY, Wu CY, et al.: YPA: an integrated repository of promoter features in Saccharomyces cerevisiae. Nucleic Acids Res. 
2011; 39(Database issue): D647–52. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAndersson DI, Levin BR: The biological cost of antibiotic resistance. Curr Opin Microbiol. 1999; 2(5): 489–93. PubMed Abstract | Publisher Full Text\n\nAndersson DI: The biological cost of mutational antibiotic resistance: any practical conclusions? Curr Opin Microbiol. 2006; 9(5): 461–5. PubMed Abstract | Publisher Full Text\n\nCampbell K, Vowinckel J, Ralser M: Cell-to-cell heterogeneity emerges as consequence of metabolic cooperation in a synthetic yeast community. Biotechnol J. 2016;. 11(9): 1169–78. PubMed Abstract | Publisher Full Text\n\nChristianson TW, Sikorski RS, Dante M, et al.: Multifunctional yeast high-copy-number shuttle vectors. Gene. 1992; 110(1): 119–22. PubMed Abstract | Publisher Full Text\n\nMeinander NQ, Hahn-Hägerdal B: Fed-batch xylitol production with two recombinant Saccharomyces cerevisiae strains expressing XYL1 at different levels, using glucose as a cosubstrate: a comparison of production parameters and strain stability. Biotechnol Bioeng. 1997; 54(4): 391–9. PubMed Abstract | Publisher Full Text\n\nSardonini CA, Dibiasio D: A model for growth of Saccharomyces cerevisiae containing a recombinant plasmid in selective media. Biotechnol Bioeng. 1987; 29(4): 469–75. PubMed Abstract | Publisher Full Text\n\nMomeni B, Brileya KA, Fields MW, et al.: Strong inter-population cooperation leads to partner intermixing in microbial communities. eLife. 2013; 2: e00230. PubMed Abstract | Publisher Full Text | Free Full Text\n\nShou W, Ram S, Vilar JM: Synthetic cooperation in engineered yeast populations. Proc Natl Acad Sci USA. 2007; 104(6): 1877–82. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHoek TA, Axelrod K, Biancalani T, et al.: Resource Availability Modulates the Cooperative and Competitive Nature of a Microbial Cross-Feeding Mutualism. PLoS Biol. 2016; 14(8): e1002540. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nMüller MJ, Neugeboren BI, Nelson DR, et al.: Genetic drift opposes mutualism during spatial population expansion. Proc Natl Acad Sci USA. 2014; 111(3): 1037–42. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "16449",
"date": "10 Oct 2016",
"name": "Campbell Gourlay",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe paper nicely describes the generation and testing of a useful plasmid set that allows complementation of commonly used auxotrophic markers in the budding yeast system. The paper provides clear and adequate descriptions of their construction and therefore represents a solid piece of work that is ready for indexing as it is currently presented.",
"responses": []
},
{
"id": "16547",
"date": "24 Oct 2016",
"name": "Amy A. Caudy",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe Ralser group has thoughtfully designed and created a comprehensive set of plasmids suitable for the complementation of various combinations of the widely used leucine (leu2), uracil (ura3), methionine (met17, formerly known as met15), histidine (his3), and lysine (lys2) markers. Auxotrophy can have significant effects on a variety of phenotypes, as shown by this group and others (and as well-described in the introduction). This new collection of plasmids provides a convenient source of auxotrophic markers supplied from their endogenous promoters on a CEN/ARS plasmid, which is near single-copy in most situations.\nNot content to make a simple derivative of the pHLUM plasmid they previously created in order to build a prototrophic version of the yeast deletion collection, the group went to significant efforts to preserve several sites including BamHI and XhoI within the blue/white compatible multiple cloning site of their base vector. These plasmids have been fully sequenced in collaboration with Addgene, revealing a handful of mutations not present in the original (theoretical) sequence of the parent plasmid deposited in Genbank more than a decade ago.\nOne minor point could be mentioned for clarity -\nThe introduction mentions supplementation of \"nucleotides\" but it is important to note that yeast cells do not take up (phosphorylated) nucleotides directly. Instead, these are dephosphorylated by extracellular phosphatases to nucleosides and then imported. 
There are also transporters that can efficiently take in nucleosides such as uridine.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2351
|
https://f1000research.com/articles/5-1551/v1
|
30 Jun 16
|
{
"type": "Research Article",
"title": "Targeted pharmacotherapy after somatic cancer mutation screening",
"authors": [
"Thomas M. Polasek",
"Karen Ambler",
"Hamish S. Scott",
"Michael J. Sorich",
"Peter A. Kaub",
"Andrew Rowland",
"Michael D. Wiese",
"Ganessan Kichenadasse",
"Karen Ambler",
"Hamish S. Scott",
"Michael J. Sorich",
"Peter A. Kaub",
"Andrew Rowland",
"Michael D. Wiese",
"Ganessan Kichenadasse"
],
"abstract": "Many patients with solid tumours are treated with targeted pharmacotherapy based on the results of genetic testing (‘precision medicine’). This study investigated the use of targeted drugs after OncoFOCUS™+KIT screening in patients with malignant melanoma, non-small cell lung cancer and metastatic colorectal cancer, and then audited the results against the National Comprehensive Cancer Network (NCCN) guidelines. Patients who were not indicated for targeted pharmacotherapy did not receive such treatment (99%, 100/101). Of the patients indicated for targeted drugs, 79% (33/42) received treatment according to NCCN guidelines. In 48% (20/42) of these patients the results from OncoFOCUS™+KIT screening were required for targeted drug selection. This study highlights the growing importance of precision medicine approaches in directing pharmacotherapy in medical oncology.",
"keywords": [
"targeted pharmacotherapy",
"oncology",
"precision medicine",
"dabrafenib",
"erlotinib",
"bevacizumab",
"malignant melanoma",
"non-small cell lung cancer",
"metastatic colorectal cancer"
],
"content": "Introduction\n\nOver the last 20 years the molecular profiles of many solid tumours have been characterised. The discovery of specific variants in critical proteins that influence cancer pathogenesis has seen the development of ‘targeted pharmacotherapy’ – drugs that selectively inhibit unique molecular targets in tumour cells. Compared to traditional cytotoxic agents, targeted drugs have considerable benefits in the treatment of cancer, including improved response rates and less toxicity1.\n\nThis field of cancer therapeutics is rapidly evolving with several hundred ongoing clinical trials. However, there are no local guidelines in Australia to inform the prescribing of targeted pharmacotherapy. As a consequence, clinicians often use resources from pharmaceutical companies, conference presentations, journal publications or recommendations from other countries, such as the US National Comprehensive Cancer Network (NCCN) guidelines, for their clinical practice. Although the NCCN guidelines are not always directly applicable for practice in Australia, these are reviewed annually, are freely available (www.nccn.org), and have best practice recommendations for targeted pharmacotherapy use in selected cancers.\n\nIn addition to the well documented role of estrogen/progesterone receptor and HER-2 testing in selecting therapies for breast cancer, three other important cancers in Australia, malignant melanoma, non-small cell lung cancer (NSCLC) and metastatic colorectal cancer (mCRC), now have targeted drugs available for treatment based on genetic testing. Dabrafenib, with or without trametinib, is used for malignant melanoma with activating BRAF mutations (‘BRAF positive’)2. Patients with NSCLC that harbours activating EGFR mutations (‘EGFR positive’) are recommended the EGFR inhibitors erlotinib or gefitinib3. 
Two monoclonal antibodies that also inhibit EGFR (cetuximab and panitumumab) significantly improve survival in patients with mCRC that is RAS wild-type (WT), whereas those with mutations in RAS are essentially insensitive4. Bevacizumab is a selective inhibitor of VEGF that is also used in mCRC, but response rates are independent of RAS status, i.e., genetic testing is often not necessary for treatment decisions. Bevacizumab is frequently used first-line in combination with chemotherapy regimens such as FOLFOX, FOLFIRI and CapeOX5. Figure 1 shows the 2015 NCCN recommendations for targeted pharmacotherapy based on the molecular profiles of the cancers investigated in this study6–9.\n\nOncoFOCUS™+KIT is a somatic cancer mutation screen offered by SA Pathology (www.sapathology.sa.gov.au) for clinicians in South Australia. The test analyses the oncogenes KRAS, NRAS, EGFR, BRAF and KIT. Clinically significant mutations in these genes are reported as either ‘no mutation detected’ (WT) or as a specific mutation e.g., BRAF V600E. Screening with OncoFOCUS™+KIT has recently been introduced at the Flinders Centre for Innovation in Cancer (FCIC), an academic healthcare centre located in the southern suburbs of Adelaide that specialises in research and treatment of cancer. Given this introduction into clinical practice, and the lack of local prescribing guidelines, the aim of this study was to audit targeted pharmacotherapy use after screening against the latest NCCN recommendations.\n\n\nMethods\n\nA retrospective chart-based audit of OncoFOCUS™+KIT results and targeted pharmacotherapy use was conducted. Ethics approval for the study was granted by the Southern Adelaide Human Research Ethics Committee (application 137.15). Inclusion criteria were: ≥ 18 years, diagnosis of malignant melanoma, advanced NSCLC or mCRC, record of attendance at the FCIC in 2014, and OncoFOCUS™+KIT results reported in 2014. 
The electronic patient system OACIS was searched for genetic test results, and relevant discharge summaries, multi-disciplinary team meeting summaries and electronic and/or hardcopy case notes were reviewed to determine pharmacotherapy use. In a small number of cases (21), information about medications used in private practice was confirmed with the treating oncologist. Retrieval of data was conducted over a 3-month period between June and August 2015. Results were presented as descriptive data or as a percentage.\n\n\nResults\n\nSixty percent (90/149) of the cohort were male and 40% (59/149) were female, with a mean age of 67.6 years (range 34 to 91 years). At the audit cut-off date, 48.3% (72) were alive, 49.7% (74) were deceased and the living status of 2.0% (3) could not be determined. There were similar numbers of patients with NSCLC (68) and mCRC (63) but a smaller number of patients with malignant melanoma (18).\n\nOncoFOCUS™+KIT results for patients with malignant melanoma, NSCLC and mCRC are shown in Figures 1A–C, respectively. All patients were KIT WT. Importantly, the cohort had cancer mutation rates similar to those previously reported. Forty-four percent with malignant melanoma had an activating BRAF mutation (40–60% reported10), 17.6% had EGFR-positive NSCLC (10–20% reported11), and 46% had RAS mutant mCRC (40% reported12). These data suggest that the FCIC cohort is representative of the wider population.\n\nOf the 149 patients, only 6 (3.8%) were excluded from the analysis of targeted pharmacotherapy use due to incomplete records. Figure 2 shows the percentage of patients who received or did not receive a targeted drug according to NCCN guidelines. Appropriately, almost all patients not indicated for targeted pharmacotherapy did not receive targeted pharmacotherapy (99%, 100/101). Of the 42 patients in the total cohort indicated for targeted therapy, 79% (33/42) received such treatment according to NCCN guidelines (Figure 2). 
Of the 25 patients with mCRC that was RAS WT, 36% (9/25) had targeted pharmacotherapy directed by OncoFOCUS™+KIT with an anti-EGFR drug (8 cetuximab, 1 panitumumab), 52% (13/25) received bevacizumab, and 12% (3/25) did not receive a targeted drug, contrary to NCCN guidelines. If bevacizumab in RAS WT mCRC is excluded, 48% (20/42) of the total indicated cohort received appropriate targeted drugs following OncoFOCUS™+KIT screening, i.e., required genetic test results for a targeted drug to be prescribed.\n\n\nDiscussion\n\nThis study is the first to report utilisation rates of targeted pharmacotherapy after OncoFOCUS™+KIT screening. As expected, patients who were not indicated for targeted pharmacotherapy did not receive such treatment. In contrast, the use of targeted drugs directed by OncoFOCUS™+KIT screening was relatively low (48%).\n\nThis result may be explained by factors that are independent of OncoFOCUS™+KIT results. First, the use of bevacizumab in mCRC does not require genetic testing – it is considered equivalent to cetuximab and panitumumab in RAS WT mCRC and was given first-line to most patients with mCRC at FCIC6. This is confusing because bevacizumab is a targeted drug by definition, selectively inhibiting VEGF. Second, targeted drugs for NSCLC and mCRC were subsidised by the Australian Pharmaceutical Benefits Scheme (PBS) in 2014 as second-line only. Thus, patients on first-line chemotherapy appropriately did not receive targeted drugs, despite having mutations suggesting they may benefit from such treatment. During 2014, anti-EGFR drugs became indicated for first-line treatment of EGFR-positive NSCLC and were funded by the PBS13. Likewise, cetuximab and panitumumab are now PBS-subsidised as first-line treatment in RAS WT mCRC13. 
Not differentiating between first- and second-line targeted pharmacotherapy is a major limitation of the study (note that half the cohort was still alive at the audit cut-off date, precluding a more complete analysis of the temporal relationships between screening and targeted pharmacotherapy use). Third, a number of patients had genetic testing close to the end of life. These patients were considered too unwell for further oncology treatment, or declined targeted drugs when offered, preferring to transfer to palliative care.\n\nThe exact role of targeted drugs for some of the cancer mutations reported by OncoFOCUS™+KIT is unclear. For example, approximately 5–9% of colorectal cancers (7.9% in this study) are characterised by a specific mutation in the BRAF gene (V600E) which causes constitutive activity, in theory bypassing inhibition by cetuximab and panitumumab and potentially making them insensitive14. In the colon cancer NCCN guidelines, BRAF mutation testing is currently optional and not part of decision making for anti-EGFR drugs6. A recent meta-analysis suggests that there is currently insufficient evidence to conclude that patients with mCRC harbouring BRAF mutations should be denied anti-EGFR therapy over concerns of poor efficacy15. However, there are conflicting views on whether BRAF status should influence use of anti-EGFR therapy16,17, and hence some clinicians may potentially utilise BRAF status to make treatment decisions. This highlights the difficulty of auditing medical oncology prescribing where guidelines and the underlying evidence are rapidly evolving.\n\nThe OncoFOCUS™+KIT screening panel is currently limited to five oncogenes. The status of other oncogenes that may influence treatment decisions is determined separately. For example, patients with NSCLC are also tested for ALK rearrangements, and if positive are eligible for treatment with crizotinib (although it is not currently PBS-subsidised for this indication)8. 
Once the importance of emerging genetic alterations in these cancers, such as MET amplifications, ROS1 and RET rearrangements, and HER2 mutations, is established, the OncoFOCUS™+KIT screening panel could be expanded to facilitate more complete molecular diagnosis.\n\nIn conclusion, this study showed that most patients at the FCIC receive pharmacotherapy for their cancer according to NCCN guidelines (93%), and that the results of a somatic cancer mutation screening test are applied reasonably well to drug selection. Precision medicine approaches are of increasing importance when directing pharmacotherapy in medical oncology.\n\n\nData availability\n\nF1000Research: Dataset 1. OncoFOCUS screening raw data, 10.5256/f1000research.9040.d12750818",
"appendix": "Author contributions\n\n\n\nParticipated in research design: Polasek and Kichenadasse\n\nCollected data: Polasek and Ambler\n\nPerformed data analysis: Polasek\n\nWrote or contributed to writing of the manuscript: Polasek, Scott, Sorich, Kaub, Rowland, Wiese and Kichenadasse\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no supporting grants were involved in this work.\n\n\nAcknowledgements\n\nWe thank Glenice Cheetham, Musei Ho and Madelyn Zawitkowski at SA Pathology for assay development and conducting the genetic analyses.\n\n\nReferences\n\nSimon R, Rowchowdhury S: Implementing personalized cancer genomics in clinical trials. Nat Rev Drug Discov. 2013; 12(5): 358–369. PubMed Abstract | Publisher Full Text\n\nAtkinson V: Medical management of malignant melanoma. Aust Prescr. 2015; 38(3): 74–78. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGiaconne G: Epidermal growth factor receptor inhibitors in the treatment of non-small-cell lung cancer. J Clin Oncol. 2005; 23(14): 3235–3242. PubMed Abstract | Publisher Full Text\n\nSorich MJ, Wiese MD, Rowland A, et al.: Extended RAS mutations and anti-EGFR monoclonal antibody survival benefit in metastatic colorectal cancer: a meta-analysis of randomized, controlled trials. Ann Oncol. 2015; 26(1): 13–21. PubMed Abstract | Publisher Full Text\n\nLv C, Wu S, Zheng D, et al.: The efficacy of additional bevacizumab to cytotoxic chemotherapy regimens for the treatment of colorectal cancer: an updated meta-analysis for randomized trials. Cancer Biother Radiopharm. 2013; 28(7): 501–509. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNCCN. NCCN Clinical Practice Guidelines in Oncology. Colon Cancer Version 3. 2015. 2015.\n\nNCCN. NCCN Clinical Practice Guidelines in Oncology. Melanoma Version 3. 2015. 2015.\n\nNCCN. NCCN Clinical Practice Guidelines in Oncology. Non-Small Cell Lung Cancer Version 7. 2015. 
2015.\n\nNCCN. NCCN Clinical Practice Guidelines in Oncology. Rectal Cancer Version 3. 2015. 2015.\n\nMenzies AM, Haydu LE, Visintin L, et al.: Distinguishing clinicopathologic features of patients with V600E and V600K BRAF-mutant metastatic melanoma. Clin Cancer Res. 2012; 18(12): 3242–3249. PubMed Abstract | Publisher Full Text\n\nHirsch FR, Bunn PA Jr: EGFR testing in lung cancer is ready for prime time. Lancet Oncol. 2009; 10(5): 432–433. PubMed Abstract | Publisher Full Text\n\nRoth AD, Tejpar S, Delorenzi M, et al.: Prognostic role of KRAS and BRAF in stage II and III resected colon cancer: results of the translational study on the PETACC-3, EORTC 40993, SAKK 60-00 trial. J Clin Oncol. 2010; 28(3): 466–474. PubMed Abstract | Publisher Full Text\n\nHealth Do: Pharmaceutical Benefits Scheme. 2015. Accessed 31/7/2015. Reference Source\n\nTol J, Nagtegaal ID, Punt CJ: BRAF mutation in metastatic colorectal cancer. N Engl J Med. 2009; 361(1): 98–99. PubMed Abstract | Publisher Full Text\n\nRowland A, Dias MM, Wiese MD, et al.: Meta-analysis of BRAF mutation as a predictive biomarker of benefit from anti-EGFR monoclonal antibody therapy for RAS wild-type metastatic colorectal cancer. Br J Cancer. 2015; 112(12): 1888–1894. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCremolini C, Di Maio M, Petrelli F, et al.: BRAF-mutated metastatic colorectal cancer between past and future. Br J Cancer. 2015; 113(11): 1634–1635. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRowland A, Dias MM, Wiese MD, et al.: Reply: Comment on 'Meta-analysis of BRAF mutation as a predictive biomarker of benefit from anti-EGFR monoclonal-antibody therapy for RAS wild-type metastatic colorectal cancer'. Br J Cancer. 2015; 113(11): 1635. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPolasek T, Ambler K, Scott H, et al.: Dataset 1 in: Targeted pharmacotherapy after somatic cancer mutation screening. F1000Research. 2016. Data Source"
}
|
[
{
"id": "15849",
"date": "24 Aug 2016",
"name": "Andrew A. Somogyi",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe title suitably addresses the content of the article; in the Abstract, after the second last line, a comment should be made regarding the 52% who were screened by the test but did not receive the targeted drugs. The content of the article in general is of a high standard, it could be improved to the casual and inexpert reader by clarifying where KIT, KRAS and NRAS fit within the overall picture as BRAF deals with MM, EGFR with NSCLC and RAS with mCRC; and of the 6 exclusions the number in each cohort (targeted versus no targeted) could be stated. Finally, Figure 2 is unclear and a disconnect between the figure (BRAF, EGFR, RAS positive, negative) and the legend (…received or did not receive targeted pharmacotherapy), to include actual numbers and not just percentages would also make it clearer. The conclusions are most appropriate and it succinctly highlights the complex interplay between testing, prescribing, guidelines, evidence and funding all of which are rapidly changing but are not connected, all of which have universal implications.",
"responses": [
{
"c_id": "2181",
"date": "13 Sep 2016",
"name": "Thomas Polasek",
"role": "Author Response",
"response": "We thank Prof Somogyi for these very useful comments and suggestions regarding our manuscript. We have now uploaded version 2 to incorporate changes based on the review as described below: ‘after the second last line, a comment should be made regarding the 52% who were screened by the test but did not receive the targeted drugs’. We have now added the following in the abstract, ‘with the remaining 52% (22/42) prescribed drugs independent of the screening results for various reasons.’ ‘by clarifying where KIT, KRAS and NRAS fit within the overall picture’. In the introduction we have now added that KIT testing is used for selecting imatinib for metastatic melanoma e.g. ‘whereas imatinib can be used for KIT-mutated melanoma’. We have not added further comments about the differences between KRAS and NRAS, since they are considered together in targeted drug selection for mCRC (as already described in the manuscript by referring to RAS), and mutations in these for metastatic melanoma and NSCLC are not currently indications for targeted drug selection. ‘and of the 6 exclusions the number in each cohort (targeted versus no targeted) could be stated’. We have already stated that these 6 could not be included in the analysis due to incomplete records (Results first line paragraph 3). ‘Figure 2 is unclear and a disconnect between the figure (BRAF, EGFR, RAS positive, negative) and the legend (…received or did not receive targeted pharmacotherapy), to include actual numbers and not just percentages would also make it clearer’. We have added a new Figure 2 to make the distinction clearer, including raw data."
}
]
}
] | 1
|
https://f1000research.com/articles/5-1551
|
https://f1000research.com/articles/5-2332/v1
|
16 Sep 16
|
{
"type": "Software Tool Article",
"title": "mouseTube – a database to collaboratively unravel mouse ultrasonic communication",
"authors": [
"Nicolas Torquet",
"Fabrice de Chaumont",
"Philippe Faure",
"Thomas Bourgeron",
"Elodie Ey",
"Fabrice de Chaumont",
"Philippe Faure",
"Thomas Bourgeron"
],
"abstract": "Ultrasonic vocalisation is a broadly used proxy to evaluate social communication in mouse models of neuropsychiatric disorders. The efficacy and robustness of testing these models suffer from limited knowledge of the structure and functions of these vocalisations as well as of the way to analyse the data. We created mouseTube, an open database with a web interface, to facilitate sharing and comparison of ultrasonic vocalisations data and metadata attached to a recording file. Metadata describe 1) the acquisition procedure, e.g., hardware, software, sampling frequency, bit depth; 2) the biological protocol used to elicit ultrasonic vocalisations; 3) the characteristics of the individual emitting ultrasonic vocalisations (e.g., strain, sex, age). To promote open science and enable reproducibility, data are made freely available. The website provides searching functions to facilitate the retrieval of recording files of interest. It is designed to enable comparisons of ultrasonic vocalisation emission between strains, protocols or laboratories, as well as to test different analysis algorithms and to search for protocols established to elicit mouse ultrasonic vocalisations. Over the long term, users will be able to download and compare different analysis results for each data file. Such application will boost the knowledge on mouse ultrasonic communication and stimulate sharing and comparison of automatic analysis methods to refine phenotyping techniques in mouse models of neuropsychiatric disorders.",
"keywords": [
"Mouse",
"ultrasonic vocalisations",
"mouse communication",
"database",
"mouseTube",
"mouse behaviour",
"open data",
"open analysis"
],
"content": "Introduction\n\nMice are the most broadly studied animal models in scientific research. They are used to understand causes and mechanisms of human diseases, as well as to develop new therapeutic strategies. More and more scientists are interested in their social behaviour, and aim to improve housing conditions, better understand the way pharmacological substances or genetic mutations act on social life, or simply to know their models better in order to develop the most adequate tests.\n\nMice are social animals and use olfactory, tactile, visual but also auditory signals to regulate their relationships. Indeed, mice emit audible and ultrasonic vocalisations to communicate with their conspecifics. These vocalisations might represent an “easy-to-record” proxy for sociality. Audible signals (20 Hz – 20 kHz) are much less frequent than ultrasonic vocalisations; we therefore focus on ultrasonic vocalisations, ranging between 20 kHz and more than 120 kHz. Mouse ultrasonic vocalisations are rapid successions of short pure tones with high frequency modulations, with or without frequency jump(s) (reviewed in (Portfors, 2007)). Pups utter isolation calls in their first 2 weeks of life (Zippelius & Schleidt, 1956). These vocalisations reliably trigger maternal retrieval (Sewell, 1970; Zippelius & Schleidt, 1956). Juvenile and adult mice utter ultrasonic vocalisations when encountering an unknown conspecific of the same sex (Chabout et al., 2012; Hammerschmidt et al., 2012; Maggio & Whitney, 1985; Panksepp et al., 2007). Both sexes utter these calls, but male vocal behaviour is maximised by social isolation in many cases (Chabout et al., 2012; Scattoni et al., 2010). These vocalisations may play a role in social recognition and hierarchy establishment, at least in females (D’Amato & Moles, 2001; Moles et al., 2007). Finally, sexually mature males vocalise when encountering an oestrus female or urinary cues from her (Holy & Guo, 2005; Whitney et al., 1973). 
These calls increase the probability of the female staying in proximity with the male emitter (Hammerschmidt et al., 2009).\n\nDespite these identified contexts of emission, knowledge about the real significance and structure of mouse ultrasonic vocalisations still suffers from several weaknesses. First, the functions of these vocalisations, specifically those emitted by juvenile and adult mice, are still unclear. Indeed, male calling in the presence of an oestrus female might represent a courtship situation (Holy & Guo, 2005). These calls are nevertheless not structurally different from those of adult females, suggesting at least another function such as proximity maintenance (Hammerschmidt et al., 2012; Seagraves et al., 2016). It is also still unclear to what extent the temporal organisation and the fine acoustic structure of the calls are meaningful for the receiving mice. They are physiologically able to perceive subtle acoustic variations (Portfors et al., 2009), but behavioural evidence for the meaning of these subtle variations remains scarce (Hammerschmidt et al., 2009; Wöhr et al., 2011a). Second, the emission of ultrasonic vocalisations is highly dependent on the emitter’s identity (Holy & Guo, 2005), the receiver’s identity (Seagraves et al., 2016), and the context (e.g., Yang et al., 2013). These sources of variability remain under-explored and could explain the lack of reproducibility in several assays. Finally, the field suffers from a lack of automated analysis of these signals. Some laboratories have developed their own detection and/or analysis methods (e.g., Chabout et al., 2015; Hammerschmidt et al., 2012; Holy & Guo, 2005; Neunuebel et al., 2015; Seagraves et al., 2016; von Merten et al., 2014) or use commercial and/or manual solutions for detection and/or analysis (Chabout et al., 2012; Ey et al., 2013; Wöhr et al., 2011b; Yang et al., 2013). 
Nevertheless, little is known about the advantages and disadvantages of each of these methods, and comparisons of these methods on the same files would be highly valuable for the field.\n\nTo counteract these current weaknesses of the field, we developed mouseTube. This database is designed to share and exchange recording files of mouse ultrasonic vocalisations, along with all the corresponding metadata. It aims at increasing knowledge on mouse vocal communication, improving reproducibility of the experiments and stimulating the development of robust analysis tools. The web interface is available at http://mousetube.pasteur.fr.\n\nData uploaded on mouseTube are shared between all members of the community connected to mouseTube. Members uploading data on mouseTube are fully responsible for the content of uploaded files and the accuracy of the metadata provided. Data uploaded still belong to the laboratory that recorded them, but the owner gives the right to the members of mouseTube to use them for analyses and publications. Any member of the community can download data. mouseTube data are freely available upon online registration and can be used for subsequent publications. Any publications derived from the data should state the contributors (user) of the data and mousetube.pasteur.fr as being the data source and, whenever possible, cite the original paper(s) in which data have been first described. mouseTube administrators decline all responsibility for the content of data and metadata.\n\n\nData and metadata\n\nThe mouseTube database stores links to each audio recording file and all the corresponding metadata. The audio recording files themselves are stored on external servers, owned by each laboratory and accessible with the login and password given when registration is confirmed (these servers should be configured to accept this login and password). This allows the owner to control his or her own data. 
For a video tutorial on how to upload data on mouseTube, please see (Ferhat et al., 2016).\n\nmouseTube is a web interface coded in PHP and a MySQL relational database hosted by an Apache server. The web interface allows users (i.e., data contributors and data downloaders) to manage the data in the database. In relational databases, each table has its own unique key to connect tables together (it is possible to combine several keys, but in the case of mouseTube we use a single unique key for each table). In this way, the data are well organised and it is very fast to retrieve all the properties of an element by following the links.\n\nmouseTube is organised as seven tables connected by unique keys within the database (Figure 1):\n\n- “strain”: gathers all mouse strains already entered in the database. This table contains information about the name of the strain, the background on which it has been generated, and the bibliographic reference where it has been first described. If a new strain is needed, every user can send an email with all requested information for the administrators to add it. This procedure will avoid any duplicate or inconsistent strain entries.\n\n- “subject”: lists all the individual mice entered by contributing users. New subjects are created by each contributing user. The table “subject” is connected to the table “user” (unique key “id_user”), meaning that one individual belongs to only one contributing user. The “subject” table is also connected to the table “strain” by the unique key “id_strain”, meaning that one subject can only be characterised by one strain. This table stores all information relative to each subject such as its origin, name, sex, genotype, treatment, and subgroup.\n\n- “user”: stores the information about all contributing and non-contributing users having access to the mouseTube database. 
This table is connected to the tables “subject” and “protocol” through the unique key “id_user”, identifying the owner of a subject and the protocols that have been used by this person (a subject or a protocol belongs to only one user). The table “user” stores contact information and the encrypted version of the password and login. Users can update their own information whenever needed.\n\n- “protocol”: lists all the protocols entered by users. Protocols are created as free text by each user and should provide enough information to be replicable. This table is connected to the table “user” through the unique key “id_user” designating its creator. The table “protocol” contains information about the name of the protocol, its description, which user has created it, and the number of recording files generated for each mouse with this protocol.\n\n- “experiment”: lists all the experiments (i.e., a set of audio files recorded for a group of individuals with one protocol) entered by the users. New experiments can be created by each user. It is connected to the “protocol” table through the unique key “id_protocol” since each experiment involves one unique protocol.\n\n- “file”: lists all the vocalisation files entered by the users, for each individual mouse (unique key “id_subject”) within each experiment (unique key “id_experiment”). This means that one file relates to a unique subject and a unique experiment. The preferred format is the uncompressed “.wav” one.\n\n- “latest news”: this table records all the new actions that have been performed on mouseTube. It allows researchers to follow updates made by each user (unique key “id_user”).\n\nFigure 1: mouseTube is organised in 7 tables connected by unique keys.\n\nA help section is available on the home page of each user’s session. This part provides all details about the information stored in each section of mouseTube. 
It can also be accessed at any time through the “Help” tab available on each page.\n\nFor each protocol, a name is requested. Users should also specify how many recording files are generated for each mouse in this protocol. Users should then provide a very precise description of the exact protocol. Even if another user has already created the protocol, the owner of the vocalisation files needs to create it again under his/her own account, with a similar name and the same information. He/she will then be able to select it when he/she creates an experiment. The database is built in this way to track changes in each protocol.\n\nFor each experiment comprising several audio recording files, the following metadata are stored:\n\n- name of the experiment\n\n- protocol\n\n- mouse group\n\n- date of the beginning of the experiment\n\n- temperature\n\n- light phase for testing (light or dark period)\n\n- hardware used (microphone & sound card; auto-completion of this field to ease the identification of the equipment)\n\n- acquisition software\n\n- sampling frequency\n\n- bit depth\n\n- laboratory\n\nThese metadata will permit useful comparisons of the software, hardware and environmental conditions of each audio recording. This information is essential for evaluating the constraints on each recording.\n\nThe mouseTube web interface allows users to search different types of data in the database without writing a MySQL query. Data can be searched according to several criteria, such as protocols, owner of the files, mouse strain, and experiment. Users can select all recording files from a specific mouse strain or user/laboratory. mouseTube also aims at providing the users with different protocols to record mouse ultrasonic vocalisations. mouseTube even provides the possibility to search for individual mice. The more information is entered, the easier it is to find a subject. 
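The kind of key-based search described above can be sketched with a toy example. Note that the table and column names below are simplified assumptions drawn from the description in this section, not the production mouseTube schema, and SQLite stands in here for the MySQL database used by mouseTube:

```python
import sqlite3

# Minimal two-table subset of the seven-table layout described above:
# tables joined through a unique key ("id_strain"). Names are illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE strain (
    id_strain  INTEGER PRIMARY KEY,  -- unique key shared with "subject"
    name       TEXT NOT NULL
);
CREATE TABLE subject (
    id_subject INTEGER PRIMARY KEY,
    id_strain  INTEGER REFERENCES strain(id_strain),  -- one strain per subject
    name       TEXT,
    sex        TEXT
);
INSERT INTO strain  VALUES (1, 'C57BL/6J');
INSERT INTO subject VALUES (10, 1, 'mouse-A', 'M');
INSERT INTO subject VALUES (11, 1, 'mouse-B', 'F');
""")

# A search such as "all subjects of a given strain" resolves through the
# unique key instead of duplicating strain information in every subject row.
rows = conn.execute("""
    SELECT subject.name, strain.name
    FROM subject JOIN strain USING (id_strain)
    WHERE strain.name = 'C57BL/6J'
    ORDER BY subject.name
""").fetchall()
print(rows)  # -> [('mouse-A', 'C57BL/6J'), ('mouse-B', 'C57BL/6J')]
```

Because each table carries a single unique key, the web interface can translate any combination of search criteria (strain, protocol, user, experiment) into joins like the one above without the user writing SQL.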
To download any vocalisation file, users need the login and password provided at the confirmation of their registration and common to all servers hosting vocalisation files linked in mouseTube. These are different from their personal login and password.\n\n\nUse cases\n\nIn this example, we use the search function of mouseTube in the protocol section to gather information on potential protocols to record ultrasonic vocalisations in adult male mice and to collect data on control animals in this protocol.\n\nBy going through all protocols (Figure 2), we found several protocols used to record male ultrasonic vocalisations. Elodie Ey provided protocols to record male-male interactions in different cages after 3 weeks of isolation or male-female interactions. Jonathan Chabout also provided a protocol to record male vocalisations in response to urine (male or female), an anesthetised mouse (male or female), or an active female. With these selected protocols, at the time of writing, we managed to find 16 individuals from the C57BL/6J strain recorded in the male-male interaction protocol from E. Ey, 48 individuals from the ProSAP1/Shank2 strain recorded in the male-oestrus female protocol from E. Ey, and 12 individuals from the B6D2F1/J strain recorded with the protocol of J. Chabout. Altogether, we now have an important set of reference files for each protocol to compare to our data, and soon we will try the different protocols in our laboratory. The output file (Supplementary file 1) provides a sample of the metadata and links to the files recorded in the male – oestrus female interactions by E. Ey. This file is automatically generated for each vocalisation search request and can be downloaded to save the metadata related to each file.\n\nIn this example, we plan to test a new detection algorithm under various recording conditions. To do so, we need to gather files recorded with different equipment under various levels of background noise conditions. 
We use the search function of mouseTube in the vocalisation section to select three vocalisation files from each of the protocols available in mouseTube. The diversity of recording conditions allows us to investigate the limits of our detection algorithm in extracting ultrasonic vocalisations from background noise. We end up with a large set of vocalisation files to include in our test set. For each vocalisation search, a table recapitulates the corresponding metadata of each vocalisation file, specifying the protocol, hardware and software used. We can therefore test our detection algorithm on files knowing the number of individuals present during the recording session (and hence the background noise generated) as well as the quality of the recording equipment (e.g., the microphone frequency response).\n\n\nConclusion\n\nWe present mouseTube, a database with a web application to boost knowledge on mouse ultrasonic communication. This database stores recording files of mouse ultrasonic vocalisations as well as the corresponding metadata. It provides a source of information on the protocols to record mouse ultrasonic vocalisations and on the availability of recording files for different mouse strains.\n\nAt the time of writing this paper, mouseTube provides a platform to upload or download mouse recording files and the corresponding metadata. The aim is to constantly develop mouseTube, and we are currently exploring ways of enabling users to analyse their data online, where the owner of the data will be notified of each analysis performed with their data. The database will also offer researchers the option of keeping a portion of their data on a private part of mouseTube until the data have been analysed and published, after which they will be made publicly available. We will develop this analysis system shortly. We also aim to open mouseTube to third-party analysis plug-ins. 
The users will then be able to choose which software they want to use to analyse their data.\n\n\nData and software availability\n\nAudio recording files are available for all mouseTube users via the web application http://mousetube.pasteur.fr. To download the vocalisation files from the different storage servers, users need to enter the login and password common to all servers hosting mouseTube data files. These are provided upon registration online.\n\nF1000Research: Dataset 1. Output file automatically generated for each request of vocalisation file, 10.5256/f1000research.9439.d135667 (Torquet et al., 2016).",
"appendix": "Author contributions\n\n\n\nNT, TB, and EE elaborated the concept of the mouseTube database. NT developed the mouseTube database and its web application. NT and EE coordinated the project, tested beta versions of the database and wrote the manuscript. FdC, PF and TB provided inputs during manuscript preparation.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by the Fondation de France; by the ANR FLEXNEURIM [ANR09BLAN034003]; by the ANR [ANR- 08-MNPS-037-01- SynGen]; by Neuron-ERANET (EUHF-AUTISM); by the Fondation Orange; by the Fondation FondaMentale; by the Fondation de France; by the Fondation Bettencourt-Schueller. The research leading to this article has also received support from the Innovative Medicine Initiative Joint Undertaking under grant agreement no. 115300, resources of which are composed of financial contribution from the European Union's Seventh Framework Program (FP7/2007-2013) and EFPIA companies' in kind contribution.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe thank Eliza Jones for lending her voice for the video presenting mouseTube.\n\n\nReferences\n\nChabout J, Sarkar A, Dunson DB, et al.: Male mice song syntax depends on social contexts and influences female preferences. Front Behav Neurosci. 2015; 9: 76. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChabout J, Serreau P, Ey E, et al.: Adult male mice emit context-specific ultrasonic vocalizations that are modulated by prior isolation or group rearing environment. PLoS One. 2012; 7(1): e29401. PubMed Abstract | Publisher Full Text | Free Full Text\n\nD’Amato FR, Moles A: Ultrasonic vocalizations as an index of social memory in female mice. Behav Neurosci. 2001; 115(4): 834–840. 
PubMed Abstract | Publisher Full Text\n\nEy E, Torquet N, Le Sourd AM, et al.: The Autism ProSAP1/Shank2 mouse model displays quantitative and structural abnormalities in ultrasonic vocalisations. Behav Brain Res. 2013; 256: 677–689. PubMed Abstract | Publisher Full Text\n\nFerhat AT, Torquet N, Le Sourd AM, et al.: Recording Mouse Ultrasonic Vocalizations to Evaluate Social Communication. J Vis Exp. 2016; (112). PubMed Abstract | Publisher Full Text | Free Full Text\n\nHammerschmidt K, Radyushkin K, Ehrenreich H, et al.: Female mice respond to male ultrasonic 'songs' with approach behaviour. Biol Lett. 2009; 5(5): 589–592. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHammerschmidt K, Radyushkin K, Ehrenreich H, et al.: The structure and usage of female and male mouse ultrasonic vocalizations reveal only minor differences. PLoS One. 2012; 7(7): e41133. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHoly TE, Guo Z: Ultrasonic songs of male mice. PLoS Biol. 2005; 3(12): e386. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMaggio JC, Whitney G: Ultrasonic vocalizing by adult female mice (Mus musculus). J Comp Psychol. 1985; 99(4): 420–436. PubMed Abstract | Publisher Full Text\n\nMoles A, Costantini F, Garbugino L, et al.: Ultrasonic vocalizations emitted during dyadic interactions in female mice: A possible index of sociability?. Behav Brain Res. 2007; 182(2): 223–230. PubMed Abstract | Publisher Full Text\n\nNeunuebel JP, Taylor AL, Arthur BJ, et al.: Female mice ultrasonically interact with males during courtship displays. eLife. 2015; 4: e06203. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPanksepp JB, Jochman KA, Kim JU, et al.: Affiliative behavior, ultrasonic communication and social reward are influenced by genetic variation in adolescent mice. PLoS One. 2007; 2(4): e351. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nPortfors CV: Types and functions of ultrasonic vocalizations in laboratory rats and mice. J Am Assoc Lab Anim Sci. 2007; 46(1): 28–34. PubMed Abstract\n\nPortfors CV, Roberts PD, Jonson K: Over-representation of species-specific vocalizations in the awake mouse inferior colliculus. Neuroscience. 2009; 162(2): 486–500. PubMed Abstract | Publisher Full Text\n\nScattoni ML, Ricceri L, Crawley JN: Unusual repertoire of vocalizations in adult BTBR T+tf/J mice during three types of social encounters. Genes Brain Behav. 2010; 10(1): 44–56. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSeagraves KM, Arthur BJ, Egnor SE: Evidence for an audience effect in mice: male social partners alter the male vocal response to female cues. J Exp Biol. 2016; 219(Pt 10): 1437–1448. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSewell GD: Ultrasonic communication in rodents. Nature. 1970; 227(5256): 410. PubMed Abstract | Publisher Full Text\n\nTorquet N, de Chaumont F, Faure P, et al.: Dataset 1 in: mouseTube – a database to collaboratively unravel mouse ultrasonic communication. F1000Research. 2016. Data Source\n\nvon Merten S, Hoier S, Pfeifle C, et al.: A role for ultrasonic vocalisation in social communication and divergence of natural populations of the house mouse (Mus musculus domesticus). PLoS One. 2014; 9(5): e97244. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWhitney G, Coble JR, Stockton MD, et al.: Ultrasonic emissions: do they facilitate courtship of mice. J Comp Physiol Psychol. 1973; 84(3): 445–452. PubMed Abstract | Publisher Full Text\n\nWöhr M, Moles A, Schwarting RK, et al.: Lack of social exploratory activation in male μ-opioid receptor KO mice in response to playback of female ultrasonic vocalizations. Soc Neurosci. 2011a; 6(1): 76–87. 
PubMed Abstract | Publisher Full Text\n\nWöhr M, Roullet FI, Hung AY, et al.: Communication impairments in mice lacking Shank1: reduced levels of ultrasonic vocalizations and scent marking behavior. PLoS One. 2011b; 6(6): e20631. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYang M, Loureiro D, Kalikhman D, et al.: Male mice emit distinct ultrasonic vocalizations when the female leaves the social interaction arena. Front Behav Neurosci. 2013; 7: 159. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZippelius HM, Schleidt WM: Ultraschall-Laute bei jungen Mäusen. Naturwissenschaften. 1956; 43(21): 502. Publisher Full Text"
}
|
[
{
"id": "16689",
"date": "30 Sep 2016",
"name": "Takefumi Kikusui",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis paper is presenting the web-based mouse ultrasonic vocalization database. The aim and the methods of the system MouseTube are well designed and organized. My only concern is with how many users would be registered and how many calls would be uploaded. More users, more fascinating. Therefore the authors need to do other things to collect the users and calls. For example, DNA seq data should be registered if papers using the new sequence need to register the sequence first before paper submission. A such “obligated” system would accelerate the values of this system. If the authors can propose such things to the scientific committee, that would be fantastic. I believe that both the title and abstract are appropriate for the content of the article. The design, methods and analysis of results from the study have been explained and are appropriate for the topic studied. The conclusions are sensible, balanced and justified on the basis of the results obtained.",
"responses": [
{
"c_id": "2249",
"date": "21 Oct 2016",
"name": "Elodie Ey",
"role": "Author Response",
"response": "We thank the reviewer for his comment. We agree that it would be helpful to broaden the distribution of the system. We are working on it. We are working on a way to couple the analysis of the data files with the database. This will for sure attract many users. More and more journals ask for the availability of the raw data, and we should make sure that mouseTube will be accepted as a way to publish the raw data. Some journals accept it; others are still reticent. We are also working on it."
}
]
},
{
"id": "16373",
"date": "21 Oct 2016",
"name": "A. Katrin Schenk",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis article reports about an open mouse ultrasonic vocalization (USV) database (mouseTube) to store both USV data and its metadata about recording equipment, conditions and experimental protocols. As the authors clearly point out, the field of mouse ultrasonic vocalization research is challenging due to the very high variability of both the data itself and the experimental protocols, recording equipment and analysis software/approaches used. The establishment of the mouseTube database, if implemented well and adopted by mouse USV researchers, could represent the single most important advance in the field to date. mouseTube has the potential to enhance experimental reproducibility, standardize recording condition reporting and allow for a rich test bed for analysis software and methods. Again, this will SIGNIFICANTLY advance the field.\n\nAs for implementation, the article gives detailed information on the structure of the mouseTube database and several examples of specific use cases. The structure is well thought out and it is clear that, as more users come to mouseTube, the database will be flexible enough to accommodate iterative user-driven changes.\n\nOverall, the article describes a well thought out implementation of a desperately needed tool for USV research.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2332
|
https://f1000research.com/articles/5-2020/v1
|
17 Aug 16
|
{
"type": "Study Protocol",
"title": "Multi-site tumor sampling (MSTS) improves the performance of histological detection of intratumor heterogeneity in clear cell renal cell carcinoma (CCRCC)",
"authors": [
"Rosa Guarch",
"Jesús M. Cortés",
"Charles H. Lawrie",
"José I. López",
"Rosa Guarch",
"Jesús M. Cortés",
"Charles H. Lawrie"
],
"abstract": "Current standard-of-care tumor sampling protocols for CCRCC (and other cancers) are not efficient at detecting intratumoural heterogeneity (ITH). We have demonstrated in silico that an alternative protocol, multi-site tumor sampling (MSTS) based upon the divide and conquer (DAC) algorithm, can significantly increase the efficiency of ITH detection without extra costs. Now we test this protocol on routine hematoxylin-eosin (HE) sections in a series of 38 CCRCC cases. MSTS was found to outperform traditional sampling when detecting either high grade (p=0.0136) or granular/eosinophilic cells (p=0.0114). We therefore propose that MSTS should be used in routine clinical practice.",
"keywords": [
"Intratumor heterogeneity",
"tumor sampling",
"multi-site tumor sampling grade",
"pathology",
"clear cell renal cell carcinoma"
],
"content": "Introduction\n\nClear cell renal cell carcinoma (CCRCC) is the most frequent form of renal cancer in Western Countries1 and a paradigmatic example of intratumoural heterogeneity (ITH)2–5. ITH is a major factor in the unpredictable clinical behavior and treatment failure response that these tumors can display5 and as a consequence, detection of ITH by pathologists is becoming an increasingly important metric of clinical practice.\n\nWe have recently demonstrated that a multi-site tumor sampling (MSTS) protocol following the divide-and-conquer (DAC) algorithm outperforms routine sampling protocols (RS) in detecting ITH when tested in silico6. Since such a strategy does not necessarily increase the cost of procedures and can be performed without significant changes in the pathologist’s routine, we proposed its generalized implementation in pathology labs6,7. This study extends this hypothesis to a real life scenario by comparing the MSTS protocol with RS when detecting classic morphological ITH in a series of 38 CCRCC.\n\n\nMaterial and methods\n\nThirty-eight CCRCC were prospectively collected from the Pathology Department of the Cruces University Hospital (Barakaldo, Spain). All patients were informed about the potential use for research of their surgically resected tissues, and accepted this eventuality by signing an information consent approved by the local Ethics Committee (CEIC). The two sampling protocols MSTS and RS were applied in each case. The RS8 method consisted of selecting one tissue fragment per centimeter of tumor diameter plus an additional fragment of each suspicious area by the naked eye. Alternatively, the MSTS6,7 method consisted of selecting a large number of small fragments including six to eight of them in the same cassette and fixing the number of cassettes to one per centimeter of tumor (Figure 1). Thus, the two sampling protocols made use of the same number of cassettes. 
Tissue samples were fixed in formalin and embedded in paraffin following routine methods. Four-micron-thick histological slides were processed in an automated stainer (Symphony system, Ventana Medical Systems Inc., Tucson, USA).\n\nThe study was performed exclusively on hematoxylin-eosin (HE) stained histological slides. Two experienced pathologists (RG, JIL) reviewed all HE sections in a blind fashion. Fuhrman grade, cell type (clear vs. granular eosinophilic), and the presence of necrosis and/or sarcomatoid change were evaluated in all cases and for both sampling methods. Grade was grouped as low (G1/2) and high (G3/4) for higher consistency.\n\nResults of the two methods (RS and MSTS) were compared by applying a chi-squared test (χ2), a test applied to sets of categorical data to evaluate the hypothesis of independence between two groups. In particular, we made use of the script chi2test.m (available to download at http://es.mathworks.com/matlabcentral/fileexchange/16177-chi2test) and ran it in Matlab (The Mathworks, Inc, version 2012a). For instance, to test whether MSTS detected more high-grade tumors (G3/4) than RS (results in columns H and D, respectively, of the Excel file containing the raw data), we first counted the total number of high labels in RS (column D) and in MSTS (column H), giving a total of 21 cases for RS and 31 cases for MSTS. Next, considering a total number of 38 CCRCC cases, we ran in Matlab p=chi2test([31 38-31; 21 38-21]), which returns a p-value of p=0.0136. Similarly, we compared the performance of the two methods with regard to the presence of granular eosinophilic cells, sarcomatoid phenotype and tumor necrosis.\n\n\nResults\n\nThe series consisted of 32 males and 6 females with an average age of 63 years (range 41–87) and an average tumor diameter of 8.5 cm (range 4–15). Overall, MSTS was more informative than RS in 28 of 38 cases (73.5%). 
In particular, MSTS detected a significantly higher number of high-grade tumors (G3/4) than RS (31 vs. 21 cases, respectively; χ2 test, p=0.0136) and a significantly higher number of tumors containing granular eosinophilic cells (32 vs. 22 cases, respectively; χ2 test, p=0.0114) (Table 1).\n\nAlthough MSTS also detected a higher number of tumors displaying sarcomatoid phenotype (12 vs. 6 cases, respectively) and a higher number of cases presenting tumor necrosis (10 vs. 7 cases, respectively), these differences did not reach significance (Table 1), probably because both features are detectable with the naked eye and were therefore sampled in both protocols.\n\nMSTS: Multi-site tumor sampling, RS: Routine sampling\n\nMoreover, MSTS detected a clear cell papillary renal cell carcinoma (CK7+/CD10-) component in one case that RS missed.\n\n\nDiscussion\n\nThe clinical importance of detecting ITH is becoming clearer as time passes and, as a consequence, its detection represents one of the most challenging tasks facing pathologists today5. However, pathologists have not yet adapted the old sampling protocols and seem unaware of a concerning paradox: the success of sophisticated devices and expensive platforms in detecting key tumor mutations depends on the right selection of tumor pieces, which is (very often) made by residents. The combination of a lack of solid evidence for the necessity to change current practice and a reluctance to incur new costs and an increased workload may be responsible for this attitude.\n\nWe present evidence that the MSTS protocol is much more effective than RS in detecting high-grade areas and other histological parameters that determine tumor aggressiveness and prognosis in CCRCC. Importantly, the MSTS protocol does not incur extra costs to pathology labs6,7. 
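The chi-squared comparison described in the Methods can also be sketched with only the Python standard library (an alternative to the chi2test.m script, assuming the same uncorrected Pearson statistic): the closed-form 2×2 statistic, with the one-degree-of-freedom p-value obtained through erfc.

```python
from math import erfc, sqrt

def chi2_2x2(a, b, c, d):
    """Pearson chi-squared test (no Yates correction) for the 2x2 table
    [[a, b], [c, d]], returning the statistic and the df=1 p-value."""
    n = a + b + c + d
    # Closed form of the Pearson statistic for a 2x2 contingency table
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # Chi-square survival function with 1 degree of freedom:
    # P(X > chi2) = erfc(sqrt(chi2 / 2))
    p = erfc(sqrt(chi2 / 2))
    return chi2, p

# High-grade (G3/4) detection: MSTS 31/38 vs. RS 21/38
chi2, p = chi2_2x2(31, 38 - 31, 21, 38 - 21)
print(round(p, 4))  # 0.0136, matching the reported p-value
```

The same call with the granular eosinophilic counts (32 vs. 22 of 38) reproduces the second reported comparison.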
A similar approach (but for a different purpose) was already reported in 1990 by Battifora and Mehta to optimize the screening of new histologic reagents9.\n\nFinally, a thorough histological analysis such as that performed by MSTS may also help pathologists detect hidden or unexpected tumor histologies, i.e., hybrid tumors, collision neoplasms, histologically complex tumors, and minor but crucial components in a huge tumor, giving definite clues for a complete diagnosis.\n\n\nData availability\n\nF1000Research: Dataset 1. Clinico-pathological data corresponding to the two RS and MSTS sampling methods in 38 CCRCC, 10.5256/f1000research.9419.d13288310",
"appendix": "Author contributions\n\n\n\nJIL exposed the problem; JIL, JMC and CHL designed the study; RG selected the cases and built the picture, RG and JIL reviewed the cases; RG, JMC, CHL and JIL wrote the final version of the manuscript and agreed with this submission.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nCHL and JMC acknowledge financial support from Ikerbasque: The Basque Foundation for Science. This work was partially funded by grant SAF2013-48812-R from Ministerio de Economía y Competitividad (Spain).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript\n\n\nReferences\n\nSiegel RL, Miller KD, Jemal A: Cancer statistics, 2016. CA Cancer J Clin. 2016; 66(1): 7–30. PubMed Abstract | Publisher Full Text\n\nGerlinger M, Rowan AJ, Horswell S, et al.: Intratumor heterogeneity and branched evolution revealed by multiregion sequencing. N Engl J Med. 2012; 366(10): 883–92. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLópez JI: Intratumor heterogeneity in clear cell renal cell carcinoma: a review for the practicing pathologist. APMIS. 2016; 124(3): 153–9. PubMed Abstract | Publisher Full Text\n\nHiley C, de Bruin EC, McGranahan N, et al.: Deciphering intratumor heterogeneity and temporal acquisition of driver events to refine precision medicine. Genome Biol. 2014; 15(8): 453. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVenkatesan S, Swanton C: Tumor evolutionary principles: How intratumor heterogeneity influences cancer treatment and outcome. Am Soc Clin Oncol Educ Book. 2016; 35: e141–9. PubMed Abstract | Publisher Full Text\n\nLopez JI, Cortes JM: A divide-and-conquer strategy in tumor sampling enhances detection of intratumor heterogeneity in routine pathology: A modeling approach in clear cell renal cell carcinoma [version 2; referees: 4 approved]. F1000Res. 2016; 5: 385. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nLópez JI, Cortés JM: A multi-site cutting device implements efficiently the divide-and-conquer strategy in tumor sampling [version 2; referees: 2 approved, 1 approved with reservations]. F1000Res. 2016; 5: 1587. Publisher Full Text\n\nTrpkov K, Grignon DJ, Bonsib SM, et al.: Handling and staging of renal cell carcinoma: The International Society of Urological Pathology Consensus (ISUP) conference recommendations. Am J Surg Pathol. 2013; 37(10): 1505–17. PubMed Abstract | Publisher Full Text\n\nBattifora H, Mehta P: The checkerboard tissue block. An improved multitissue control block. Lab Invest. 1990; 63(5): 722–4. PubMed Abstract\n\nGuarch R, Lopez JI, Lawrie CH, et al.: Dataset 1 in: A multi-site tumor sampling improves the histological detection of high-grade clear cell renal cell carcinomas. F1000Research. 2016. Data Source"
}
|
[
{
"id": "15825",
"date": "23 Aug 2016",
"name": "Jason L. Hornick",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nExcellent demonstration of the benefits of the proposed sampling approach. Obviously a major goal of such an approach would be to reveal molecular heterogeneity that might be missed by routine sampling. The authors might briefly comment on this in the discussion.",
"responses": []
},
{
"id": "15937",
"date": "30 Aug 2016",
"name": "Fabio F. Facchetti",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe study extend previous demonstrations on the usefulness of this original sampling procedure, that might be relevant also to detect variability of molecular landscape in tumors, whatever is their origin.\nJust an annotation to better understand the method used:\nWas the total surface of fragments contained in a MSTS significantly different from that of a RS cassette?\n\nHow was grading and other parameter assigned in the MSTS cassette, in the sense, based even on a single or part of fragment of those contained in the all cassette?\n\nSimilarly, was in the RS given based on part (and how much?) of the section?\n\nI do understand that the procedure used likely followed the published rules, but this might be mentioned in the paper.",
"responses": []
},
{
"id": "15724",
"date": "30 Aug 2016",
"name": "Miguel A. Piris",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors demonstrate that multi-site tumor sampling improves the sensitivity for the detection of molecular heterogeneity in routine paraffin-embeded clear cell renal cell carcinoma samples.\nMolecular heterogeneity is a relevant feature of the advanced cancer samples that determines the adaptation capacity of the tumoral cells and their capacity to survive to the therapy. Standards for recognizing or reporting tumor heterogeneity are still to be defined. In this sense, this work is an inspiring example. It would be great to know whether this heterogeneity has clinical prognostic or predictive implications.",
"responses": []
},
{
"id": "16008",
"date": "01 Sep 2016",
"name": "Kevin O. Leslie",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis well-designed study details a successful tissue sampling technique for addressing the inherent problem of morphologic diversity present in clear cell renal cell carcinoma. The title is appropriate, the design and methods are sound, and the conclusions are sensible.",
"responses": []
},
{
"id": "15895",
"date": "02 Sep 2016",
"name": "Giuseppe Zamboni",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a very interesting work in which the authors applied a simple yet very smart approach to address the very complex issue of intratumoral heterogeneity, for which clear cell renal cell carcinoma stands as a paradigm.\nSuch method could indeed be applied to other tumor entities as well; moreover it could form the basis for a practical approach to tackle the problem of the minimum required number of samples that must be collected in order to cover the most of the molecular landscape of tumors.\nLastly I would like to suggest that the authors might briefly explain in the introduction what is the “divide and conquer” strategy, as they did in their previous paper.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2020
|
https://f1000research.com/articles/5-1672/v1
|
12 Jul 16
|
{
"type": "Opinion Article",
"title": "Health as a Complex Adaptive System: a new dimension of patient care in internal medicine and general practice",
"authors": [
"Johannes Bircher",
"Eckhart G. Hahn",
"Johannes Bircher"
],
"abstract": "This paper explores the diagnostic and therapeutic potential of a new concept of health. Investigations into the nature of health have led to a new definition that explains health as a complex adaptive system (CAS) and is based on five components (a-e). Humans like all biological creatures must satisfactorily respond to (a) the demands of life. For this purpose they need (b) a biologically given potential (BGP) and (c) a personally acquired potential (PAP). These properties of individuals are embedded within (d) social and (e) environmental determinants of health. Between these five components of health there are 10 complex interactions that justify health to be viewed as a CAS. In each patient, the current state of his health as a CAS evolved from the past, will move forward to a new future, and has to be analyzed and treated as an autonomous whole. A diagnostic procedure is suggested as follows: together with the patient, the five components and 10 complex interactions are assessed. This may help the patient to better understand his situation and to recognize possible next steps that may be useful for him to evolve toward more health by himself. In this process mutual trust in the patient-physician interaction is critical. The described approach offers new possibilities to help patients to improve their health.",
"keywords": [
"Meikirch model",
"health",
"complex adaptive system",
"primary care",
"internal medicine",
"family medicine",
"diagnosis",
"therapy"
],
"content": "Introduction\n\nCitizens consult their physicians when they feel that something is not in order, e.g. when they experience pain, fatigue or any other disorder. Physicians then examine them and specifically look for pathological changes. After investigation they make a provisional diagnosis and explore their patients further or treat them accordingly. This type of thinking goes back to the pathologist Rudolf Virchow, who in 1858 used 20 lectures to describe “cellular pathology”, a characterization of different diseases1. Although the foundations of medicine have vastly changed since then, the general principles of medical practice have remained the same. Only over the past 20 years, complexity science has gradually entered into medicine2,3. This has become particularly important for the interpretation of health and disease as different states of a complex adaptive system (CAS). The Meikirch model is a new definition of health that exhibits all the features of a CAS4. For such systems the concepts based on Virchow’s pathology are no longer appropriate. An understanding of health and disease now requires appreciation of complexity science. It introduces a new dimension for diagnosing and treating patients. It includes the potential to improve health in a way that hitherto was practiced only exceptionally. The purpose of this paper is to summarize the relevant features of the Meikirch model and to reveal in detail how the model and complexity science may be applied for a better understanding of a patient’s disease and for its treatment.\n\n\nThe Meikirch model: definition of health and disease\n\nThe Meikirch model is based on five components (Box 1) and 10 complex interactions (Figure 1). This framework allows to define health and disease as a complex adaptive system (Box 2). Figure 1 depicts the five components from a to e. The interactions are exhibited as double-edged arrows from 1 – 10. 
A short explanation of the five components and their interactions is presented below. The complete description of the model with its scientific background is given in the original publications4,5.\n\n\n\na. Life's demands (LD)\n\nb. Biologically given potential (BGP)\n\nc. Personally acquired potential (PAP)\n\nd. Social determinants (SD)\n\ne. Environmental determinants (ED)\n\nThe Meikirch model consists of five components (a–e) and ten complex interactions (1–10).\n\n\n\n1. Health is a dynamic state of wellbeing emergent from conducive interactions between an individual's potentials, life's demands, and social and environmental determinants.\n\n2. Health results throughout the life course when an individual's potentials and social and environmental determinants suffice to respond satisfactorily to the demands of life. Life's demands can be physiological, psychosocial, or environmental, and vary across individuals and contexts, but in every case unsatisfactory responses lead to disease.\n\nEach human must fulfil his demands of life6. Physiological, psychosocial and environmental demands vary with time and circumstances. Physiological demands are related to the homeokinetic balance of nutrients, energy and water to maintain bodily functions including procreation; examples are work, pregnancy, childbirth and brain function. Psychosocial demands are the individual's exposure and response to social conditions to succeed in social integration and mental, personal and spiritual development. Expectations and roles related to work, family and society as a whole combine with personal aspirations, values and lifestyle in changing settings and contexts. This also includes peace with the fact that every human being must die. Environmental demands include availability and immediate or latent threats from living conditions (e.g. 
water, nutrients, climate, radioactivity, pollutants, carcinogens, workplace conditions).\n\nThe potential of an individual to meet his demands of life is partly biological, i.e. a gift of nature - the biologically given potential (BGP) - and partly acquired during life - the personally acquired potential (PAP). At the time of birth the BGP is based on the genetic equipment, epigenetic regulation and quality of the pregnancy. The BGP diminishes throughout life and is zero at the time of death. During lifetime the BGP may be threatened or damaged by socioeconomic disadvantages, diseases, injuries and defects. The PAP results from the entirety of physiological, mental, spiritual and social resources acquired during lifetime. It may continue to grow when a person cares for it. Social and environmental conditions also influence the growth of the PAP by providing or withholding determinants of health.\n\nSocial determinants of health strongly interact with the demands of life and the potentials of the individual4. Equity and equality, social concerns, working conditions, autonomy and social participation affect health and longevity7,8 and are major determinants of health. Likewise, environmental determinants of health are factors in living and working conditions affecting each person. They may sometimes be of global significance like natural resources, population growth and climate change9,10.\n\nBased on these five components and their interactions with each other the Meikirch model represents a new definition of health and disease as shown in Box 2. Possible outcomes for individual and public health care as a result of a hypothetical implementation of the Meikirch model have been discussed elsewhere and suggestions for clinical and health systems research have been made5.\n\n\nHealth as a Complex Adaptive System (CAS)\n\nA complex adaptive system is an entity with a more or less permeable boundary between it and its nearby environment (Figure 2)11. 
It can take up material and energy from the environment (input), release end products (output, e.g. entropy) and do work. Within the system there are many different parts called agents. In Figure 2 they are symbolized as circles. They continuously and autonomously interact with each other in a nonlinear manner, contributing to the product, the so-called emergence of the CAS. The term emergence indicates a new and often unpredictable quality which is more than the sum of the functions of each part. A CAS always functions as a whole. Attractors are sites to which the energy flow of the system may be drawn. A CAS is equipped with a learning and bonus arrangement for the interactions among its agents. This gives it the possibility to adapt to changes in the environment, i.e. to learn. If for some reason this adaptation functions poorly, the CAS suffers. If it does not function at all the CAS becomes chaotic and goes into a crisis or vanishes. Repeated critical disturbances may lead to the so-called butterfly effect2. Examples of medical conditions are ventricular fibrillation, epileptic seizures, tantrum, or psychotic states. Every CAS has evolved from a prior condition and autonomously progresses toward an unforeseeable future state. A CAS may be part of a larger CAS or be composed of many CASs. Such structures are called nested CASs.\n\nIn the Meikirch model the five components (a–e) including their subcomponents are regarded as agents (Figure 1). Interacting with each other they spontaneously arrange themselves in such a way that the evolving emergence, i.e. the state of health, is the result of the functioning of the system as a whole. In each case a specific working-arrangement is operational, but it is not necessarily the best solution for the system. Energy flow in humans has been called vitality, drive, or sense of purpose. This indicates that energy flow may also be regarded as immaterial, e.g. 
based on a desire to be loved, on pursuit of values, or on living for a spiritual purpose. Investigation of the material and immaterial double nature of human energy flow may help to better understand the health of a person.\n\nThe entire life is an evolutionary process. Biologically human life originates with the fertilization of an oocyte. This then passes through the stages of embryo and foetus to the maturity needed for birth. At some time during intrauterine life the personality of the individual is created. At the least, physicians and midwives say that in the new-born it is clearly recognizable. From then on the complete Meikirch model is fully operational during all phases of the life course of each person. Thus evolution from birth to death demonstrates clearly how many adaptive processes occur as part of the different complex interactions described by the model. While the two potentials evolve, the demands of life, the social, and the environmental determinants also vary. Within these limiting and supporting contexts individuals follow an autonomously chosen life course. Under such conditions it is not surprising that some adaptations may not be fully successful for some time or permanently. Challenging examples are malnutrition, infectious diseases, love deprivation, sexual maturation, pregnancy, genetic defects, professional stress, the raising of children, physical involution, aging, etc. Such changes may lead an individual as a system into a state of crisis. If it is minor, the two potentials still may manage the demands of life and the difficulties may resolve spontaneously after some time. Such situations are not considered to represent a disease. Yet, they may evolve into a chronic state that draws energy from the person and thereby may explain e.g. insomnia, chronic fatigue, or somatoform symptoms. If the defect gets more severe it may lead to a disease that requires more medical attention. 
In the Meikirch model the term disease implies that for any reason one or several adaptation processes are not successful enough to empower the two potentials to satisfy the demands of life.\n\n\nMethods\n\nConsequences of the Meikirch model and of the properties of CASs are explored with the purpose of better understanding the state of health of patients, particularly in internal medicine and general practice. The individual as a patient, according to the Meikirch model, is considered to be a nested CAS, composed of grouped CASs and being embedded in higher CASs. For this purpose, the significance of each of the five components and of each interaction within the Meikirch model must be visualized. In addition, possibilities to support favourable evolutions of the respective CAS and its meaning for the whole person (nested CAS) were studied. In this process, the deduction and induction cycles were repeated until coherent results were obtained.\n\n\nResults\n\nDisturbances in health and healing follow a pattern that can be described by four categories:\n\n1. Minor maladaptations lead either immediately or with varying delays to discomfort (illness) or to signs of a disease. Examples of the former are minor acute infections or tension headaches, and of the latter, overweight, type 2 diabetes, or arterial hypertension.\n\n2. A more relevant disturbance of the system leads it into a crisis, i.e. it becomes “chaotic”. Such states may e.g. be corrected spontaneously, or by behavioural changes, or by interacting with a physician or healer, by medications, or by operations. Thereafter there may not be an immediate complete recovery to health. The full adaptive evolution may take time and further interventions that are called convalescence or rehabilitation may be needed. These phenomena may lead to complete healing or to healing with defects.\n\n3. 
If a CAS is disturbed continuously for a prolonged time, the CAS apparently is not able to satisfactorily respond to the demands of life. This represents a chronic disease or invalidity. If the condition is progressive and serious, it may lead to death. Examples are rheumatic or degenerative diseases and different types of neoplasms.\n\n4. Considering a disturbed state of health as a maladapted CAS implies that patients cannot simply be healed by the actions of a competent physician. Healing is rather the result of a process of self-reorganization, enabling the two potentials to again satisfactorily fulfil the demands of life. The task of physicians and other health professionals therefore consists in being competent advisors and fellow human beings who assist the patient to realize the necessary evolution himself.\n\nInitially it is appropriate to examine a patient with an ordinary medical history and physical examination to which all indicated laboratory tests and imaging procedures are added. When this does not lead to a satisfactory and clear result, it may be useful to perform an analysis of the patient’s health as a CAS. In this case the five components and the ten complex interactions of the Meikirch model are assessed by an extended history as exemplified in Box 3. A thorough analysis will give the patient a new way to look at his health and how he has led his life. He will discover aspects he did not think about before, and this may be of therapeutic value. At the same time the physician may start to interpret the patient’s history and findings in a new way. He may discover further possibilities for helping the patient to autonomously evolve to a new state which hopefully comes closer to health.\n\n\n\nIn addition to the five components and the ten complex interactions, the energy flow and other patterns of the model also need to be investigated. 
The questions enumerated are just examples that have to be adapted and complemented further as needed for each patient’s specific problems.\n\nQuestions related to the components of the model (a–e):\n\na) Which specifically are the demands of life to which the patient has to respond?\n\nb) How does the patient appreciate the evolution of his physical health?\n\nc) How does the patient feel about himself? Can he manage himself? Does he invest in his future?\n\nd) How is the patient integrated into family, household, friends, society and government?\n\ne) In which type of natural environment is the patient living?\n\nQuestions about the interactions (1–10):\n\n1. How does the physical body of the patient (past and current) interact with his demands of life?\n\n2. How does the patient deal with his physical, psychological and spiritual demands of life?\n\n3. How does the patient interact with himself, especially with his body? Does he invest in it?\n\n4. How does the patient interact with family, household, friends, and government etc.?\n\n5. How does the physical body of the patient interact with the society? (past, present, future)\n\n6. How does the society influence the demands of life?\n\n7. What is the attitude of the patient toward his natural environment?\n\n8. Which are the past, present and future interactions of the natural environment with the patient’s physical body?\n\n9. How does the natural environment modify his demands of life?\n\n10. How does the society interact with his natural environment?\n\nQuestions about vitality, motivation and purpose in life:\n\nWhat is the source of the patient’s vitality? Is it spontaneous or rather focused on objectives or purposes? Which occasions induce which type of vigour? What is his purpose in life?\n\nHow is the patient’s physical, intellectual, and emotional vitality? How much is hedonistic and how much eudaimonic?\n\nWhat does the patient do with his vitality? 
Is it used mostly in family, profession, or hobbies?\n\nHow is the energy flow between the patient and his physician?\n\nQuestions about temporal patterns:\n\nWhen did the patient feel completely healthy the last time? When and how did he lose his health or wellbeing?\n\nWhat were the manifestations of the crisis?\n\nHow was the time course of the disease up to now? Which factors induced aggravation and which improvements? Which changes within or outside the patient induced which type of changes?\n\nWhat is the explanation of the patient for his current state of health and for his failure to improve it? What does the patient need in order to get over the present crisis?\n\nWhat are the future plans of the patient? How much sense of purpose do they give him?\n\nObviously for all medically diagnosed conditions treatments are to be prescribed as indicated. Yet, in medicine, indications generally leave much room for judgements. Therefore the findings collected by assessing all components and interactions of the Meikirch model must be considered and integrated as much as possible. A CAS cannot be manipulated to health. It must be assisted to reorganize itself autonomously to a new state, in order to better fulfil the demands of life, hence better health and well-being. Therefore the role of the physician is to accompany the patient during the process he goes through. Some advice, assistance, or therapeutic intervention may be helpful, but only the patient is in a position to create his new future state for himself. By analysing his condition as a CAS together with his physician including all components and all interactions of the model he presumably receives many new ideas that he can use to emerge to a healthier state in the future. For example, he may want to make up his mind whether or not he will accept all the conditions that have determined his life in the past. In this respect, a discussion with his physician of alternatives with their consequences may be useful. 
For many patients it might be constructive to deal with the energy flow in their system, e.g. to speak about the purpose of their lives.\n\nThe process of reorientation based on the Meikirch model will take time. During this period it may help the patient if he finds in his physician a trustworthy human being with whom he can discuss all sorts of alternatives. Ultimately though, patients have to create their own future. It will encourage them if they feel understood, trusted and accompanied by an experienced person with a sincere interest in their wellbeing.\n\nIn internal medicine and general practice there are many patients who come for consultations because they feel ill. Yet, on examination no clear pathology is found. So far such complaints are explained as functional and often are dismissed by physicians as unimportant. Patients then receive drugs that may be symptomatically beneficial or placebos, but more often than not harmful or nocebos. Instead of acting with benign neglect, the Meikirch model offers a true and positive alternative approach. In many cases it will help the patient to understand his problems, to readjust his potentials and to advance his readaptation to the demands of life. Thereby patients may again come closer to a state of health and wellbeing.\n\n\nDiscussion\n\nAt the present time the Meikirch model is a hypothesis grounded on a theoretical framework. Yet, until now much of health care has not been concerned with an understanding of the nature of health; it used instead an intuitive notion of wellbeing which did not lead to new insights. In contrast, a rational understanding of health - as given by the Meikirch model - offers innovative opportunities. Today this model is better founded on scientific evidence than other definitions of health. Its ultimate validity, however, will be documented only by using it in practice. This must be done with due consideration to the special features of the model. 
Much further research is urgently needed.\n\nFor the past 150 years medicine has been working with methods derived from Newtonian natural science and obviously has achieved major advances. They are based, however, predominantly on materialism and neglect the social and spiritual features of human nature. In addition, until recently medicine has not considered systems theory. It appears that these two aspects offer new opportunities for health care to become even more effective. Systems thinking implies that science based on Newton must be complemented by complexity science. Particularly for the purpose of health care a phenomenological, narrative, evolutionary holism must be added to analytical reductionism11. Poorly functioning parts are not simply corrected by appropriate drugs or surgical operations. Instead considerations of the evolution of the patient’s health to the present state, earlier successes in self-management and failures in the handling of his present crises can be evaluated. Antonovsky’s sense of coherence and meaningfulness also may be very helpful12. Necessary changes a patient has to realize must not come top-down from the physician, but rather bottom-up, originating in the patient himself, e.g. by new insights. For this purpose mutually trusting patient-physician interactions are critical for a successful future: the physician must believe in the patient’s abilities to evolve to a new state and must accompany and support him with loving wisdom in this endeavour.\n\nWhen speaking with older and experienced general practitioners, and when reading about how they managed their difficult patients, it becomes evident that they knew their patients from the past quite well. In many difficult situations they often had to accompany rather than treat them. Such patients remained very loyal because they understood what their doctor had contributed to their health. 
At the same time physicians realized that they had nothing more to offer than their personal support as a professional human being. The Meikirch model now offers a rational approach to such difficult cases and it is hoped that it will give new opportunities for patients to move toward better health. At first glance the described systems approach to patient care is similar to what Michael Balint intended with his groups13. He was a psychiatrist and pursued the purpose of training general practitioners in psychotherapy. In contrast, the systems theory focuses on a new look at a patient’s possible unresolved evolutionary steps, analyses the biologically given and personally acquired potentials and offers him an opportunity to progress further in his personal biography. More research is needed to validate the promises and limitations of this approach.\n\nThe Meikirch model distinguishes two types of very different potentials with which the demands of life must be met, the biologically given potential (BGP) and the personally acquired potential (PAP). The latter is the resource that continuously pilots the adjustment to new life situations. It is the locus of executive functions14. For this purpose it interacts with all components of the system. The PAP is the seat of memory, visions, fantasy, reasoning, attentional control and inhibitory control, and problem solving. Its sustained evolution toward more and more wisdom is critical for the maintenance of health. The PAP can learn to compensate in part for losses of the BGP. This leads to an interesting aspect of the relationship between the two potentials. It may be compared to rider and horse. If the rider wants his horse to serve him well, he has to take good care of his horse. It appears that the neglect of the PAP in modern medicine is well perceived by patients. Therefore they turn to complementary or alternative medicine. 
In fact, much of the success of homeopathy and other methods might be explained by the physician-patient interaction with its effects on the complex adaptive system that expresses the patient’s health. This mechanism may also be relevant for much of the success of other complementary or alternative treatments. It is our opinion, however, that it will be better to work with the CAS in a planned and scientifically justifiable manner based on the Meikirch model than to apply unproven methods. The model would also serve as an excellent framework for a proper practice of evidence-based medicine as defined by David L. Sackett: “Evidence-based medicine (EBM) requires the integration of the best research evidence with our clinical expertise and our patient’s unique values and circumstances15.” This applies also to the newly evolving holistic clinical approach “Integrative Medicine and Health” that “reaffirms the importance of the relationship between practitioner and patient, focuses on the whole person, is informed by evidence, and makes use of all appropriate therapeutic and lifestyle approaches, healthcare professionals and disciplines to achieve optimal health”16.\n\nWhen looking at health as a lifelong and complex evolutionary process, it is not surprising that crises do occur frequently. Throughout human life there are several major and many minor evolutionary steps to be taken. End of breastfeeding, beginning of school, puberty, professional formation and advancement, partnership, family, menopause, and involution of old age are some of the more demanding processes. Today they must be overcome in a society that offers insufficient respect for the personality of each individual. Lack of a motivating purpose in life and insufficient social support have become almost normal. Economic exploitation, power plays, isolation, social neglect and even wars weigh heavily on the demands of life. 
A culture that is really concerned with the health and wellbeing of its individuals needs to strongly support lifelong human development by investing in life-affirming compassion and truth5. The Meikirch model gives a framework for how this could be achieved.",
"appendix": "Author contributions\n\n\n\nBoth authors contributed to drafting the work, were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nVirchow R: Die Cellularpathologie in ihrer Begründung auf physiologische und pathologische Gewebelehre. Berlin: Hirschwald, 1858. Reference Source\n\nLorenz E: The Essence of Chaos. Edward Lorenz: Seattle, 1993; Appendix 1: S181–184. Reference Source\n\nSturmberg JP, Martin CM, Katerndahl DA: Systems and complexity thinking in the general practice literature: An integrative, historical narrative review. Ann Fam Med. 2014; 12(1): 66–74. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBircher J, Kuruvilla S: Defining health by addressing individual, social, and environmental determinants: new opportunities for health care and public health. J Public Health Policy. 2014; 35(3): 363–86. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBircher J, Hahn EG: Understanding the nature of health: New perspectives for medicine and public health. Improved wellbeing at lower costs [version 1; referees: 2 approved]. F1000Res. 2016; 5: 167. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMaklem PT, Seely A: Towards a definition of life. Perspect Biol Med. 2010; 53(3): 330–40. PubMed Abstract | Publisher Full Text\n\nWilkinson R, Pickett K: The Spirit Level: Why Equality is Better for Everyone. London: Penguin Books, 2009. Reference Source\n\nMarmot M, Wilkinson RG: Social Determinants of Health. 2nd edition, Oxford University Press, 2005. Reference Source\n\nBrundtland GH: Report of the World Commission on Environment and Development: Our Common Future. 1987. Reference Source\n\nWatts N, Adger WN, Agnolucci P, et al.: Health and climate change: policy responses to protect public health. 
Lancet. 2015; 386(10006): 1861–914. PubMed Abstract | Publisher Full Text\n\nLansing JS: Complex Adaptive Systems. Annu Rev Anthropol. 2003; 32: 183–204. Publisher Full Text\n\nAntonovsky A: Unraveling the mystery of health - How people manage stress and stay well. Jossey-Bass Publishers, San Francisco, 1987. Reference Source\n\nBalint M: Training general practitioners in psychotherapy. Br Med J. 1954; 1(4854): 115–20. PubMed Abstract | Free Full Text\n\nDiamond A: Executive functions. Annu Rev Psychol. 2013; 64: 135–68. PubMed Abstract | Publisher Full Text | Free Full Text\n\nStraus SE, Richardson WS, Glasziou P, et al.: Evidence-Based Medicine: How to Practice and Teach it. 4th edition, Churchill Livingstone-Elsevier, Edinburgh, 2010. Reference Source\n\nAcademic Consortium for Integrative Medicine & Health (The Consortium): Advancing integrative medicine and health through academic institutions and health systems. Reference Source"
}
|
[
{
"id": "15187",
"date": "25 Jul 2016",
"name": "Joachim P Sturmberg",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis paper is an extension of the authors’ previous paper in this journal and stimulates long overdue debate. My previous comments 1 principally also apply to this paper.\n\nTwo additional points:\n\nThe paper’s title “Health as a Complex Adaptive System: a new dimension of patient care in internal medicine and general practice” only loosely reflects the content of the paper, I would think that a title similar to “Applying a complex adaptive system’s understanding of health to clinical care” would be stronger and more appropriate. Offering clinicians alternative ways of practicing is certainly most welcome.\n\nThe paper - implicitly - wrestles to come to terms with the notions of health in relation to the “non-health” terms of illness and disease 2 . Even definable pathology is not objective as many studies have shown significant disagreement between experienced clinicians looking at the same “example of disease”. Highlighting the definitional problems and distinctions is especially important for primary care physicians as many people present with unspecific symptoms rather than a “specific diagnosis” 3. Diagnoses are socially constructed 4 and have become a “tyranny for clinicians” 5 . From a complexity perspective thinking about patients’ complaints as “conditions” or “states of being” clearly fosters more creative approaches to patient care and the emergence of more satisfactory outcomes.",
"responses": []
},
{
"id": "15463",
"date": "05 Aug 2016",
"name": "Dianne Delva",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is an article describing the confluence of two theories, the model of health as defined by the authors in the Meikirch model which appears to be a model developed and published by the authors, and complex adaptive systems. The authors propose this as a method to better help patients adapt and aim towards improved health.\n\nStrengths:\n\nThe authors have developed a theory and are proposing this explicit theory can better support patient care. Primary care is known for its complexity and uncertainty. Thus the system is a good framework for thinking about a holistic approach to patient care and to the variability of adaptation based on the components. The categories of illness and disease would be helpful for learners to understand that not all illness has a desired resolution. The comments on complex adaptive systems as a way of understanding health in individuals and society is useful in reminding us that linear models often ignore the unintended consequences of change.\n\nSuggestions:\nAs a family physician, I found the paper both interesting and difficult to follow. In general the language is clear with minor errors. I am not an expert on complex adaptive systems but agree with the comment regarding the use of some terms as detracting from the premise of the article. My main suggestion would be to expand the methods section. It is not clear to me who was involved in the process, how this was carried out. 
I wonder if this would be better as a theoretical proposal rather than a scientific study. The examples chosen seem to be the opinions of the authors and have not been tested. Some are quite vague and would require fuller explanation.\n\nI also agree that comparing this theory to others might be useful. McWhinney is referenced; as a leader in Family Medicine in Canada, his four principles of family medicine, together with the model we use to teach residents patient-centered care, help us to provide care in family medicine in a way that is not likely to be enhanced by the proposed model. The four principles are:\n\nThe family physician is a skilled clinician. Family medicine is a community-based discipline. The family physician is a resource to a defined practice population. The patient-physician relationship is central to the role of the family physician.\nIt may be that the authors are specialists and thus are less familiar with general/family practice. The idea that “Physicians then examine them and specifically look for pathological changes.” is not the premise for family medicine in Canada. The four principles address the environment and social determinants of health in situating the patient in the community and family in which they live. The idea of accompanying the patient resides in the value of continuity of care and the patient-physician relationship. Knowing the patient’s values and past helps to tailor the approach to support the patient in the direction they wish to go. The skilled clinician is a critical thinker who can deal with uncertainty and, as a resource to the practice population, advocates for the system to better serve the population.\n\nIn Canada, we teach a model of interviewing that includes the traditional history focused on disease interwoven with the IFFE model. These include the patient’s ideas of what is wrong with them, their fears, how their function is affected and their expectations from the physician/system. 
These questions are essential to the patient-centered approach and situate the consultation clearly with the patient as the CAS.\n\nIn particular, I find the suggestions for taking a history based on the model cumbersome, and clearly the language would be challenging for patients. It is also not clear to me how these questions are related to the CAS.\n\nThe treatment approach mirrors the patient-centered approach taken in family medicine. We know that patients often do not fill prescriptions, and asking patients what might work for them, or what they are comfortable with, helps one to support a treatment approach that is right for the particular patient. Being aware that a patient does not have the resources to access a particular treatment is essential to a collaborative effort for care.\nSome of the examples either require more explanation or less linear thinking. The idea that the in utero period does not include the environment overlooks the social determinants of health of the mother and their effect on the in utero environment.\n\nI have been involved in a number of Balint groups and recall reading Michael Balint’s book. The statement “He was psychiatrist and pursued the purpose to train general practitioners in psychotherapy.” does not reflect my experience. My understanding was that he focused on difficult clinician-patient relationships to help general practitioners interact with and “accompany” our patients more effectively. The groups were meant to support physicians in their own context. I did not see this as training in psychotherapy.\n\nMinor suggestions\n“Examples of medical conditions are ventricular fibrillation, epileptic seizures, tantrum, or psychotic states.” This statement needs further explanation. 
It does not follow logically from the discussion if we think of health in a holistic manner.\n\nThere may be cultural differences in writing, and I appreciate that English may not be the first language of the authors; however, I find it a poor reflection to refer to all patients and physicians as male.",
"responses": [
{
"c_id": "2195",
"date": "19 Sep 2016",
"name": "Eckhart G. Hahn",
"role": "Author Response",
"response": "Response to the comments by Dr. Dianne Delva\n\nWe as authors appreciate the care Dr. Delva has devoted to our paper. It has helped us significantly to improve the manuscript and make it readable for family physicians. Also, we are happy that she recognizes the overall strengths of our contribution. We understand that she appreciated the value of thinking in terms of health and disease as complex adaptive systems (CAS) to embrace uncertainty and a holistic approach to patient care. In order to respond to her concerns with some aspects of our manuscript we would offer the following point-by-point response. We will also indicate where we have made changes and corrections to the manuscript. From Dr. Delva’s critical comments as a family physician we have come to the conclusion that the original title does not completely reflect the content of the paper and may be misinterpreted. We do not intend to challenge any of the established models as applied to family medicine. We rather intended to explore the possible results of applying the principles of complex adaptive systems thinking in primary patient care, possibly extending the present practice. Therefore, to avoid any misunderstanding, we have changed the title of the paper to “Applying a complex adaptive system’s understanding of health to primary care”. This was also suggested in the review of Dr. Sturmberg (accessible under http://f1000research.com/articles/5-1672/v1). Interestingly, Dr. Delva refers to Dr. I. R. McWhinney, who had developed four principles for family medicine. He certainly is an important figure for the principles of family medicine. These principles refer to the physician as a skilled clinician, to family medicine as a community-based discipline, to family physicians as a resource to a defined population, and view the patient-physician interaction as central. Interestingly, these four postulates are also part of the Meikirch model. 
The proposals of the so-called Wonca tree are also included in the model. The Meikirch Model is wider than the specific concepts for family medicine and therefore might in the future serve as an extended basis on which the others stand: this is what we suggest exploring. We agree with Dr. Delva that the physician must value the continuity of care and be competent to deal with uncertainty. We, as proposed in the Meikirch model, have a different view of how the patient should be supported, derived from the thinking of health and disease as a complex adaptive system. As a CAS, the patient’s situation evolves independently and autonomously, often against his or her will. The support, in our opinion, has the purpose of helping the patient move to a new state that results in a more satisfactory emergence. Often the route to this new state is not the “direction the patient wants to go”, but the direction to which the patient has to submit. That is why the patient needs the physician’s support. On page 4, an effort has been made to further clarify what a CAS means and what it may contribute to the primary care of patients. In figure 2 the term “attractor” was removed because it is not essential to understand the principle of a CAS. On pages 5 and 6, the method section has been expanded as suggested by Dr. Delva. Both authors have contributed to the development and description of hypothetical consequences of the Meikirch model to stimulate future research. One of the authors (EGH) is trained in and practiced general internal medicine (and gastroenterology) and is familiar with Robert Nikolaus Braun’s conceptual frame of family medicine, which is prevalent in German-speaking countries. In a participatory and integrative approach to the patient, the inclusion of the patient’s ideas, fears, functions and expectations (IFFE model) is indeed essential for any primary encounter between patient and physician. It is also part of the Meikirch model, as can be seen in box 3 on page 6. Dr. 
Delva finds it cumbersome to take a history based on the Meikirch model and states that it is not clear to her how the questions in box 3 are related to the CAS. Obviously, this table contains examples, is not exclusive and must be adjusted to the patient’s specific problems. The purpose of box 3 is to give the physician a tool to make sure that the history taken covers the whole Meikirch model, i.e. is complete in this sense. The questions suggested in box 3, derived from the Meikirch model with its complex interactions, will also be the basis for describing the patient as a CAS. The Meikirch model is in itself based on complex adaptive systems thinking. Dr. Delva is right in suggesting the use of plain language in real-life communication with patients. In order to clarify the issue of the situation in utero, we believe that the direct environment of a fetus is the uterus of the mother. The environmental and social determinants of the mother nonetheless have an indirect effect on the child’s development in utero, as has been pointed out in a previous description of the Meikirch model. Dr. Delva refers to her understanding of Balint groups. We have taken the information given in our paper directly from an original publication by Michael Balint (1896–1970) in the Br Med J in 1954 (see reference 13). Balint groups may have developed in a direction not intended by Balint himself, and we agree that the term psychotherapy is not well defined and can be understood in many ways in Balint’s original paper. The case reports included in this paper describe complex psychodynamic situations. We have changed our text on page 9 to stay as close to Balint’s original paper as possible. The butterfly effect used as a metaphor on page 4 is now described in more detail to assure a better understanding of our mentioning of ventricular fibrillation, epileptic seizures, tantrum or psychotic states. 
These conditions, because of their dramatic appearance, are exclusively meant as an illustration of the butterfly effect. Wherever the gender-neutral terms “patient” or “physician” were used in our text, we have now avoided using them in a gender-specific context by either omitting “he” or “she” or using the plural, including in the abstract.\n\nReferences:\nFink W, Kamenski G. Wien Med Wochenschr. 2009; 159: 173. doi:10.1007/s10354-009-0666-8\nBircher J, Hahn EG: Understanding the nature of health: New perspectives for medicine and public health. Improved wellbeing at lower costs [version 1; referees: 2 approved]. F1000Res. 2016;5:167. doi:10.12688/f1000research.7849.1. PMID: 27134730; PMCID: PMC4837984.\n\nSee page 4 „personally acquired potential“."
}
]
}
] | 1
|
https://f1000research.com/articles/5-1672
|
https://f1000research.com/articles/5-1716/v1
|
15 Jul 16
|
{
"type": "Method Article",
"title": "Development and interval testing of a naturalistic driving methodology to evaluate driving behavior in clinical research",
"authors": [
"Ganesh M. Babulal",
"Aaron Addison",
"Nupur Ghoshal",
"Sarah H. Stout",
"Elizabeth K. Vernon",
"Mark Sellan",
"Catherine M. Roe",
"Aaron Addison",
"Nupur Ghoshal",
"Sarah H. Stout",
"Elizabeth K. Vernon",
"Mark Sellan",
"Catherine M. Roe"
],
"abstract": "Background: The number of older adults in the United States will double by 2056. Additionally, the number of licensed drivers will increase along with extended driving-life expectancy. Motor vehicle crashes are a leading cause of injury and death in older adults. Alzheimer’s disease (AD) also negatively impacts driving ability and increases crash risk. Conventional methods to evaluate driving ability are limited in predicting decline among older adults. Innovations in GPS hardware and software can monitor driving behavior in the actual environments people drive in. Commercial off-the-shelf (COTS) devices are affordable, easy to install and capture large volumes of data in real-time. However, adapting these methodologies for research can be challenging. This study sought to adapt a COTS device and determine an interval that produced accurate data on the actual route driven for use in future studies involving older adults with and without AD. Methods: Three subjects drove a single course in different vehicles at different intervals (30, 60 and 120 seconds), at different times of day, morning (9:00-11:59AM), afternoon (2:00-5:00PM) and night (7:00-10pm). The nine datasets were examined to determine the optimal collection interval. Results: Compared to the 120-second and 60-second intervals, the 30-second interval was optimal in capturing the actual route driven along with the lowest number of incorrect paths and affordability weighing considerations for data storage and curation. Discussion: Use of COTS devices offers minimal installation efforts, unobtrusive monitoring and discreet data extraction. However, these devices require strict protocols and controlled testing for adoption into research paradigms. After reliability and validity testing, these devices may provide valuable insight into daily driving behaviors and intraindividual change over time for populations of older adults with and without AD. 
Data can be aggregated over time to look at changes or adverse events and ascertain if decline in performance is occurring.",
"keywords": [
"naturalistic driving",
"interval testing",
"geographic information system",
"global positioning systems",
"in-vehicle technology"
],
"content": "Background\n\nMotor vehicle crashes (MVC) are a leading cause of injury among older adults (586 daily) in the United States1 and MVC deaths have steadily climbed over the past decade, along with an increase in crash risk with each year2. Coupled with the growth of the aging population and the increasing prevalence of dementias like Alzheimer disease (AD), being able to predict when driving performance will decline may prevent crashes and deaths among older adult drivers and others who share the roadway3–5.\n\nTo this end, our research program seeks to better characterize the driving behaviors of older adults and predict the onset of driving difficulties so that we can implement appropriate interventions to maintain safety and prolong driving life6. We are particularly interested in the association between preclinical AD measured using molecular biomarkers such as levels of Aβ42 and tau in the cerebrospinal fluid7, as well as imaging of amyloid8 and tau9 lesions in vivo.\n\nRoad tests and driving simulators are the most common and dominant measures used to assess driving performance and determine road safety10. Both methods have proven reliable and valid in evaluating poor driving performance and estimating crash risks for older drivers11,12. However, driving is an overlearned task and controlled conditions like the road test and simulator may not reflect driving as it occurs on a daily basis or expose errors made by experienced or cognitively-normal drivers outside of these controlled conditions13. 
Other limitations of both methods include rater subjectivity, anxiety (poorer performance), Hawthorne effect, dedicated single site measures, simulator sickness and high equipment cost, maintenance, and programming13–16.\n\nDue to these limitations, we sought to find an objective, cost-effective method that would allow us, in future research, to assess driving performance longitudinally on a daily basis among hundreds of older adults in the actual environments in which they drive, something that has been unavailable until now. This manuscript describes the first step in our work to adapt a commercial global positioning data acquisition system (GPDAS) and develop a methodology to evaluate driving performance. This technology is capable of collecting data at a constant rate over any determined time. However, due to the cost of data storage and greater programming time with larger volumes of data, we sought to determine the “optimal” time interval for accurate data collection using GPDAS devices.\n\n\nMethods\n\nThe GPDAS device (G2 Tracking Device™, Azuga, Inc.) is compact (length = 1.7”, width = 1.8”, height = 1”, weight = 1.1 ounce), plugs into the on-board diagnostic systems port (OBDII) and uses the vehicle’s own battery to supply the 12 volts required to function. Only vehicles manufactured in 1996 or later are compatible with the device. The device’s wireless capabilities include use of a third-generation mobile phone network (3G), jamming detection, Bluetooth, internal antenna and Firmware-Over-The-Air update for configuration of device firmware. Its global positioning system (GPS) capability includes a 56-channel receiver with a 4-Hertz acquisition rate, accuracy of 2.5 meters circular error probable (CEP) and integrated anti-jamming capability. Finally, it has a tri-axial (X, Y, Z) accelerometer with 8–13 bits of resolution on each axis. 
The accelerometer can detect and report changes in acceleration over ±16 g, and the data can be reported at a rate ranging from 1 to 24 Hertz.\n\nThe GPDAS device sends data at intervals of 30, 60, or 120 seconds, recording the exact location, speed, and date/time at each interval. The optimal time interval would accurately represent the route traveled using the minimum number of data points possible in order to minimize cost and extraneous data collection. The data collected did not contain any personal or identifying information about the drivers. Ethical permission to conduct this study was sought and received via expedited review from the Washington University Human Research Protection Office, which determined that this is a non-human subjects study (201412024). Informed consent was obtained from all drivers who participated in this study.\n\nAfter being plugged into the OBDII port, the GPDAS device extracts the signal from the vehicle speed sensor (VSS), which measures the transaxle speed, also known as the wheel speed. The VSS is the reference speed that the majority of a vehicle’s systems rely upon to achieve their specific functionality. For example, the Engine Control Module uses the VSS signal to modify engine functions and initiate specific diagnostic routines, while the variable assist power steering system uses it to regulate power steering pressure for assistance at slow speeds. The speed displayed on the speedometer is generally 1–3 mph greater than the actual VSS signal. The VSS signal, which the GPDAS device uses, is the most accurate reflection of the vehicle speed. Installation takes less than one minute, and once plugged in, the device accesses available satellites for orientation and then begins simultaneously transmitting data to secured servers using available cell phone towers. These data can then be accessed online in real-time or stored in a database for retrieval at a later date. 
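As an illustration, the per-interval records described above can be loaded with a short Python sketch. The column names here (`timestamp`, `latitude`, `longitude`, `speed`) are assumptions for illustration, not the actual Azuga export schema:

```python
# Hypothetical parsing sketch for one daily GPDAS csv log.
# The column names ("timestamp", "latitude", "longitude", "speed") are
# assumptions for illustration, not the actual Azuga export schema.
import csv
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Fix:
    """One interval record: where the vehicle was and how fast it moved."""
    timestamp: datetime
    latitude: float
    longitude: float
    speed_mph: float

def read_fixes(path):
    """Load interval records from a csv log into a list of Fix objects."""
    with open(path, newline="") as f:
        return [
            Fix(
                timestamp=datetime.strptime(row["timestamp"], "%Y-%m-%d %H:%M:%S"),
                latitude=float(row["latitude"]),
                longitude=float(row["longitude"]),
                speed_mph=float(row["speed"]),
            )
            for row in csv.DictReader(f)
        ]
```

Typed records like this make downstream sorting by driver, interval, and time of day straightforward.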
If a vehicle is driven in areas where cellular signal is lost, data continue to be collected and are then re-transmitted when a stronger signal is established. When a vehicle is turned off, the device enters sleep mode but sends a signal every four hours to indicate the ignition is off but the device is still functioning. When a vehicle is turned on, the device immediately begins sending data at a specified time interval. A standard set of data is obtained during a trip, defined as the time between when the ignition is turned on and off. These variables include time and date stamp, drive time, idling time, miles, latitude and longitude, speeding over posted speed limit, hard braking, sudden acceleration and an alert if the device was unplugged and plugged back in. Since the device is powered by the vehicle’s battery, if the battery starts to drop below the required 12V, the device sends out a series of alerts indicating insufficient power and will stop transmitting if the power drops below 10V. Additionally, the device will detect problem codes that the vehicle’s computer may send out (e.g. a check engine light indicating the oxygen sensor requires replacement).\n\nA structured driving course of approximately seven miles (Figure 1) was designed to represent various real-world driving conditions with a comprehensive mix of stoplights, stop signs, right- and left-hand turns and merging into traffic. The route began at an office complex in an urban setting and continued several blocks east following a divided boulevard. Drivers then turned south to merge onto a freeway. The freeway section of the route provided driving conditions and associated data logging for highway speeds. Drivers then exited into a large park where the designated route was designed to simulate more rural driving conditions. 
The park section also allowed for more nuanced driving, such as roundabouts, where data interval logging could be analyzed for correctness against the real-world route, and to simulate driving events such as a U-turn or missed turn. Finally, the drivers exited the park and returned to the office complex starting point driving on surface streets with traffic to simulate additional urban conditions.\n\nA map of the route was provided to each member of the driving research team, as well as the turn-by-turn driving directions. Drivers did not navigate the route prior to data collection. All GPDAS data were logged into daily csv files with results uploaded to a secured server by Azuga. Automation scripts were used to validate the files, with copies stored on a secured server. A secured file transfer protocol was designed and automated to transfer the log files from Azuga’s server to our servers on a daily basis.\n\nThree healthy subjects drove a single course in three different vehicles. The drivers negotiated the course at three different time intervals (30 seconds, 60 seconds, 120 seconds), and at three different times of day, morning (9:00-11:59AM), afternoon (2:00-5:00PM) and night (7:00-10:00PM). In order to minimize bias associated with the order of driving combinations and day, the time intervals and time of day were randomized for all drivers. Depending on the time of day, data were collected over several days, including weekdays and weekends. Each driver yielded nine sets of data (i.e., all possible combinations of time interval and time of day). The device remained installed in the vehicle without removal until each driver completed the set of routes.\n\nData were logged into files (csv) stored in a secure Amazon S3 folder. Data were downloaded in bulk and sorted into folders based on the respective driver IDs. Secondary sorting was done by time interval, and the data were filtered for spurious points. 
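One simple way such spurious points can be screened (a sketch of an assumed approach, not the authors' actual processing scripts) is to drop any fix whose implied straight-line speed from the previously kept fix is physically implausible for road travel:

```python
# Sketch of a spurious-point screen (an assumed approach for illustration,
# not the authors' actual scripts): drop any GPS fix whose implied
# straight-line speed from the previously kept fix is implausible.
from math import asin, cos, radians, sin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two latitude/longitude points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 3958.8 * asin(sqrt(a))  # 3958.8 mi = mean Earth radius

def drop_spurious(points, max_mph=100.0):
    """points: [(t_seconds, lat, lon), ...] in time order; keep plausible fixes."""
    kept = [points[0]]
    for t, lat, lon in points[1:]:
        t0, lat0, lon0 = kept[-1]
        hours = (t - t0) / 3600.0
        if hours > 0 and haversine_miles(lat0, lon0, lat, lon) / hours <= max_mph:
            kept.append((t, lat, lon))
    return kept
```

A speed threshold tied to the 30-second reporting interval keeps ordinary highway driving while discarding GPS "teleports" caused by multipath or weak signal.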
All data were imported to ArcGIS Desktop 10.2 software (Environmental Systems Research Institute, Redlands, CA, USA) and plotted on a map using the latitude and longitude coordinates logged by the GPDAS device (Figure 2).\n\nEach dataset was queried for a specific time interval, such as 30 seconds. The resulting dataset was used as an input for the Network Analyst extension of ArcGIS. A base road network (edge network) was also loaded into the tool for the routing algorithm. Routing algorithms use an impedance to determine the “cost” of travel on the network; the impedance is often defined in terms of the time needed to traverse a given section of the network or the distance needed to travel the network segment. The network impedance was defined by time of travel on the base network. The routing algorithm processed all coordinate data from each driving circuit, creating a line representing the path traveled during data collection. These data were then visualized in ArcGIS as lines (Figure 3).\n\nEach of the time intervals was evaluated for best fit to the base road network. Best fit was determined by comparing the route generated by the routing algorithms to the actual route of the course. ArcGIS was used to conduct spatial comparisons between the routes driven and the real-world road course or “correct” route to determine “best” fit. The results of this analysis were used to determine the preferred data collection interval for the device.\n\n\nResults\n\nThe mapped routes were displayed and symbolized by time interval for the main visualization product. In addition to identification of the preferred time interval for data collection, the visualization process also revealed data artifacts and incorrect routing of the base network. These errors were explained primarily by the interval of data collection. For instance, if the data collection interval is too large, the driver may travel through several turns before the next valid data point is logged. 
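The effect of the collection interval can be illustrated with a toy downsampling sketch (assumed data, not study output): coarser reporting intervals leave fewer fixes to constrain the routing algorithm between turns.

```python
# Toy illustration (assumed data, not study output): downsampling a dense
# 1-Hz trace shows how larger reporting intervals leave fewer fixes to
# constrain the routing algorithm between turns.
def downsample(trace, interval_s):
    """Keep roughly one fix per reporting interval from [(t_seconds, lat, lon), ...]."""
    kept, next_t = [], 0.0
    for t, lat, lon in trace:
        if t >= next_t:
            kept.append((t, lat, lon))
            next_t = t + interval_s
    return kept

# A 4-minute drive sampled once per second:
trace = [(t, 38.64 + t * 1e-5, -90.30) for t in range(241)]
counts = {i: len(downsample(trace, i)) for i in (30, 60, 120)}
# counts -> {30: 9, 60: 5, 120: 3}: the 120-second interval keeps only
# 3 fixes for the whole drive, so intermediate turns fall between logged
# points and the route must be guessed by the routing algorithm.
```

With only a handful of fixes per circuit, the shortest-impedance path between consecutive points need not match the street actually driven.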
This absence of specific data points to guide the routing algorithm may lead to incorrect assumptions, resulting in an incorrect path of travel.\n\nAs Figure 4 illustrates for the two-minute interval, an incorrect line was generated (zigzag line) due to a lack of data about the actual route (red line). Other events, such as hard braking, can also add intermediate data points to aid in the process, but these points are unpredictable and cannot be relied upon for routing protocols since any stimulus from the external environment can impact driving behavior and trigger an event. The 30-second collection interval was determined by the software to have the strongest “goodness of fit” and lowest number of incorrect paths traveled compared to the one- and two-minute data collection intervals.\n\n\nDiscussion\n\nThis study investigated the optimal time interval for data collection using a GPDAS device to accurately capture a driven route while weighing the considerations of cost associated with data storage and post-processing efforts. The 30-second interval was determined to be the most accurate based on goodness of fit and was affordable for our research program. Technological innovations have led to faster processors and the ability to gather greater volumes of data. Yet, the challenges of analyzing big data include large statistical and computational costs, incidental homogeneity, noise, and an inherent requirement to develop newer, robust statistical models to deal with larger sample sizes17,18. Further, given a stricter funding climate, researchers working with clinical populations cannot afford the time and cost to collect, process and analyze continuous data using existing naturalistic research paradigms. Some studies use in-vehicle data recorders that require hours of installation and extensive modification of participants’ vehicles11. 
Others use in-vehicle cameras, which may modify driving behavior and also require hours of post-processing and extensive rater training16,19. Larger studies that collect hundreds of hours of data may require participants to regularly return to the study site and have the data from their vehicle downloaded20. Studies using smartphone applications require participants to charge phones, turn the phone on and off and to remember to bring it in the vehicle when driving, thereby elevating participant burden21.\n\nAs a whole, driving research and crash prevention research is shifting toward the use of naturalistic methodologies for evaluating driving performance22–25. Development and interval testing of a naturalistic driving methodology to evaluate driving behavior is required to measure real-world driving conditions and responses26 in a variety of clinical populations. The methodology presented here implements a non-obtrusive device installed in the OBDII port of a vehicle. The device stores locational data in the form of latitude and longitude at each time sampled, as well as driving behaviors that may occur at any time, such as hard braking, speeding, and vehicle on/off events. Since all data are tied to a spatial location, it is possible to understand the “place” where data have been collected.\n\nIt is important to note that single driving events, such as “what happened at this exact moment or day?”, should not be singled out. The inherent value of longitudinal data collection is to better understand changes over time for an older adult driver that may be otherwise hidden from observation. The true potential of this methodology is that data gathered could be linked to other databases to answer a number of questions. One can link weather and meteorological databases to understand the impact of the weather on driving patterns and Department of Transportation databases on road construction to explore how roadwork influences driving navigation. 
Driving behavior for clinical populations could also be evaluated in a pre-post design for patients who have had medication changes, surgeries, or a stroke, are undergoing chemotherapy or radiation, or have a diagnosis of seizures or a range of neurological conditions that ultimately impact driving for a brief or longer period of time. Naturalistic driving research has the potential to study and aid the management of driving behavior of older adults with chronic neurological diseases such as dementia. The long-term goal of our program is to model driving behavior and driving risk of older adults using this naturalistic driving data to identify driving decline over time and develop educational interventions to improve driving performance, decrease vehicle crash risk while driving and structure driving retirement for older adults with a higher risk for MVCs.\n\n\nData availability\n\nF1000Research: Dataset 1. Determining the optimal interval collection for a Global Positioning Data Acquisition System, 10.5256/f1000research.9150.d12887727\n\n\nConsent\n\nInformed consent was obtained from all drivers who participated in this study, which was reviewed by the Washington University Human Research Protection Office.",
"appendix": "Author contributions\n\n\n\nGMB, AA and CMR designed the study, collected and analyzed the data, and drafted the manuscript. NG, SHS, EKV and MS contributed to the conception and design of the study and revised it critically for intellectual content. All authors have read and approved the final version of the manuscript for publication and agree to be accountable for all aspects of the work.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nFunding for this study was provided by the National Institute on Aging [R01AG043434(Roe), R01AG43434-03S1(Roe), P50-AG05681(Morris), P01-AG03991(Morris), P01-AG026276(Morris)]; Fred Simmons and Olga Mohan, and the Charles and Joanne Knight Alzheimer’s Research Initiative of the Washington University Knight Alzheimer’s Disease Research Center.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nNational Highway Traffic Safety Administration EDRWG: Event Data Recorders-Summary of Findings by the NHTSA EDR Working Group. 2001. Reference Source\n\nNational Highway Traffic Safety Administration: Traffic Safety Facts 2012 data: older population. 2014. Reference Source\n\nAdler G, Rottunda SJ: Mandatory testing of drivers on the basis of age and degenerative diseases: stakeholder opinions. J Aging Soc Policy. 2010; 22(3): 304–19. PubMed Abstract | Publisher Full Text\n\nAlzheimer’s Association: 2015 Alzheimer's disease facts and figures. Alzheimers Dement. 2015; 11(3): 332–84. PubMed Abstract | Publisher Full Text\n\nCenters for Disease Control and Prevention: Web-based Injury Statistics Query and Reporting System (WISQARS)(2010). National Center for Injury Prevention and Control, CDC. 2011. Reference Source\n\nRoe CM, Barco PP, Head DM, et al.: Amyloid Imaging, Cerebrospinal Fluid Biomarkers Predict Driving Performance Among Cognitively Normal Individuals. Alzheimer Dis Assoc Disord. 2016. 
PubMed Abstract | Publisher Full Text\n\nBlennow K, Zetterberg H: The past and the future of Alzheimer's disease CSF biomarkers-a journey toward validated biochemical tests covering the whole spectrum of molecular events. Front Neurosci. 2015; 9: 345. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKlunk WE: Amyloid imaging as a biomarker for cerebral β-amyloidosis and risk prediction for Alzheimer dementia. Neurobiol Aging. 2011; 32(Suppl 1): S20–36. PubMed Abstract | Publisher Full Text | Free Full Text\n\nXia CF, Arteaga J, Chen G, et al.: [18F]T807, a novel tau positron emission tomography imaging agent for Alzheimer's disease. Alzheimers Dement. 2013; 9(6): 666–76. PubMed Abstract | Publisher Full Text\n\nEby DW, Silverstein NM, Molnar LJ, et al.: Driving behaviors in early stage dementia: a study using in-vehicle technology. Accid Anal Prev. 2012; 49: 330–337. PubMed Abstract | Publisher Full Text\n\nHunt LA, Murphy CF, Carr D, et al.: Reliability of the Washington University Road Test. A performance-based assessment for drivers with dementia of the Alzheimer type. Arch Neurol. 1997; 54(6): 707–712. PubMed Abstract | Publisher Full Text\n\nOdenheimer GL, Beaudet M, Jette AM, et al.: Performance-based driving evaluation of the elderly driver: safety, reliability, and validity. J Gerontol. 1994; 49(4): M153–M159. PubMed Abstract | Publisher Full Text\n\nRizzo M, Reinach S, McGehee D, et al.: Simulated car crashes and crash predictors in drivers with Alzheimer disease. Arch Neurol. 1997; 54(5): 545–551. PubMed Abstract | Publisher Full Text\n\nDavis JD, Papandonatos GD, Miller LA, et al.: Road test and naturalistic driving performance in healthy and cognitively impaired older adults: does environment matter? J Am Geriatr Soc. 2012; 60(11): 2056–2062. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDe Winter J, Van Leuween P, Happee P: Advantages and Disadvantages of Driving Simulators: A Discussion. Citeseer: 2012; 47–50. 
Reference Source\n\nOtt BR, Papandonatos GD, Davis JD, et al.: Naturalistic validation of an on-road driving test of older drivers. Hum Factors. 2012; 54(4): 663–674. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJacobs A: The pathologies of big data. Commun ACM. 2009; 52(8): 36–44. Publisher Full Text\n\nFan J, Han F, Liu H: Challenges of Big Data Analysis. Natl Sci Rev. 2014; 1(2): 293–314. PubMed Abstract | Publisher Full Text | Free Full Text\n\nStavrinos D, Ross L, Sisiopiku V: A Naturalistic Driving Study Across the Lifespan. 2014. Reference Source\n\nDingus TA, Hankey JM, Antin JF, et al.: Naturalistic Driving Study: Technical Coordination and Quality Control. 2015. Reference Source\n\nJohnson DA, Trivedi MM: Driving style recognition using a smartphone as a sensor platform. IEEE: 2011; 1609–1615. Reference Source\n\nDingus TA, Klauer S, Neale V, et al.: The 100-car naturalistic driving study, Phase II-results of the 100-car field experiment. 2006. Reference Source\n\nNeale VL, Dingus TA, Klauer SG, et al.: An overview of the 100-car naturalistic study and findings. National Highway Traffic Safety Administration, Paper. 2005. Reference Source\n\nShichrur R, Sarid A, Ratzon NZ: Determining the sampling time frame for in-vehicle data recorder measurement in assessing drivers. Transp Res Part C Emerg Technol. 2014; 42: 99–106. Publisher Full Text\n\nToledo T, Musicant O, Lotan T: In-vehicle data recorders for monitoring and feedback on drivers’ behavior. Transp Res Part C Emerg Technol. 2008; 16(3): 320–331. Publisher Full Text\n\nMolnar LJ, Charlton JL, Eby DW, et al.: Factors affecting self-regulatory driving practices among older adults. Traffic Inj Prev. 2014; 15(3): 262–272. PubMed Abstract | Publisher Full Text\n\nBabulal G, Addison A, Ghoshal N, et al.: Dataset 1 in: Development and interval testing of a naturalistic driving methodology to evaluate driving behavior. F1000Research. 2016. Data Source"
}
|
[
{
"id": "15332",
"date": "01 Aug 2016",
"name": "David K. Johnson",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThree subjects drove a single course in different vehicles at different intervals (30, 60 and 120 seconds), at different times of day, morning (9:00-11:59AM), afternoon (2:00-5:00PM) and night (7:00-10pm). The nine datasets were examined to determine the optimal collection interval. The 30-second interval was reported as optimal. Time of day was apparently uninformative.\n\nI think this paper could make a significant contribution to a burgeoning and important literature. How to measure driving in older adults is a complex issue. If there were valid processes and instrumentation for the meaningful and accurate measurement of safety-relevant driver outcomes in older adults, this contribution would be huge. However, I had a hard time following the results and conclusions drawn by the investigators.\n\nI found that most of this manuscript read like a proposal and it editing was seemingly incomplete. As it currently reads, there are significant revisions that need to be made to improve its intelligibility. For instance I had a very hard time finding any real or intelligible outcomes in the results section. The only results I could find were in the abstract (What does it mean to \"capture the actual route driven along with the lowest number of incorrect paths\") and then again buried in the discussion section properly. Is there really no intelligible data in the results section? 
Further, it is opaque to me why 30 seconds is preferred - what data supports the 30-second interval and the investigators’ reasoning behind this the acceptance of this interva? If it's GIS type data, why isn't reported and interpreted? There is an oblique reference to a GOF but where is the table and logic behind the decision to adopt one model over another? More detail and logic is needed to make the authors' writing, logic and conclusions more clear. At the end of the manuscript I felt that the abstract's conclusion that 30 seconds was an optimal time for data collection is unsupported substantively.",
"responses": [
{
"c_id": "2171",
"date": "15 Sep 2016",
"name": "Ganesh Babulal",
"role": "Author Response",
"response": "Dr. Johnson, Thank you for your review and comments. They were helpful in our revision. This brief method article sought to determine which interval was more accurate/optimal given the available choices using GIS data by adopting a commercially off the shelf GPS device. We designed a study to examine if the 30 seconds collection interval was more accurate than the 60 and 120 seconds. We found that 30 seconds was indeed more accurate across the three drivers, days and times. Based on your feedback, we relocated the main finding at the beginning of the results section instead of the end. The background needed to understand the results is detailed in the last two paragraphs of the methods section. This is GIS data and it is explained in that section that we used a combination of ArcGIS software, the Network Analyst extension tool and routing algorithms to compare the driven routes to the actual routes. Our coauthor who is an expert in GIS, AA, indicates that unfortunately there are no p-values or confidence intervals to report with this type of data. Based on the GIS data captured from the COTS device, the 30 seconds was more accurate with fewer errors compared to the 60 and 120 seconds interval. The dataset is available for review."
}
]
},
{
"id": "15049",
"date": "30 Aug 2016",
"name": "Monique M. Williams",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors describe the use of a commercial-off-the-shelf (COTS) device to ascertain which interval provided the most accurate data regarding the route driven. The objective is to determine the optimal interval that would be applied to future research studies involving individuals with and without Alzheimer’s disease.\nA total of three participants drove a single 7-mile driving course. The course was designed to reflect real world driving conditions. Participants drove in different cars, at different times of day (morning, afternoon, and night), and different intervals (120-second, 60-second, and 30-second). The three participants were healthy and cognitively normal.\n\nThe datasets were analyzed. The preferred data collection interval was determined based on the results of best fit. Best fit was operationalized as the route generated by the routing algorithm as compared with the actual route driven. The analyses revealed that the 30-second interval provided the optimal (best fit) characterization of the route.\n\nThe ability to use a COTS device to assess driving skills has considerable public health implications in the context of an aging nation. The study is a critical initial step in determining the utility of the COTS device for prospective research studies. With the inherent limitations of driving simulators and road testing, the use of the device as described provides a cost-effective option that more closely emulates real world driving experiences.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-1716
|
https://f1000research.com/articles/5-736/v1
|
25 Apr 16
|
{
"type": "Method Article",
"title": "An assay to measure poly(ADP ribose) glycohydrolase (PARG) activity in cells",
"authors": [
"Dominic I. James",
"Stephen Durant",
"Kay Eckersley",
"Emma Fairweather",
"Louise A. Griffiths",
"Nicola Hamilton",
"Paul Kelly",
"Mark O'Connor",
"Kerry Shea",
"Ian D. Waddell",
"Donald J. Ogilvie",
"Stephen Durant",
"Kay Eckersley",
"Emma Fairweather",
"Louise A. Griffiths",
"Nicola Hamilton",
"Paul Kelly",
"Mark O'Connor",
"Kerry Shea",
"Ian D. Waddell",
"Donald J. Ogilvie"
],
"abstract": "After a DNA damage signal multiple polymers of ADP ribose attached to poly(ADP) ribose (PAR) polymerases (PARPs) are broken down by the enzyme poly(ADP) ribose glycohydrolase (PARG). Inhibition of PARG leads to a failure of DNA repair and small molecule inhibition of PARG has been a goal for many years. To determine whether biochemical inhibitors of PARG are active in cells we have designed an immunofluorescence assay to detect nuclear PAR after DNA damage. This 384-well assay is suitable for medium throughput high-content screening and can detect cell-permeable inhibitors of PARG from nM to µM potency. In addition, the assay has been shown to work in murine cells and in a variety of human cancer cells. Furthermore, the assay is suitable for detecting the DNA damage response induced by treatment with temozolomide and methylmethane sulfonate (MMS). Lastly, the assay has been shown to be robust over a period of several years.",
"keywords": [
"PARG",
"PARP",
"olaparib",
"DNA damage response",
"Base excision repair",
"MMS",
"ADP ribosylation"
],
"content": "Introduction\n\nCells use a varied array of post-translational protein modifications to regulate signalling pathways. One of these is ADP ribosylation whereby single units or multiple, branched polymers of ADP are covalently attached to a target protein. For example, poly(ADP) ribosylation (PARylation) plays a particularly important role in base excision repair with poly(ADP) ribose (PAR) polymerase 1 (PARP1) detecting single strand breaks that occur during this pathway. PARP1, which binds to these single strand breaks undergoes auto-modification creating up to 200 PAR chains1, that subsequently recruit the rest of the repair machinery including XRCC1 and POLB to complete the repair. The ADP ribose chains on PARP are hydrolysed by the enzyme poly(ADP) glycohydrolase (PARG). The correct functioning of this pathway is key for repair to complete. Cancer cells rely on DNA repair more heavily than normal cells and inhibitors of these pathways have been in preclinical and clinical evaluation for a number of years2. The success of this strategy is exemplified by the inhibition of PARPs using olaparib that recently gained regulatory approval for use in ovarian cancers3. However, as auto-modified PARP1 is less able to bind DNA, inhibition of PARG has also been hypothesized as a suitable therapeutic target. This is even more germane as there are now 17 known members of the PARP (otherwise known as ADP ribosyl transferase diptheria-like; ARTD) family yet no known close homologues of PARG. PARG inhibition may therefore offer a more direct approach to derailing the DNA repair pathway without the problems of redundancy. Molecules that are claimed to inhibit PARG have existed for some time. Many of these are large tannin-like molecules such as gallotannin which have been shown to have a number of effects unrelated to PARG inhibition (e.g. anti-oxidant properties4). 
Other compounds, such as ADP-HPD and rhodamine-based PARG inhibitors (RBPIs), have shown good specificity for eukaryotic PARG but are either not cell permeable or have only been tested in biochemical assays5–7. Attempts to discover new synthetic PARG inhibitors have resulted in compounds that also inhibit PARP or have low potency8–10. We therefore carried out a high throughput screen (HTS) directed against human PARG and identified a small number of hits which were carried through to a computational and medicinal chemistry programme11. We were mindful of the need to develop assays to detect cell-permeable inhibitors, and the method development is contained herein.\n\n\nMaterials and methods\n\nUnless otherwise stated, all reagents were purchased from Sigma-Aldrich (Dorset, UK). Methylmethanesulfonate (MMS) was diluted in dimethyl sulfoxide (DMSO) to 250 mg/mL from the purchased stock. Temozolomide was dissolved in DMSO at 20 mg/mL. All cells were purchased from ATCC (LGC, Teddington, UK) unless otherwise stated, were routinely checked for mycoplasma and were regularly sent for authentication. HeLa cells were maintained in RPMI 1640 (Sigma R0883) + 1% Glutamax + 10% FBS. PARG KD cells were purchased from Tebu-bio (PARG Hela Silencix 01-00085, Peterborough, UK) and maintained in DMEM + 1% Glutamax + 10% FBS + 125 µg/mL hygromycin B (#10687010; ThermoFisher, Northumberland, UK). All cells were maintained at sub-confluence at 37°C in a humidified incubator containing 5% CO2 in the absence of antibiotics. Mouse embryonic fibroblasts were cultured in DMEM (# 10938-025; Invitrogen, Paisley, UK) + 10% FBS + 1% L-glutamine and H1048 cells were grown in RPMI (# 21875-034; Invitrogen) + 10% FBS. SW620 cells were cultured in DMEM (#D6546) + 10% FBS + 1% L-glutamine. 
Dose response curves were generated using Prism v5.2 (Graphpad Software Inc, La Jolla, USA).\n\nExponentially growing HeLa cells were trypsinized and resuspended in complete media before being filtered through a 40 µm cell strainer (#352340, BD Falcon, Oxford, UK). Cells were then counted using a Muse cell counter (Merck Millipore, Hertfordshire, UK) and seeded in 30 µL of media at 4×10^4 cells/mL in Greiner 384-well plates (#781091, Greiner Bio-One, Stonehouse, UK) and placed in a cell culture incubator. After 16–24 h the plates were centrifuged briefly at 164×g and the cells dosed with compound(s) or vehicle (DMSO) control using an Echo 550 (Labcyte, Dublin, Ireland). Initially an 8-point dose response with two replicates per point was used with doubling dilutions (0.02–30 µM) and this was extended to a 10-point dose response with 3-fold dilutions (0.001–30 µM) as more potent compounds were identified. After 1 h the plate was re-spun and cells co-dosed with different concentrations of MMS (50–250 µg/mL final concentration) or DMSO using the Echo 550 and incubated for a given time (30 min–2 h) at 37°C in a cell culture incubator. Media was removed from the plate by inversion and cells were fixed with ice-cold 95% methanol/phosphate buffered saline (PBS) for 15 min at -20°C and then washed once with PBS at room temperature. Cells were then permeabilized using PBS/Triton 0.1% for 20 min, and washed once in PBS before adding anti-PAR antibody (10H (#AM80), Merck Millipore) at 1:4000 in antibody blocking buffer (ADB; 5% Fetal bovine serum, 0.1% Tween20 in PBS) and incubated overnight at 4°C. Cells were then washed three times with PBS, before adding rabbit anti-mouse Alexa Fluor 488 (A11029, ThermoFisher) at 1:1000 and Hoechst 33342 (at 1:5000) in ADB and incubated for 1 h at room temperature and protected from light. 
Following three washes with PBS, the plates were sealed and images captured using a 10× objective on a CellInsight (ThermoFisher) and analysed using Cellomics Scan software (ThermoFisher). The mean of the intensity of nuclear spots at 488 nm was reported. Initial assays shown in the Supplementary data used only a single dose of MMS for 0–60 min. Studies using temozolomide used the same procedure as with MMS, with a stock solution of temozolomide made at 20 mg/mL in DMSO.\n\nCells in 96-well plates were fixed with the addition of 100 µL ice cold 10% trichloroacetic acid to the media. After 1 h at 4°C, the cells were washed twice with PBS and left to dry. Once dry, 100 µL 0.2% sulforhodamine B (SRB) was added to each well and incubated for 15 min at room temperature. The cells were washed three times with 200 µL 1% acetic acid and then dried. To solubilise the remaining SRB, 200 µL 10 mM Tris pH 10.5 was added to each well and the plate incubated with agitation for 10 min. Absorbance at 520 nm was measured on a plate reader (Biotek, Swindon, UK).\n\n\nResults\n\nPARylation is principally driven by PARPs 1–3 after DNA damage and alkylating agents are known to induce base excision repair (BER) pathways, intermediates of which lead to activation of PARPs12. Our preliminary data showed that HeLa cells stably knocked down (KD) for PARG were more sensitive to growth inhibition by the alkylating agent MMS (Supplemental Figure 1a). This led to the initial finding that 250 µg/mL MMS induced PAR chains in PARG KD cells and that the peak of PAR chains detected was approximately 20 min after MMS addition (Supplemental Figure 1b–d).\n\nUsing the same antibody, an immunofluorescence assay was designed to detect PAR chains in cells. HeLa cells were used as they showed increased PAR by western blot after MMS and responded to PARG KD by substantially increasing PAR after MMS (Supplemental Figure 1b). 
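The nuclear-spot quantification above was performed with Cellomics Scan software. As a rough illustration of the underlying idea only (synthetic data and a hypothetical fixed threshold, not the authors' pipeline), one could threshold the Hoechst channel into a nuclear mask and average the FITC (PAR) signal inside it:

```python
import numpy as np

# Synthetic two-channel field: uniform background plus one bright "nucleus".
rng = np.random.default_rng(0)
hoechst = rng.uniform(0, 50, size=(64, 64))   # DNA stain channel
fitc = rng.uniform(0, 5, size=(64, 64))       # PAR antibody channel
hoechst[20:40, 20:40] = 200.0                 # synthetic nucleus
fitc[20:40, 20:40] = 80.0                     # nuclear PAR signal

# Nuclear mask from the Hoechst channel (hypothetical fixed threshold),
# then report mean FITC intensity inside vs outside the mask.
mask = hoechst > 100.0
nuclear_par = fitc[mask].mean()
background = fitc[~mask].mean()
print(round(float(nuclear_par), 1), round(float(background), 1))
```

A production pipeline would of course use adaptive thresholding and per-nucleus segmentation rather than a single global cutoff; the point here is only the mask-then-measure logic.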
We set up a standard assay based on our previous experience and online protocols for nuclear antigen detection. This used 95% methanol/PBS for fixation and 0.1% Triton X-100 for permeabilization. HeLa cells were dosed with 250 µg/mL MMS for different amounts of time. Initial analysis of the PAR signal showed an increase in signal at approximately 25 min (Figure 1). A nuclear mask was generated from Hoechst-stained cells to select regions of interest (ROI) in the 488 nm channel (Figure 1 – analysis panels).\n\nUsing a high content imaging system, the Hoechst-stained nuclei (left-hand panels) are used to create a nuclear mask (green circle in Analysis). Anti-PAR antibody (FITC) detects the increase in PAR chains (centre panels) that is then quantified using the nuclear mask from the Hoechst signal (right-hand panels). Fluorescence intensity is shown as red dots within the nuclear mask.\n\nDifferent parameters were selected in the Cellomics Scan software to report the intensity of the signal within the nuclear area (mask). Both the total intensity of the nuclear PAR signal (Figure 2a; mean_circtotalintensity) and the total intensity of PAR signal points (spots) within the nucleus (Figure 2b; mean_circspottotalintensity) showed a maximum at 25 min and then returned to baseline after 60 min. However, the total intensity of nuclear spots was chosen as the parameter for ongoing experiments as this provided the greatest signal window. We also noted that there was no significant change in cell number over the time course of the experiment (Figure 2c).\n\n(a) The cellular average (from 9 fields) of the total intensity of nuclear fluorescence of PAR after 250 µg/mL MMS as a function of time. (b) The cellular average (from 9 fields) of the total intensity of punctate nuclear fluorescence of PAR after 250 µg/mL MMS as a function of time. 
(c) Analysis of cell number using Hoechst-stained nuclei after dosing with MMS showing that there is no decrease in total cell number after 1 h treatment.\n\nWe initiated a drug discovery programme into PARG inhibitors based on the results of a high throughput screening (HTS) assay of 1.4M compounds11. Using a prototype PARG inhibitor from this programme (PDD00016133) we tested a dose response with 0–250 µg/mL MMS (Figure 3a) and 1 h of incubation post MMS dosing. This time point was chosen because at this time, in the absence of PARG inhibition, PAR chain detection has returned to base level. Pleasingly, DMSO alone (no MMS) had no measurable effect on nuclear PAR chains (Figure 3a). However, PDD00016133 gave a dose-dependent increase in nuclear PAR signal in MMS-treated cells. In our biochemical assay, the same compound gave an EC50 of 0.36 µM (n=22) and we were surprised that the apparent cellular EC50 of 2.2 µM was significantly less potent. We therefore tested lower concentrations of MMS and showed that decreasing MMS to 50 µg/mL increased the sensitivity of the assay but the bottom of the curve was not reached (Figure 3a).\n\n(a) PAR signal response with 25 µg/mL MMS shows that lower doses of MMS only elicit a nuclear PAR response with longer incubation times. (b) Decreasing the concentration of MMS moved the PARG inhibitor IC50 to the left, indicating greater sensitivity. (c) Increasing the time of incubation with MMS shifts the PARG inhibitor IC50 curve to the right, decreasing sensitivity. (d) A selection of eight PARG inhibitor compounds from a PARG biochemical screen with a range of potencies also shows a range of sensitivities with this PAR chain assay. Different chemical cores of the compounds are shown (green, orange, blue). Compound 4 is PDD00016133.\n\nDecreasing the concentration of alkylating agent clearly changed the observed PAR chain response, although too little MMS decreased sensitivity (Figure 3b). 
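The EC50 values quoted above come from Prism curve fits; the four-parameter logistic (4PL) below is the standard model behind such fits. The parameter values here are illustrative, not the paper's data:

```python
import numpy as np

def four_pl(dose, bottom, top, ec50, hill):
    # Four-parameter logistic: response rises from `bottom` to `top`,
    # with half-maximal response exactly at dose == ec50.
    return bottom + (top - bottom) / (1.0 + (ec50 / dose) ** hill)

doses = np.array([30.0 / 3.0**i for i in range(10)])  # the 10-point series
resp = four_pl(doses, bottom=100.0, top=1000.0, ec50=0.36, hill=1.0)

# At dose == EC50 the response is halfway between bottom and top.
print(round(float(four_pl(0.36, 100.0, 1000.0, 0.36, 1.0)), 1))
```

Fitting `bottom`, `top`, `ec50` and `hill` to measured intensities (e.g. with nonlinear least squares) is what yields the reported EC50; shifting the curve left or right, as described for different MMS doses and incubation times, corresponds to a change in the fitted `ec50` alone.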
We therefore investigated how the PAR chain signal changed with time after dosing with 50 µg/mL MMS (Figure 3c). Two hours of exposure to 50 µg/mL MMS provided a dose-response to PARG inhibition with PDD00016133, but with EC50 values increased (6.7 µM) when compared to high doses of MMS seen in Figure 3a. Decreasing the incubation time with 50 µg/mL MMS moved the dose response curve to the left, with 30–60 min showing the best response (EC50 = 0.3 µM and 0.5 µM respectively). However, in both of these shorter incubation times we still observed high levels of nuclear PAR signal at the lowest dose of the PARG inhibitor. We therefore increased the dose range and tested a 10-point dose response with 3-fold dilutions between each point. A 1 h incubation time was chosen as this provided optimum sensitivity as well as enough time to dose and process a large number of plates. These assay conditions were tested with a selection of PARG inhibitors with different sensitivities from our biochemical assay. The combination of a 10-point dose response of the PARG inhibitor with 50 µg/mL MMS for 1 h clearly demonstrated that we had cell-permeable inhibitors of PARG that ranged from low nanomolar to micromolar potencies (Figure 3d).\n\nWe then explored whether other cell lines or other DNA damaging agents could be used with this assay. Firstly, we explored whether murine cells responded to MMS. Murine embryonic fibroblasts (MEFs) and the human small cell lung cancer cell line H1048 were dosed with MMS and showed IC50 values similar to HeLa cells (14.5 µM and 9.0 µM, Figure 4a, b). The PAR chain assay was run on MEFs with the inhibitor PDD00016133 and 50 µg/mL MMS for 1 h. In the absence of MMS there was no increase in nuclear PAR chains detected with this inhibitor. However, in the presence of MMS, the PARG inhibitor led to a dose-dependent increase in nuclear PAR chain signal (Figure 4c). 
This dose-dependent increase in PAR chain signal after MMS was also seen in H1048 cells (Figure 4d).\n\n(a) MEFs and (b) SCLC H1048 treated with MMS were fixed and stained with sulforhodamine B (SRB) after 72 h. (c) MEFs and (d) H1048 cells show a dose-dependent increase in PAR chains after MMS treatment.\n\nWe next explored whether a more clinically relevant DNA alkylating agent could induce PAR chains. Temozolomide (TMZ) is a DNA alkylating agent and is used as a standard-of-care treatment for patients with glioblastoma13. Here we used the colorectal cancer cell line SW620 that we knew was sensitive to alkylating agents (Figure 5a) and which has been used in xenograft studies in combination with TMZ and the PARP inhibitors olaparib or AG01469914,15. First we used the same assay conditions to determine whether increasing concentrations of TMZ induced PAR chains that could be maintained by inhibiting PARG with a potent inhibitor (compound 8 from Figure 3d). As expected from the previous cell lines, one hour after treatment with TMZ alone there was no PAR signal detectable in SW620 cells. However, the presence of 300 nM compound 8 led to a TMZ dose-dependent increase in PAR signal (Figure 5b). Furthermore, using a set amount of TMZ (150 µg/mL) we were able to show that PARG inhibition by compound 8 led to a dose-dependent increase in PAR signal (Figure 5c). Unsurprisingly, pre-treatment with olaparib, which prevents PARP1 PARylation, did not lead to any increase in PAR signal after TMZ treatment.\n\n(a) SW620 cells treated with MMS for 72 h and stained with Hoechst show a similar dose-dependent decrease in proliferation in comparison with other cell lines tested. (b) SW620 cells pre-treated with compound 8 at 300 nM increase PAR chains in response to 1 h temozolomide (1.5 µg/mL–200 µg/mL). However, pre-treatment with DMSO or olaparib (300 nM) had no effect on PAR chains at this time point. 
(c) SW620 cells treated with increasing concentrations of a PARG inhibitor (cmpd 8) and 150 µg/mL temozolomide for 1 h showed a dose-dependent increase in PAR chains. As expected at this time point, treatment with olaparib had no effect on PAR chains.\n\nFinally, we compared individual assay results for PDD00016133 in HeLa cells against its geometric mean (geomean) over a period of 2½ years (Figure 6). Over 100 assays with PDD00016133 were run during that time, of which 85% were within ±0.25 pIC50 of the geomean. Interestingly, cell cultures that had passage numbers of less than 8 or more than 19 were more likely to give results for this compound that exceeded these limits.\n\nThe PAR assay was run over a period of 2½ years, resulting in over 100 assays. The pIC50 for compound PDD00016133 was plotted for each assay. The pIC50 geomean is indicated with the blue crossed line and the ±0.25 pIC50 limits are indicated with the red lines.\n\n\nDiscussion\n\nA number of molecules have been used to inhibit PARG but concerns have been raised as to their selectivity and potency both in biochemical assays and in cells. As part of a drug discovery programme for PARG inhibitors we designed and optimized a cell assay for PARG inhibitor activity. Our initial work showed that the higher dose of MMS (250 µg/mL) resulted in a complete dose response curve for our PARG inhibitor but potency was lower than we expected. By reducing the amount of DNA damage the sensitivity of the assay increased, presumably as the detection of SSBs by PARP1 and its associated machinery was not overwhelmed. However, the lowest dose of the PARG inhibitor still resulted in relatively high levels of PAR chains after 30–60 min, which was resolved when the dose response was extended.\n\nImmunofluorescence assays using the 10H mouse hybridoma antibody for detecting PAR were first published over 20 years ago16. 
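For readers unfamiliar with the ±0.25 pIC50 acceptance band used in the assay-stability analysis above, the arithmetic is straightforward (illustrative values only, not the paper's data):

```python
import math

def pic50(ic50_um):
    # pIC50 = -log10(IC50 expressed in molar units); a 1 uM IC50 gives pIC50 = 6.
    return -math.log10(ic50_um * 1e-6)

# A +/-0.25 pIC50 band corresponds to a ~1.78-fold window in IC50 space,
# since 10**0.25 ~= 1.778; averaging pIC50 values is equivalent to taking
# the geometric mean of the underlying IC50 values.
fold_window = 10 ** 0.25
print(round(pic50(1.0), 2), round(fold_window, 3))
```

In other words, an assay run passes the QC band described above if its measured IC50 falls within roughly a factor of 1.8 of the running geomean in either direction.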
However, detailed quantification using immunofluorescence of the amount of PAR chains found after DNA damage appears to be absent from the literature. Instead, enzyme-linked immunosorbent assays (ELISA) or dot-blots have been used to detect the reduction of PAR chains following the use of PARP inhibitors17–19. There have been studies that have followed the kinetics of PAR chain accumulation after treatment with the alkylating agent MNNG or the oxidant H2O220 but none on the increase of PAR following temozolomide treatment. However, studies using RNA interference have been able to show a delay in hydrolysis of nuclear PAR after treatment with H2O2 and knockdown of PARG21.\n\nThe suitability of this assay for screening PARG inhibitors in HeLa cells is clear from the data collected over time and with different compounds (Figure 3d and Figure 6). However, MEFs and H1048 cells displayed a response that was indicative of the HeLa cell response prior to optimisation (Figure 4c, d), suggesting that more method development would be needed if these cells were going to be used for routine testing.\n\nThis assay was designed to test for PARG inhibition after a DNA damage signal. However, a number of PARPs are involved in non-DNA damage related processes (e.g. tankyrases, reviewed in 22) that take place outside the nucleus. Hydrolysis of PAR chains created by other PARPs is likely to involve PARG or ARH323. It is possible that these PARG inhibitors prevent such processes but modification of this assay would have to be undertaken to detect non-nuclear PAR.\n\nIn summary, we have designed a sensitive assay to test for PARG inhibition in cells. The assay was appropriate and stable for long-term use and detected PAR chains from different species and different cell lines.\n\n\nData availability\n\nF1000Research: Dataset 1. Raw data for Figure 2–Figure 6 in ‘An assay to measure poly(ADP ribose) glycohydrolase (PARG) activity in cells’, 10.5256/f1000research.8463.d11922524",
"appendix": "Author contributions\n\n\n\nD.J., S.D., N.H., E.F., L.G., P.K., K.S. and K.E. designed and conducted the biological experiments. D.J. and S.D. conceptualized the experiments and I.W., M. O’C and D.O. provided strategic direction. D.J. prepared the manuscript.\n\n\nCompeting interests\n\n\n\nThere are no competing financial interests to declare. Stephen Durant, Kay Eckersley, Kerry Shea, and Mark O’Connor were all employees of AstraZeneca PLC at the time experiments took place.\n\n\nGrant information\n\nThis work was funded by Cancer Research UK (Grant numbers C480/A1141 and C5759/A17098).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nSupplementary material\n\nFigure 1. Preliminary results during assay development. (a) PARG KD Hela cells (PARG Silencix) are more sensitive to MMS than control cells (IC50 6.2 µg/mL vs 16.5 µg/mL) over 5 days. (b) PAR detection by western blot shows that MMS induced PAR response in control (-) and a substantial response in PARG KD cells (KD). (c) HT29 cells treated with 250 µg/mL MMS show an initial increase in PAR signal at 20 min with a gradual reduction in detectable PAR chains. (d) Cartoon of proposed PAR chain response in cells. Click here to access data.\n\nSupplementary methods and notes on compounds used. Click here to access data.\n\n\nReferences\n\nAlvarez-Gonzalez R, Jacobson MK: Characterization of polymers of adenosine diphosphate ribose generated in vitro and in vivo. Biochemistry. 1987; 26(11): 3218–24. PubMed Abstract | Publisher Full Text\n\nKelley MR, Logsdon D, Fishel ML: Targeting DNA repair pathways for cancer treatment: what's new? Future Oncol. 2014; 10(7): 1215–37. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFrampton JE: Olaparib: a review of its use as maintenance therapy in patients with ovarian cancer. BioDrugs. 2015; 29(2): 143–50. 
PubMed Abstract | Publisher Full Text\n\nBakondi E, Bai P, Erdélyi K, et al.: Cytoprotective effect of gallotannin in oxidatively stressed HaCaT keratinocytes: the role of poly(ADP-ribose) metabolism. Exp Dermatol. 2004; 13(3): 170–8. PubMed Abstract | Publisher Full Text\n\nOkita N, Ashizawa D, Ohta R, et al.: Discovery of novel poly(ADP-ribose) glycohydrolase inhibitors by a quantitative assay system using dot-blot with anti-poly(ADP-ribose). Biochem Biophys Res Commun. 2010; 392(4): 485–9. PubMed Abstract | Publisher Full Text\n\nSlama JT, Aboul-Ela N, Jacobson MK: Mechanism of inhibition of poly(ADP-ribose) glycohydrolase by adenosine diphosphate (hydroxymethyl)pyrrolidinediol. J Med Chem. 1995; 38(21): 4332–6. PubMed Abstract | Publisher Full Text\n\nFinch KE, Knezevic CE, Nottbohm AC, et al.: Selective small molecule inhibition of poly(ADP-ribose) glycohydrolase (PARG). ACS Chem Biol. 2012; 7(3): 563–70. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSteffen JD, Coyle DL, Damodaran K, et al.: Discovery and structure-activity relationships of modified salicylanilides as cell permeable inhibitors of poly(ADP-ribose) glycohydrolase (PARG). J Med Chem. 2011; 54(15): 5403–13. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDunstan MS, Barkauskaite E, Lafite P, et al.: Structure and mechanism of a canonical poly(ADP-ribose) glycohydrolase. Nat Commun. 2012; 3: 878. PubMed Abstract | Publisher Full Text\n\nFormentini L, Arapistas P, Pittelli M, et al.: Mono-galloyl glucose derivatives are potent poly(ADP-ribose) glycohydrolase (PARG) inhibitors and partially reduce PARP-1-dependent cell death. Br J Pharmacol. 2008; 155(8): 1235–49. PubMed Abstract | Publisher Full Text | Free Full Text\n\nStowell AI, James DI, Waddell ID, et al.: An HTS-compatible HTRF assay measuring the glycohydrolase activity of human PARG. Anal Biochem. in press. 2016; S0003-2697(16)00116-0. 
PubMed Abstract | Publisher Full Text\n\nHorton JK, Stefanick DF, Prasad R, et al.: Base excision repair defects invoke hypersensitivity to PARP inhibition. Mol Cancer Res. 2014; 12(8): 1128–39. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHottinger AF, Stupp R, Homicsko K: Standards of care and novel approaches in the management of glioblastoma multiforme. Chin J Cancer. 2014; 33(1): 32–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nThomas HD, Calabrese CR, Batey MA, et al.: Preclinical selection of a novel poly(ADP-ribose) polymerase inhibitor for clinical trial. Mol Cancer Ther. 2007; 6(3): 945–56. PubMed Abstract | Publisher Full Text\n\nAli M, Kamjoo M, Thomas HD, et al.: The clinically active PARP inhibitor AG014699 ameliorates cardiotoxicity but does not enhance the efficacy of doxorubicin, despite improving tumor perfusion and radiation response in mice. Mol Cancer Ther. 2011; 10(12): 2320–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBürkle A, Chen G, Küpper JH, et al.: Increased poly(ADP-ribosyl)ation in intact cells by cisplatin treatment. Carcinogenesis. 1993; 14(4): 559–61. PubMed Abstract | Publisher Full Text\n\nPlummer ER, Middleton MR, Jones C, et al.: Temozolomide pharmacodynamics in patients with metastatic melanoma: dna damage and activity of repair enzymes O6-alkylguanine alkyltransferase and poly(ADP-ribose) polymerase-1. Clin Cancer Res. 2005; 11(9): 3402–9. PubMed Abstract | Publisher Full Text\n\nLiu X, Palma J, Kinders R, et al.: An enzyme-linked immunosorbent poly(ADP-ribose) polymerase biomarker assay for clinical trials of PARP inhibitors. Anal Biochem. 2008; 381(2): 240–7. PubMed Abstract | Publisher Full Text\n\nIda C, Yamashita S, Tsukada M, et al.: An enzyme-linked immunosorbent assay-based system for determining the physiological level of poly(ADP-ribose) in cultured cells. Anal Biochem. 2016; 494: 76–81. 
PubMed Abstract | Publisher Full Text\n\nCortes U, Tong WM, Coyle DL, et al.: Depletion of the 110-kilodalton isoform of poly(ADP-ribose) glycohydrolase increases sensitivity to genotoxic and endotoxic stress in mice. Mol Cell Biol. 2004; 24(16): 7163–78. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBlenn C, Althaus FR, Malanga M: Poly(ADP-ribose) glycohydrolase silencing protects against H2O2-induced cell death. Biochem J. 2006; 396(3): 419–29. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHaikarainen T, Krauss S, Lehtio L: Tankyrases: structure, function and therapeutic implications in cancer. Curr Pharm Des. 2014; 20(41): 6472–88. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOka S, Kato J, Moss J: Identification and characterization of a mammalian 39-kDa poly(ADP-ribose) glycohydrolase. J Biol Chem. 2006; 281(2): 705–13. PubMed Abstract | Publisher Full Text\n\nJames D, Durant S, Eckersley K, et al.: Dataset 1 in: An assay to measure poly(ADP ribose) glycohydrolase (PARG) activity in cells. F1000Research. 2016. Data Source"
}
|
[
{
"id": "14957",
"date": "21 Jul 2016",
"name": "Xiaochun Yu",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nPoly(ADP-ribosyl)ation is activated in response to DNA damage. In this manuscript, the authors used the immunostaining assay to examine the DNA damage-induced PAR chain and measured the dePARylation process in cells. The assay itself is very simple and reliable. However, besides PARG, there are other dePARylation enzymes contributing for the cellular dePARylation. And the list of the dePARylation enzymes is still growing. Since this assay is to measure overall dePARylation instead of specifically examining the PARG activity, the authors are suggested to modify the title of this manuscript, which might be better to convey their message.",
"responses": [
{
"c_id": "2174",
"date": "09 Sep 2016",
"name": "Dominic James",
"role": "Author Response",
"response": "Many thanks for taking the time to carefully read this manuscript and for your helpful comment regarding other dePARylating enzymes. You are indeed correct as recent in vitro evidence has shown that PAR chains can be cleaved by phosphodiesterases (NUDT16, ENPP1) but their involvement in the hydrolysis of PAR chains after DNA damage in cells has not been shown. The PARG inhibitors are therefore most likely inhibiting PAR chain hydrolysis by PARG but we cannot rule out that other, as yet unidentified, proteins involved in the de-PARylation may well exist that may be affected by our PARG inhibitors. However, our biochemical assay (Stowell et al. reference 11) does show the selectivity of our compounds for PARG over PARP and ARH3 and another submitted manuscript details the binding of the inhibitors to PARG by crystallography and surface plasmon resonance (SPR). I think on reflection that we should keep the initial title of the manuscript."
}
]
},
{
"id": "15536",
"date": "19 Aug 2016",
"name": "Dik C van Gent",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis manuscript describes a useful and robust method to quantify PAR chains in cells after various treatments. This is important for evaluating the effects of various PARP and PARG inhibitors. Results and conclusions are in general well described and appropriate.\nI have some minor comments, mainly about the text and figure legends: The Cellomics’ Scan software was used to report the intensity of the PAR signal within the nuclear area, either quantified as the total intensity of the nuclear PAR signal or the total intensity of PAR signal points within the nucleus. Could the author clarify what defines a signal point? It would be useful for researchers using other software to reproduce these findings. Furthermore, could the authors indicate whether another method, such as simple thresholding, would also be possible? The curves in figures 3a and b suggest that the result might be quite similar.\nFigure 1: Last sentence ‘in’ should be removed.\n\nFigure 3: The text describing figure 3a and 3b in the legend was switched. Also, I would indicate in the figure that in 3c 50ug/ml MMS was used, as the concentration MMS is indicated for 3a and 3b but not for 3c. Also the sentence on page 6, lines 4-8, is not entirely clear: ‘… increase the sensitivity of the assay but the bottom of the curve was not reached.’ Is a somewhat difficult way to say that the authors used a range that did not go to low enough concentrations to reach background levels. 
Probably this sentence could be adapted for easier understanding.\n\nFigure 3d: use different colors for each different compound. It is now very hard to distinguish which line represents which compound.\n\nFigure 5a: after 72 the word ‘hours’ should be added.",
"responses": [
{
"c_id": "2173",
"date": "09 Sep 2016",
"name": "Dominic James",
"role": "Author Response",
"response": "Many thanks for taking the time to carefully read this manuscript and for your very detailed comments. We have addressed the minor typos. In addition, we have added the following to the method text: “and analysed using Cellomics Scan compartmental analysis software (ThermoFisher). A threshold determined by assessing the signal in DMSO treated cells was applied to the pixel intensity and a Box Detection application was used to detect objects smaller than five pixels in radius within the nucleus. The mean of the intensity of these nuclear spots at 488 nM or the mean intensity of the total nuclear signal at 488 nM was reported.” Simple thresholding, if applied to the nuclear area alone, would also achieve similar results and was initially carried out using ImageJ on images obtained from a microscope before the CellInsight was purchased. To address the slightly odd wording regarding the results from Figure 3, we have changed the sentence to read, “Increased the sensitivity of the assay and indicated that further dilutions of the compound needed to be made to generate a full EC50 curve.”. The point we were trying to convey was that a good screening assay should generate the sigmoidal curves with data points on all parts of the curve. Regarding Figure 3d, unfortunately it is not possible to only change the colours next to the compound names. In Prism, the points on the graph would also end up in different colours, which we felt was unappealing to look at. The main point which this figure is mean to convey is that we have used the assay to assess a number of compounds that have been shown to have very different potencies by a biochemical assay and this cell assay also discriminates between them. To aid visualisation we have added the sentence “The compounds are ordered by sensitivity (cmpd 1, least sensitive; cmpd 8, most sensitive)”."
}
]
}
] | 1
|
https://f1000research.com/articles/5-736
|
https://f1000research.com/articles/5-2323/v1
|
14 Sep 16
|
{
"type": "Method Article",
"title": "Instrumented gait assessment with a single wearable: an introductory tutorial",
"authors": [
"Silvia Del Din",
"Aodhán Hickey",
"Cassim Ladha",
"Sam Stuart",
"Alan K. Bourke",
"Patrick Esser",
"Lynn Rochester",
"Alan Godfrey",
"Silvia Del Din",
"Aodhán Hickey",
"Cassim Ladha",
"Sam Stuart",
"Alan K. Bourke",
"Patrick Esser",
"Lynn Rochester"
],
"abstract": "Background: Gait is a powerful tool to identify ageing and track disease progression. Yet, its high resolution measurement via traditional instruments remains restricted to the laboratory or bespoke clinical facilities. The potential for that to change is due to the advances in wearables where the synergy between devices and smart algorithms has provided the potential of ‘a gait lab on a chip’. Methods: Commercially available wearables for gait quantification remain expensive and are restricted to a limited number of characteristics unsuitable for a comprehensive assessment required within intervention or epidemiological studies. However, the increasing demand for low-cost diagnostics has fuelled the shift in how health-related resources are distributed. As such we adopt open platform technology and validated research methodologies to harmonise engineering solutions to satisfy current epidemiological needs. Results: We provide an introduction to conduct a routine instrumented gait assessment with a discrete, low-cost, accelerometer-based wearable. We show that the capture and interpretation of raw gait signals with a common scripting language can be straightforward and suitable for use within modern studies. We highlight the best approaches and hope that this will help compliment any analytical tool-kit as part of future cohort assessments. Conclusions: Deployment of wearables can allow accurate gait assessment in accordance with advocated methods of data collection as there is a strong demand for sensitive outcomes derived from pragmatic tools. This tutorial shows that instrumentation of gait using a single open source wearable is pragmatic due to low-cost and translational analytical methods to derive sensitive outcomes.",
"keywords": [
"accelerometer",
"biomarker",
"clinical assessment",
"diagnostic",
"gait",
"measurement"
],
"content": "\n\n\n\nCohort/pathological studies need objective methods of capturing outcomes sensitive to disease onset and progression.\n\nGait has been shown as a pragmatic and useful (bio) marker of incipient pathology, inform diagnostic, track disease progression and measure the efficacy of interventions.\n\nWearable technology offers the ability to capture gait data in any environment.\n\nA validated conceptual model of gait is presented. We recommend its adoption and use of a single low-cost wearable on the lower back with supplied analytical methodology.\n\nQuantified gait characteristics with wearables facilitate the possibility for personalised treatment and integration into modern telehealth infrastructures.\n\n\nIntroduction\n\nHuman locomotion (gait) can be described as the ability to perform a whole body movement in a rhythmical and consistent manner to transverse a distance in a safe and upright posture. Its preservation is important for independence and longevity in older adults and crucial for people with movement disorders whose quality of life is further threatened by falls and multisystem deconditioning1. Its correct quantification is now recognised as a powerful tool to identify ageing2, enhance diagnostics, measure efficacy of intervention and monitor disease progression2–4. Furthermore, its utility can be broadened to predict the risk of disease, falls, and cognitive decline5.\n\nWhile gait speed is a useful global characteristic of performance6 it may not capture the nature of underlying pathology7. Instrumenting gait to define more precise and clinical relevant spatio-temporal gait features (e.g. step time, step length) stem from the use of large, expensive mechanical laboratory-based equipment typical of clinical/laboratory facilities. A newer more practical approach has emerged in the form of wearable technology (wearables), i.e. 
lightweight, discrete and small accelerometer and/or gyroscope-based devices that can be attached to the body over/under clothing. The added benefit of these devices is their suitability for deployment in any setting: low cost, continuous recording for a large number of gait cycles8 and the potential for quantifying novel frequency-based gait features9. Despite their obvious advantages, their use has been limited to academic studies rather than regular clinical usage within epidemiological studies. This can be attributed to: (i) poor agreement when compared to traditional laboratory-based reference equipment during validation studies8,10; and (ii) the bespoke technical/engineering skills required to design/implement algorithms for the interpretation of the raw signals, which differ due to attachment location, e.g. chest or waist11. The latter presents a signal processing challenge beyond the scope of any (typical) clinical researcher, for whom the application of wearables would yield greater dividends: gait assessment as an accurate and reliable prognostic tool for healthy and/or pathological populations2,12.\n\nIn this tutorial we address this problem, which has hindered both the engineering and clinical professions: development versus application. We provide an introduction on how gait can be instrumented with a single, low-cost wearable. This is informed by best practice, validated methodologies8,10 and a clinically relevant conceptual gait model7. We hope this tutorial will facilitate the utility of instrumented gait as a pragmatic tool for biomarker development in future epidemiological studies.\n\n\nMaterials and methods\n\nThe common sensor within modern wearables comprises a tri-axial (medio-lateral, anterior-posterior, longitudinal) accelerometer, favoured for its low manufacturing cost, miniaturised size and low power consumption8. 
Thanks to data digitisation and associated memory within the wearable, one full battery charge of a modern wearable is sufficient to gather data every 0.01 s (100 Hertz) for 7 days: the equivalent of over 180 million data points (100 data points/second × 3 axes × 604,800 seconds) with which to analyse a participant. Accelerometers quantify acceleration (measured in meters per second squared, m∙s-2), calculated from the varying voltage generated within the sensor during movement (e.g. gait); for detailed functionality refer to 13. The signal generated is a combination of acceleration due to (i) dynamic conditions where each axis is perturbed due to 3-dimensional motion and (ii) static conditions where gravity has a pronounced effect on one axis of the tri-axial accelerometer (depending on attachment orientation), making this sensor useful for measuring static posture (lying, sitting, standing).\n\nThere is a plethora of commercial wearables for gait studies, e.g.: GaitUp (foot), Opal (ankle), StepWatch™ (shank) and DynaPort (lower back). Each of the aforementioned may not offer the high sampling rates to gather ~180 million data points, but all have positives/negatives depending on the research question and the provision of pre-programmed outcomes. Nevertheless, all may be constrained by proprietary software and hence inbuilt data analytics. However, a recent shift by manufacturers has seen the (intellectual property) shackles loosened/removed to allow access to the ‘raw’ wearable data for bespoke analysis, facilitating attachment to any anatomical location (e.g. Shimmer™)14,15. This has been driven by the rapidly developing ‘open-source movement’, a concept of allowing access to all technical schematics, software scripts and algorithm descriptions. As such, the potential for researchers (engineering/clinical) to analyse and interpret wearable signals has risen. 
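The storage arithmetic above can be checked in a few lines of Python (an open-source stand-in for the tutorial's Matlab® scripts; the constants simply restate the figures quoted in the text):

```python
# Data volume produced by a 100 Hz tri-axial accelerometer over 7 days.
SAMPLE_RATE_HZ = 100                 # one reading every 0.01 s
AXES = 3                             # medio-lateral, anterior-posterior, longitudinal
SECONDS_PER_WEEK = 7 * 24 * 60 * 60  # 604,800 s

data_points = SAMPLE_RATE_HZ * AXES * SECONDS_PER_WEEK
print(f"{data_points:,} data points")  # 181,440,000 -> "over 180 million"
```

This also makes clear why access to the raw signal, rather than pre-programmed summaries, quickly becomes a data-handling exercise.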
One open-source wearable is the movement monitor AX3 (from Axivity; dimensions: 23.0 × 32.5 × 7.6 mm; weight: 9 grams), which allows access to raw data and is not constrained to one anatomical location. While that device is low-cost, no proprietary software exists to aid analytics of the signals that are generated.\n\nThe following section details the instrumentation of gait in any environment. While numerous devices have been highlighted, we present a methodology for a high resolution device (100 Hz) worn on the lower back.\n\nDue to the miniaturised form factor of most wearables, they can be worn discreetly on almost any body location. As different accelerations are experienced at different anatomical locations, correct placement is of paramount importance when attaching the wearable11. This is because the algorithms used to investigate the signal and compute spatio-temporal outcomes depend on signal characteristics such as repeatable signal shapes/features. Typically, gait research has aligned with the use of wearables located as close as possible to the centre of mass (CoM), i.e. the lower back (typically, the 5th lumbar vertebra, L5). This location best tracks whole body movement and, for the purposes of instrumented testing, supports a number of physical capability assessments and associated algorithms16. In addition, it facilitates the use of a single wearable, which reduces the burden on the researcher and participant. This is of paramount importance during intervention or epidemiological studies where large patient numbers are recruited and tested12,17,18. The following details a methodology for instrumented gait analysis that has been successfully implemented in several healthy and pathological studies8,10,12,18–20.\n\nDevice attachment. Commercial devices are usually equipped with a strap/belt/clip for attachment. For the purposes of instrumented gait it is preferable that the wearable is attached as firmly to the participant as possible, eliminating spurious movement due to slippage. 
This usually requires direct attachment to the skin with a combination of dermatological adhesive(s) (e.g. Hypafix, BSN Medical Limited, Hull, UK) and double-sided tape. However, during prolonged testing, the participant’s skin (if frail/dry) can become compromised as a result of slight wearable movement due to lack of protection from thin double-sided tape. A solution is to adopt an adhesive hydrogel (e.g. PALstickies, PALTechnologies, Glasgow, UK) which provides additional padding due to its thicker design. Some motion artefact (slippage) and misalignment (incorrect orientation and placement) may be eliminated at the pre-processing stage using previously recommended procedures21,22. Generally, under controlled gait assessment motion artefact is minimised due to a stringent and structured protocol. (Note: alternate locations (e.g. chest, waist) may be possible, depending on the robustness (suitability) of the algorithm used to accurately detect gait events at locations other than its intended one20).\n\nProtocol & gait characteristics. Validated instrumentation has shown that the use of a single wearable on L5 can capture 14 clinically relevant gait characteristics10,16. Derived from a conceptual model (Figure 1a), they have been shown to be sensitive to age and pathology2. Previous research suggests that the participant should perform a 2 minute continuous walk over a straight, or alternatively, looped path (Figure 1b) to record a sufficient number of gait cycles during steady state walking, which improves the reliability of gait variability and asymmetry1,3. If steady state walking is required then the first 2.5 m of walking should be excluded23. If a testing environment does not permit the use of a continuous walk, repeated intermittent walks and pooling of data is recommended. However, gait initiation/termination and their associated acceleration and deceleration periods may negatively influence results. 
This can be minimised by excluding the first and last steps (values) of the walks before pooling.\n\n(a) A conceptual model of gait showing 5 domains and 16 characteristics; M, A and V refer to mean, asymmetry and variability, respectively. 14/16 characteristics can be replicated with a single wearable worn on L5; step width (mean and variability) cannot. (b) A suitable path to test gait. The (suggested) 25 m loop shown has sufficient linear paths to sustain steady state walking, while the curvilinear paths should be shallow enough to avoid abrupt directional changes.\n\nData import & segmentation. Matlab® is a scripting programming language for general scientific computing that utilizes matrix-oriented high-level programming for a large number of numerical tasks on many common platforms. Data processing can be achieved using existing and/or prototypic algorithms via script or command structure interfaces24,25. Its support network (‘Matlab Central’), comprehensive toolboxes and ability to be translated to open-source languages (e.g. PythonTM, Octave) make it suitable for porting (gait) data processing to other programming software types26–29. Therefore, for the purposes of this tutorial Matlab® pseudo-code is provided.\n\nData must be downloaded from the wearable via associated software and saved securely. Data recorded by the wearable and saved by the proprietary software (including open-source) will typically be made available as a comma separated value (.csv) file due to its exchangeability. Importing the data to Matlab® (Appendix 1, 1) can be achieved through the use of the xlsread function, which offers the freedom to import data from a single or multiple column array(s) within a specified spreadsheet (Appendix 1, 2).\n\nOnce imported, data will automatically be saved to the Matlab® workspace as a variable. Typically some generic movement data will be recorded by the wearable during a testing session before/after the gait task and will need to be removed. 
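The import and trimming steps can also be sketched in pure Python (a stand-in for Matlab®'s xlsread; the column layout — time followed by the three acceleration axes — and the trim indices below are illustrative assumptions, not part of the tutorial):

```python
import csv
import io

def load_walk(csv_text, start, end):
    """Read tri-axial accelerometer rows from CSV text and keep only
    samples in [start, end), discarding movement recorded before/after
    the gait task. Assumed columns: time, a_ml, a_ap, a_v."""
    reader = csv.reader(io.StringIO(csv_text))
    next(reader)                        # skip header row
    rows = [[float(v) for v in row] for row in reader]
    return rows[start:end]              # trim non-gait data

# Toy recording: 6 samples at 100 Hz, of which samples 2-4 are the walk.
raw = "time,a_ml,a_ap,a_v\n" + "\n".join(
    f"{i/100:.2f},0.1,0.2,{1.0 + 0.01*i}" for i in range(6)
)
walk = load_walk(raw, 2, 5)
print(len(walk))  # 3 samples retained
```

In practice the start/end indices would come from inspecting a plot of the signal (the role played by ginput in Matlab®) rather than being hard-coded.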
If saved via a spreadsheet, erroneous data can be highlighted and deleted, trimming the data. If intermittent walks were performed, data can be segmented manually in the spreadsheet format prior to importing. (Note: for those familiar with Matlab®, the ginput function can be used to segment data; it enables the user to define the exact start/end of the walk by cursor point-and-click on a plot and saves the x-axis values (samples/frames), Appendix 1, 3).\n\nData preparation: pre-processing. Data captured by wearables are subject to ‘noise’: random fluctuations in the signal due to connecting hardware and/or external interference. Removing noise can be achieved by filtering. There are many techniques one can apply to a signal (e.g. Butterworth, Chebyshev), each with their own advantages/disadvantages. Essentially, filters are deemed useful depending on how well they remove the unwanted signal, which is governed by their associated parameters. Care must be taken when choosing those values as they may impact subsequent algorithm analysis and feature extraction. Nevertheless, the literature details the most common method as the 4th order Butterworth filter with a cut-off frequency between 15–20 Hertz (Hz), Appendix 1(4). (For a comprehensive assessment of pre-processing of wearable gait signals refer to 30).\n\nCorrecting for offset & misalignment. When the wearable is attached to the participant, it is generally understood that the orientation or alignment of the device is offset due to attachment error and participant body shape. Additionally, gravity exerts a force, most notable on one axis. Attachment error and gravity can be easily overcome by asking the participant to remain still upon initial attachment and recording a few seconds of (quasi) static activity in a standing posture. 
The average/mean of the values captured by each axis in this posture is later subtracted from the corresponding axes to eliminate offsets and misalignment.\n\nHowever, this method is best suited to correct acceleration data in static postures only and is not recommended for post-processing of gait data22. The correct approach is to transform the tri-axial data into a horizontal-vertical orthogonal coordinate system, i.e. using trigonometry relating to the Cartesian coordinate system22,30. The methodology relies on calculating and correcting for the best estimates of the (offset/misalignment) angles (θ) between the true horizontal-vertical and that of the raw anterior-posterior (aa) and medio-lateral (am) accelerations. While the accelerometer within the wearable cannot provide the rotational angle (unlike gyroscopes), it is deduced22 that the average value of aa and am will approach the sine of the angles within the same directions, Equation 1–Equation 4 (translated code Appendix 1, 5). By applying the inverse sine (arcsin) methodology, one can derive the necessary values needed to correct offset/misalignment in four straightforward, recommended30 steps:\n\n(i) Correction in the anterior-posterior plane (aA, note change of subscript case):\n\naA = aa cos θa – aν sin θa (1)\n\n(ii) An interim correction (a′ν) in the vertical direction must be derived before a true value for aV:\n\na′ν = aa sin θa + aν cos θa (2)\n\n(iii) The interim values in the vertical direction are then used to derive aM:\n\naM = am cos θm – a′ν sin θm (3)\n\n(iv) Finally, aV may now be estimated:\n\naV = am sin θm + a′ν cos θm – 1g (4)\n\nThe above is achieved through mean, sin, cos and arcsin functions along with basic matrix multiplication (Appendix 1, 5).\n\nAlgorithms. Methodologies have been developed to quantify temporal and spatial characteristics for a wearable on L5; comparisons can be found here31. All aim to identify two features of gait: initial contact (IC, i.e. heel strike) and final contact (FC, i.e. 
toe off), Figure 2a. A robust temporal method31 uses wavelets32. This methodology is a powerful signal processing tool that has been used successfully in gait and postural transition analysis32–34, yet its use remains limited due to complexity. The basic premise is that it offers an extension of the Fourier transform via two procedures: the continuous (CWT) and discrete (DWT) wavelet transforms. A detailed description is beyond the scope of this manuscript, but the two can be summarised as follows: (i) CWT: a correlation between waveforms (raw signal and probing function, i.e. wavelet) at different scales (~ frequencies) and positions (in time), where the resulting coefficients roughly correspond to the best match; and (ii) DWT: a combination of high/low pass filters to divide up a (raw) signal into various components (for in-depth descriptions refer to 35). Nevertheless, implementing a CWT algorithm32 for IC/FC event detection can be relatively straightforward if utilising the Wavelet Toolbox within Matlab®, Appendix 1(6):\n\n(i) Numerical integration of the raw vertical acceleration (av) with the function cumtrapz\n\n(ii) Differentiation of the integrated signal with the cwt function (Wavelet ToolboxTM Matlab®), resulting in signal S1, Figure 2b\n\n(iii) Find S1 local minima times, which equate to IC, through the use of the findpeaks function, Figure 2b\n\n(iv) Differentiate signal S1 with the cwt function to get signal S2\n\n(v) Find local maxima (FC) times of signal S2 by using findpeaks, Figure 2b\n\nGait signal from a young healthy adult (a) The gait cycle with depictions of stride, step, stance and swing characteristics from the IC/FC events (b) The raw signal (av), integrated and differentiated CWT signals with corresponding IC/FC events. The IC/FC sequence must be amalgamated into one numerical array from the alternating peaks/troughs to estimate the correct timing sequence for stride, step, stance and swing times. 
(c) Step length can be derived using Equation 5, where h is derived from the change of wearable height due to double integration of vertical acceleration (implementing the cumtrapz function twice).\n\nTemporal characteristics. To fully replicate the characteristics of gait, step, stance, stride and swing times must be derived. This is achieved through the sequence of IC/FC events in relation to the double support phase of the gait cycle (see Figure 2). From the sequence of IC/FC events, both left and right (opposite) events are identified, and subsequently step, stride, stance and swing times are estimated. For full details of calculating these parameters see 10,36.\n\nSpatial characteristics. A spatial algorithm based on the inverted pendulum model tracks the CoM37. However, the model is reliant on a known variable, wearable height. This manual component is a weakness: it requires a known input and can have weak accuracy for step length or total distance walked8,12. Yet it remains a useful metric to compute via the simple relationship shown in Equation 5, where l is wearable height and h is change in height of the wearable (i.e. CoM) as the participant walks, Appendix 1(7):\n\nstep length = 2√(2lh – h²) (5)\n\nSubsequently, by fusing the algorithms from Figure 28,10, it is possible to quantify an estimate for step velocity (Equation 6 and Appendix 1, 7):\n\nstep velocity = step length / step time (6)\n\nHowever, implementing the cumtrapz function to derive velocity and speed from acceleration introduces an error known as drift. This can be eliminated through the use of filtering, but generally remains problematic within wearable gait analysis.\n\nVariability and asymmetry characteristics. It is useful to distinguish between left/right step characteristics for variability and asymmetry outcomes (Equation 7a, b and Equation 8, Appendix 1, 8) in asymmetrical diseases38. 
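The temporal characteristics described above reduce to simple differencing once the IC/FC event times are in hand. A minimal pure-Python sketch (the event times below are made-up values; the tutorial derives real ones from the CWT signals in Appendix 1) takes alternating-foot ICs, where step time is the interval between consecutive ICs, stride time between same-foot ICs, stance time from a foot's IC to its own FC, and swing time the remainder of the stride:

```python
# Hypothetical IC (initial contact) and FC (final contact) times in seconds.
# ICs alternate feet; fc[k] is the toe-off of the foot whose heel strike is ic[k].
ic = [0.00, 0.55, 1.10, 1.65, 2.20]   # heel strikes (L, R, L, R, L)
fc = [0.70, 1.25, 1.80, 2.35]         # matching toe-offs

step_times   = [b - a for a, b in zip(ic, ic[1:])]   # consecutive ICs
stride_times = [b - a for a, b in zip(ic, ic[2:])]   # same-foot ICs
stance_times = [f - i for i, f in zip(ic, fc)]       # IC -> same foot's FC
swing_times  = [st - sw for st, sw in zip(stride_times, stance_times)]

print(step_times[0], stride_times[0], stance_times[0], swing_times[0])
```

With these toy values, stance occupies roughly 60% of the stride, in line with typical healthy gait; real sequences would first be amalgamated and checked for missed/extra peaks before differencing.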
Differentiating between left/right during a long continuous walk is easier (assume the first step as left or right and alternate values thereafter) compared to repeated intermittent walks, when (for robustness) it would be recommended to note which foot was used for initiation8. Alternatively, a protocol could request that the participant initiates walking with the same foot. Subsequent assignment of values to left/right can be made during data analysis by manually dividing the data. (For the reader’s interest, left/right steps may be identified by automated but more complex algorithms, which can be found here: 32,37). Correct calculation of variability1,10 and asymmetry is performed by:\n\nVariability = √((SD(left)² + SD(right)²)/2) (7a)\n\nor\n\nVariability = SD(steps) (7b)\n\nAsymmetry = |average(left) – average(right)| (8)\n\n\nDiscussion and conclusion\n\nOur aim in this paper has been to present an introductory tutorial, drawn from best practice and robust methodologies, on instrumenting gait with a single wearable. Drawing on a validated conceptual model, we provide a suitable and robust means to quantify and implement an analysis framework to derive 14 clinically relevant gait characteristics, for quantification in any environment. This has practical implications for the understanding of instrumented gait, as a useful diagnostic, in future epidemiological studies.\n\nIt is important to consider the limitations associated with a single tri-axial accelerometer wearable. Direct integration of the raw acceleration data can amplify errors in calculation and compromise the integrity of results. Raw acceleration data varies among controls and across pathologies; as such, universal processing (algorithm) recommendations are difficult to derive39. 
Location of the wearable in this example is specific to the algorithms’ functionality; therefore, gait outcomes quantified from alternative locations should be treated with caution20.\n\nThough implementing the algorithm and associated signal processing techniques can seem straightforward, initial familiarisation with the scripting language(s) and implementation of code can be daunting. Nonetheless, the methodologies presented here provide an opportunity to add more informed, objective data to future epidemiological studies. Wearables are increasingly used in free-living environments, which are richer in habitual behaviours and align with developing telehealth infrastructures5,12. Understanding the abilities as well as the limitations of existing technologies across all professions can help harmonise technological resources and find applications in alternative fields of research.\n\n\nData availability\n\nF1000Research: Dataset 1. Raw data for 'Instrumented gait assessment with a single wearable', 10.5256/f1000research.9591.d13536941.",
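The temporal, spatial, variability and asymmetry calculations described above can be sketched in code. The following is a minimal illustration in Python rather than the paper's own Matlab; the function and variable names are hypothetical, and it assumes initial contact (IC) event times from a continuous walk with strictly alternating feet, per-step CoM height changes h already obtained by double integration of the vertical acceleration, and the standard inverted pendulum step length relation 2·sqrt(2lh − h²) from Zijlstra and Hof37:

```python
import numpy as np

def gait_characteristics(ic_times, wearable_height, com_height_change, first_foot="left"):
    """Sketch of temporal/spatial/variability/asymmetry gait outcomes.

    ic_times          -- sorted initial-contact (IC) event times in seconds from a
                         continuous walk; feet assumed to alternate, starting with
                         `first_foot` (note the initiating foot, as the text advises).
    wearable_height   -- l, height of the wearable (m).
    com_height_change -- h per step (m), change in CoM height (one value per step).
    """
    ic = np.asarray(ic_times, dtype=float)
    step_times = np.diff(ic)             # interval between successive (opposite-foot) ICs
    stride_times = ic[2:] - ic[:-2]      # same-foot IC-to-IC intervals

    # Label alternate steps left/right (continuous-walk assumption from the text)
    other = "right" if first_foot == "left" else "left"
    feet = np.array([first_foot, other] * len(step_times))[: len(step_times)]

    # Variability as SD of step times (Equation 7b) and left/right asymmetry (Equation 8)
    variability = np.std(step_times, ddof=1)
    mean_left = step_times[feet == "left"].mean()
    mean_right = step_times[feet == "right"].mean()
    asymmetry = abs(mean_left - mean_right)

    # Inverted pendulum step length: 2*sqrt(2*l*h - h^2), then velocity per step
    h = np.asarray(com_height_change, dtype=float)
    step_lengths = 2.0 * np.sqrt(2.0 * wearable_height * h - h ** 2)
    step_velocities = step_lengths / step_times

    return {
        "step_times": step_times,
        "stride_times": stride_times,
        "variability": variability,
        "asymmetry": asymmetry,
        "step_lengths": step_lengths,
        "step_velocities": step_velocities,
    }
```

For repeated intermittent walks, the first and last steps of each bout would be excluded before pooling, as the text recommends, and left/right labels assigned from the noted initiating foot rather than assumed.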
"appendix": "Author contributions\n\n\n\nSDD, AH, SS and AG conceived the methodology and drafted the paper with help from CL, PE and AB. LR developed the protocol and advised on best clinical practice with SS. SDD, AH and AG performed the scripted Matlab® algorithms with input from PE and AG. All authors contributed to critical revisions of the manuscript including methodology for the purposes of an introductory tutorial into instrumented gait.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nSDD, AG and LR are supported by the Biomedical Research Unit/Centre at Newcastle University. The research was also supported by the NIHR Newcastle CRF Infrastructure funding. AB is supported by the ADAPT, a Norwegian Research Council funded project (FRIMEDBIO, 230435). Views expressed are solely those of the authors.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAppendix\n\n\n\n\nReferences\n\nGalna B, Lord S, Rochester L: Is gait variability reliable in older adults and Parkinson's disease? Towards an optimal testing protocol. Gait Posture. 2013; 37(4): 580–5. PubMed Abstract | Publisher Full Text\n\nLord S, Galna B, Verghese J, et al.: Independent domains of gait in older adults and associated motor and nonmotor attributes: validation of a factor analysis approach. J Gerontol A Biol Sci Med Sci. 2013; 68(7): 820–7. PubMed Abstract | Publisher Full Text\n\nLord S, Galna B, Rochester L: Moving forward on gait measurement: toward a more refined approach. Mov Disord. 2013; 28(11): 1534–1543. PubMed Abstract | Publisher Full Text\n\nHobert MA, Maetzler W, Aminian K, et al.: Technical and clinical view on ambulatory assessment in Parkinson's disease. Acta Neurol Scand. 2014; 130(3): 139–147. PubMed Abstract | Publisher Full Text\n\nDel Din S, Godfrey A, Mazzà C, et al.: Free-living monitoring of Parkinson’s disease: Lessons from the field. 
Mov Disord. 2016; 31(9): 1293–313. PubMed Abstract | Publisher Full Text\n\nStudenski S, Perera S, Patel K, et al.: Gait speed and survival in older adults. JAMA. 2011; 305(1): 50–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLord S, Galna B, Rochester L: Moving forward on gait measurement: toward a more refined approach. Mov Disord. 2013; 28(11): 1534–43. PubMed Abstract | Publisher Full Text\n\nGodfrey A, Del Din S, Barry G, et al.: Instrumenting gait with an accelerometer: a system and algorithm examination. Med Eng Phys. 2015; 37(4): 400–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEsser P, Dawes H, Collett J, et al.: Insights into gait disorders: walking variability using phase plot analysis, Parkinson's disease. Gait Posture. 2013; 38(4): 648–52. PubMed Abstract | Publisher Full Text\n\nDel Din S, Godfrey A, Rochester L: Validation of an accelerometer to quantify a comprehensive battery of gait characteristics in healthy older adults and Parkinson's disease: toward clinical and at home use. IEEE J Biomed Health Inform. 2015. PubMed Abstract | Publisher Full Text\n\nCleland I, Kikhia B, Nugent C, et al.: Optimal placement of accelerometers for the detection of everyday activities. Sensors (Basel). 2013; 13(7): 9183–200. PubMed Abstract | Publisher Full Text\n\nGodfrey A, Lara J, Del Din S, et al.: iCap: Instrumented assessment of physical capability. Maturitas. 2015; 82(1): 116–22. PubMed Abstract | Publisher Full Text\n\nMathie MJ, Coster AC, Lovell NH, et al.: Accelerometry: providing an integrated, practical method for long-term, ambulatory monitoring of human movement. Physiol Meas. 2004; 25(2): R1–20. PubMed Abstract | Publisher Full Text\n\nO'Donovan K, Greene BR, McGrath D, et al.: SHIMMER: A new tool for temporal gait analysis. Conf Proc IEEE Eng Med Biol Soc. 2009; 2009: 3826–3829. 
PubMed Abstract | Publisher Full Text\n\nRabah H, Amira A, Ahmad A: Design and implementation of a fall detection system using compressive sensing and shimmer technology. In Microelectronics (ICM), 2012 24th International Conference on, 2012; 1–4. Publisher Full Text\n\nGodfrey A, Lara J, Munro CA, et al.: Instrumented assessment of test battery for physical capability using an accelerometer: a feasibility study. Physiol Meas. 2015; 36(5): N71–83. PubMed Abstract | Publisher Full Text\n\nLara J, Godfrey A, Evans E, et al.: Towards measurement of the Healthy Ageing Phenotype in lifestyle-based intervention studies. Maturitas. 2013; 76(2): 189–99. PubMed Abstract | Publisher Full Text\n\nLara J, O'Brien N, Godfrey A, et al.: Pilot Randomised Controlled Trial of a Web-Based Intervention to Promote Healthy Eating, Physical Activity and Meaningful Social Connections Compared with Usual Care Control in People of Retirement Age Recruited from Workplaces. PLoS One. 2016; 11(9): e0159703. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGodfrey A, Del Din S, Barry G, et al.: Within trial validation and reliability of a single tri-axial accelerometer for gait assessment. Conf Proc IEEE Eng Med Biol Soc. 2014; 2014: 5892–5. PubMed Abstract | Publisher Full Text\n\nDel Din S, Hickey A, Hurwitz N, et al.: Measuring gait with an accelerometer-based wearable: influence of device location, testing protocol and age. Physiol Meas. In press, 2016. Reference Source\n\nForner-Cordero A, Mateu-Arce M, Forner-Cordero I, et al.: Study of the motion artefacts of skin-mounted inertial sensors under different attachment conditions. Physiol Meas. 2008; 29(4): N21–31. PubMed Abstract | Publisher Full Text\n\nMoe-Nilssen R: A new method for evaluating motor control in gait under real-life environmental conditions. Part 1: The instrument. Clin Biomech (Bristol, Avon). 1998; 13(4–5): 320–327. 
PubMed Abstract | Publisher Full Text\n\nLindemann U, Najafi B, Zijlstra W, et al.: Distance to achieve steady state walking speed in frail elderly persons. Gait Posture. 2008; 27(1): 91–6. PubMed Abstract | Publisher Full Text\n\nBellec P, Lavoie-Courchesne S, Dickinson P, et al.: The pipeline system for Octave and Matlab (PSOM): a lightweight scripting framework and execution engine for scientific workflows. Front Neuroinform. 2012; 6: 7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGunther UL, Ludwig C, Rüterjans H: NMRLAB-Advanced NMR Data Processing in Matlab. J Magn Reson. 2000; 145(2): 201–208. PubMed Abstract | Publisher Full Text\n\nBrewster M, Gobbert MK: A comparative evaluation of Matlab, Octave, FreeMat, and Scilab on tara. Citeseer. 2011. Reference Source\n\nBispo J, Reis L, Cardoso JM: Multi-Target C Code Generation from MATLAB. In Proceedings of ACM SIGPLAN International Workshop on Libraries, Languages, and Compilers for Array Programming. 2014; 95. Publisher Full Text\n\nMurphy M: Octave: A free, high-level language for mathematics. Linux Journal. 1997; 1997: 8. Reference Source\n\nPerez F, Granger BE, Hunter JD: Python: an ecosystem for scientific computing. Comput Sci Eng. 2011; 13(2): 13–21. Publisher Full Text\n\nMillecamps A, Lowry KA, Brach JS, et al.: Understanding the effects of pre-processing on extracted signal features from gait accelerometry signals. Comput Biol Med. 2015; 62: 164–74. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTrojaniello D, Cereatti A, Della Croce U: Accuracy, sensitivity and robustness of five different methods for the estimation of gait temporal parameters using a single inertial sensor mounted on the lower trunk. Gait Posture. 2014; 40(4): 487–92. PubMed Abstract | Publisher Full Text\n\nMcCamley J, Donati M, Grimpampi E, et al.: An enhanced estimate of initial contact and final contact instants of time using lower trunk inertial sensor data. Gait Posture. 2012; 36(2): 316–8. 
PubMed Abstract | Publisher Full Text\n\nBidargaddi N, Klingbeil L, Sarela A, et al.: Wavelet based approach for posture transition estimation using a waist worn accelerometer. Conf Proc IEEE Eng Med Biol Soc. 2007; 2007: 1884–7. PubMed Abstract | Publisher Full Text\n\nNajafi B, Aminian K, Loew F, et al.: Measurement of stand-sit and sit-stand transitions using a miniature gyroscope and its application in fall risk evaluation in the elderly. IEEE Trans Biomed Eng. 2002; 49(8): 843–51. PubMed Abstract | Publisher Full Text\n\nSemmlow JL: Biosignal and Medical Image Processing. Second Edition. Taylor & Francis, 2008. Reference Source\n\nO'Brien N, McDonald S, Araujo-Soares V, et al.: The features of interventions associated with long-term effectiveness of physical activity interventions in adults aged 55–70 years: a systematic review and meta-analysis. Health Psychol Rev. 2015; 9(4): 417–33. PubMed Abstract | Publisher Full Text\n\nZijlstra W, Hof AL: Assessment of spatio-temporal gait parameters from trunk accelerations during human walking. Gait Posture. 2003; 18(2): 1–10. PubMed Abstract | Publisher Full Text\n\nHausdorff JM, Rios DA, Edelberg HK: Gait variability and fall risk in community-living older adults: a 1-year prospective study. Arch Phys Med Rehabil. 2001; 82(8): 1050–6. PubMed Abstract | Publisher Full Text\n\nMuro-de-la-Herran A, Garcia-Zapirain B, Mendez-Zorrilla A: Gait analysis methods: an overview of wearable and non-wearable systems, highlighting clinical applications. Sensors (Basel). 2014; 14(2): 3362–94. PubMed Abstract | Publisher Full Text\n\nLuo J, Bai J, Shao J: Application of the wavelet transforms on axial strain calculation in ultrasound elastography. Progress in natural science. 2006; 16(9): 942–947. Reference Source\n\nDel Din S, Hickey A, Ladha C, et al.: Dataset 1 in: Instrumented gait assessment with a single wearable: an introductory tutorial. F1000Research. 2016. Data Source"
}
|
[
{
"id": "17045",
"date": "18 Oct 2016",
"name": "Denise McGrath",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis paper is a tutorial-type presentation on how to capture a number of gait characteristics for use in a variety of study types using a single, lumbar-worn accelerometer. The paper does not present new data capture or analysis methods and therefore appears to be targeted specifically at newcomers to gait analysis using wearable sensors. Tutorial articles of this type can be highly valued by such an audience. This is a well-organised article that is easy to follow for the most part. Given the target audience, we would recommend some further detail in places to make this a more comprehensive tutorial. Whether or not a “typical” clinical researcher would still require signal processing/analysis support to implement the analysis of the accelerometer data on the basis of this tutorial is debatable. Elements of the paper, in particular Appendix 1, serve as a useful accompaniment to the authors’ previous research such as: \"Validation of an Accelerometer to Quantify a Comprehensive Battery of Gait Characteristics in Healthy Older Adults and Parkinson’s Disease: Toward Clinical and at Home Use\".\n\nGeneral Comments:\nWhile this paper certainly has value to the gait analysis community, it is not a “Methods” paper in the journal’s strictest sense i.e. it does not describe “a new experimental or computational method, test or procedure..”. 
We would submit however that it satisfies the following description: “..technical articles that describe tools that facilitate the design or performance of experiments..”. This is ultimately an editorial decision.\n\nThe most validated and useful measure which users will get following this tutorial is IC and FC. This means that all the temporal characteristics can be well computed by following this tutorial.\n\nWhile the paper flows well in its current form and is easy to follow for someone experienced in gait analysis, extra detail and additional steps may be required for the novice, target audience e.g. further detail on calibration could be considered in places.\n\nIt could be useful to include a table outlining the current commercially available products and what they measure (what is validated and what is not).\n\nThe authors may consider an additional section for how one may compute more accurate spatial characteristics when using an IMU rather than just an accelerometer, as the current treatment of this issue is rather terse. If they do so, pointing the readers to relevant algorithms and existing open source MATLAB code could aid here.\n\nThe manuscript is generally well-written however a thorough check of grammar and sentence structure is warranted. The following sentences are examples:\n\n\"Abstract, Results: We provide an introduction to conduct a routine instrumented gait assessment\" \"Gait has been shown as a pragmatic and useful (bio) marker of incipient pathology, inform diagnostic, track disease progression and measure the efficacy of interventions.\" \"The common sensor within modern wearables comprises a tri-axial (medio-lateral, anterior-posterior, longitudinal) accelerometer: due to low manufacturing cost, miniaturised size and low power consumption.\" \"Data digitisation and associated memory within the wearable, one full battery charge of a modern wearable is sufficient to gather data every 0.01s (100 Hertz) for 7 days. 
The equivalent of over 180 million (60 data point/second × 3 axis) data points to analyse a participant.\" \"Each of the aforementioned may not offer the high sampling rates to gather ~180 million data points but all positives/ negatives depending on the research question and provision of pre-programed outcomes.\"\n\nSpecific Comments: Abstract: In the methods section, where it states \"open platform technology\" is it fair to say this considering MATLAB is a key feature of this tutorial paper and licenses are expensive?\n\nIntroduction: In paragraph two at “This can be attributed to:….”, in the interest of clarity, perhaps the authors could explicitly state and support the fact that temporal measures have been better validated than spatial ones to date and hence this tutorial mainly supports the analysis of temporal gait characteristics.\nInformation included later in the paper under \"current options\" may sit better in the introduction section. As suggested above, a table outlining the stated products in addition to other wearable sensing gait analysis products, their associated sampling rates, placement, output measures and cost would help readers understand the argument for using a low cost accelerometer solution and implementing the techniques described in this tutorial.\n\nMaterials & Methods: The description of the signal as (i) and (ii) is misleading and may cause readers to think that under dynamic conditions there is no gravitational acceleration present in the signals. Perhaps rephrase to a summation of inertial acceleration and gravitational acceleration acting on each axes.\nWhere the authors present a \"high resolution device (100Hz)…….\" they should also provide recommendation on accelerometer range and sensitivity for maximising signal quality when analysing gait with an accelerometer positioned at L5.\nWhile the authors go into good detail on sensor positioning, they do not mention the importance of calibrating a tri-axial accelerometer device. 
Not doing so can have a negative effect on the validity of computed gait characteristics, especially spatial ones. To make this tutorial more holistic, steps and sample code to do this could be added which would help a clinician/researcher to calibrate a ‘generic’ tri-axial accelerometer. The \"correcting for offset\" paragraph is only a partial solution.\nFigure 1(a) – “Dynamic Postural Control” may be a better description of these variables?\nFigure 1(b) would benefit by adding the dimensions of the walking circuit to the diagram.\nIn the pre-processing section a little more elaboration on what the \"unwanted signal\" is and adding a description of what the \"desired signal\" corresponds to would help readers new to the area of signal interpretation.\nIn the “Correcting for offset and misalignment” section where it is stated that \"the average value of [...] will approach the sin of the angles\" it should be stated whether the values are in m/s^2 or g. Also, it is worth noting that this assumption may not work well when the accelerometer is undergoing significant inertial acceleration i.e. at high gait speeds.\nFigure 2(b)-(c) would be tough to interpret if document was printed in black and white, but clear otherwise.\nIn steps (iii) and (v) of algorithms are there any thresholds or other inputs to the findpeaks function that people following the tutorial should know about?\nIn the \"spatial characteristics\" section where h is used, it may help to more explicitly state how h is calculated in the text as well as in Figure 2(c) which is currently difficult to interpret in black and white.\n\nDiscussion & Conclusion: I think it would be important to reiterate here that the methods presented are for application in steady state walking. The authors state elsewhere in the manuscript that shorter intermittent walks can be pooled, but caution against the effects of acceleration/ deceleration portions of the signal. 
They state that this, “...can be minimised by excluding the first and last steps (values) of the walks before pooling”. This is somewhat vague and needs to be cautioned against more clearly in the concluding paragraphs.",
"responses": []
},
{
"id": "16658",
"date": "02 Nov 2016",
"name": "Stephen J. Redmond",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe authors provide a tutorial on how to derive some important temporospatial gait parameters from a single waist-worn triaxial accelerometer. The tutorial is mostly pitched at a very basic level for someone with any signal processing expertise, but I think this is overall a strength given the target audience. However, in places it is not so clear and wanders into descriptions of signal processing methods without sufficient intuitive explanation of what these methods are trying to achieve - I will draw attention to this later in detailed comments.\nOn the whole, I think the focus of the paper is quite narrow and largely ignores a rapidly growing literature on the use of inertial measurement units (IMUs, containing accelerometer, gyroscope, magnetometer, and barometer) which are in all modern smartphones, or appearing as small wearables for foot tracking and joint angle estimation, etc. It is OK not to include this in the tutorial, but for full disclosure I think it is important to tell how many more parameters could be estimated using these tools.\nMost of my comments below are minor and aim to help improve readability. There are, however, some technical errors which must be fixed before I would endorse the paper as correct.\nAbstract:\nThe language here is trying very hard to be more prosaic than it needs to be. I would consider editing parts of this.\n\nI disagree with the comment that MATLAB is an \"open platform\" - this should be deleted. 
There are MATLAB-specific functions, like findpeaks, which would not be available elsewhere.\n\nIntroduction:\n\"for multitudinous gait cycles\"\n\nI tend to disagree with the statement \"their use has been limited to academic studies\". What is the difference anyway between an academic study and an epidemiological study?\n\n\"This can be attributed to...\". I don't think you have yet provided enough context for the reader to comprehend these issues that are bemoaned here.\n\nMaterials and methods:\n\"a triaxial (medio-lateral, anterior-posterior, longitudinal) accelerometer\". It is wrong and confusing to choose a body-centric coordinate frame for the sensor as this assumes a particular orientation on the body.\n\n\"static conditions... where gravity has a pronounced effect on one axis\". If one of the sensor's axes is not aligned with vertical, gravity will have an effect on all axes.\n\n\"[various commercial devices]... may not offer high sampling rates\". Please say which do and don't, and what is your definition of \"high\".\n\n\"all positives and negatives depending on the research question...\". What do you mean by this statement? I don't understand.\n\nTypo: \"constrained\" not \"constraint\".\n\nAs mentioned above, IMUs are completely overlooked here.\n\nWhat is the motivation for selecting 100 Hz? Tradition? The appendix states 15-20 Hz is the bandwidth of human movement, so about 50-60 Hz sampling rate should be ample.\n\np4: \"healthy and pathological studies\". Grammar problem. The studies are not healthy.\n\nDefine what is meant by \"slippage\".\n\n\"misalignment due to correct orientation\". Do you mean incorrect orientation?\n\nYou propose a lab-based test over a 25 m loop, but this is at odds with the introduction which motivates this tutorial as a method for \"deployment in any setting\". Intro should be changed.\n\n\"doesn't\". Write \"does not\". 
Do not use contractions in formal articles.\n\n\"utilises\" rather than \"utilizes\" as this appears to be UK English.\n\np5: Grammar. \"Its support network... comprehensive toolboxes\".\n\n\"Attachment error and gravity can be easily overcome\". This is a naive comment. Without a gyroscope it is very much not an easy task. Reword to say \"approximately\" or \"crudely\".\n\n\"The average/mean values of the values captured...\". In general you have not discussed the important issue of calibration properly. The offsets, sensitivities and non-orthogonality of the axes should be fixed before experiments start. See the following references for more information1,2,3.\n\nI'm sticking my neck out here, but I'm very sure the method proposed to resolve accelerations in the world frame is wrong. This is a concern since the original source is 18 years old4. I've gone through the maths and I'm fairly certain it's incorrect, but would be happy to be corrected. The correct method would involve two rotations to get the correct rotation matrix. The first is a rotation about the AP axis (same as x-axis in world at the start), then tip the frame back by rotation around the y-axis of the world. The final rotation matrix gives the same equations when inclination is only in the xz-plane or yz-plane, as per the Moe-Nilssen paper, but the equations do not match when there is both pitch and roll simultaneously. It is also obvious this is wrong as Eq. (1) (p5) in this paper does not use the a_m accelerometer reading, and it would definitely contribute to the x-axis acceleration in the world frame if there is both pitch and roll.\n\nTypo in Eq (2): \\theta_v should be a_v.\n\np5. For the equations on this page, need to be clear what units you are working in. Units of g = 9.8 m/s^2.\n\nTypo: \"detailed descriptions is beyond\".\n\nDescription of CWT and DWT is confusing. 
DWT is just CWT with carefully chosen scale factors.\n\nThe description of detecting IC and FC using the method by McCamley et al.5 would not be particularly clear to a novice reader. I looked up this paper. It was tested on only 18 young volunteers and seems to be heuristic. I think it's worth mentioning such limitations of the methods you've chosen to implement. Also, to anybody reading this section the signal processing feels a bit like black magic. Could some description of the motivation for each derived signal, or what it physically represents, be provided? I am struggling to follow the logic behind the processing myself and I have signal processing experience in this field.\n\nThe use of the Wavelet Toolbox and findpeaks function in MATLAB is a limiting factor in the implementation. Also, it is not clear in the text or code what the scale factor is. It seems to be 10 samples at a sample rate of 100 Hz, so 0.1 s. Why this scale? Also, your code should be configurable with respect to variable sampling rates. If it is to be used by those unskilled in the art, they should not be expected to understand the importance of such hyperparameters.\n\nFigure 2: You have not discussed the importance of DC offsets when using cumtrapz to integrate the acceleration signal. Have you removed the DC offset first? What will happen if this is not done? (it will accumulate linearly).\n\nFigure 2: The h for height seems incorrect to me with regards to the h used in the inverted pendulum model. I thought the inverted pendulum h is for total vertical distance travelled by COM on the arc of the circle?\n\nFigure 2: Need time axes scale, labels, and units.\n\np6: \"wearable-height\" should be \"wearable height\".\n\np7: Need to define terms in Eqs. (7b) and (8). What is \"SD\"? What is \"Steps\"? 
What is \"average\"?\n\np7: Dataset 1: \"if vertical is oriented at +1g, this can be inverted by multiplying by -1.\" Acceleration due to gravity is vertically up, and so should read +1g, assuming the positive sense of the sensor's z-axis also points up. There is also a typo here: \"format mat be converted\".\n\np7: spelling. \"alternation\" to \"alternative\".\n\nAppendix:\nUnits for Wn should be stated; it should be written in code as fn/(fs/2) with terms defined in comments.\n\nCode should be publicly available on a code repository (sorry if it is, but I cannot see the link).\n\nNotation is sloppy here, with \"Aa = AccData(:,2)\" and \"am = AccData(:,3)\".\n\n\"av\" variable is used but never defined. Does this code run, or is it just excerpts? Not clear to me.\n\n\"aaMean\" variable never created.\n\nReferences:\nWith 13/41 references involving self-citation, the cited literature could be more balanced.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2323
|
https://f1000research.com/articles/5-2322/v1
|
14 Sep 16
|
{
"type": "Case Report",
"title": "Case Report: Multiple complications after laparoscopic cholecystectomy with perforation and spilled gallstones",
"authors": [
"Jeanett Klubien",
"Dorte Winther Borgersen",
"Jacob Rosenberg",
"Hans-Christian Pommergaard",
"Dorte Winther Borgersen",
"Jacob Rosenberg",
"Hans-Christian Pommergaard"
],
"abstract": "Introduction Perforation of the gallbladder is a benign and common complication during laparoscopic cholecystectomy. However, it may result in stone spilling, which potentially can lead to serious postoperative complications.\n\nCase report A 70-year-old male underwent laparoscopic cholecystectomy for acute cholecystitis. The procedure was complicated by perforation of the gallbladder and spilling of gallstones. More than a year after the procedure, the patient developed subcutaneous abscesses containing some of the spilled stones; a computed tomography scan revealed a complex intraabdominal and intrathoracic fistula communicating from the abdominal cavity to the pleura, and ultrasonic imaging found a lost gallstone in the thorax. After two years, the patient developed pleural empyema and sepsis secondary to the condition. Presently, the patient awaits surgery for the fistula and empyema.\n\nConclusion Proper care should be taken to avoid stone spilling during laparoscopic cholecystectomy. However, if perforation and stone spilling occur, all visible stones should be removed during the procedure and the complication should be noted in the medical records. Furthermore, the patient should be thoroughly informed. This may help accelerate diagnosis if the patient later suffers from a complication related to lost stones.",
"keywords": [
"Laparoscopic cholecystectomy",
"spilled gallstones",
"lost gallstones",
"abscess",
"fistula",
"empyema",
"case report"
],
"content": "Introduction\n\nPerforation of the gallbladder during laparoscopic cholecystectomy (LC) is a well-known and common complication (8–40%)1 that may lead to intraabdominal spilling of gallstones, and some of the spilled stones may not be retrieved despite all efforts. Lost stones during LC are less frequent, with an incidence varying in the literature from 0.1 to 20%1–3. Although considered a benign complication, it is reported that 0.03–8.5% of the lost stones will lead to a postoperative complication2,3.\n\nWe present a case of multiple complications after perforation of the gallbladder and subsequent stone spilling during LC. This case report is reported according to the CARE statement4.\n\n\nCase report\n\nA 70-year-old Caucasian male, with a medical history of hypertension, was admitted in March 2014 after four days of diffuse abdominal pain and fever up to 39°C. A computed tomography (CT) scan identified multiple gallstones in an inflamed gallbladder. Abdominal ultrasonic imaging verified the diagnosis, confirming multiple gallstones and thickening of the gallbladder wall as signs of acute cholecystitis. The patient underwent acute LC with the intraoperative finding of a severely inflamed gallbladder. In addition, the procedure was complicated by perforation of the gallbladder and gallstones were spilled. The gallbladder was removed using an endoscopic bag after complete dissection to prevent further stone spilling and all visible stones were removed. Lastly, the peritoneal cavity was irrigated with saline. The complication was noted in the medical records.\n\nOne year after the procedure, the patient was admitted with tenderness in the right upper quadrant. A CT scan showed swellings in the upper right part of the abdominal wall and between the liver and the lower lobe of the right lung, with calcifications at both sites assumed to be lost gallstones (Figure 1). 
The patient did not receive any treatment for the swellings.\n\nAn overview of the patient's hospital contacts and procedures after the laparoscopic cholecystectomy. s.c. subcutaneous, dxt. dexter, CT computed tomography, MGUS monoclonal gammopathy of undetermined significance, ATN acute tubular necrosis.\n\nDuring the period between 15 and 18 months following the LC, the patient returned to the hospital twice due to subcutaneous abscesses below the right rib curvature and the right side of the lower back. The suspected lost gallstones were assumed to have migrated to the subcutaneous tissue causing abscess formation. The diagnosis was confirmed by CT and compared with the previous CT (Figure 2). Both abscesses were located deep in the subcutaneous tissue and, due to location and size, were treated with ultrasound-guided incision and drainage. During these procedures, four gallstones were located and removed from the abscess cavities. Afterwards, the patient was followed as an outpatient because of daily secretion from the abscess cavity on the patient’s back. Because of the unhealed abscess cavity, CT and ultrasound scans were performed 18 months after the LC. The CT revealed a complex intraabdominal and intrathoracic fistula with external opening in the lower right side of the back with communication to the pleura. The ultrasonic imaging revealed a lost gallstone in the lower right side of the thorax. The fistula was treated conservatively with drainage.\n\nAn abdominal computed tomography showing spilled gallstones at different levels 15 months after the laparoscopic cholecystectomy (dotted arrows). (a) Shows a gallstone behind the liver and (b) shows a gallstone in the abdominal wall.\n\nIn February 2016, the patient was admitted to the hospital because he had developed sepsis and pleural empyema secondary to the condition. The patient had a short stay in the intensive care unit and was discharged from the hospital after one month. 
During this month, the patient developed monoclonal gammopathy and acute tubular necrosis due to the infection in the fistula. After hospitalization, the fistula was rinsed daily with saline solution, and during one of these procedures another gallstone was extracted. Presently, the patient awaits surgery for the fistula and empyema.\n\n\nDiscussion\n\nThis case is an example of serious complications caused by spilled gallstones. Migration of lost stones, as in this case, can cause both local and systemic complications. However, stone spillage is unavoidable in some patients despite precautionary measures.\n\nThe spilled stones may be harmless, but efforts should be made during the procedure to locate and remove all stones to prevent future local and systemic complications. The postoperative complications due to lost gallstones may develop weeks to several years after the primary procedure and are not necessarily located in the right upper quadrant2,5,6. Together with a lack of awareness or documentation in the medical records, this may contribute to a delayed diagnosis of a stone complication. However, delayed diagnosis may also be due to the fact that some gallstones are not visible on CT. Predisposing factors for complications of spilled gallstones include older age, male sex, perihepatic localization of lost stones, acute cholecystitis, spilling of pigment stones rather than cholesterol stones, multiple stones (>15 stones), and large stone size (>1.5 cm)1.\n\nIt is not mandatory to convert to open surgery to retrieve stones after perforation has occurred during LC3,6, because the subsequent incidence of severe postoperative complications is low2,3 and conversion to open surgery is associated with a higher rate of systemic complications than laparoscopic surgery3. In this case report, the surgeon chose not to convert to open surgery to look for more lost gallstones, which is in line with the recommendations found in the literature3,6. 
However, proper care should be taken to avoid stone spilling and thereby possible postoperative complications. All visible stones should be removed during the laparoscopic procedure, and the gallbladder should be retrieved in an endoscopic bag upon dissection to prevent further stone spilling when a perforation has occurred. In this case, the gallstones were found on CT before complications developed. Perhaps the abscesses and fistula could have been avoided if the stones had been removed when they were discovered.\n\nIn conclusion, stone spillage is an unavoidable and well-known problem in LC. If perforation and stone spillage occur, it should be noted in the medical records and the patient should be thoroughly informed about the lost stones and their possible postoperative complications. This may help clinicians and accelerate the diagnosis if the patient later suffers a complication due to lost stones.\n\n\nConsent\n\nWritten informed consent was obtained from the patient for publication of this case report and any accompanying images and/or other details that could potentially reveal the patient’s identity.",
"appendix": "Author contributions\n\n\n\nJK, DW, JR, and HCP conceived the study. JK and HCP prepared the first draft of the manuscript. DWB, JR and HCP did the revision and all authors have read and approved the final version of the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nBrockmann JG, Kocher T, Senninger NJ, et al.: Complications due to gallstones lost during laparoscopic cholecystectomy. Surg Endosc. 2002; 16(8): 1226–1232. PubMed Abstract | Publisher Full Text\n\nZehetner J, Shamiyeh A, Wayand W: Lost gallstones in laparoscopic cholecystectomy: all possible complications. Am J Surg. 2007; 193(1): 73–78. PubMed Abstract | Publisher Full Text\n\nSchäfer M, Suter C, Klaiber C, et al.: Spilled gallstones after laparoscopic cholecystectomy. A relevant problem? A retrospective analysis of 10,174 laparoscopic cholecystectomies. Surg Endosc. 1998; 12(4): 305–309. PubMed Abstract | Publisher Full Text\n\nGagnier JJ, Kienle G, Altman DG, et al.: The CARE guidelines: consensus-based clinical case reporting guideline development. J Med Case Rep. 2013; 7: 223. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHougård K, Bergenfeldt M: [Abdominal fistula 7 years after laparoscopic cholecystectomy]. Ugeskr Laeger. 2008; 170(36): 2803. PubMed Abstract\n\nHillingsø JG, Kristiansen VB: [Abscess in the right flank--a late complication of laparoscopic cholecystectomy]. Ugeskr Laeger. 1999; 161(32): 4520–4521. PubMed Abstract\n\nDemirbas BT, Gulluoglu BM, Aktan AO: Retained abdominal gallstones after laparoscopic cholecystectomy: a systematic review. Surg Laparosc Endosc Percutan Tech. 2015; 25(2): 97–99. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "16285",
"date": "26 Sep 2016",
"name": "Gabriel Sandblom",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe report describes a rare case where spilled gallstones following laparoscopic cholecystectomy resulted in fistulas to the subcutaneous tissue as well as to the pleura. Albeit an uncommon late complication of spilled gallstones, the report is an important reminder of the hazards of gallstone surgery and an argument for careful surgical technique in laparoscopic cholecystectomy for acute cholecystitis. Even if there is no high risk of developing severe complications such as those described in the report, it should be kept in mind when performing laparoscopic cholecystectomy. Granulomas due to gallstones left in the abdominal cavity are probably an overlooked complication following gallstone surgery that may cause persisting pain.\nComments:\nDid the report from the cholecystectomy include a more detailed description of the spilled gallstones? Were the gallstones left in the trocar site (which could explain the subcutaneous fistulas)? Were any attempts made to extract the stones?\n\nWere any bacterial cultures taken? The aggressive local growth, with formation of abscesses as well as fistulas, raised the suspicion of uncommon pathogens (e.g. Salmonella Virchow or Clostridium perfringens).\n\nIn the discussion: “In this case, the gallstones were found on CT before complications developed. Perhaps, the abscesses and fistula could have been avoided if the stones had been removed when they were discovered.”. 
Do the authors suggest a repeated laparoscopic procedure, even in asymptomatic patients, in order to remove spilled gallstones? This seems like a slightly hyperbolic recommendation from this single experience. A laparoscopic exploration aimed at discovering gallstones overlooked at the first procedure can hardly be expected to reduce the risk of late complications.",
"responses": []
},
{
"id": "16707",
"date": "10 Oct 2016",
"name": "Tatsuhiro Masaoka",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe authors report a case complicated by stone spillage after laparoscopic cholecystectomy (LC). Perforation of the gallbladder during LC is a common complication, and the incidence of lost stones during LC is not so rare. However, it is rare that lost stones lead to such a severe postoperative complication, so I think this paper is very interesting. However, the authors should consider the following points.\nThe case report section is well written; the authors summarize the long clinical course compactly. I would like to know how, one year after the procedure, the authors presented treatment options to the patient and his family. The authors describe that the CT just showed swelling around the calcifications. However, the patient was conscious of abdominal tenderness, and the cause of the tenderness was assumed to be a lost gallstone. I think an operation must be considered the best treatment at this point, and the authors should discuss this point further. During the period between 15 and 18 months following the LC, subcutaneous abscesses formed and the patient was treated with ultrasound-guided incision and drainage. Did the authors take bacterial cultures? What about antibacterial treatment? Proper antibacterial treatment is important for preventing disease progression. If given, the authors should describe the antibacterial treatment in detail (choice of agent, doses, duration, etc.).\n\nThe discussion section is also well written. However, the authors did not mention the fistula communicating with the pleura. 
I would like to know why the patient developed a fistula. What was the main factor in fistulization: the position of the spilled gallstones, or inadequate treatment of the abscess? Did the authors take a CT just after the LC? Of course, an operation to remove the spilled gallstones might have been the best choice for preventing fistulization once a stone was found. Is there any other option for avoiding a fistula communicating with the pleura? The authors should discuss this point.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2322
|
https://f1000research.com/articles/5-1150/v1
|
03 Jun 16
|
{
"type": "Research Article",
"title": "MEPPitope: spatial, electrostatic and secondary structure perturbations in the post-fusion Dengue virus envelope protein highlights known epitopes and conserved residues in the Zika virus",
"authors": [
"Sandeep Chakraborty"
],
"abstract": "The dramatic transformation of the Zika virus (ZIKV) from a relatively unknown virus to a pathogen generating global-wide panic has exposed the dearth of detailed knowledge about this virus. Decades of research in the related Dengue virus (DENV), finally culminating in a vaccine registered for use in endemic regions (CYD-TDV), provides key insights in developing strategies for tackling ZIKV. The previously established MEPP methodology compares two conformations of the same protein and identifies residues with significant spatial and electrostatic perturbations. In the current work, MEPP analyzed the pre-and post-fusion DENV type 2 envelope (E) protein, and identified several known epitopes (His317, Tyr299, Glu26, Arg188, etc.) (MEPPitope). These residues are overwhelmingly conserved in ZIKV and all DENV serotypes. Characterization of α-helices in E-proteins show that α1 is not conserved in the sequence space of ZIKV and DENV. Furthermore, perturbation of α1 in the post-fusion DENV structure includes a known epitope Asp215, a residue absent in the pre-fusion α1. A cationic β-sheet in the GAG-binding domain that is stereochemically equivalent in ZIKV and all DENV serotypes is also highlighted due to a residue pair (Arg286-Arg288) that has a significant electrostatic polarity reversal upon fusion. Finally, two highly conserved residues (Thr32 and Thr40), with little emphasis in existing literature, are found to have significant electrostatic perturbation. Thus, a combination of different computational methods enable the rapid and rational detection of critical residues that can be made the target of small drugs, or as epitopes in the search for an elusive therapy or vaccine that neutralizes multiple members of the Flaviviridae family.",
"keywords": [
"Zika",
"Dengue",
"flaviviruses",
"microcephaly",
"Guillain-Barre Syndrome",
"envelope proteins",
"epitopes",
"neutralizing antibodies"
],
"content": "Introduction\n\nThe genus Flavivirus of the family Flaviviridae comprises more than 70 viruses, including important human pathogens such as the Zika (ZIKV), Dengue (DENV), Japanese encephalitis (JEV), yellow fever (YFV), Tick-borne encephalitis (TBEV) and West Nile (WNV) viruses1,2. Currently, only four flaviviruses (YFV, TBEV, JEV and DENV) have licensed vaccines3,4. In flaviviruses, a single polyprotein encoded by a positive-sense RNA genome is cleaved by viral and host proteases into three structural (premembrane:prM, envelope:E and core:C) and seven non-structural (NS1, NS2A, NS2B, NS3, NS4A, NS4B, and NS5) proteins5. These Class II fusion viruses6 enter the cell through clathrin-mediated endocytosis7,8, triggered by protonation of conserved histidine residues at low pH9,10. Conformational changes of E-homodimers to E-monomers at the viral surface expose a highly conserved fusion loop11, which subsequently penetrates the outer leaflet of the host membrane12, wherein a stable trimer creates a fusion pore allowing the nucleocapsid to enter the cytosol13. Subsequent to viral replication, virus assembly creates nonfusogenic immature particles in the lumen of the endoplasmic reticulum. The host protease furin in the trans-Golgi network converts this non-virulent form to a smooth virulent virion by cleaving the globular prM into pr and M proteins, of which the M protein remains associated with meta-stable E homodimers5,14.\n\nUntil recently, ZIKV infections were rare and confined to Asia and Africa15. An analysis of the 2007 ZIKV outbreak in Yap Island, Federated States of Micronesia concluded with the prophetic warning that ‘clinicians and public health officials should be aware of the risk of further expansion of Zika virus transmission’16. 
The dramatic transformation of this relatively unknown virus to a globally recognized pathogen occurred after it was detected in Brazil17, and quickly spread across the globe (Brazil, France, United States of America, and El Salvador to date), prompting a WHO emergency committee to assess the linkage of this virus to microcephaly and Guillain-Barré syndrome (GBS)18,19.\n\nThis sudden crisis has exposed the dearth of detailed knowledge about ZIKV. Computational homology modeling has been used to address this limitation, exploiting the large volume of data available on related viral structures20. While the genome of ZIKV was sequenced in 200721, the structure of mature ZIKV22 was only recently determined, elucidating several salient features of the E and M proteins, the target of most neutralizing antibodies23–25. However, decades of research on other members of the Flavivirus family provides a trove of information that needs to be contextualized with respect to ZIKV.\n\nDENV has four serotypes (DENV1-4)26. The essential challenge in developing a tetravalent DENV vaccine has been the fact that antibodies for a particular serotype can be enhancing, and potentially life-threatening for secondary infections with other serotypes27. Apart from vaccines, other anti-viral strategies include developing peptide vaccines28, using peptide-inhibitors derived from the viral proteins29, inhibiting the fusion process30 and anionic peptides that target cationic ‘hotspots’31,32. Computational epitope predictors like the sequence-based RANKpep33 or the structure-based Pepitope34 have been used to validate antibody binding35,36. A detailed structural analysis of the proteins of these flaviviruses provides deeper insight into conservation than a sequence-based analysis does. 
Furthermore, analyzing the spatial and electrostatic perturbations of protein structures after conformational changes arising due to the fusion process helps in identifying residues that are critical and possibly exposed to the environment, making them better candidates as vaccine epitopes or drug targets.\n\nIn the current work, several computational methods were used to analyze DENV and ZIKV E protein structures. Firstly, a quantitative analysis of spatial and electrostatic perturbation in the pre37 and post-fusion12 DENV-2 E proteins was done using MEPP38. This revealed that highly perturbed residues are overwhelmingly conserved, and also epitopes of known neutralizing antibodies23,35,39–43. Characterization of α-helices in E-proteins using techniques (PAGAL44,45) previously applied to the Ebola virus46, revealed that α1 in ZIKV-E and DENV-E proteins is not conserved in the sequence space. Furthermore, α1 is perturbed in the post-fusion DENV2-E protein12, and includes a known epitope that is not part of the pre-fusion α141,42,47. PAGAL analysis also highlights a cationic β-sheet within a putative GAG-binding domain48,49, which consists of a pair of arginine residues that have significant electrostatic polarity reversal48,49. Finally, residues that are involved in antibody neutralization by 5J7 were re-analyzed, and some conflicting results were obtained50.\n\n\nMethods\n\nThe MEPP (version 1)38 and PAGAL (version 1)44 packages have been previously described. The recently solved cryo-EM structure of ZIKV (PDB ID:5IRE) was used as the main structure for analysis of ZIKV in the current study22. PDB ID:1OKEA was the structure of the DENV2-E protein used for analyzing domains I–III, which lacks the stem and transmembrane domains37. The structure of the post-fusion DENV2-E protein was obtained from PDB ID:1OK8A12. 
Since the post-fusion DENV2-E protein did not have side-chain densities for residues 145-158, these residues were removed from the pre-fusion protein (PDB ID:1OKEA) in order to have a uniform comparison (see 1OKEAFIXED.pdb in Dataset1). This carries the implicit assumption that this loop affects both the pre-fusion and post-fusion proteins in the same manner. A radius of 6Å was used to identify interacting residues38. The ‘distance perturbation index’ is computed by dividing the absolute distance deviation by the smaller of the two distances.\n\nFor the stem and transmembrane domains, a DENV3 (PDB ID:3J6SA) structure was used. Since PDB ID:3J6SA has a resolution of 6Å and no side-chain atoms, SWISS-MODEL51 was used to generate the model of PDB ID:3J6SA using the ZIKV-E protein (PDB ID:5IREA) as the template (see 3J6SASWISSA.pdb in Dataset1). Hardware requirements are very modest - all results here are from a simple workstation (8 GB RAM) and runtimes were a few minutes at most.\n\nThe APBS (version 1.4) parameters were set as described previously52. APBS writes out the electrostatic potential in dimensionless units of kT/e, where k is Boltzmann’s constant, T is the temperature in K and e is the charge of an electron. All protein structures were rendered by the PyMOL(TM) Molecular Graphics System, version 1.7.0.0. (http://www.pymol.org/). α-helices and β-sheets were extracted using DSSP (version 2.2.1)53. Protein structures were superimposed using MUSTANG (mustang-3.2.1)54. The color coding for the Edmundson wheel is as follows: all hydrophobic residues are colored red, while hydrophilic residues are colored in blue: dark blue for positively charged residues, medium blue for negatively charged residues and light blue for amides.\n\nMultiple sequence alignment was done using MAFFT (version 7.123b)55, and figures were generated using the ENDscript server56. 
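The ‘distance perturbation index’ defined in the Methods reduces to a few lines of code. A minimal sketch, assuming one residue pair represented by its coordinates in the two conformations; the function name and input format are illustrative, not taken from the MEPP package:

```python
from math import dist

def perturbation_index(pre_pair, post_pair):
    """Distance perturbation index for one residue pair, as described in
    the Methods: the absolute deviation between the pre- and post-fusion
    pairwise distances, divided by the smaller of the two distances.
    pre_pair / post_pair: ((x, y, z), (x, y, z)) coordinates of the two
    residues in the pre- and post-fusion structures, respectively."""
    d_pre = dist(*pre_pair)
    d_post = dist(*post_pair)
    return abs(d_pre - d_post) / min(d_pre, d_post)

# A pair 5 Å apart pre-fusion and 7 Å apart post-fusion: |5 - 7| / 5 = 0.4
print(perturbation_index(((0, 0, 0), (5, 0, 0)), ((0, 0, 0), (7, 0, 0))))
```

Normalizing by the smaller of the two distances keeps the index symmetric under swapping the conformations and emphasizes pairs that were close in at least one state.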
In order to obtain a multiple sequence alignment with a single representative of a stereochemical group (positive, negative, aromatic and non-polar residues) the following substitutions were done: E>D, R>K, S>T, W>F, Y>F, L>M, V>M, I>M, A>M. Gly (without a side chain) and Pro (with a cyclic side chain) were not substituted. His was also not substituted, due to its importance in pH sensing among flaviviruses9,10. PHYML (version 3.0) was used to generate phylogenetic trees from alignments57.\n\n\nResults\n\nThe focus of the study in the current paper is the ZIKV and DENV envelope (E) protein, a determinant of tropism and virulence58. Unless explicitly specified, residue numbering is based on DENV2 (PDB ID:1OKEA), while secondary structures are numbered according to the ZIKV protein (PDB ID:5IREA). Each E-protein subunit is about 500 residues long in these flaviviruses. The soluble ectodomain has three distinct domains (I, II, III) - domain I and II are interlaced in the sequence space59. These domains are followed by a stem region which contains two cationic amphipathic helices separated by a stretch of conserved sequences60,61, ending in an anchor region with two transmembrane helices (Figure 1, Figure 2). Apart from a conserved glycosylation site (Asn153) present in all flaviviruses, DENV has an additional site for N-linked glycosylation (Asn67) which regulates interaction with the lectin DC-SIGN62. The hydrophobic anchoring fusion loop (residues 98-109), which penetrates the outer bilayer leaflet of the host cell membrane to initiate cell entry63, is highly conserved in all flaviviruses (Figure 1, Figure 2).\n\nThe E-proteins form a raft-like structure, in complex with the M-proteins (not shown here). Most common exposed residues are the highly conserved fusion loop (residues 98-109 in blue), the glycosylation site (Asn154 in ZIKV, in orange), and the GAG-binding domain which consists of a cationic β-sheet (in magenta). 
The stem region consists of cationic amphipathic helices α3 and α4 in the E proteins in red and green, respectively. The hydrophobic transmembrane helices α5 and α6 are in cyan.\n\nThe most prominent difference between the E protein from DENV and other viruses analyzed here is a missing stretch of amino acids near the Asn153 glycosylation site. This stretch is the possible reason for an incorrect alignment of the conserved glycosylation site (N-x-S/T) sequence in the MSA (both ClustalW and MAFFT have this issue). Also, DENV has an additional glycosylation site (Asn67) missing in other viruses. MSA was done using MAFFT55, and the alignment of the secondary structures was done using ESPript56.\n\nThe pre-(PDB ID:1OKE37) and post-(PDB ID:1OK812) fusion conformations of DENV2 were used for MEPP analysis38. The major difference in these proteins is a 33Å displacement of domain III, as previously noted9,13. Several metrics were used for identifying residues that undergo spatial, electrostatic and secondary structure perturbations. The first analysis computed pairs of residues that have an electrostatic potential difference (EPD) reversal (EPD-R) (> 150 units), were within 8 Å of each other in both conformations and had minimal distance perturbation (<4Å). Residues were marked as (i) completely conserved, (ii) stereochemically equivalent or (iii) not conserved. His317, the residue implicated in pH sensing64,65, switches electrostatic polarity with respect to Thr315 (Table 1). Both His317 and Thr315 are conserved in ZIKV/DENV1-4/JEV/WNV (Figure 2), and are known epitopes23,43. Another pair (Arg286-Arg288) with EPD-R is stereochemically equivalent in ZIKV/DENV1-4/JEV/WNV (Figure 3), and lies on a putative GAG-binding domain preceding the DI/DIII linker48,49. Thr359, which is an epitope for the same MAb that binds Thr315 and His31723 but is not conserved even among DENV serotypes, is another such residue which has EPD-R with Ser363 (Figure 2). 
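The three EPD-R screening criteria above (pairwise distance within 8 Å in both conformations, distance perturbation below 4 Å, EPD reversal of more than 150 units) can be written as a single predicate. A hedged sketch: ‘reversal’ is interpreted here as a sign flip of the pairwise potential difference, an assumption since the exact definition is internal to MEPP, and the function name is illustrative:

```python
def is_epd_reversal_pair(d_pre, d_post, epd_pre, epd_post,
                         max_dist=8.0, max_shift=4.0, min_change=150.0):
    """Screen one residue pair against the criteria in the text:
    - within 8 Å of each other in both conformations,
    - minimal distance perturbation (< 4 Å),
    - EPD reversal of more than 150 units (kT/e, as written by APBS).
    The sign-flip reading of 'reversal' is an assumption, not MEPP's code."""
    close_in_both = d_pre <= max_dist and d_post <= max_dist
    small_shift = abs(d_pre - d_post) < max_shift
    reversal = (epd_pre * epd_post < 0
                and abs(epd_post - epd_pre) > min_change)
    return close_in_both and small_shift and reversal
```

Under this reading, a pair such as Arg286-Arg288 (Table 1) passes because it stays close with a small shift while its pairwise potential difference changes sign by more than 150 units.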
Thus, barring the pair Thr32-Thr40 (Table 1), all residues that have an EPD-R with respect to a spatially proximal residue are known to be epitopes, even when not conserved across different viruses.\n\nThese pairs have minimal distance perturbation (<4 Å), significant reversal in EPD (> 150 units) and are within 8 Å in both conformations. For example, Arg286-Arg288, part of a cationic β-sheet and a putative GAG-binding domain, has an electrostatic perturbation without having any relative spatial displacement. F-: final value in post-fusion DENV-E protein (PDB ID:1OK8A), O-: original value in pre-fusion DENV-E protein (PDB ID:1OKEA). Conserved in ZIKV and all four DENV serotypes? - Y: yes, N: no, StCh: stereochemically equivalent. Distances in Å. See Methods section for units of potential.\n\nThe following substitutions were done in the sequence space: E>D, R>K, S>T, W>F, Y>F, L>M, V>M, I>M, A>M in order to use a single amino acid for positive, negative, aromatic and non-polar residues. Gly (without a side chain) and Pro (with a cyclic side chain) were not substituted. His was also not substituted, due to its importance in pH sensing among flaviviruses9,10. These substitutions enable MAFFT to align the glycosylation site properly. Also, these show the stereochemical equivalence of the cationic residues of β-21 in ZIKV (PDB ID:5IREA), which is part of the GAG-binding domain.\n\nNext, normalized distance deviations (see Methods) highlight Phe11, Tyr299, Ser7, Arg9, Glu26, Arg188, Glu13 and Gln316 as residues with the largest spatial perturbations (Figure 4a). Barring Ser7, all residues are completely conserved in ZIKV/DENV1-4 (Figure 2). An N-terminal peptide (DENV3,4-12 VGVGNRDFV) that enhances immunogenicity for CD8+ T cells when expressed from modified vaccinia Ankara includes Phe11, Ser7 and Arg939. Arg9 and Glu13 are also epitopes of other antibodies35. 
This particular study also showed that the N8R substitution DNA vaccine had a more neutralizing and protective effect than wild-type immunized sera, both in vitro and in vivo35. Arg9 is part of a salt bridge with Glu368 which maintains the structure of the E-protein in the pre-fusion state10,64. Tyr299 is part of the epitope for the cross-reactive neutralizing MAb DENV1-E10266. Arg188 is essential for infectivity, and is neutralized by DC4 Fab40, while the monoclonal antibody DD18-5 recognized residue Glu26 in DENV435, a residue predicted by the Pepitope server34. Thus, all spatially perturbed residues identified by MEPP are known epitopes.\n\nMEPP analyzes the spatial and electrostatic potential difference (EPD) deviations of each residue with respect to other residues in close proximity (< 6Å). (a) Distance deviation. Barring Ser7, all residues are completely conserved in ZIKV and DENV. All residues, barring Gln316, are known epitopes. Gln316 precedes the histidine residue responsible for initiating pH driven conformational changes during cell entry. (b) Electrostatic perturbation. Several residues identified by electrostatic features do not have known references as epitopes in current literature. Thr32 and Thr40 are two such residues, which are conserved in these flaviviruses.\n\nThe following residues have significant cumulative EPD deviations (>150 EPD units) with other residues within 6Å - Thr32, Thr40, Lys160, Lys247, Asp249, Arg288, Met297 and His317 (Figure 4b). His317, Arg288, Thr32 and Thr40 have been discussed above. Several residues identified by this electrostatic feature do not have known references in current literature. Of these residues, Lys160 and Met297 are not conserved in ZIKV/DENV1-4, while Asp249 is conserved in DENV, but not in ZIKV (Figure 2). 
However, Thr32 and Thr40 are two conserved residues (Figure 2) with EPD deviations, leading to an EPD-R as described above (Table 1).\n\nSubsequently, analysis of α-helices in the pre- and post-fusion DENV2-E protein revealed that α1 is slightly perturbed post-fusion, increasing in length by one residue (Asp215) compared to the pre-fusion α1 (Figure 5, Table 2). Asp215 is important for infectivity42, a proven41 and predicted47 epitope, and a membranotropic region of the E protein (peptide 29)67.\n\nZIKV (PDB ID:5IREA) in cyan, DENV2 (PDB ID:1OKEA) in magenta are superimposed using MUSTANG54. An extra loop is present in ZIKV/JEV/WNV (Figure 3), and absent in DENV, near the glycosylation site. α1 in ZIKV (residues 192-195) and α1 in DENV2 (residues 210-214) are not conserved. α1 of DENV2 increases in length by one post-fusion, and includes the known epitope Asp215.\n\nThe soluble ectodomain has two α-helices - α1 and α2. α1 is perturbed in the post-fusion DENV2, increasing in length by one to include the known epitope Asp215. Moreover, α1 is not conserved in the sequence space of ZIKV-E. α2 remains conserved in all E-proteins, even after fusion. α3/α4 are amphipathic and cationic. The transmembrane helices (α5/α6) with no charged residues have a low hydrophobic moment. HM: Hydrophobic moment, RPNR: Relative proportion of positive residues among charged residues, Len: length of the α-helix, NCH: number of charged residues.\n\nThe secondary structures from DENV and ZIKV proteins were extracted using DSSP53, and analyzed using PAGAL44. The ZIKV-E protein (PDB ID:5IREA, length=501 residues) has six α-helices and thirty β-sheets (see SSEinfo.zip in Dataset 1). The Edmundson wheels68 for these α-helices in the stem region (Table 2) show their amphipathic cationic nature (Figure 6). Interfacial hydrophobicity plays a critical role in cell entry of viruses69. 
The membranotropic α3 and α4 in DENV467 have been studied extensively through mutational studies of the hydrophobic face60,61. Another strategy using a peptide mimetic (residues 412 to 444, named DN59) derived from these helices showed inhibition of flaviviruses by releasing genomic RNA70,71. A similar study based on a peptide mimetic of residues 419-447 (comprising the conserved stretch following α3 and α4) inhibited viral entry72. These peptides were most effective at inhibition when three residues (442-444) were mutated to tryptophan, the most hydrophobic residue according to the Wimley-White whole residue hydrophobicity scale73. An interesting feature of α4 is the complete conservation of residues on the charged surface - Ser439, Gly436, Lys432, Gly439 and H435 in DENV (Figure 6), while the hydrophobic face is much more variable. Only Asn428 is not conserved (Figure 2).\n\n(a) α3 in DENV-E protein. (b) α3 in ZIKV-E protein. (c) α4 in DENV-E protein. (d) α4 in ZIKV-E protein. The Edmundson wheel shows the amphipathic cationic nature of the stem helices. The hydrophobicity of residues in the hydrophobic face is an important determinant of virulence61,70,71. The conservation of the charged face of α4 (c and d) is in contrast to several differences in the hydrophobic face. The color coding for the Edmundson wheel is as follows: all hydrophobic residues are colored red, while hydrophilic residues are colored in blue: dark blue for positively charged residues, medium blue for negatively charged residues and light blue for amides.\n\nZIKV and DENV are Class II fusion viruses that deploy β-sheet-rich domains to destabilize membranes6. The charged features of these β-sheets emphasize β21 in ZIKV (294:KCRLK, preceding domain I/III linker) as distinctive, since it has three positively charged residues (Figure 7a). 
Two arginine residues on this putative GAG-binding domain48,49, stereochemically equivalent in ZIKV/DENV1-4/JEV/WNV (Figure 3), were identified by MEPP as having a significant electrostatic polarity reversal after membrane fusion (Table 1). This residue pair (Arg286-Arg288) remains on the β-sheet post-fusion. This cationic ‘hotspot’ might be the target of small anti-viral anionic peptides31,32. A separate study focusing on mutations in the DI/DIII linker demonstrated that a compensatory mutation in α3 (DENV-E Q400H) restored virus-like particle assembly disrupted by a mutation (DENV-E Y299F). Interestingly, DENV-E Q400 is not conserved even among DENV serotypes74, and Tyr299 is distant from α3 (Figure 7b).\n\n(a) β-21 in ZIKV-E is the most distinctive, and has three positively charged residues (290:KCRLK). The stereochemical nature of these residues is conserved in ZIKV/DENV1-4/JEV/WNV (Figure 3). (b) β21 is part of the GAG-binding domain that precedes the domain I/III linker, and is physically proximal to the cationic α4. This sheet has a pair (DENV:Arg286-Arg288) with electrostatic polarity reversal post-fusion. A mutation of Tyr299 to Phe299 disrupted virus-like particle assembly, although it was compensated by a mutation in α3 (DENV-E Q400H), which is distant from Tyr299.\n\nThe phylogenetic tree for these flaviviruses derived from the multiple sequence alignment (MSA) of the E-protein shows that TBEV and YFV are related, and distant from ZIKV/DENV1-4/JEV/WNV (see Supplementary material Figure 1). TBEV and YFV were excluded from the MSA. Excluding TBEV and YFV shows that ZIKV, JEV and WNV have a loop near the glycosylation site which is missing in DENV (Figure 2). However, both ClustalW75 and MAFFT55 failed to align the glycosylation residues correctly. 
Replacing stereochemically equivalent residues (see Methods) corrected this alignment, and also gave a better visualization of conservation and differences (Figure 3).\n\nA recent study on the DENV3-specific human monoclonal antibody 5J7 demonstrated a very potent neutralizing effect through the binding of envelope proteins (PDB ID:3J6U)50. Interacting residues were determined based on a distance of 8 Å, since side-chain densities were not resolved (the cryo-EM Fab resolution was 9 Å). Table 1 in that study reported that T35 from the heavy chain of 5J7 (PDB ID:3J6UH) interacts with four residues (Q52, Q131, E133, N134) from the DENV-E protein (PDB ID:3J6UC), and with K307 and K308 from another E-protein of the same complex. While T35 was within 10 Å of Q52, Q131, E133 and N134, the data on K307 and K308 could not be reproduced, since T35 was found to be at a much larger distance from K308 in all three subunits (see Supplementary material Figure 2). The interacting residues of the heavy chain (PDB ID:3J6UH) and the light chain (PDB ID:3J6UL) with other subunits were computed in this study (see Supplementary material Table 1 and Supplementary material Table 2, respectively). Distance-sorted interacting residues indicate that Thr51 in the DENV3-E protein (chain C) is closest to the heavy chain (H-chain) (Table 3). This explains the specificity of 5J7 for DENV3, since Thr51 is found only in DENV1 and DENV3. Another interacting residue, Thr223, is not conserved in ZIKV or in any other DENV serotype (Figure 2). A different study using only the DENV3 domain III identified K307 and K308 as binding sites for mAb 14A4-8 in DENV3, but also included other domain III residues (K325, A329, G381 and I387) not present in 5J7 binding of DENV376.\n\nThr51 in the DENV3-E protein (chain C) is closest to Leu109 of the heavy chain (H-chain), and Thr223/Thr224 in the DENV3-E protein (chain C) is closest to Ile101 in the light chain (L-chain). Thr224 is conserved in ZIKV and the other DENV serotypes. 
However, Thr51 is conserved only in DENV1, while Thr223 is not conserved in ZIKV or other DENV serotypes, explaining the lack of neutralization of other serotypes by 5J7. The H-chain also binds to the conserved fusion loop of another DENV3-E protein (chain A).\n\nSpatial congruence of catalytic residues in the active sites of functionally equivalent proteins, even with no sequence homology77, has long been established78. Further, the electrostatic potential difference (EPD)79,80 was also shown to be conserved in cognate pairs of active-site residues52,81,82. Comparison of apo and holo structures, quantifying the spatial and electrostatic perturbations after ligand binding, was shown to identify critical catalytic residues in several enzymes38.\n\nIn the current work, this basic postulate was extended to posit that residues of viral envelope proteins perturbed during fusion with the host membrane are good candidates as epitopes for vaccines and small-molecule drug targets (MEPPitope). Specifically, computational methods38,44,53 were used to analyze spatial, electrostatic and secondary structure perturbations between a pre-fusion37 and a post-fusion12 DENV2-E protein. These residues are overwhelmingly conserved in ZIKV and all DENV serotypes (Figure 2), and are known epitopes23,35,39–43. While perturbation was found to be a good predictor of an epitope, not all epitopes are perturbed. For example, the current study did not identify any residues in the fusion loop, the target of several neutralizing antibodies59,83–86, or Thr51/Thr224 (Table 3), epitopes of a potent neutralizing antibody50. The hydrophobic fusion loop sequence is highly conserved in all flaviviruses (Figure 2), demonstrating the importance of sequence alignment as a strategy to identify epitopes87. 
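The 8 Å Cα-Cα criterion used above for interacting residues (and for checking the reported T35-K308 contact) amounts to a simple pairwise distance test. A minimal sketch, with made-up coordinates standing in for atoms that would in practice be parsed from the deposited structure (e.g. PDB 3J6U):

```python
import math

def interacting_pairs(chain_a, chain_b, cutoff=8.0):
    """Residue pairs whose C-alpha atoms lie within `cutoff` Angstroms.

    chain_a/chain_b map residue labels to (x, y, z) C-alpha coordinates.
    """
    return [(ra, rb)
            for ra, xa in chain_a.items()
            for rb, xb in chain_b.items()
            if math.dist(xa, xb) <= cutoff]

if __name__ == "__main__":
    # Hypothetical coordinates for illustration only; real values would be
    # read from the structure file.
    fab_h = {"T35": (0.0, 0.0, 0.0)}
    env_c = {"Q52": (3.0, 4.0, 0.0),     # 5 A away -> counted as interacting
             "K308": (50.0, 0.0, 0.0)}   # 50 A away -> not interacting
    print(interacting_pairs(fab_h, env_c))
```

Sorting the resulting pairs by distance gives the "distance-sorted interacting residues" used to single out the closest contact (Thr51 to the H-chain in Table 3).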
The current study identified few residues in domain III (only His317 and Thr315) as significantly perturbed, consistent with the observation that although antibodies targeted to domain III endow protection and minimize enhancement when present, they are redundant and can be replaced by neutralizing antibodies targeted to other epitopes on the virion88. This study also indicates two residues (Thr32 and Thr40) as a significantly perturbed pair in terms of their electrostatic profiles. Thr32 is conserved in all flaviviruses, while Thr40 is conserved in all except TBEV, where it is the stereochemically equivalent Ser40. There has been no emphasis on these residues as epitopes in the previous literature. In summary, the current study presents a computational methodology to extract structural and electrostatic features of envelope proteins that undergo conformational changes during fusion, which correlates well with known epitopes of DENV. Conservation of such residues in ZIKV provides a good strategy to leverage existing knowledge in developing ZIKV-specific therapeutics.\n\n\nData availability\n\nF1000Research: Dataset 1. Raw data for ‘MEPPitope: spatial, electrostatic and secondary structure perturbations in the post-fusion Dengue virus envelope protein highlights known epitopes and conserved residues in the Zika virus’. README.txt contains a description of the files, 10.5256/f1000research.8853.d12354989",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nI gratefully acknowledge Mridul Bhattacharjee and Nitin Salaye for logistic support. I would like to thank Ravindra Makde from High Pressure and Synchrotron Radiation Physics Division, Bhabha Atomic Research Centre, Mumbai, India for discussions on DENV structures.\n\n\nSupplementary material\n\nJapanese encephalitis virus (JEV, GenBank Accid:AAZ17558.1), West Nile virus (WNV, GenBank Accid: ACH99530.1), Tick-borne encephalitis virus (TBEV, GenBank Accid:AEP25269.1), Yellow fever virus (YFV, GenBank Accid:AAA92696.1). TBEV and YFV are distantly related to DENV, ZIKV, JEV and WNV.\n\nInteracting residues are identified as those with Cα atoms within 8 Å of each other. The heavy chain binds to the A (PDB ID:3J6UA) and C (PDB ID:3J6UC) chains of the DENV-E protein, as well as the light chain (PDB ID:3J6UL).\n\nSide-chain densities were not resolved for cryo-EM Fab 5J7–DENV3 map since it had a 9 Å resolution. It is reported that ‘a slightly negatively charged patch formed by T35, S37, S82 and S84 of the heavy chain interacted with a positively charged patch formed by K307 and K308 of the E protein’, by considering interacting residues between Fab and E proteins where pairs of Cα atoms are < 8 Å. However, the Cα atoms of these residues are quite far. For example, T35 from 5J7 is 108, 51 and 78 Å away from K308 in the three E-proteins (PDB ID:3J6UA, 3J6UC and 3J6UE), respectively.\n\nThe C-chain of the DENV-E protein interacts with both the heavy and light chains.\n\n\nReferences\n\nKuno G, Chang GJ, Tsuchiya KR, et al.: Phylogeny of the genus Flavivirus. J Virol. 1998; 72(1): 73–83. PubMed Abstract | Free Full Text\n\nMukhopadhyay S, Kuhn RJ, Rossmann MG: A structural perspective of the flavivirus life cycle. Nat Rev Microbiol. 2005; 3(1): 13–22. 
PubMed Abstract | Publisher Full Text\n\nGuy B, Briand O, Lang J, et al.: Development of the Sanofi Pasteur tetravalent dengue vaccine: One more step forward. Vaccine. 2015; 33(50): 7100–7111. PubMed Abstract | Publisher Full Text\n\nShan C, Xie X, Barrett AD, et al.: Zika virus: Diagnosis, therapeutics, and vaccine. ACS Infect Dis. 2016; 2(3): 170–172. Publisher Full Text\n\nPerera R, Kuhn RJ: Structural proteomics of dengue virus. Curr Opin Microbiol. 2008; 11(4): 369–377. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWhite JM, Delos SE, Brecher M, et al.: Structures and mechanisms of viral membrane fusion proteins: multiple variations on a common theme. Crit Rev Biochem Mol Biol. 2008; 43(3): 189–219. PubMed Abstract | Publisher Full Text | Free Full Text\n\nvan der Schaar HM, Rust MJ, Chen C, et al.: Dissecting the cell entry pathway of dengue virus by single-particle tracking in living cells. PLoS Pathog. 2008; 4(12): e1000244. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSmit JM, Moesker B, Rodenhuis-Zybert I, et al.: Flavivirus cell entry and membrane fusion. Viruses. 2011; 3(2): 160–171. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMueller DS, Kampmann T, Yennamalli R, et al.: Histidine protonation and the activation of viral fusion proteins. Biochem Soc Trans. 2008; 36(Pt 1): 43–45. PubMed Abstract | Publisher Full Text\n\nKampmann T, Mueller DS, Mark AE, et al.: The role of histidine residues in low-pH-mediated viral membrane fusion. Structure. 2006; 14(10): 1481–1487. PubMed Abstract | Publisher Full Text\n\nRockstroh A, Barzon L, Pacenti M, et al.: Recombinant envelope-proteins with mutations in the conserved fusion loop allow specific serological diagnosis of dengue-infections. PLoS Negl Trop Dis. 2015; 9(11): e0004218. PubMed Abstract | Publisher Full Text | Free Full Text\n\nModis Y, Ogata S, Clements D, et al.: Structure of the dengue virus envelope protein after membrane fusion. Nature. 
2004; 427(6972): 313–319. PubMed Abstract | Publisher Full Text\n\nBressanelli S, Stiasny K, Allison SL, et al.: Structure of a flavivirus envelope glycoprotein in its low-pH-induced membrane fusion conformation. EMBO J. 2004; 23(4): 728–738. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYu IM, Zhang W, Holdaway HA, et al.: Structure of the immature dengue virus at low pH primes proteolytic maturation. Science. 2008; 319(5871): 1834–1837. PubMed Abstract | Publisher Full Text\n\nHayes EB: Zika virus outside Africa. Emerg Infect Dis. 2009; 15(9): 1347–1350. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDuffy MR, Chen TH, Hancock WT, et al.: Zika virus outbreak on Yap Island, Federated States of Micronesia. N Engl J Med. 2009; 360(24): 2536–2543. PubMed Abstract | Publisher Full Text\n\nZanluca C, Melo VC, Mosimann AL, et al.: First report of autochthonous transmission of Zika virus in Brazil. Mem Inst Oswaldo Cruz. 2015; 110(4): 569–572. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCao-Lormeau VM, Blake A, Mons S, et al.: Guillain-Barré syndrome outbreak associated with Zika virus infection in French Polynesia: a case-control study. Lancet. 2016; 387(10027): 1531–1539. PubMed Abstract | Publisher Full Text\n\nSmith DW, Mackenzie J: Zika virus and Guillain-Barré syndrome: another viral cause to add to the list. Lancet. 2016; 387(10027): 1486–1488. PubMed Abstract | Publisher Full Text\n\nEkins S, Liebler J, Neves BJ, et al.: Illustrating and homology modeling the proteins of the Zika virus [version 1; referees: 1 approved with reservations]. F1000Res. 2016; 5: 275. Publisher Full Text\n\nKuno G, Chang GJ: Full-length sequencing and genomic characterization of Bagaza, Kedougou, and Zika viruses. Arch Virol. 2007; 152(4): 687–696. PubMed Abstract | Publisher Full Text\n\nSirohi D, Chen Z, Sun L, et al.: The 3.8 Å resolution cryo-EM structure of Zika virus. Science. 2016; 352(6284): 467–470. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nSukupolvi-Petty S, Austin SK, Purtha WE, et al.: Type- and subcomplex-specific neutralizing antibodies against domain III of dengue virus type 2 envelope protein recognize adjacent epitopes. J Virol. 2007; 81(23): 12816–12826. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMesser WB, de Alwis R, Yount BL, et al.: Dengue virus envelope protein domain I/II hinge determines long-lived serotype-specific dengue immunity. Proc Natl Acad Sci U S A. 2014; 111(5): 1939–1944. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFalconar AK: Identification of an epitope on the dengue virus membrane (M) protein defined by cross-protective monoclonal antibodies: design of an improved epitope sequence based on common determinants present in both envelope (E and M) proteins. Arch Virol. 1999; 144(12): 2313–2330. PubMed Abstract | Publisher Full Text\n\nYacoub S, Mongkolsapaya J, Screaton G: Recent advances in understanding dengue [version 1; referees: 3 approved]. F1000Res. 2016; 5: pii: F1000 Faculty Rev-78. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGuzman MG, Vazquez S: The complexity of antibody-dependent enhancement of dengue virus infection. Viruses. 2010; 2(12): 2649–2662. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLi W, Joshi MD, Singhania S, et al.: Peptide Vaccine: Progress and Challenges. Vaccines (Basel). 2014; 2(3): 515–536. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchmidt AG, Yang PL, Harrison SC: Peptide inhibitors of dengue-virus entry target a late-stage fusion intermediate. PLoS Pathog. 2010; 6(4): e1000851. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDe La Guardia C, Lleonart R: Progress in the identification of dengue virus entry/fusion inhibitors. Biomed Res Int. 2014; 2014: 825039. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nLaosutthipong C, Kanthong N, Flegel TW: Novel, anionic, antiviral septapeptides from mosquito cells also protect monkey cells against dengue virus. Antiviral Res. 2013; 98(3): 449–456. PubMed Abstract | Publisher Full Text\n\nAlhoot MA, Rathinam AK, Wang SM, et al.: Inhibition of dengue virus entry into target cells using synthetic antiviral peptides. Int J Med Sci. 2013; 10(6): 719–729. PubMed Abstract | Publisher Full Text | Free Full Text\n\nReche PA, Glutting JP, Zhang H, et al.: Enhancement to the rankpep resource for the prediction of peptide binding to MHC molecules using profiles. Immunogenetics. 2004; 56(6): 405–419. PubMed Abstract | Publisher Full Text\n\nMayrose I, Penn O, Erez E, et al.: Pepitope: epitope mapping from affinity-selected peptides. Bioinformatics. 2007; 23(23): 3244–3246. PubMed Abstract | Publisher Full Text\n\nTang CT, Liao MY, Chiu CY, et al.: Generation of Monoclonal Antibodies Against Dengue Virus Type 4 and Identification of Enhancing Epitopes on Envelope Protein. PLoS One. 2015; 10(8): e0136328. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWen JS, Jiang LF, Zhou JM, et al.: Computational prediction and identification of dengue virus-specific CD4+ T-cell epitopes. Virus Res. 2008; 132(1–2): 42–48. PubMed Abstract | Publisher Full Text\n\nModis Y, Ogata S, Clements D, et al.: A ligand-binding pocket in the dengue virus envelope glycoprotein. Proc Natl Acad Sci U S A. 2003; 100(12): 6986–6991. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChakraborty S: A quantitative measure of electrostatic perturbation in holo and apo enzymes induced by structural changes. PLoS One. 2013; 8(3): e59352. PubMed Abstract | Publisher Full Text | Free Full Text\n\nQuinan BR, Flesch IE, Pinho TM, et al.: An intact signal peptide on dengue virus e protein enhances immunogenicity for CD8+ T cells and antibody when expressed from modified vaccinia ankara. Vaccine. 
2014; 32(25): 2972–2979. PubMed Abstract | Publisher Full Text\n\nCox K, Tang A, Chen Z, et al.: Isolation of dengue neutralizing antibodies from single cell sorted human antigen specific memory b cell cultures (vac11p.1107). J Immunol. 2015; 194(1 Supplement): 212–15. Reference Source\n\nLin HE, Tsai WY, Liu IJ, et al.: Analysis of epitopes on dengue virus envelope protein recognized by monoclonal antibodies and polyclonal human sera by a high throughput assay. PLoS Negl Trop Dis. 2012; 6(1): e1447. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKroschewski H, Sagripanti JL, Davidson AD: Identification of amino acids in the dengue virus type 2 envelope glycoprotein critical to virus infectivity. J Gen Virol. 2009; 90(Pt 10): 2457–2461. PubMed Abstract | Publisher Full Text\n\nMidgley CM, Flanagan A, Tran HB, et al.: Structural analysis of a dengue cross-reactive antibody complexed with envelope domain III reveals the molecular basis of cross-reactivity. J Immunol. 2012; 188(10): 4971–4979. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChakraborty S, Rao B, Dandekar A: PAGAL - Properties and corresponding graphics of alpha helical structures in proteins [version 2; referees: 2 approved]. F1000Res. 2014; 3: 206. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChakraborty S, Phu M, de Morais TP, et al.: The PDB database is a rich source of alpha-helical anti-microbial peptides to combat disease causing pathogens [version 2; referees: 2 approved, 1 approved with reservations]. F1000Res. 2014; 3: 295. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChakraborty S, Rao B, Asgeirsson B, et al.: Characterizing alpha helical properties of Ebola viral proteins as potential targets for inhibition of alpha-helix mediated protein-protein interactions [version 3; referees: 2 approved]. F1000Res. 2014; 3: 251. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nMazumder R, Hu ZZ, Vinayaka CR, et al.: Computational analysis and identification of amino acid sites in dengue E proteins relevant to development of diagnostics and vaccines. Virus Genes. 2007; 35(2): 175–186. PubMed Abstract | Publisher Full Text\n\nChen Y, Maguire T, Hileman RE, et al.: Dengue virus infectivity depends on envelope protein binding to target cell heparan sulfate. Nat Med. 1997; 3(8): 866–871. PubMed Abstract | Publisher Full Text\n\nRoehrig JT, Butrapet S, Liss NM, et al.: Mutation of the dengue virus type 2 envelope protein heparan sulfate binding sites or the domain III lateral ridge blocks replication in Vero cells prior to membrane fusion. Virology. 2013; 441(2): 114–125. PubMed Abstract | Publisher Full Text\n\nFibriansah G, Tan JL, Smith SA, et al.: A highly potent human antibody neutralizes dengue virus serotype 3 by binding across three surface proteins. Nat Commun. 2015; 6: 6341. PubMed Abstract | Publisher Full Text | Free Full Text\n\nArnold K, Bordoli L, Kopp J, et al.: The SWISS-MODEL workspace: a web-based environment for protein structure homology modelling. Bioinformatics. 2006; 22(2): 195–201. PubMed Abstract | Publisher Full Text\n\nChakraborty S, Minda R, Salaye L, et al.: Active site detection by spatial conformity and electrostatic analysis--unravelling a proteolytic function in shrimp alkaline phosphatase. PLoS One. 2011; 6(12): e28470. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJoosten RP, te Beek TA, Krieger E, et al.: A series of PDB related databases for everyday needs. Nucleic Acids Res. 2011; 39(Database issue): D411–419. PubMed Abstract | Publisher Full Text\n\nKonagurthu AS, Whisstock JC, Stuckey PJ, et al.: MUSTANG: a multiple structural alignment algorithm. Proteins. 2006; 64(3): 559–574. 
PubMed Abstract | Publisher Full Text\n\nKatoh K, Standley DM: MAFFT multiple sequence alignment software version 7: improvements in performance and usability. Mol Biol Evol. 2013; 30(4): 772–780. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRobert X, Gouet P: Deciphering key features in protein structures with the new ENDscript server. Nucleic Acids Res. 2014; 42(Web Server issue): W320–W324. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGuindon S, Lethiec F, Duroux P, et al.: PHYML Online--a web server for fast maximum likelihood-based phylogenetic inference. Nucleic Acids Res. 2005; 33(Web Server issue): W557–559. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWan SW, Lin CF, Wang S, et al.: Current progress in dengue vaccines. J Biomed Sci. 2013; 20(37): 2–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDejnirattisai W, Wongwiwat W, Supasa S, et al.: A new class of highly potent, broadly neutralizing antibodies isolated from viremic patients infected with dengue virus. Nat Immunol. 2015; 16(2): 170–177. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLin SR, Zou G, Hsieh SC, et al.: The helical domains of the stem region of dengue virus envelope protein are involved in both virus assembly and entry. J Virol. 2011; 85(10): 5159–5171. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPurdy DE, Chang GJ: Secretion of noninfectious dengue virus-like particles and identification of amino acids in the stem region involved in intracellular retention of envelope protein. Virology. 2005; 333(2): 239–250. PubMed Abstract | Publisher Full Text\n\nMondotte JA, Lozach PY, Amara A, et al.: Essential role of dengue virus envelope protein N glycosylation at asparagine-67 during viral propagation. J Virol. 2007; 81(13): 7136–7148. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nNayak V, Dessau M, Kucera K, et al.: Crystal structure of dengue virus type 1 envelope protein in the postfusion conformation and its implications for membrane fusion. J Virol. 2009; 83(9): 4338–4344. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPrakash MK, Barducci A, Parrinello M: Probing the mechanism of pH-induced large-scale conformational changes in dengue virus envelope protein using atomistic simulations. Biophys J. 2010; 99(2): 588–594. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChaudhury S, Ripoll DR, Wallqvist A: Structure-based pKa prediction provides a thermodynamic basis for the role of histidines in pH-induced conformational transitions in dengue virus. Biochem Biophys Rep. 2015; 4: 375–385. Publisher Full Text\n\nShrestha B, Brien JD, Sukupolvi-Petty S, et al.: The development of therapeutic antibodies that neutralize homologous and heterologous genotypes of dengue virus type 1. PLoS Pathog. 2010; 6(4): e1000823. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNemésio H, Palomares-Jerez F, Villalaín J: The membrane-active regions of the dengue virus proteins C and E. Biochim Biophys Acta. 2011; 1808(10): 2390–2402. PubMed Abstract | Publisher Full Text\n\nSchiffer M, Edmundson AB: Use of helical wheels to represent the structures of proteins and to identify segments with helical potential. Biophys J. 1967; 7(2): 121–135. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBadani H, Garry RF, Wimley WC: Peptide entry inhibitors of enveloped viruses: the importance of interfacial hydrophobicity. Biochim Biophys Acta. 2014; 1838(9): 2180–2197. PubMed Abstract | Publisher Full Text\n\nLok SM, Costin JM, Hrobowski YM, et al.: Release of dengue virus genome induced by a peptide inhibitor. PLoS One. 2012; 7(11): e50995. 
PubMed Abstract | Publisher Full Text\n\nHrobowski YM, Garry RF, Michael SF: Peptide inhibitors of dengue virus and West Nile virus infectivity. Virol J. 2005; 2(1): 49. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchmidt AG, Yang PL, Harrison SC: Peptide inhibitors of flavivirus entry derived from the E protein stem. J Virol. 2010; 84(24): 12549–12554. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWhite SH, Wimley WC: Membrane protein folding and stability: physical principles. Annu Rev Biophys Biomol Struct. 1999; 28(1): 319–365. PubMed Abstract | Publisher Full Text\n\nde Wispelaere M, Yang PL: Mutagenesis of the DI/DIII linker in dengue virus envelope protein impairs viral particle assembly. J Virol. 2012; 86(13): 7072–7083. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLarkin MA, Blackshields G, Brown NP, et al.: Clustal W and Clustal X version 2.0. Bioinformatics. 2007; 23(21): 2947–2948. PubMed Abstract | Publisher Full Text\n\nMatsui K, Gromowski GD, Li L, et al.: Characterization of a dengue type-specific epitope on dengue 3 virus envelope protein domain III. J Gen Virol. 2010; 91(Pt 9): 2249–2253. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRawlings ND, Barrett AJ: Evolutionary families of peptidases. Biochem J. 1993; 290(Pt 1): 205–218. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKleywegt GJ: Recognition of spatial motifs in protein structures. J Mol Biol. 1999; 285(4): 1887–1897. PubMed Abstract | Publisher Full Text\n\nBaker NA, Sept D, Joseph S, et al.: Electrostatics of nanosystems: application to microtubules and the ribosome. Proc Natl Acad Sci USA. 2001; 98(18): 10037–10041. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDolinsky TJ, Nielsen JE, McCammon JA, et al.: PDB2PQR: an automated pipeline for the setup of Poisson-Boltzmann electrostatics calculations. Nucleic Acids Res. 2004; 32(Web Server issue): W665–667. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nChakraborty S, Rao BJ: A measure of the promiscuity of proteins and characteristics of residues in the vicinity of the catalytic site that regulate promiscuity. PLoS One. 2012; 7(2): e32011. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChakraborty S, Asgeirsson B, Minda R, et al.: Inhibition of a cold-active alkaline phosphatase by imipenem revealed by in silico modeling of metallo-β-lactamase active sites. FEBS Lett. 2012; 586(20): 3710–3715. PubMed Abstract | Publisher Full Text\n\nSmith SA, de Alwis AR, Kose N, et al.: The potent and broadly neutralizing human dengue virus-specific monoclonal antibody 1C19 reveals a unique cross-reactive epitope on the BC loop of domain II of the envelope protein. mBio. 2013; 4(6): e00873-13. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLai CY, Tsai WY, Lin SR, et al.: Antibodies to envelope glycoprotein of dengue virus during the natural course of infection are predominantly cross-reactive and recognize epitopes containing highly conserved residues at the fusion loop of domain II. J Virol. 2008; 82(13): 6631–6643. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCox KS, Tang A, Chen Z, et al.: Rapid isolation of dengue-neutralizing antibodies from single cell-sorted human antigen-specific memory B-cell cultures. MAbs. 2016; 8(1): 129–140. PubMed Abstract | Publisher Full Text\n\nDeng YQ, Dai JX, Ji GH, et al.: A broadly flavivirus cross-neutralizing monoclonal antibody that recognizes a novel epitope within the fusion loop of E protein. PLoS One. 2011; 6(1): e16059. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLeng CH, Liu SJ, Tsai JP, et al.: A novel dengue vaccine candidate that induces cross-neutralizing antibodies and memory immunity. Microbes Infect. 2009; 11(2): 288–295. 
PubMed Abstract | Publisher Full Text\n\nWilliams KL, Wahala WM, Orozco S, et al.: Antibodies targeting dengue virus envelope domain III are not required for serotype-specific protection or prevention of enhancement in vivo. Virology. 2012; 429(1): 12–20. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChakraborty S: Dataset 1 in: MEPPitope: spatial, electrostatic and secondary structure perturbations in the post-fusion Dengue virus envelope protein highlights known epitopes and conserved residues in the Zika virus. F1000Research. 2016. Data Source"
}
|
[
{
"id": "14191",
"date": "15 Jun 2016",
"name": "Shamala Devi Sekaran",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe title needs to be shortened and revised as the should reflect the nature of the work which is all computational\n\nSome other minor comments\na. vaccine is only registered to date in 3 countries and not used by endemic regions as stated in line 4 of the abstract - please rewrite this. b. The last sentence has to be revised as I am not sure this addresses targets for drugs c. The last paragraph of introduction - was quantitative analysis of Den-2 E done with MEPP? d. Please check citation of APBS (version 1.4) which is given as Ref 52 e. In the analysis of secondary structures - first sentence needs to revise as this was only done for the E protein and not all proteins",
"responses": [
{
"c_id": "2177",
"date": "14 Sep 2016",
"name": "Sandeep Chakraborty",
"role": "Author Response",
"response": "Dear Dr Sekaran and Dr Zandi, I would like to thank you for taking the time to review this paper, and providing constructive criticism on the overall manuscript. I have revised the manuscript based on the comment of all three reviewers. Please find my responses below. The title needs to be shortened and revised as the should reflect the nature of the work which is all computational >> Done. Some other minor comments a. vaccine is only registered to date in 3 countries and not used by endemic regions as stated in line 4 of the abstract - please rewrite this. >> Done b. The last sentence has to be revised as I am not sure this addresses targets for drugs >> Revised. I have made the changes in the main manuscript too. c. The last paragraph of introduction - was quantitative analysis of Den-2 E done with MEPP? >> Yes. d. Please check citation of APBS (version 1.4) which is given as Ref 52 >> Ref 52 was intended to direct the reader to the parameters used. I have cited APBS before this reference to remove the confusion. e. In the analysis of secondary structures - first sentence needs to revise as this was only done for the E protein and not all proteins >> Fixed. I would like to thank you once again for the positive comments. best wishes, Sandeep"
}
]
},
{
"id": "15004",
"date": "28 Jul 2016",
"name": "Raju Nagarajan",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe author performed an essential structural analysis of ZIKV and DENV envelope proteins and reported the identification of unreported epitope residues with conservation details. The quality of writing and analysis is acceptable with some minor corrections.\nThe text is not much visible in Figure 6 and has to be revised.\n\nIn the caption of figure 5, it should be \"192-195\" instead of \"192-95\".\n\nTitle should be shortened.\n\nThe first sentence in the \"Analysis of secondary structures\" has to be revised to ensure that the analysis only performed for envelope proteins.",
"responses": [
{
"c_id": "2176",
"date": "14 Sep 2016",
"name": "Sandeep Chakraborty",
"role": "Author Response",
"response": "Dear Dr Nagarajan, I would like to thank you for taking the time to review this paper, and providing constructive criticism on the overall manuscript. I have revised the manuscript based on the comment of all three reviewers. Please find my responses below. The author performed an essential structural analysis of ZIKV and DENV envelope proteins and reported the identification of unreported epitope residues with conservation details. The quality of writing and analysis is acceptable with some minor corrections. The text is not much visible in Figure 6 and has to be revised. >> This figure is high resolution, and text is visible clearly. I will request the typesetting to increase the size of this figure in the pdf. In the caption of figure 5, it should be \"192-195\" instead of \"192-95\". >> Done. Title should be shortened. >> Done. The first sentence in the \"Analysis of secondary structures\" has to be revised to ensure that the analysis only performed for envelope proteins. >> Done. I would like to thank you once again for the positive comments. best wishes, Sandeep"
}
]
},
{
"id": "15317",
"date": "29 Jul 2016",
"name": "Yi Shi",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nIn the manuscript entitled \" MEPPitope: spatial, electrostatic and secondary structure perturbations in the post-fusion Dengue virus envelope protein highlights known epitopes and conserved residues in the Zika virus\", Sandeep Chakraborty analyzed the pre- and post-fusion DENV type 2 envelope (E) protein, and identified several known epitopes, which are conserved in ZIKV and all DENV serotypes. Perturbations of spatial and electrostatic or secondary structure identified by a combination of different computational methods help to detect critical residues that can be made target of therapies.\nOverall, the paper provides interesting data. The paper could be improved by addressing the following points:\nThe title is too long and needs to be shortened.\n\nThere are some new antibodies against ZIKV or DENV reported recently, the paper would benefit by adding new information.\n\nα 5 and 6 are not labeled in Figure 1.\n\nIn the section of “conflicting data in comparison to previous studies” line 9, the statement “… from another E-protein of the same subunit” is inaccurate as subunit refers to a single polypeptide chain.",
"responses": [
{
"c_id": "2175",
"date": "14 Sep 2016",
"name": "Sandeep Chakraborty",
"role": "Author Response",
"response": "Dear Dr Shi, I would like to thank you for taking the time to review this paper, and providing constructive criticism on the overall manuscript. I have revised the manuscript based on the comment of all three reviewers. Please find my responses below. In the manuscript entitled \" MEPPitope: spatial, electrostatic and secondary structure perturbations in the post-fusion Dengue virus envelope protein highlights known epitopes and conserved residues in the Zika virus\", Sandeep Chakraborty analyzed the pre- and post-fusion DENV type 2 envelope (E) protein, and identified several known epitopes, which are conserved in ZIKV and all DENV serotypes. Perturbations of spatial and electrostatic or secondary structure identified by a combination of different computational methods help to detect critical residues that can be made target of therapies. Overall, the paper provides interesting data. The paper could be improved by addressing the following points: The title is too long and needs to be shortened. >> Done. There are some new antibodies against ZIKV or DENV reported recently, the paper would benefit by adding new information. >> Ref 79 and 88 have been added. α 5 and 6 are not labeled in Figure 1. >> Done. In the section of “conflicting data in comparison to previous studies” line 9, the statement “… from another E-protein of the same subunit” is inaccurate as subunit refers to a single polypeptide chain. >> The word \"subunit\" has been replaced with \"complex\". I would like to thank you once again for the positive comments. best wishes, Sandeep"
}
]
}
] | 1
|
https://f1000research.com/articles/5-1150
|
https://f1000research.com/articles/5-2289/v1
|
09 Sep 16
|
{
"type": "Opinion Article",
"title": "Puzzles in modern biology. II. Language, cancer and the recursive processes of evolutionary innovation",
"authors": [
"Steven A. Frank"
],
"abstract": "Human language emerged abruptly. Diverse body forms evolved suddenly. Seed-bearing plants spread rapidly. How do complex evolutionary innovations arise so quickly? Resolving alternative claims remains difficult. The great events of the past happened a long time ago. Cancer provides a model to study evolutionary innovation. A tumor must evolve many novel traits to become an aggressive cancer. I use what we know or could study about cancer to describe the key processes of innovation. In general, evolutionary systems form a hierarchy of recursive processes. Those recursive processes determine the rates at which innovations are generated, spread and transmitted. I relate the recursive processes to abrupt evolutionary innovation.",
"keywords": [
"evolutionary theory",
"natural selection",
"abrupt evolution",
"development"
],
"content": "Introduction\n\nMajor evolutionary innovations arise abruptly. Novel body forms appeared suddenly, the Cambrian explosion1. Seed-bearing plants spread across the earth almost instantaneously, Darwin’s abominable mystery2. Humans spoke, made symbolic art and dominated the world.\n\nA review3 of Why Only Us: Language and Evolution4 emphasizes the recurring controversy over evolutionary innovation:\n\nToday, opinion on the matter of language origins is still deeply divided. On the one hand, there are those who feel that language is so complex, and so deeply ingrained in the human condition, that it must have evolved slowly over immense periods of time…. On the other, there are those like Berwick and Chomsky who believe that humans acquired language quite recently, in an abrupt event.\n\nThe argument for slow evolution appeals to intuition. Such complexity cannot evolve suddenly. Evolution is an intrinsically slow process.\n\nAgainst the intuitive argument for the slow evolution of language, the evidence suggests that:3\n\nClearly, something revolutionary had happened to our species … All of a sudden, humans were manipulating information about the world in an entirely unprecedented way, and the signal in the archaeological record shifted from being one of long-term stability to one of constant change … by fifty years ago we were already standing on the moon…. So we need an explanation for the abrupt emergence of language …\n\nMy theme concerns the general understanding of evolutionary process. How surprising is abrupt evolutionary innovation? How do we understand what ‘abrupt’ means? To answer those questions, we must understand the nature of time in relation to generative process.\n\n\nThe abruptness of recursive growth\n\nThink about cancer. A tumor evolves by accumulating changes5. The initial changes may arise before one notices any sign of tumor or disease. Eventually, the tumor acquires novel traits that give it an uncontrolled growth advantage. 
Overwhelming disease soon follows.\n\nWithout modern technology, one sees tumors as arising abruptly. That suddenness comes from the growth rate of tumors, shaped by the history of evolutionary innovations. Synergism between growth and innovation sets the tempo at which we perceive novelty.\n\nGrowth by itself has a natural tempo that causes things to appear suddenly. In uncontrolled growth, an initial input size is multiplied by a growth factor, producing a bigger output size. The output then becomes the input for another round of recursive growth.\n\nA recursive doubling in size produces a series of 1, 2, 4, 8, 16, 32, …, with a size of 2n at the nth time step. A tumor typically must have billions of cells before it is noticed. To grow from one cell to a noticeable size of 30 billion cells, a recursively growing tumor must pass through 35 doubling periods.\n\nAfter just 5 more rounds of doubling growth, the tumor will be 32 times larger than the size at first detection. The time is short from being noticed to being overwhelmingly dominant.\n\nSeemingly abrupt appearance is a property of recursive growth. Put another way, the natural timescale of growth is explosive, whereas the natural timescale of our perception seems to be relatively steady. The perception of appearance by growth tends to be abrupt.\n\n\nEvolutionary innovation\n\nI invoked uncontrolled growth. But where does such growth come from? What is the nature of innovation that increases growth?\n\nWe may never know the answer for language. At present, we do not know the answer for tumors, even though tumors happen all the time right under our own skin. But perhaps the puzzle of evolutionary innovation in tumors will be solved one day6.\n\nDeeper understanding of evolutionary innovation in tumors may provide insight into what it takes, more generally, for the origin and spread of seed-bearing plants, of new body forms and of language. So I continue to discuss tumors. 
The abruptness of cancer is a model of evolutionary innovation.\n\nWe know that an aggressive tumor has acquired many evolutionary changes when compared to its normal ancestral tissue. Did most of those cancerous changes happen abruptly around the transition to perceptible aggressiveness? Or did many evolutionary changes accumulate slowly, over a long period, starting well before noticeable cancer?\n\nWe do not know exactly. But we can say what the likely processes are for evolutionary change in cancer, what the timescales are for those processes, and how the different processes interact. We can draft a rough solution to the puzzle of evolutionary innovation in cancer.\n\nI step through the key evolutionary processes and their consequences for the timescale of cancer. At first, the puzzle of cancer may seem rather distant from the puzzle of language. However, consider two questions.\n\nIs language an example of the known processes of evolutionary innovation? Or does the puzzle of language require a unique solution? We can discuss those questions in a more informed way after briefly considering cancer.\n\nA successful tumor gains the ability to break through tissue barriers, survive in novel environments, escape detection by immunity, ignore the normal checks on growth, alter its metabolic pathways for energy production, send signals that call other tissues to remodel the tumor’s environment, and many other novel traits5.\n\n\nDiscovery and integration\n\nInnovation proceeds by the layering of new changes on top of the recent changes. Each particular change creates new context, favoring a new set of changes. Three evolutionary processes of cancer likely apply to many cases of evolutionary innovation.\n\nFirst, an advantageous change enhances growth. Steady growth leads to the perception of abrupt origin. However, a single change by itself does not transform normal tissue into a cancer. Evolutionary innovation requires multiple changes. 
The early changes accumulate imperceptibly.\n\nSecond, each change alters the context for future innovation. At some point, a single subsequent change could ignite growth. However, current evidence suggests a complex array of interacting changes that arise and spread over different timescales6. Advances in biological technology will eventually resolve the timing and the role of particular changes.\n\nThird, as evolutionary change alters context, new pressures favor novel kinds of innovation. Sometimes, the novelty is itself a new generative mechanism that enhances the speed at which further novelty can be created. Or the novelty changes the way in which additional novelty integrates into the evolving population of cancerous cells.\n\nThe changing processes of discovery and integration in cancer likely arise in other evolutionary innovations. The following paragraphs describe a few examples for cancer. I then conclude by discussing aspects of language in relation to general properties of evolutionary innovation.\n\nSuppose that an innovative trait would be favored, but it arises only one time per million cellular divisions. A tissue typically has far more than one million cells. So the trait arises many times in one round of cell division. But only a few rare cells have the novel trait.\n\nThe novel trait creates a context that would favor an additional innovation. Because only a few cells have the novel trait, it may take a very long time before the second innovation follows. However, if the initial trait spreads, then many cells would have the trait. The time before the second innovation would then be very short, because of the large size of the target population.\n\nRapid spread of the first trait may happen because it has a growth advantage and reproductively outcompetes other cells. Or the trait may spread if it produces a signal that transforms other cells to express the same trait. Much of cellular behavior arises by intercellular signalling. 
Transformation by novel signalling is a key aspect of evolutionary innovation in cancer progression5.\n\nThe discovery of a new trait is often discussed in terms of genetic mutation. Mutation couples two aspects in one stroke: the creation of novelty and the transmission of that novelty to future generations. However, one may have to wait a very long time for mutation to create a particular innovation.\n\nAlternatively, the novel trait may first appear by cellular adjustment to a novel environment7. Initially, only a few cells may adjust to express the newly favored trait. Those cells gain an advantage, possibly transmitting to their descendants the tendency to adjust in the appropriate way. That process can favor rapid evolution of a novel trait that first appears by adjustment, or by learning, rather than by mutation8.\n\nAn environmental challenge may require two novel traits to arise simultaneously. For example, a novel cellular signal may require other cells to express a novel ability to respond to the signal. How do jointly synergistic traits evolve, if neither trait alone provides value9,10?\n\nIf some cells and their descendants remain spatially associated over time, then the group evolves almost like a single unit. The origin of the signal, initially by chance, strongly favors the recipient response. Signal and response may arise by one mutation then another. However, it may be a long time before two rare mutations arise.\n\nAlternatively, different cells with the same genetics inevitably have a certain amount of randomness in the traits that they express. A population of cells that, by chance, expresses the right combination of novel signal and response traits will gain a growth advantage.\n\nAny genetic tendency to express the right trait combination will increase. Over time, the beneficial combination evolves to be expressed more frequently11. 
The process assimilates an initial tendency for random expression of traits into an increased genetic tendency to express the traits. Synergistic trait combinations can evolve relatively rapidly by this process when compared to the slow pace of origin by sequential mutations.\n\nThese ideas about innovation follow from classical evolutionary theory. We do not yet know exactly which aspects apply to particular cancers. However, technological advances will soon provide additional insight.\n\n\nRecursive hierarchy\n\nMy discussion of evolutionary innovation and timescale for cancer applies broadly to any evolutionary system. Recursion unifies the conceptual frame.\n\nFirst, natural selection recursively drives the spread of innovations. Given an input population, selection enhances the frequency of beneficial traits, producing a new output population. The output then becomes the input for another round of selection. An innovation with constant benefit increases by recursive multiplication, transforming constancy of benefit into explosive increase.\n\nSecond, an innovation can act by enhancing the rate at which additional new innovations are discovered. A discovery mechanism increases by selection when it associates with the beneficial innovations that it creates12. Discovery applies recursively to each new generation.\n\nThird, the trait of an individual may itself be a recursive system. Our bodies develop from the single-cell union of egg and sperm into approximately 30 trillion cells. Genetics does not specify the exact form of the adult. Instead, evolutionary history has built a developmental language applied recursively to the birth cell13.\n\nNew evolutionary innovations arise by modification of the recursive developmental language. The encoding of traits in a recursive developmental language accelerates the discovery of innovations.\n\nIn addition to the development of body form, other traits are also encoded by rules applied recursively. 
For example, our immune system combines recursive mechanisms to discover innovations and recursive mechanisms to select and enhance beneficial innovations14. These recursive processes allow rapid discovery and expansion of novel defenses against infection.\n\n\nHuman language\n\nThis hierarchy of recursive processes provides the framework for understanding evolutionary innovation. The origin of human language falls naturally within this general evolutionary framework. However, the consequences of human language add a new process of innovation.\n\nBefore language, all evolutionary change had to follow a trajectory through the lineage of genes, a sufficiently stable molecular encoding of information to carry forward innovations.\n\nHuman language created a parallel system to encode and transmit information. That parallel system follows the same general principles of recursion and innovation. However, the distinction between encoding by language or by molecules influences the recursive hierarchy and the consequences for innovation. The parallel systems of language and molecules interact, although the degree of coupling is controversial.\n\nLanguage, as an innovation to the process of innovation, expands the recursive hierarchy and accelerates further innovation15,16. Evolutionary history has always been an evolving recursive hierarchy14. When an evolutionary innovation alters the recursive hierarchy in a way that accelerates further innovation, then abrupt change often follows.\n\nCancer, development and language differ. But they share the ways in which interacting recursive processes alter the timescale of innovation.",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nNational Science Foundation grant DEB–1251035 supports my research.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nSmith MP, Harper DA: Earth Science. Causes of the Cambrian explosion. Science. 2013; 341(6152): 1355–1356. PubMed Abstract | Publisher Full Text\n\nFriedman WE: The meaning of Darwin’s ‘abominable mystery’. Am J Bot. 2009; 96(1): 5–21. PubMed Abstract | Publisher Full Text\n\nTattersall I: At the birth of language. Why Only Us: Language and Evolution. New York Review of Books, 2016; 27–28. Reference Source\n\nBerwick RC, Chomsky N: Why Only Us: Language and Evolution. MIT Press, Boston. 2015. Reference Source\n\nWeinberg RA: Moving Out: Invasion and Metastasis. The Biology of Cancer. Garland Science, New York; 2007. Reference Source\n\nFrank SA: Dynamics of Cancer: Incidence, Inheritance, and Evolution. Princeton University Press, Princeton, NJ. 2007. PubMed Abstract\n\nFrank SA, Rosner MR: Nonheritable cellular variability accelerates the evolutionary processes of cancer. PLoS Biol. 2012; 10(4): e1001296. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWest-Eberhard MJ: Developmental Plasticity and Evolution. Oxford University Press, New York. 2003. Reference Source\n\nFrank SA: The origin of synergistic symbiosis. J Theor Biol. 1995; 176(3): 403–410. PubMed Abstract | Publisher Full Text\n\nSkyrms B: Signals: Evolution, Learning, and Information. Oxford University Press, New York. 2010. Reference Source\n\nFrank SA: Natural selection. II. Developmental variability and evolutionary rate. J Evol Biol. 2011; 24(11): 2310–2320. PubMed Abstract | Publisher Full Text\n\nOtto SP, Lenormand T: Resolving the paradox of sex and recombination. Nat Rev Genet. 2002; 3(4): 252–261. 
PubMed Abstract | Publisher Full Text\n\nCarroll SB: Endless Forms Most Beautiful: The New Science of Evo Devo. W. W. Norton & Company, Inc, New York. 2005. Reference Source\n\nFrank SA: The design of natural and artificial adaptive systems. In MR Rose and GV Lauder, editors, Adaptation. Academic Press, San Diego, California. 1996; 451–505. Reference Source\n\nCavalli-Sforza LL, Feldman MW: Cultural transmission and evolution: a quantitative approach. Monogr Popul Biol. Princeton University Press, Princeton, New Jersey. 1981; 16: 1–388. PubMed Abstract\n\nBoyd R, Richerson PJ: Culture and the Evolutionary Process. University of Chicago Press, Chicago. 1985. Reference Source"
}
|
[
{
"id": "16214",
"date": "13 Sep 2016",
"name": "Andy Gardner",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe puzzle considered here is rapid evolutionary innovation, and Steven Frank draws parallels between the evolution of cancers and the evolution of language. Cancerous cell lineages must rapidly innovate in highly sophisticated ways to avoid being killed off by the immune system, and to achieve continued growth in an environment that is rapidly changing – including as a consequence of their own behaviour. Human language and associated cultural innovations appear to have happened in an explosive way that some have suggested is inconsistent with gradual evolutionary change.\n\nFrank suggests possible commonalities between these two phenomena, including how recursive growth may involve a ‘long fuse’ phase at which key innovation is occurring – perhaps invisibly – well before the explosion takes place, and how recursion itself may facilitate innovation owing to feedback of input and output.\n\nThis is a compelling synthesis that immediately suggests many more questions, and I’m sure will stimulate further investigation and new insights.\nThis article might benefit from the addition of a figure: an illustration or schematic showing how recursion facilitates explosive, innovative evolution. Also, for comparative purposes, might certain instances of stagnation and stasis be conceptualised and explained in terms of relative lack of recursion?\n\nAs is my policy, I waive anonymity.\nAndy Gardner",
"responses": [
{
"c_id": "2354",
"date": "07 Dec 2016",
"name": "Steven Frank",
"role": "Author Response F1000Research Advisory Board Member",
"response": "I thank Andy Gardner for his helpful summary and comments. I will have to think about a figure that can capture the key ideas. I welcome any suggestions on that. With regard to stasis and lack of recursion, perhaps Jon Wilkins' comment in his Referee Report is helpful in this regard. Wilkins defined strong recursion as situations in which one innovation actually changes the landscape for future innovations. In this light, Gardner's stasis from lack of recursion may associate with barriers that reduce the chance of Wilkins' strong recursion."
}
]
},
{
"id": "17054",
"date": "06 Dec 2016",
"name": "Jon Wilkins",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe most easily understood mode of evolution is the one where small, incremental changes accumulate over very long timescales, leading to gradual changes in form and function that can be observed in the continuity of forms in the fossil record. However, it is also apparent that, on occasion, there are rapid bouts of change and diversification. On the timescale that we normally think about for evolutionary processes, these changes can seem virtually instantaneous.\nSteven Frank illustrates this sort of abrupt evolutionary change with two examples: the diversification of flowering plants and the development of human language. In both cases, the “why” and “how” of the sudden transformations remain poorly understood.\n\nThe key insight of this paper is that cancer evolution — involving natural selection acting on clones within a developing tumor — is another example of rapid evolutionary innovation. But unlike the other examples, which represent ancient, and perhaps unique, events, tumor development is a process that is happening again and again today, on a timescale that is amenable to study using a variety of tools.\nOf course, understanding cancer progression is important in its own right. But this piece points out that the insights coming from the study of cancer may help us to understand other, less accessible, evolutionary transformations.\n\nFrank suggests that the key feature that permits this rapid change is “recursion”, which is used in a strong and weak sense — both of which are somewhat different from the way the term is typically employed in linguistics. In the weak sense, any phenomenon of exponential growth is deemed recursive in the sense that the outputs of this generation’s reproduction serve as the inputs for the next generation. He makes the point that exponential growth can create the illusion of suddenness when our observations are at a fixed scale.\nHowever, all biological reproduction is inherently exponential, and this argument should apply equally to the sudden evolutionary innovations and the more gradual transformations. So it is not clear why this would be the critical attribute for bursts of innovation.\n\nSomewhat more compelling is the stronger version of recursion used here, where one innovation actually changes the landscape for future innovations. This type of innovative potentiation is more clearly a hallmark of abrupt evolutionary transformation, and there is potentially great insight to be had from recognizing the commonalities among seemingly diverse types of rapid innovation.",
"responses": [
{
"c_id": "2353",
"date": "07 Dec 2016",
"name": "Steven Frank",
"role": "Author Response F1000Research Advisory Board Member",
"response": "I like Jon Wilkins' separation between strong and weak recursion in the biological context. With regard to what he calls the weak recursion of growth processes, Wilkins notes that all biological reproduction is inherently exponential, and this argument should apply equally to the sudden evolutionary innovations and the more gradual transformations. So it is not clear why this would be the critical attribute for bursts of innovation. I would say that exponential growth does not imply exponential innovation. Populations can grow without changing. Or the changes may be minor quantitative variations that nonetheless have strong fitness consequences. My point is that when a significant innovation does arise, the exponential nature of increase will lead to the appearance of suddenness, even if the true rate of spread is relatively slow."
}
]
}
] | 1
|
https://f1000research.com/articles/5-2289
|
https://f1000research.com/articles/5-2288/v1
|
09 Sep 16
|
{
"type": "Opinion Article",
"title": "Puzzles in modern biology. I. Male sterility, failure reveals design",
"authors": [
"Steven A. Frank"
],
"abstract": "Many human males produce dysfunctional sperm. Various plants frequently abort pollen. Hybrid matings often produce sterile males. Widespread male sterility is puzzling. Natural selection prunes reproductive failure. Puzzling failure implies something that we do not understand about how organisms are designed. Solving the puzzle reveals the hidden processes of design.",
"keywords": [
"evolutionary theory",
"natural selection",
"infertility",
"speciation"
],
"content": "Introduction\n\nWhat do organisms do poorly that they should do well? Answers often lead to profound insight.\n\nOne thing organisms should do well is reproduce. Sterility is total biological failure. Natural selection prunes failure. Widespread sterility would be puzzling.\n\nYet, many human males produce incapable sperm. Some plants have up to 50% of individuals abort their pollen. Matings between recently diverged species often bear sterile males.\n\nCauses of sterility vary. Each cause is a separate topic. But separating topics hides the deeper unity of insight. Through failure we understand design.\n\nI briefly review examples of male sterility. These individual puzzles emerged haphazardly, rather than by systematic study of failure. From these examples, I return to my theme. Every apparent failure poses an important puzzle. We must seek failure, measure it, document its correlates, and analyze its causes.\n\n\nSperm dysfunction\n\nSperm dysfunction poses our first puzzle. Roughly 5% of human males fail to make good sperm1. In other animals, studies mention cases of male sterility2. These haphazard observations, based on limited data, provide only a vague hint. Puzzles often appear in this shadowy way. Let us follow the shadow. If sterility is higher than expected, what might explain the excess?\n\nA quirk of genetics predisposes males to failure3. A male inherits his mother’s mitochondria, but does not pass mitochondrial genes to his progeny. Natural selection cannot act on male-specific mitochondrial effects, because males do not transmit mitochondria. Mitochondrial mutations that reduce male fertility may increase by chance.\n\nStudies motivated by this theory have found mitochondrial mutations that reduce male fertility4,5. Those studies have also found other genes, inherited through both parents, that compensate for the mitochondrial defects. 
A male carrying the mitochondrial mutation and the compensatory genes has restored fertility.\n\nCompensation arises from the pathways of inheritance3. A gene inherited through both parents suffers reduced transmission when coupled with mitochondrially induced sperm defects. If a biparentally inherited gene compensates for the defects, the compensatory gene increases its own transmission.\n\nCompensation plays a key role in failure and design. In this case, biparentally inherited genes compensate for male-sterile mitochondrial mutations. Individuals appear to be nearly normal when carrying both the defect and the compensation. Failure occurs only when there is a mismatch in the coadapted defect-compensation interaction.\n\nAt present, only a few studies support these ideas about mitochondrial transmission and genetic coadaptation. Future studies may provide further support. Or it may turn out that other processes explain much of the observed failure. The puzzle remains unsolved. But we have a clue that points to disruption of coadapted gene complexes.\n\nI now turn to other examples of sterility and coadaptation. These additional puzzles provide a broader perspective on the nature of failure and design. Perhaps some of the insights from solving these additional puzzles will eventually lead back to better explanations for the apparently high levels of sperm dysfunction.\n\n\nAborted pollen\n\nAborted pollen poses our next puzzle. Most flowering plants are hermaphrodites. Each individual produces both ovules and pollen. Ovules correspond to female function. Pollen correspond to male function.\n\nSome hermaphroditic individuals abort their pollen. Rare male sterility within a population would not be surprising. Most traits fail in a few individuals. However, Darwin6 noted puzzling, widespread male sterility in many different plant species.
Several populations have more than 10% male sterility, sometimes approaching 50%.\n\nTwo explanations of this apparent failure alter our perspective of design. First, male sterility prevents self-fertilization of ovules. If progeny suffer when inbred, the prevention of self-fertilization can be advantageous7. The gain from outbred ovules can outweigh the loss of pollen production by male sterility. If so, then male sterility is a beneficial design rather than a failure.\n\nSome studies support the outbreeding benefit of male sterility8. The initial surprise of a high failure rate has become a deeper insight into organismal design. However, many populations have high frequencies of male sterility that cannot be explained by avoiding self-fertilization9.\n\nThis puzzling excess of male sterility led to a second explanation9–11. Mitochondria transmit only through ovules, the female lineage. Pollen do not transmit mitochondria. A mitochondrial mutation would gain a benefit by aborting pollen and reallocating the saved energy to produce more successful ovules. The pollen-aborting mitochondria increase their transmission and can spread rapidly in populations. Mitochondrial mutations that abort pollen and enhance ovule success have been found in many species that previously had an unexplained excess of male sterility12.\n\nIn plants, the vast majority of genes transmit biparentally, through both pollen and ovules. Those biparental genes typically suffer reduced transmission when in a plant that aborts its pollen. In most cases, when a mitochondrial mutation exists that aborts pollen, there also exist biparentally inherited genes that can restore pollen fertility12.\n\nAn apparently normal hermaphrodite with full pollen fertility may often carry two opposing components. First, a mitochondrial mutation that, unblocked, causes male sterility. Second, a biparentally inherited gene that blocks the action of the mitochondrial mutation. 
Once again, “normal” function arises by compensatory coadaptation.\n\nCrosses between different species support this idea of coadaptation13. The parental populations may be almost entirely free of male sterility. Yet the hybrid progeny may express high frequencies of aborted pollen, which is male sterility.\n\nIn some cases, the hybrid male sterility arises by breaking up the coadapted complexes within each species. A hybrid progeny may carry the male sterile mitochondrial mutation of its “mother” but fail to inherit from its “father” the associated restorers of male fertility.\n\n\nHybrid male sterility\n\nA different kind of hybrid male sterility poses our final puzzle. Matings between species often produce defective progeny. Male sterility is one of the most common hybrid defects14–16. The puzzle concerns why male fertility should be particularly prone to failure. What does that failure reveal about design?\n\nThe previous puzzles raised two potential causes of hybrid male sterility. In each case, mitochondrial mutations disrupt male fertility. In response, biparentally inherited genes evolve to repress the disruption. The coadapted repressor genes restore male fertility. Hybrid matings cause mismatch of coadapted genes, leading to male sterility.\n\nNot all cases of hybrid male sterility arise from the breakdown of coadaptation between mitochondrial and biparental genes. What other aspects of design might lead to the observed widespread tendency for failure in hybrid males?\n\nHybrid defects typically arise from mismatch of coadapted genes. For male fertility, how does coadaptation evolve? What causes divergence between populations in their coadapted complexes?\n\nOur previous puzzles suggest how we might think about these broader questions. In the first puzzle of sperm dysfunction, males do not transmit mitochondrial mutations. Any mutation that influences only males has no consequence for transmission. 
This neutrality means that male sterile mutations can increase unopposed. But such mutations are not directly favored.\n\nMitochondrial mutations that disrupt male fertility accumulate slowly, by chance. Different populations accumulate different mutations. Each distinct mitochondrial mutation associates with distinct compensatory mechanisms of biparental genes.\n\nAs populations diverge, they will come to have different coadapted complexes of mitochondrial and biparental genes. The neutral accumulation of mitochondrial mutations causes slow but continual divergence of coadapted complexes.\n\nIn the second puzzle of pollen abortion, mitochondrial mutations also disrupt male fertility. However, the hermaphroditic system means that the decline in male fertility often associates with a rise in female fertility. A mitochondrial mutation that causes male sterility gains a transmission advantage through its increased female fertility. This benefit can drive rapid spread of mitochondrial mutations.\n\nIn this case, mitochondrial mutations that disrupt male fertility spread rapidly. In response, the associated biparental restorers of fertility will likely evolve rapidly. The conflict between different components of the genome causes populations to diverge rapidly in their coadapted gene complexes.\n\nThe two cases set endpoints for the range of processes that cause hybrid male sterility. On the one hand, ubiquitous neutral divergence occurs widely but relatively slowly. On the other hand, powerful conflict between components of the genome drives rapid divergence, but may arise relatively rarely.\n\nThe continuum ranges from ubiquitous and slow processes to less common and fast processes. Many processes fall along this continuum. The relative roles of these different processes for hybrid male sterility remain controversial.\n\nFor a particular observation of male sterility, we may not know the particular associated process. 
However, my point is that we should pay attention to the observed failure. Through the study of failure, we gain a window onto the normally hidden underlying processes of design.\n\nFor hybrid male sterility, I describe one example of a ubiquitous but relatively slow process and one example of a rarer but relatively fast process. That contrast highlights a potentially important question about failure and design. Which tends to be more important, slow processes intrinsic to ubiquitous aspects of genetics or fast processes intrinsic to specific aspects of conflict?\n\nThe example of relatively slow ubiquity concerns the spread of beneficial mutations on the sex chromosomes17. In animals with separate sexes, males often carry a pair of different sex chromosome types, XY, whereas females carry a pair of the same chromosome type, XX.\n\nConsider a new mutation on the X chromosome that benefits only females. Because the X chromosome in females occurs in two copies, the new mutation on one X may be masked by the expression of the original gene carried on the other X. That masking effect can greatly reduce the rate at which beneficial mutations can spread.\n\nNow consider a new mutation on the X chromosome that benefits only males. Because the X chromosome in males occurs in only one copy, the new mutation is not masked by a different copy of the gene on another chromosome. The beneficial mutation is expressed immediately and can spread.\n\nThis asymmetry leads to faster evolution of male-specific effects on the X chromosome. As those X-linked effects evolve, other components of the genome may coadapt. Populations will diverge in coadapted gene complexes between X-linked genes and genes in other parts of the genome.\n\nWhen diverged populations hybridize, the faster-evolving X-linked male effects may be particularly susceptible to the mismatch of coadapted complexes. 
Some of those mismatches may reduce male fertility.\n\nIn this case, the divergence of coadapted complexes happens in a relatively passive way. The structure of the genetic system creates an asymmetric sieve. That sieve tends to enhance male-specific effects more strongly than female-specific effects. Coadaptation arises as a potentially weak response to general aspects of change rather than a strong response to a direct challenge.\n\nA contrasting example of relatively fast specificity concerns a conflict between different components of the genome. That conflict creates a direct and powerful pressure for change and coadaptation.\n\nOnce again, we begin with the XY chromosome pairing in males. A male transmits his X chromosome to daughters and his Y chromosome to sons. An X chromosome gains an advantage by increasing a male’s number of daughters. For example, an X can encode a mechanism that kills off Y bearing sperm. The male’s remaining sperm bear the X, so he produces all daughters.\n\nThis drive of X against Y occurs in some organisms18. The driving X gains a transmission advantage and can spread rapidly within populations. A driving X favors genes on other chromosomes that repress the drive. The other genes suffer because the driving X biases the sex ratio toward disadvantageous over-production of daughters. The driving X may also carry deleterious side effects.\n\nA driving X and coadapted repression of drive are powerful forces. Those powerful forces cause rapid change in populations and rapid divergence of coadapted complexes between populations19,20.\n\nIn natural populations, some processes of divergence will be like the asymmetric XY sieve. Change accumulates by a relatively nonspecific process, ubiquitous but relatively slow. Other processes will be like the conflict of XY drive. 
Change accumulates by the strong process of transmission bias, narrowly specific but relatively fast.\n\nThe importance of different processes must be determined by direct observation. My point concerns the value of focusing on failure. Tracing the observed failure of male sterility to its underlying cause will teach us much about the different processes that coadapt genomes and cause divergence between populations.\n\n\nConclusions\n\nThese various puzzles bring us to a broader question. Why does the study of failure reveal underlying design?\n\nRoughly speaking, we tend to see organisms as reasonably well designed engineering solutions. But we do not know how the components have been put together, how the components interact, and how the components have been designed to respond to different challenges. How can we reverse engineer the design?\n\nThe study of failure provides an important tool for reverse engineering. Why? Because when something complicated works, it is not easy to see how the components interact. Imagine that you have never seen a car. Someone gives you a car and asks you how it works. Look inside. There are many wires and connectors and components. What do they do?\n\nTry cutting a wire. The brakes fail. Through that failure, you know that the wire and the things connected by the wire have to do with braking. Failure reveals design.",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nNational Science Foundation grant DEB–1251035 supports my research.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nKrausz C: Male infertility: pathogenesis and clinical diagnosis. Best Pract Res Clin Endocrinol Metab. 2011; 25(2): 271–285. PubMed Abstract | Publisher Full Text\n\nJainudeen MR, Hafez B: Reproductive Failure in Males. Reproduction in Farm Animals, 7th Edition. Lippincott Williams & Wilkins: 2000; 279–289. Publisher Full Text\n\nFrank SA, Hurst LD: Mitochondria and male disease. Nature. 1996; 383(6597): 224. PubMed Abstract | Publisher Full Text\n\nDowling DK, Friberg U, Lindell J: Evolutionary implications of non-neutral mitochondrial genetic variation. Trends Ecol Evol. 2008; 23(10): 546–554. PubMed Abstract | Publisher Full Text\n\nPatel MR, Miriyala GK, Littleton AJ, et al.: A mitochondrial DNA hypomorph of cytochrome oxidase specifically impairs male fertility in Drosophila melanogaster. eLife. 2016; 5: pii: e16923. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDarwin C: The Different Forms of Flowers on Plants of the Same Species.1877; John Murray, London. Publisher Full Text\n\nCharlesworth B, Charlesworth D: A model for the evolution of dioecy and gynodioecy. Am Nat. 1978; 112(988): 975–997. Publisher Full Text\n\nWeller SG, Sakai AK: Selfing and resource allocation in Schiedea salicaria (Caryophyllaceae), a gynodioecious species. J Evol Biol. 2005; 18(2): 301–308. PubMed Abstract | Publisher Full Text\n\nCharlesworth D: A further study of the problem of the maintenance of females in gynodioecious species. Heredity. 1981; 46: 27–39. Publisher Full Text\n\nLewis D: Male sterility in natural populations of hermaphrodite plants: the equilibrium between females and hermaphrodites to be expected with different types of inheritance. 
New Phytol. 1941; 40(1): 56–63. Publisher Full Text\n\nFrank SA: The evolutionary dynamics of cytoplasmic male sterility. Am Nat. 1989; 133(3): 345–376. Publisher Full Text\n\nChase CD: Cytoplasmic male sterility: a window to the world of plant mitochondrial-nuclear interactions. Trends Genet. 2007; 23(2): 81–90. PubMed Abstract | Publisher Full Text\n\nLaser KD, Lersten NR: Anatomy and cytology of microsporogenesis in cytoplasmic male sterile angiosperms. Bot Rev. 1972; 38(3): 425–454. Publisher Full Text\n\nCoyne JA, Orr HA: Speciation.2004. Sinauer Associates, Sunderland, MA. Reference Source\n\nSchilthuizen M, Giesbers MC, Beukeboom LW: Haldane’s rule in the 21st century. Heredity (Edinb). 2011; 107(2): 95–102. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDelph LF, Demuth JP: Haldane's Rule: Genetic Bases and Their Empirical Support. J Hered. 2016; 107(5): 383–91. PubMed Abstract | Publisher Full Text\n\nCharlesworth B, Coyne JA, Barton NH: The relative rates of evolution of sex chromosomes and autosomes. American Naturalist. 1987; 130(1): 113–146. Publisher Full Text\n\nMeiklejohn CD, Tao Y: Genetic conflict and sex chromosome evolution. Trends Ecol Evol. 2010; 25(4): 215–223. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFrank SA: Divergence of meiotic drive-suppression systems as an explanation for sex-biased hybrid sterility and inviability. Evolution. 1991; 45(2): 262–267. Publisher Full Text\n\nHurst LD, Pomiankowski A: Causes of sex ratio bias may account for unisexual sterility in hybrids: a new explanation of Haldane’s rule and related phenomena. Genetics. 1991; 128(4): 841–858. PubMed Abstract | Free Full Text"
}
|
[
{
"id": "16210",
"date": "13 Sep 2016",
"name": "Andy Gardner",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a snappy and thought-provoking Opinion article, written in Steven Frank’s characteristically laconic style.\n\nThe main message of the article is that, by investigating apparent failures in the design of organisms, we get a deeper understanding of organismal design. The particular focus here is on the suprisingly high levels of male sterility observed in many populations, and Frank takes us on a whirlwind tour of sperm dysfunction, pollen abortion and hybrid male sterility, highlighting commonalities and providing synthesis.\n\nFrank ends the article with a compelling metaphor: a car is a complicated object whose design may be difficult to fathom simply by inspection; but, by cutting a wire and noting that it causes the brakes to fail, we learn something about how it is put together. The design of the organism is also illuminated by its failure.\n\nI agree with all this, but would add two further thoughts.\n\nFirst, it could be that apparent failure owes not to actual maladaptation on the organism’s part but rather a misunderstanding on our part as to what constitutes good design. For example, sterile individuals abound in social insect colonies, and contemplation of this apparent puzzle led to a much deeper understanding of what it is organisms are designed to do (the theory of inclusive fitness).\n\nSecond, it could be that failure reveals we have been looking for design in the wrong place. 
Several of the specific examples discussed in this article concern conflicts of interest within the genome, e.g. between mitochondrial vs nuclear genes. Failure might not reveal organism design, but design at the gene level.\n\nAs is my policy, I waive anonymity\nAndy Gardner",
"responses": [
{
"c_id": "2362",
"date": "12 Dec 2016",
"name": "Steven Frank",
"role": "Author Response F1000Research Advisory Board Member",
"response": "I appreciate Andy Gardner's thoughtful commentary. He finished with two \"further thoughts\": First, it could be that apparent failure owes not to actual maladaptation on the organism’s part but rather a misunderstanding on our part as to what constitutes good design. Second, it could be that failure reveals we have been looking for design in the wrong place. Several of the specific examples discussed in this article concern conflicts of interest within the genome, e.g. between mitochondrial vs nuclear genes. Failure might not reveal organism design, but design at the gene level. I agree with these points. In my abstract, I said: Puzzling failure implies something that we do not understand about how organisms are designed. Solving the puzzle reveals the hidden processes of design. My article emphasized the importance of paying attention to traits of organisms that seem not to make sense. In retrospect, we understand the cases of male sterility that I discussed, for example, using Gardner's words, \"design at the gene level.\" But to achieve that current understanding, someone in the past had to think that something did not add up, that the observations were a problem that required serious attention. They had to see the excess male sterility as a puzzling failure, and then solve that puzzle. The difficulty with my article is that it is very hard to write in a way that evokes a real sense of the past, of what things looked like before we understood. The value of trying is that the lesson is profound. The lesson is that we should, at present, constantly be searching for the traits that do not make sense with respect to reasonable intuitions about design. Much of the progress in biology follows from the first identification of such puzzles. However, it is very hard to see what no one else sees. But there are some ways in which we can train ourselves, for example, studying the past successes and how they came about."
}
]
},
{
"id": "18319",
"date": "07 Dec 2016",
"name": "Harmit S. Malik",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a thoughtful and (characteristically) thought-provoking article on why sterility is a frequent observation in natural populations, despite all expectations to the contrary. The author cites a number of case studies to point out two basic underlying principles- the first being conflict and coevolution for instance between ‘male-harming’ mitochondria (my emphasis) and nuclear suppressors of male-harming behaviour. This arises from asymmetry of inheritance of nuclear and mitochondrial genomes. A related asymmetry of sex chromosomes gives rise to differential rates of evolution and therefore different opportunities for mismatches in gene expression, for instance seen in hybrids. The theme of gamete-killing meiotic drivers and their suppressors is not limited to sex chromosomes however, and a brief mention of fungal spore-killers and their suppressors would nicely round out the discussion. Finally such co-adapted systems rely on co-inheritance for maximum manifestation and as such, propagate best when recombination is not allowed to break up the co-adapted complex. 
Thus, many of these drive systems arise and are spurred by chromosome rearrangements that further imperil proper meiotic recombination and fertility.\n\nA second principle that might emerge is intrinsic, whereby for instance male sterility might, under the right circumstances, be favoured to encourage outcrossing.\n\nUltimately, the author espouses the view that failure in a process where success is expected reveals the design principles. Much in the same way a genetic screen might reveal the underlying components (an excellent allegory to this is “The salvation of Doug” by Bill Sullivan.... http://sullivan.mcdb.ucsc.edu/salvation-doug.html). However, it is here that I feel the analogy falters a bit.\n\nDespite my discomfort at the use of the term “design”, I feel the author refers to two different things as design. The first is about the genetic principles by which a pathway works (“cutting a brake wire in a car”). The second is that of selfish genetic elements, that have no socially redeeming value to contribute to a biological process, but exist and thrive solely due to their ability to exploit weaknesses (like a petulant hacker overriding the car’s electronic controls at the most inopportune time). While the first definition is about natural selection refining a biological process, the second is about loopholes that exist and likely will always remain, which sponsor ‘selfish behaviour’ in genes even at the expense of important attributes of the biological system, such as fertility.",
"responses": [
{
"c_id": "2361",
"date": "12 Dec 2016",
"name": "Steven Frank",
"role": "Author Response F1000Research Advisory Board Member",
"response": "Harmit Malik provided an excellent commentary, arising from his own significant contributions to the subject. Malik finished his commentary by noting two alternative ways in which we might understand design: The first is about the genetic principles by which a pathway works (\"cutting a brake wire in a car\"). The second is that of selfish genetic elements, that have no socially redeeming value to contribute to a biological process, but exist and thrive solely due to their ability to exploit weaknesses (like a petulant hacker overriding the car’s electronic controls at the most inopportune time). I agree. My point is that the way in which we understand \"design\" at any particular time never completely matches the traits that we observe. Some traits will seem not to make sense in relation to the current understanding of design. Those mismatches are the cases to which we should pay close attention. Something about how we currently understand design does not add up. At present, we understand the logic of selfish genetic elements, so it is easy to write about the two kinds of design noted by Malik. It was not always so. To achieve current understanding, someone had to recognize the mismatch between what was seen and what people at that time thought about the nature of biological design. It was only by following up on the apparent failure of design that the broader vision of design at multiple levels came into focus. Failure reveals design."
}
]
}
] | 1
|
https://f1000research.com/articles/5-2288
|
https://f1000research.com/articles/5-2273/v1
|
08 Sep 16
|
{
"type": "Data Note",
"title": "An annotation of cuts, depicted locations, and temporal progression in the motion picture \"Forrest Gump\"",
"authors": [
"Christian O. Häusler",
"Michael Hanke"
],
"abstract": "Here we present an annotation of locations and temporal progression depicted in the movie “Forrest Gump”, as an addition to a large public functional brain imaging dataset (http://studyforrest.org). The annotation provides information about the exact timing of each of the 870 shots, and the depicted location after every cut with a high, medium, and low level of abstraction. Additionally, four classes are used to distinguish the differences of the depicted time between shots. Each shot is also annotated regarding the type of location (interior/exterior) and time of day. This annotation enables further studies of visual perception, memory of locations, and the perception of time under conditions of real-life complexity using the studyforrest dataset.",
"keywords": [
"studyforrest",
"annotation",
"natural stimulation",
"scene perception",
"time perception",
"spatial cognition"
],
"content": "Introduction\n\nCognitive neuroimaging research is moving towards studying brain behavior under conditions of real-life-like complexity, and motion pictures are being utilized with increasing frequency as stimuli in “neurocinematics” studies1. What sets motion pictures apart from other dynamic naturalistic stimuli is that they are more likely to evoke time-locked response patterns in a larger portion of the brain while retaining synchrony across multiple individuals who are experiencing the same movie2,3. One likely reason for this is the structure of movies. They are typically not prolonged, contiguous captures of an environment from a first person perspective, but rather they are carefully assembled, using “cuts”, from hundreds of short sequences shot from a variety of perspectives4. These cuts are sharp discontinuities in the sensory input that require all viewers to re-assess the depicted environment in order to perform a cognitive re-orientation in fictional space and time. This re-orientation can be complex and involve a large bandwidth of cognitive processes: interpretation of contextual cues for detection of familiar settings, retrieval of prior knowledge from memory, discovery of change in locales and depicted characters. Consequently movies, and their cuts in particular, offer an excellent instrument to study complex, concurrent, real-life cognition.\n\nIn this study, we focus on spatial and temporal viewer re-orientation, and, to this end, describe changes in depicted location and time for all cuts in the motion picture “Forrest Gump”. This movie is the core stimulus of the studyforrest project (http://studyforrest.org). Two fMRI datasets are publicly available: 1) participants listening to an audio-movie version5 and 2) a subset of the original participants watching the audio-visual movie with simultaneous eye tracking6. 
Additional imaging data and movie annotations are available7,8, including an individual localization of the parahippocampal place area9 that has been implicated in spatial perception and scene processing10.\n\nThis new annotation extends the available knowledge about the structure of this complex natural stimulus and enriches the overall studyforrest dataset. These data can be used to investigate the formation of a representation of viewer location and the perception of (speeded or negative) temporal progression in the movie stimulus. For any study focusing on other aspects of real-life cognition, these new data can serve as additional confound measures describing key properties of major building blocks of this movie stimulus.\n\n\nMaterials and methods\n\nThe annotated stimulus was a slightly shortened (≈2 h) version of the movie Forrest Gump (R. Zemeckis, Paramount Pictures, 1994) with dubbed German soundtrack that is identical to the audio-visual movie annotated in 8. Further details on this particular movie cut, and how to reproduce it from commercially available sources, are available in 6.\n\n(Table caption; table not reproduced here: table headers do not literally correspond to column headers, see Data legend. ToD: time of day.)\n\nFirst, the movie was explored by two people, one of whom has an academic background in documentary film making, in order to generate a consistent list of labels for depicted and recurring locations.\n\nSubsequently, the actual annotation was performed by the first author using a multi-pass strategy. The movie was manually inspected frame-by-frame to determine the location of cuts (using the video editor Shotcut v16.02.01). For each new shot (sequence between two cuts), a number of properties (described below) were discerned and entered into a table. 
A total of four passes were performed by the same observer in order to validate the annotation.\n\nThe annotation table contains one line per shot and seven columns: 1) a shot’s start time, 2) a label for the shot’s major location, 3) a label for the setting within the location, 4) a label for the locale within the setting, 5) a flag indicating an interior or exterior setting, 6) a label for the type of temporal progression with respect to the previous shot, and 7) a label for the time of day. Further details are provided in the following sections. The respective column header labels are given in parentheses.\n\nA shot’s start time is defined as the onset time of the first video frame of a shot after a cut. Time stamps are provided in seconds from movie onset.\n\nLocation was coded with three labels, each describing the depicted scenery with an increasing level of detail.\n\nMajor location (major_location) provides a coarse identification at the level of a town, county, or region where the respective story is taking place. Examples are: “Greenbow” or “Vietnam”.\n\nSetting (setting) further details the location by distinguishing places that are at the same major location but not in direct sight of each other. For example, Forrest Gump’s elementary school and the high school’s football field are both in Greenbow, Alabama but are not part of the same setting. A switch from one setting to another is typically synonymous with a transition to a new scene in a cinematographic sense. If the camera switched settings within a scene, the annotation deviates from the screenplay to make explicit the switch to another setting.\n\nLocale (locale) subdivides settings into distinguishable locales. Indoors, a locale is congruent with a particular room enclosed by walls. For example, Forrest Gump’s bedroom, the corridor downstairs, and the corridor upstairs are three different rooms inside the Gumps’ house (setting) on the Gumps’ property (major location). 
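The three-level location hierarchy can be tallied directly from a CSV in the per-shot format described above. The following is a minimal Python sketch; the sample rows and the exact header names beyond major_location, setting, and locale are illustrative assumptions, not taken from the released Dataset 1.

```python
import csv
import io
from collections import Counter

# Hypothetical rows in the style of the annotation table described above;
# these are NOT values from the released Dataset 1.
SAMPLE_CSV = """\
start,major_location,setting,locale
0.00,Greenbow,gumps_property,bedroom
4.92,Greenbow,gumps_property,corridor_downstairs
12.40,Vietnam,jungle,glade_at_river
17.32,Vietnam,jungle,wounded_bubba
"""

def shots_per_label(csv_text, column):
    """Count shots grouped by one of the three location columns."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return Counter(row[column] for row in reader)

print(shots_per_label(SAMPLE_CSV, "major_location"))
```

Replacing SAMPLE_CSV with the contents of Dataset 1 (and the assumed header names with the actual ones) would reproduce location counts analogous to those reported under "Dataset validation".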
Outdoors, locales were distinguished when they were separated by a logical boundary, substantial distance, or shared no discernible landmarks. For example, the glade at the river and the location of the wounded Bubba are two different locales in the embattled jungle (setting) in Vietnam (major location). A locale’s label is identical to its setting label when only one locale is depicted for that setting.\n\nThe interior/exterior flag indicates whether a particular location is an open (“ext”) or enclosed space (“int”), such as a building or a vehicle.\n\nThe temporal-progression label indicates the depicted progression of time between the previous and the current shot. Four categories were distinguished: “-” labels a flashback, or jump into the past, independent of the temporal distance; “0” indicates no noticeable break in the ongoing stream of time, for example a sole change of viewing perspective; “+” represents noticeable jumps in time, ranging from several seconds to about one or two hours; and lastly “++” marks major time jumps from several hours (e.g. night vs. day) to several years.\n\nThe time-of-day flag indicates whether a scene is at least partially illuminated by sunlight. Consequently, daytime and twilight (early sunrises and late sun settings) are labeled as “day”. If sunlight is entirely missing, the time of day is coded as “night”.\n\nThe released annotation is a single, text-based, comma-separated-value (CSV) formatted table (Dataset 1).\n\nThe source code for all descriptive statistics included in this paper is available in Dataset 2 (Python script).\n\n\nDataset validation\n\nTo check for human error in the cut time annotation, timings were compared to the results of an automatic detection algorithm and any deviation was manually verified.\n\nIn summary, the shortened version of the movie comprises 870 shots (duration: min=0.48 s, max=151.08 s, median=4.92 s, SD=10.86 s). There are 612 shots depicting outdoor locations and 256 interior shots. Most shots take place during daytime (706 day vs. 
162 night). The majority of cuts involve no noticeable discontinuities of depicted time (640), but there are 61 small and 135 large time jumps, as well as 32 flashbacks.\n\nTable 2 provides information on the portrayal of unique locations in the movie.\n\nNumber of shots indicates the total number of shots in the movie for any particular location. Number of consecutive shots indicates how many shots are shown between two location changes at the respective level. Times revisited indicates how often a location reappears in the movie after it was depicted for the first time.\n\n\nData and software availability\n\nF1000Research: Dataset 1. CSV table of cuts, depicted locations, and temporal progression in the motion picture “Forrest Gump”, 10.5256/f1000research.9536.d13482311\n\nF1000Research: Dataset 2. Python script to compute all descriptive statistics presented in the Data Note manuscript from the released annotations, 10.5256/f1000research.9536.d13482412",
"appendix": "Author contributions\n\n\n\nCH designed, performed, and validated the annotation, and wrote the manuscript. MH provided critical feedback on the procedure and wrote the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nMichael Hanke was supported by funds from the German federal state of Saxony-Anhalt and the European Regional Development Fund (ERDF), Project: Center for Behavioral Brain Sciences.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe are grateful to Daniel Kottke for cross-checking the timing of the cuts in the movie using an automated detection routine, and Alex Waite for editing this manuscript. We would also like to thank Gavin Theren for patiently sharing his cinematographic knowledge during movie watching and for his high-level gastronomic skills.\n\n\nReferences\n\nHasson U, Landesman O, Knappmeyer B, et al.: Neurocinematics: The neuroscience of film. Projections. 2008; 2(1): 1–26. Publisher Full Text\n\nHasson U, Malach R, Heeger DJ: Reliability of cortical activity during natural stimulation. Trends Cogn Sci. 2010; 14(1): 40–48. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLankinen K, Saari J, Hari R, et al.: Intersubject consistency of cortical MEG signals during movie viewing. Neuroimage. 2014; 92: 217–224. PubMed Abstract | Publisher Full Text\n\nCutting JE, Brunick KL, DeLong JE: The changing poetics of the dissolve in Hollywood film. Empirical Studies of the Arts. 2011; 29(2): 149–169. Publisher Full Text\n\nHanke M, Baumgartner FJ, Ibe P, et al.: A high-resolution 7-Tesla fMRI dataset from complex natural stimulation with an audio movie. Sci Data. 2014; 1: 140003. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nHanke M, Adelhöfer N, Kottke D, et al.: A studyforrest extension, simultaneous fMRI and eye gaze recordings during prolonged natural stimulation. bioRxiv. 2016. Publisher Full Text\n\nHanke M, Dinga R, Häusler C, et al.: High-resolution 7-Tesla fMRI data on the perception of musical genres – an extension to the studyforrest dataset [version 1; referees: 2 approved with reservations]. F1000Res. 2015; 4: 174. Publisher Full Text\n\nLabs A, Reich T, Schulenburg H, et al.: Portrayed emotions in the movie \"Forrest Gump\" [version 1; referees: 2 approved]. F1000Res. 2015; 4: 92. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSengupta A, Kaule F, Guntupalli JS, et al.: A studyforrest extension, retinotopic mapping and localization of higher visual areas. Scientific Data, accepted.\n\nEpstein R, Kanwisher N: A cortical representation of the local visual environment. Nature. 1998; 392(6676): 598-601. PubMed Abstract | Publisher Full Text\n\nHäusler CO, Hanke M: Dataset 1 in: An annotation of cuts, depicted locations, and temporal progression in the motion picture “Forrest Gump”. F1000Research. 2016. Data Source\n\nHäusler CO, Hanke M: Dataset 2 in: An annotation of cuts, depicted locations, and temporal progression in the motion picture “Forrest Gump”. F1000Research. 2016. Data Source"
}
|
[
{
"id": "16187",
"date": "14 Sep 2016",
"name": "Julien Dubois",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis Data Note presents a very useful (and labor intensive!) complement to the studyforrest dataset, providing additional annotations that can be used for data analysis: the timings of all the cuts in the movie, together with the depicted location and temporal progression for each transition. There are of course many, many other features that can be labeled in this movie, yet this particular set of features is useful on its own and will add to the bank of features already available. I have no major changes to suggest.\nMinor comments:\nwould the authors consider publishing the code for the automated detection routine that they used to cross-check manual cut detection?\n\na few cuts are annotated as locale \"unknown_...\". Is there no label that can be used rather than unknown? 
(I don't currently have the movie available to check these four cuts, at time stamps 4544.32, 4554.96, 4559.32 and 4561.56).\n\nVery minor comments:\nthe second sentence in the Introduction, \"What sets motion pictures apart from other dynamic naturalistic stimuli is that they are more likely to evoke time-locked response patterns in a larger portion of the brain while retaining synchrony across multiple individuals who are experiencing the same movie\", is rather unclear to me, in particular the use of \"while\" which indicates a contradiction -- why would time-locked responses in large portions of the brain not be synchronous across individuals?\n\ntypos:\n\n- in Materials and Methods/Shot start time (time) : correct to \"Time stamps *are* provided in seconds *from* movie onset\".\n- caption of Dataset 1: correct to \"CSV table with depicted locations and temporal progression [...]\"\n\ntable 1: for consistency, maybe the column headers should be the same as in the actual csv file?",
"responses": []
},
{
"id": "16188",
"date": "22 Sep 2016",
"name": "Dylan Wagner",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThis data note describes an annotation dataset consisting of time indices and annotations describing location and temporal information for each identified shot in the movie Forrest Gump. This dataset of movie annotations is intended to be used with the associated functional neuroimaging datasets of participants viewing the movie Forrest Gump. All in all, I think this is an excellent addition to the growing empire that is StudyForrest, one that clearly represents an enormous amount of effort.\n\nI have only minor comments:\n\nThe timing seems to be a bit off relative to the previously released dataset of scenes in the studyforrest GitHub repository. Presumably the annotation for shots and scenes should line up at scene starts but there appears to be a consistent offset of about 12ms. For example, the last scene (“School bus stop”) starts at 6944.96 in the scenes.csv and 6944.84 in the attached dataset for shots in this paper. Moreover, the shots in this annotation don’t quite line up with the shots.csv on the github repo. The ~12ms offset is too large to be a single frame. It appears the authors switched from Advene to Shotcut for movie segmentation and annotation, perhaps therein lies the source of the mismatch?\n\nCould the authors expatiate on their method of identifying shots and cuts? If memory serves, in previous datasets they used an automated method to identify shots that was subsequently edited by hand. In this dataset, it appears all shots were identified by hand. 
Were all cuts identified? Or are there special cases where two cuts appearing in close succession were considered part of one shot? For instance, in an action-heavy scene you could presumably get an overabundance of cuts, but that level of granularity isn’t really useful (nothing changes) and potentially these could be combined into a single shot. If every cut was indeed identified and annotated, then my sincere condolences to the coder!\n\nAlthough it is extremely generous of the authors to provide python code for generating descriptive data and associated figures, I’ve examined this code file and unfortunately this reviewer simply cannot support the premature use of Python 3… You can pry 2.7 from my cold dead hands. ;)\n\nFinally, I would like to again thank the authors for openly sharing this wealth of data with the community. These annotations and the associated imaging data represent a generous sharing of valuable resources, one that I have no doubt will be useful to many researchers interested in the neuroscience of naturalistic cognition.",
"responses": []
},
{
"id": "16767",
"date": "03 Oct 2016",
"name": "Ming Meng",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThis data note labels cuts in the motion picture \"Forrest Gump\" with starting time, three levels of location information, indoor/outdoor information, temporal progression, and the time of day. An automatic detection algorithm was also used to validate the labeling. It is very interesting as it provides an important tool for anyone who may be interested in studying perception, attention, and other cognitive functions. I would like to thank the authors for their efforts and for sharing this wealth of data.\n\nThe introduction and data description including materials and methods are all clearly written. I approve this article.\n\nMinor points: It might be informative to tell readers how many of the deviations between human labeling and automatic detection were human errors and how many were computer errors.\n\nPage 3, last sentence: typos?",
"responses": []
}
] | 1
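The systematic onset mismatch flagged in the second review (scenes.csv versus this shot annotation) can be quantified by pairing each onset with its nearest counterpart and taking the modal difference. A hedged sketch with made-up onset values; the real files and their layout may differ:

```python
from collections import Counter

def modal_offset(onsets_a, onsets_b):
    """Pair each onset in onsets_a with the nearest onset in onsets_b and
    return the most common rounded difference, in seconds."""
    diffs = [round(min(onsets_b, key=lambda s: abs(s - t)) - t, 2)
             for t in onsets_a]
    return Counter(diffs).most_common(1)[0][0]

# toy onsets illustrating a constant 0.12 s shift between two annotations
scene_onsets = [100.00, 2500.50, 6944.96]
shot_onsets = [99.88, 2500.38, 6944.84]
offset = modal_offset(scene_onsets, shot_onsets)
```

A modal difference that is constant across the whole movie would point to a tool-conversion offset (e.g. Advene versus Shotcut time bases) rather than to annotation error.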
|
https://f1000research.com/articles/5-2273
|
https://f1000research.com/articles/5-2269/v1
|
08 Sep 16
|
{
"type": "Research Article",
"title": "Reduced neuronal size and mTOR pathway activity in the Mecp2 A140V Rett syndrome mouse model",
"authors": [
"Sampathkumar Rangasamy",
"Shannon Olfers",
"Brittany Gerald",
"Alex Hilbert",
"Sean Svejda",
"Vinodh Narayanan"
],
"abstract": "Rett syndrome (RTT) is a neurodevelopmental disorder caused by mutation in the X-linked MECP2 gene, encoding methyl-CpG-binding protein 2. We have created a mouse model (Mecp2 A140V “knock-in” mutant) expressing the recurrent human MECP2 A140V mutation linked to an X-linked mental retardation/Rett syndrome phenotype. Morphological analyses focused on quantifying soma and nucleus size were performed on primary hippocampus and cerebellum granule neuron (CGN) cultures from mutant (Mecp2A140V/y) and wild type (Mecp2+/y) male mice. Cultured hippocampus and cerebellar granule neurons from mutant animals were significantly smaller than neurons from wild type animals. We also examined soma size in hippocampus neurons from individual female transgenic mice that express both a mutant (maternal allele) and a wild type Mecp2 gene linked to an eGFP transgene (paternal allele). In cultures from such doubly heterozygous female mice, the size of neurons expressing the mutant (A140V) allele also showed a significant reduction compared to neurons expressing wild type MeCP2, supporting a cell-autonomous role for MeCP2 in neuronal development. IGF-1 (insulin growth factor-1) treatment of neuronal cells from Mecp2 mutant mice rescued the soma size phenotype. We also found that Mecp2 mutation leads to down-regulation of the mTOR signaling pathway, known to be involved in neuronal size regulation. Our results suggest that i) reduced neuronal size is an important in vitro cellular phenotype of Mecp2 mutation in mice, and ii) MeCP2 might play a critical role in the maintenance of neuronal structure by modulation of the mTOR pathway. The definition of a quantifiable cellular phenotype supports using neuronal size as a biomarker in the development of a high-throughput, in vitro assay to screen for compounds that rescue small neuronal phenotype (“phenotypic assay”).",
"keywords": [
"Rett syndrome",
"MECP2",
"Hippocampal neuronal cultures",
"Cerebellar granule neurons",
"Neuronal soma size",
"Neuronal nuclear size",
"mTOR pathway",
"IGF-1",
"Rictor"
],
"content": "Introduction\n\nRett syndrome (RTT) is a neurodevelopmental disorder caused by mutations in the X-linked MECP2 gene encoding methyl-CpG binding protein 21. Most human cases of MECP2 mutation result in the classical form of RTT affecting predominantly girls2–5. Additionally, MECP2 gene mutations have been linked to a broad range of other clinical and neurological phenotypes. One such mutation is MECP2 A140V, seen in both male and female subjects with non-classic Rett phenotypes such as intellectual disability, parkinsonism, and neuropsychiatric symptoms6–9. The MECP2 A140V mutation is a recurrent missense mutation (c.419C>T; p.Ala140Val) that shortens the alpha helix domain of the methyl CpG binding domain (MBD) without affecting methyl binding function. Protein functional studies have shown that this mutation instead results in the disruption of MECP2 interaction with ATRX (alpha thalassemia X-linked intellectual disability syndrome)10. We have previously reported the characterization of a mouse model expressing the Mecp2 A140V mutation11. Our previous studies in hemizygous male mutants found (i) increased cell packing density, and (ii) aberrant dendrite branching, similar to pathological findings seen in human RTT and other neurodevelopmental disorders. Neuropathological studies in human RTT cases have shown a reduction in brain size, increased cell packing density, and smaller neuronal size (soma)12–17. Fine structure analysis of neurons in human RTT brain tissue revealed decreased dendritic arborization and spine density14,17,18. The most common structural abnormalities reported in Mecp2 mutant mouse models (including the Mecp2-null mice) are thinning of the cortical layers, reduction in neuronal soma size, and decreased dendritic complexity11,19–22. 
Notably, neuronal soma size is considered a robust and reliable marker for MeCP2 function23.\n\nIn the present study, we have focused on defining an in vitro neuronal phenotype using primary hippocampal and cerebellar granule neuron cultures from wild type and Mecp2 A140V male animals. Here we report results of a quantitative study examining neuronal soma size at different days in vitro (DIV), demonstrating that the neuronal size phenotype is a reliable marker of Mecp2 mutant pathology. Given that RTT occurs predominantly in females, we were interested in evaluating an in vitro model of heterozygous females. Due to X-inactivation, neuronal cultures prepared from heterozygous females are a mixture of neurons expressing mutant MeCP2 and neurons expressing wild type MeCP2. We have developed a strategy to distinguish these two populations of neurons in cultures prepared from female Mecp2-mutation carriers and quantify soma size of mutant and wild type neurons. This approach allows us to compare wild type and mutant neurons plated on a single coverslip, prepared from a single female animal, eliminating confounding variables such as genetic background and culture conditions. Studies in such neuronal cultures from female heterozygotes allow us to differentiate between cell autonomous and cell non-autonomous effects of Mecp2 mutation.\n\nThe molecular mechanisms by which Mecp2 mutation results in a reduction of neuronal soma size are not well understood. Hippo and mammalian target of rapamycin (mTOR) pathways are considered to be primary molecular regulators of cell size24. mTOR is a highly conserved serine/threonine protein kinase that participates in two distinct, multi-protein complexes, mTORC1, and mTORC225. Rictor is an essential component of the mTORC2 (rapamycin-insensitive companion of mTOR) complex26. 
In addition to the role of mTORC1, studies using a rictor knockout cell model have identified unique functions for mTORC2 in the maintenance of neuronal structure and function27. Loss of rictor expression affects neuronal size (smaller soma size), morphology, and function28. Likewise, the PI3K-AKT-mTOR signaling pathway has also been shown to regulate dendritic complexity, soma size, and spine morphology29. Biochemical studies in Mecp2-mutant mice and stem cell model systems have found a significant reduction in mTOR signaling pathway activity30,31. In recent years, BDNF (brain-derived neurotrophic factor) and IGF-1 have been pursued as therapeutic molecules in the treatment of RTT32–34. These molecules have been shown to rescue the normal neuronal size phenotype through mTOR pathway activation, thus supporting the idea that the mTOR pathway may play a significant role in RTT29,34,35. To investigate the molecular mechanism of neuronal size reduction in Mecp2 mutants, we analyzed the mTOR pathway in wild type and Mecp2 A140V brain tissues. Here we report that downregulation of rictor and alteration of the mTOR signaling pathway in the Mecp2 A140V brain represents a key element linking MeCP2 mutation to reduced neuronal size.\n\n\nMaterials and methods\n\nAnimals: The Institutional Animal Care and Use Committee (IACUC) of St. Joseph’s Hospital and Medical Center approved all animal experiments performed in this study (Protocol Approval #304 and Animal Welfare Assurance #A351-01). We have previously reported the construction and characterization of Mecp2 A140V \"knock-in\" mice (B6N.129-Mecp2tm1.1Vnar/J) used in this study11. For the neuronal morphological studies, cell culture, and western blot experiments, we utilized mutant A140V hemizygous males generated from our in-house breeding protocol and control animals from a C57BL/6NCrL background (Charles River Laboratories). 
We employed a novel strategy for generating heterozygous female mice in which we can distinguish neurons expressing wild type (WT) MeCP2 from those expressing the A140V mutant MeCP2 (Figure 1). Mecp2 A140V carrier females were crossed with male transgenic mice carrying the Enhanced Green Fluorescent Protein (X-EGFP) gene integrated into the X-chromosome (Tg (GFPX) 4 Nagy/J, Jackson Laboratory (USA)). All female offspring from such crosses express GFP in those cells in which the paternal X-chromosome is active. About 50% of the female animals generated in such a cross will be heterozygous for the A140V mutation (A140V: X-EGFP). In brain tissue and cultures prepared from these females, neurons expressing WT MeCP2 will be GFP-positive, while those expressing mutant MeCP2 will not.\n\nPrimary neuronal cultures were prepared from the female Mecp2 A140V X-EGFP compound heterozygotes (A140V:X-EGFP) for analyzing the neuronal soma size of mutant and wild type neurons in single coverslips.\n\nCell culture: Dissociated cultures of hippocampal neurons were prepared from control and mutant (A140V) male mice as well as from female transgenic mice (A140V: X-EGFP) at the postnatal age of 0–1 days (P0–P1) as described previously, with modifications36. The hippocampi were dissected from the brain, and the tissues were enzymatically digested with 0.05% trypsin followed by mechanical dissociation using fire-polished Pasteur pipettes (trituration). The triturated suspension was allowed to settle for five minutes, and the supernatant was collected in a new tube and centrifuged for five minutes at 200 × g to pellet the neurons. Final cell suspensions were prepared, counted, and plated on a poly-D-lysine coated glass coverslip. Hippocampal cultures were maintained in Neurobasal A with B-27 supplement without insulin (Life Technologies, CA). 
To prevent clumping and allow for easy measurement of cell body size, we tested a range of seeding densities, from 5,000–200,000 cells per coverslip in a 24-well plate. From our preliminary studies, we selected 50,000 cells per coverslip, and at selected days in vitro, coverslips containing cultured neurons were removed, and prepared for immunofluorescence after fixing with 4% paraformaldehyde. In some experiments, the cultures were treated with varying concentrations of IGF-1 (R&D Systems, Inc.) for 24 hrs. The cerebellar granule cells (CGN) were prepared at the postnatal age of 6 days (P6). The cerebellum was dissected from the brain, and the tissues were enzymatically digested with 0.05% trypsin followed by mechanical dissociation using fire-polished Pasteur pipettes. CGN cells were prepared and processed similarly to the hippocampus cell preparation. In the final step, the collected cells were left on a poly-D-lysine coated coverslip for 20 minutes in a humidified CO2 incubator (5% CO2/95% air) at 37°C. The heavier cells tended to settle down on the coverslips while the small cerebellar granule neurons floated in the media. At the end of the incubation period, loosely adhered and floating granule neurons were dislodged, counted, and plated on poly-D-lysine coated glass coverslips with a seeding density of ~100,000 cells per coverslip in a 24-well plate.\n\nImmunostaining: Mice were anesthetized and perfused transcardially with saline followed by 4% buffered paraformaldehyde (PFA). The brains were removed and post-fixed in 4% PFA overnight at 4°C. Free-floating sections were used for immunohistochemistry. Coronal sections (40 µm) of the brain were prepared using a Vibratome 1000 (Vibratome 1000 plus; Ted Pella Inc.). The sections were stored at -20°C or at 4°C in PBS (phosphate buffered saline) containing 0.05% sodium azide until use. Immunostaining of the neuronal cells and the tissue sections was done following established protocols. 
The tissue sections were permeabilized using 0.3% Triton X-100 in PBS for 30 minutes, blocked with 10% normal goat serum (NGS), followed by primary antibody staining in PBS containing 5% NGS overnight at 4°C. Neuronal cells on coverslips were fixed with 4% paraformaldehyde and sucrose in PBS for 15 minutes at room temperature and permeabilized with 0.2% Triton X-100 for 5 min. Tissues and the cells were stained with the primary antibodies, which included mouse anti-NeuN (1:1000 dilution) (Abcam; ab177487), rabbit anti-Lamin B (1:1000 dilution) (Abcam; ab16048), rabbit anti-GFP (1:500 dilution) (Millipore, AB3080) and rabbit/mouse anti-beta-III tubulin (1:1000 dilution) (Abcam, ab18207) at 4°C overnight, and washed four times with PBS, fifteen minutes per wash. Appropriate secondary antibodies (1:2500 dilutions) such as Alexa Fluor 488 Goat Anti-Rabbit (Life Technologies, A11034), Alexa Fluor 594 Goat Anti-Rabbit (Life Technologies, R37117), Alexa Fluor 488 Goat Anti-Mouse (Life Technologies, A11029), and Alexa Fluor 594 Goat Anti-Mouse (Life Technologies, A11005) were incubated for two hours at room temperature and washed four times with PBS, fifteen minutes per wash. After primary and secondary antibody staining, the sections were counterstained with 4′,6-diamidino-2-phenylindole (DAPI), mounted with Prolong Gold mounting media (Invitrogen, Carlsbad, California), cover-slipped and stored at 4°C in the dark until imaged with a confocal laser-scanning microscope.\n\nMicroscopy and size measurement: To analyze neuronal morphology and quantify neuronal soma and nuclear size, we utilized a Zeiss LSM710 confocal laser-scanning microscope (Carl Zeiss, Germany). Confocal images were captured with 20X and 40X objectives using the Zeiss microscope, and the soma size was quantified by tracing the outline of the neuronal cell body using beta-III tubulin staining. 
The cross-sectional area of selected neurons was measured using both Zen blue (Carl Zeiss, Germany) and ImageJ software (Version 1.48b) (NIH, Bethesda, Maryland). By dual staining with both DAPI/Lamin-B and TUJ1, we also measured the nucleus size of identified neurons. The nucleus cross-sectional area (nuclear size) in neuronal cells was determined by tracing the outline of the nucleus, using Lamin-B staining for the nuclear envelope or DAPI staining of nuclear DNA.\n\nWestern blot: Fresh brain tissues from wild type and mutant male animals were dissected and washed twice with ice-cold PBS. Tissue extracts were prepared according to the manufacturer’s instruction using N-PER (Neuronal Protein Extraction Reagent, Thermo Scientific, IL, USA) containing protease inhibitors II, III and a phosphatase inhibitor cocktail (Sigma-Aldrich, MO, USA). Supernatants were collected after centrifugation at 12,000 g for 20 minutes at 4°C, and protein concentrations were determined using a BCA protein assay kit according to the manufacturer’s instruction (Thermo Scientific, IL, USA). Equal amounts of total protein (25–50 μg) from the samples were loaded and separated by electrophoresis using NuPAGE Novex Bis-Tris 4%–12% gels (Invitrogen) and transferred to PVDF membranes (Thermo Scientific, IL, USA). Primary antibodies against mTOR pathway molecules, all obtained from Cell Signaling Technology, MA, USA (Sampler Kits #9862, #9864, and #9964), and against beta-III tubulin (ab52901) (Abcam, MA, USA) were used at a dilution of 1:1000 in TBST. Goat anti-rabbit IgG (H+L) conjugated to DyLight™ 680 fluorescent dye (Cell Signaling Technology, MA, USA; #5366) was used according to the manufacturer’s instruction to visualize the bands. 
The PVDF membranes were analyzed using an infrared imaging system (Odyssey; LI-COR), and the signal intensity was determined with imaging software (Image Studio Lite 4.0, LI-COR) and exported to a computer for graphic representation.\n\nStatistics: For all quantitative measurements, statistical analyses of data were performed using GraphPad Prism 6 software. Data are reported as mean values, with error bars indicating standard error of the mean. Quantitative measurements of neuronal soma and nuclear size were compared using the non-parametric two-tailed Mann–Whitney U test. For the frequency distributions, non-linear regressions with the best-fit values were utilized. Quantitative western blots in the biological replicates were analyzed using an unpaired two-tailed t-test. A P value of ≤0.05 was considered significant for all statistical tests.\n\n\nResults\n\nOur laboratory has reported the construction and initial characterization of a mouse model expressing the MeCP2 A140V mutation10. We demonstrated increased cell packing density in the dentate gyrus, CA1, CA2, and CA3 of the hippocampus, frontal cortex, olfactory bulbs, and cerebellum in Mecp2 A140V mice. Increased neuronal cell packing density in the postnatal brain appears to be a result of smaller soma size of mature neurons in Rett syndrome human brain12,13,16. We examined the soma size of cultured primary hippocampus neurons from male Mecp2 A140V (Mutant) and wild type animals. We chose to quantify neuronal soma size in vitro, as the definition of the soma boundary and measurement of soma size in tissues is much more difficult to perform in a consistent manner. Some intrinsic and extrinsic factors may affect neuronal soma size in vitro, such as variations in handling, cell seeding density, media pH and composition, and shrinkage during fixation. 
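The group comparison described under Statistics (per-neuron cross-sectional areas, two-tailed Mann–Whitney U test) can be sketched in a few lines, using scipy as a stand-in for GraphPad Prism. The area values below are synthetic placeholders, not the study's measurements:

```python
# Hedged sketch of the soma-size comparison: two samples of per-neuron
# cross-sectional areas (µm²) compared with a two-tailed Mann–Whitney U test.
from scipy.stats import mannwhitneyu

wt_areas = [110.0 + i for i in range(20)]    # wild type neurons (synthetic)
mut_areas = [85.0 + i for i in range(20)]    # Mecp2 A140V neurons (synthetic)

# non-parametric two-tailed test; significance threshold p <= 0.05
stat, pvalue = mannwhitneyu(wt_areas, mut_areas, alternative="two-sided")
significant = pvalue <= 0.05
```

A non-parametric test like this makes no normality assumption about the area distributions, which is useful given the right-skew often seen in soma-size data.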
We prepared our cultures in a consistent manner to minimize variability, and compared samples of wild type and mutant cultures prepared and grown together. We used an immunofluorescence approach to analyze the soma size from cultured neuronal cells and measured cellular cross-sectional area (as a measure of soma size) at three days in vitro (DIV) and 21 DIV (Figure 2A & B). We observed that the soma size of Mecp2 A140V mutant neurons was significantly (p<0.01) reduced compared to wild type, at 3 DIV [116±3.2 µm2; n=170 (Wt) vs 91±3.3 µm2; n=151 (Mut)] and at 21 DIV [127±4.0 µm2; n=40 (Wt) vs 101.0±5.6 µm2; n=40 (Mut)] (Figure 2C). These results indicate that the size reduction of mutant neuronal cells at both 3 and 21 DIV is comparable. It also suggests the presence of a key size regulatory mechanism for postnatal neurons and the disruption of this mechanism in RTT neurons.\n\nNeurons were stained with antibodies against the neuron-specific marker beta-III Tubulin (Tuj-1). Soma size, defined as the cross-sectional area of an outline around the neuronal soma, was measured using image analysis software. Representative confocal images were obtained with a 40X objective. Immunofluorescence images showing: (A) wild type neuron, and (B) MeCP2 A140V neuron. (C) Quantification of neuronal soma size of wild type and mutant neurons at DIV 3 and 21 (Mean ± SEM, *, p <0.05, Student's t test).\n\nWe next analyzed the distributions of nuclear and soma size in the hippocampus and cerebellar granule cells at 5 DIV. As expected, comparison of mutant and wild type soma and nuclear sizes in these neurons also revealed a significant difference in mean size. The soma size of cultured hippocampus neurons at 5 DIV from the A140V males was significantly smaller than wild type (Figure 3A). In addition to the established role of MeCP2 in regulating neuronal soma size, the global chromatin modulating action of MeCP2 is also implicated in the regulation of postnatal neuronal nucleus size. 
Recent studies of embryonic stem cell (ESC)-derived neurons have shown a correlation between nucleus size and levels of MeCP2 expression37. The nucleus size was measured as a surface area using DAPI and Lamin B staining. Quantification of nuclear size shows that mutant nuclei were smaller when compared to wild type (Figure 3B). We then examined the distributions of size in cultured hippocampus neurons and found that mutant neuronal soma and nucleus size were shifted towards smaller areas when compared to wild type neurons (Figure 3C, D). We next examined the size of primary cerebellar granule neurons (CGN) from Mecp2 A140V (Mutant) compared to wild type. CGN cultures offer the possibility of a more homogeneous cell culture compared to hippocampus neuronal culture, thus avoiding the influence of heterogeneity on size measurements. As expected, CGNs from mutant mice recapitulated the phenotype of smaller soma and nucleus size observed in hippocampus neurons (Figure 2). The soma size of CGNs at 5 DIV from the male mutant brain was significantly smaller, and the nucleus size was also reduced, when compared to wild type (Figure 3E, F). Further, the distributions of mutant neuronal soma and nucleus size were shifted towards smaller size, when compared with wild type neurons, similar to the result with hippocampal cultures (Figure 3G, H). From the distribution analysis, we could observe that the largest quartiles (bin) of soma and nucleus sizes in mutant neurons are significantly smaller than the largest quartile of the wild type neurons.\n\nAverage neuronal soma (A) and nuclear (B) size of hippocampus cells at 5 DIV from mutant and wild type. Frequency distributions of hippocampus soma (C) and nuclear (D) size differ significantly between mutant and wild type (p < 0.01). Average CGN neuronal soma (E) and nuclear (F) size at 5 DIV from mutant and wild type animals. 
Frequency distributions of CGN soma (G) and nuclear (H) areas differ significantly between mutant and wild type (p < 0.05).\n\nAs described above, we generated female Mecp2 A140V heterozygotes, which also carried an eGFP transgene on the wild type (paternal) X-chromosome (Figure 1, Figure 4A). In hippocampal cultures prepared from such female mice (Mecp2 A140V; X-eGFP), neurons expressing the wild type MeCP2 allele (“WT neurons”) are GFP (+) while neurons expressing MeCP2 A140V (“mutant neurons”) are GFP (-) (Figure 4B). After three DIV, coverslips were fixed and stained with mouse anti-beta-III tubulin antibody and rabbit anti-GFP antibody and counterstained with anti-rabbit 488 and anti-mouse 594. This strategy distinguished wild type (WT) neurons that are GFP positive (greenish yellow) from mutant (MUT) neurons that are GFP negative (red) on a single coverslip. We measured the size of MUT and WT neurons from such female cultures and found a significant reduction in soma size of mutant neurons [81.2±2.9 µm2; n=135] compared to wild type [103±3.2 µm2; n=137] (Figure 4C).\n\n(A) Confocal image of the hippocampal dentate gyrus from a Mecp2 A140V: X-eGFP mouse brain coronal section. Staining with anti-NeuN (red) and anti-GFP (green) shows mosaicism. (B) Hippocampal neurons from female Mecp2 A140V: X-eGFP mice were isolated and cultured on a single coverslip. Neurons were stained at DIV 3 with anti-beta-III Tubulin (Tuj-1) (Red), and anti-GFP (Green) and the soma cross-sectional area was measured. MeCP2 A140V neurons do not express GFP and are stained by TuJ-1 alone (red), while wild type neurons are stained by both GFP and Tuj-1 (Greenish yellow). All the confocal images were captured with a 20X objective. (C) Quantification of neuronal soma size of wild type and mutant neurons in these mosaic cultures (Mean ± SEM, *, p <0.05, Student's t test).\n\nIGF-1 is currently an attractive compound for the treatment of Rett syndrome. 
In preclinical studies, IGF-1 treatment of Mecp2 mutant animals improves disease-related phenotypes33. To investigate whether IGF-1 could reverse the soma size reduction in mutant neurons, we treated the cultures of mutant hippocampus neuronal cells with IGF-1 at a concentration of 100 ng/ml for 24 hrs. Image analyses indicate that the recombinant IGF-1 treatment resulted in increased neuronal soma size of the mutant cells (Figure 5A & B) [90.87±3.4 µm2; n=119 (IGF-1 non-treated) vs 115.2±3.9 µm2; n=116 (IGF-1 treated)] (Figure 5C). The treatment of wild type neurons with IGF-1 also increased the cell size, but the increase did not reach significance.\n\nPrimary hippocampal cultures from Mecp2 A140V male animals were treated with IGF-1 (100 ng/ml) for 24 hrs on 3 DIV. Neurons were stained with anti-beta-III Tubulin (Tuj-1), and cross-sectional area of neurons was measured. Representative confocal images were obtained with a 20X objective. Immunofluorescence images showing (A) untreated neurons and (B) IGF-1-treated neurons. (C) Quantification of neuronal soma size of mutant neurons without and with IGF-1 treatment revealed rescue of neuronal soma size in the mutant neurons. (Mean ± SEM, *, p <0.05, Student's t-test).\n\nSeveral new lines of investigation indicate that the mTOR pathway is involved in the regulation of size in mammalian cells, including neurons. Details of the connection between the mTOR pathway and MECP2 mutation are not clear, but a recent study has shown that AKT/mTOR pathway activity is reduced in Mecp2 mutant neurons30,31. BDNF or IGF-1 treatment of mutant animals activates the AKT/mTOR pathway and rescues neuronal soma size30. Therefore, we examined mTOR pathway proteins in Mecp2 A140V mutant and wild type animals by Western blot analysis. We compared the expression of major mTOR proteins including total mTOR, rictor, and raptor (Figure 6A). 
Western blot analysis displayed a significant decrease (p<0.05) in the expression of rictor in mutant Mecp2 A140V brain tissue (Figure 6B), indicating that the mTORC2 pathway is altered in Mecp2 A140V mutant mice. Given the role of mTORC2 as an upstream activator of AKT, this assumes greater significance. We also explored the phosphorylation of mTOR (Figure 6C) and found that mTOR phosphorylation at S2448 (Figure 6D) was not altered in mutant tissues, but we observed a decrease in mTOR S2481 phosphorylation, which in turn was associated with a reduction in 4E-BP1 phosphorylation (Figure 6E) (p<0.05). We did not see changes in the phosphorylation status of S6 ribosomal protein, which may also reflect the status of p70 S6K1.\n\n(A) Western blots of whole brain lysate from wild type and mutant mice were probed simultaneously with antibodies against rictor, raptor, and total mTOR. Tubulin was used as a loading control to normalize protein expression, and the images were scanned and quantified for protein levels. (B) The graph shows a significant reduction in rictor protein level in mutant compared to control animals after normalization to beta-III tubulin levels (n=3 each) (Mean ± SEM, *p <0.05, Student’s t-test). (C) Western blot analysis of brain lysate to detect phosphorylated mTOR (S2448 and S2481), phosphorylated S6 ribosomal protein, and 4E-BP1 and phospho GSK-3β levels in whole brain lysate of 16-week-old mutant and wild type mice. To avoid interference in the detection of phosphorylated forms, some of the proteins were detected on separate membranes. Using immunoblot, we assessed total and phosphorylated protein levels. Tubulin was used as an appropriate loading control in all panels. 
(D) Phosphorylated mTOR (pS2481) and (E) Phosphorylated 4E-BP1 protein levels in mutant compared to control animals (n=4 each) (Mean ± SEM, *p <0.05, Student’s t-test).\n\n\nDiscussion\n\nRett syndrome is characterized by genotypic, phenotypic, and biological complexity in human subjects. MeCP2 has been shown to affect the expression of many genes and to influence many distinct cellular processes in neurons and glia. This complexity poses a significant challenge in understanding the neurobiology of this disorder. Development of mouse models expressing human MeCP2 mutations has profoundly advanced our understanding of disease pathogenesis. Mecp2 A140V is one such mouse model, developed in our laboratory, and has previously been described to have cellular abnormalities similar to those seen in RTT11. This is the first report of systematic and quantitative studies of soma and nucleus size in primary neuronal cultures from Mecp2 mutant animals. In the present study, we have demonstrated a reduction in soma and nucleus size in cultured hippocampal neurons from hemizygous male Mecp2 A140V mutant mice. The observed smaller soma size in Mecp2 A140V neurons is consistent with that seen in other Mecp2 mutant models19,38–42. Studies from Mecp2−/y (Bird mice) and Mecp2T158A/y have demonstrated smaller neuronal size at early- and late-symptomatic time points38,43. Studies in Mecp2-null and Nestin-Cre Mecp2 conditional mutants revealed that the hippocampus, cerebral cortex, and cerebellum were smaller compared to wild type; however, there were no differences in brain architecture19. Soma size of the hippocampal CA2 neurons in these mutant mice was found to be 15−25% less than that of the controls19,43. CamK-Cre-mediated deletion of the Mecp2 gene in postnatal neurons also resulted in a smaller neuron phenotype, albeit less severe than that of germline or Nestin-Cre-mediated deletion of Mecp219. 
In the present study, soma size reduction in Mecp2 A140V mutant neurons was in a similar range. Over-expression of MeCP2 in a human SH-SY5Y neuroblastoma cell line was shown to increase nucleus size, suggesting a general function for MeCP2 in nucleus organization. Our data indicate a significant reduction in nuclear size of mutant neurons compared to wild type neurons. This observation is consistent with another recent study, which demonstrated that neurons generated from mouse embryonic stem cells (ESCs) lacking Mecp2 had smaller nuclear size37.\n\nPrevious studies have suggested both cell autonomous and non-autonomous effects of MeCP2 mutation40,41,44–49. However, these studies have not determined whether cell autonomous or non-autonomous functions contribute to distinct neuronal phenotypes in female Mecp2 mutation heterozygotes. To investigate this problem in female mice, we devised an innovative approach utilizing heterozygous female carriers of the A140V mutation, which also carried an X-linked GFP allele on the normal X-chromosome. In these animals, wild type neurons express GFP, while the neurons carrying mutant Mecp2 (MUT neurons) do not express GFP. GFP-positive and GFP-negative neurons were resolved into different colors (green and red) by appropriate immunostaining. In hippocampal cultures prepared from such heterozygous Mecp2 female carriers, we were able to differentiate between WT and MUT neurons plated on a single coverslip and quantify the size difference. The size of the MUT neurons from such female heterozygotes was similar to the neuronal size in cultures from hemizygous male Mecp2 A140V animals. Thus, neither contact with wild type neurons nor the presence of factors secreted from wild type neurons rescued the morphological abnormalities in mutant neurons. 
Our results suggest that MeCP2 acts in a cell-autonomous manner in determining neuronal morphology.\n\nIGF-1 is an emerging therapeutic agent for the treatment of RTT and related conditions. Treatment of MeCP2-null animals with IGF-1 peptide rescued morphological abnormalities33. Further, recombinant human IGF1 (rhIGF1) treatment of Mecp2 null mice resulted in improvement of physiological and behavioral symptoms50. We have demonstrated that treatment of mutant neurons in vitro with recombinant IGF-1 restored the neuronal soma size almost to that of wild type neurons. Regulation of cell and organ size is mediated by numerous factors ranging from nutritional status to growth factors such as insulin and insulin-like growth factors. Recent studies have shown that the PI3K–PTEN–AKT signaling pathway is critical in the regulation of neuronal soma size29. Dysregulation of Akt/mTOR signaling and protein synthesis has been demonstrated to be an important molecular feature in Mecp2 mutant models and embryonic stem cell-derived neurons30,31. Our biochemical analysis of the mTOR pathway indicates a significant reduction in the rictor protein level in Mecp2 A140V mouse brain. New evidence suggests that rictor specifically affects brain and neuronal size compared to other organs28. Loss of rictor in the central nervous system (CNS) resulted in smaller neurons, most likely a cell-autonomous effect. We suggest that MeCP2-mediated regulation of rictor expression may be an essential link between MeCP2 and the mTOR pathway. Also, mTOR phosphorylation at S2448 (the form associated with mTOR complex 1) was not altered, but there was a decrease in mTOR phosphorylation at S2481 (the form associated with mTOR complex 2)51. Downstream signaling of the mTOR pathway leads to activation of S6K1 and 4EBP1/eIF4E, which independently regulate mammalian cell size through a translational mechanism52. 
We found a reduction in 4E-BP1 phosphorylation in Mecp2 A140V brain, independent of S6K1 activity. Decreased phosphorylation of 4E-BP1 has been shown to reduce cell size through downregulation of the eIF3 complex, independent of S6K1 activity53. The transcriptional regulation of mTORC1 and mTORC2 components by MeCP2 is not known, but our results confirm that dysregulation of mTORC2 signaling is associated with Mecp2 mutation. We thus suggest that MeCP2 regulates mTOR pathway activity in neurons and that mutation of MECP2 downregulates the mTOR pathway, producing the characteristic smaller-neuron phenotype of Rett syndrome.\n\nHere we show that Mecp2 mutation in a mouse model directly affects the size of cultured cerebellar and hippocampal neurons (in vitro). Our data demonstrate the smaller neuronal size phenotype in primary neuronal cultures prepared from Mecp2 mutant animals. We have also shown that IGF-1 treatment in vitro rescues this cellular phenotype, and we report the down-regulation of mTORC1 and mTORC2 in Mecp2 mutant brain tissue as a molecular correlate of the neuronal size phenotype. The present analysis was limited to measurement of cell size and its possible association with a molecular pathway that regulates cell size. Our study opens the possibility of a larger role for the mTORC1 and mTORC2 pathways in RTT, and we are currently examining its pathophysiologic meaning. The strong correlation between cell size and mTOR activity in MeCP2 models indicates that very basic cell biological pathways play a critical role in this characteristic cellular phenotype. We are now in a position to investigate the molecular pathways that connect MeCP2 mutation, mTOR down-regulation, decreased neuronal size, and aberrant neuronal function. 
Improved assays and tools (kinase assays, receptor activation, RNAi) based on molecular target-centric approaches for high-throughput screens are a common approach to improving translational success. However, a recent meta-analysis suggests that phenotype-based assays have greater success in FDA approval for first-in-class drugs54,55. For conditions such as RTT and autism, where there is a lack of targetable mechanisms, a phenotype-based assay may be the only practical approach for drug screening. Our work suggests that neuronal size may be a useful, quantitative, in vitro phenotypic marker for the development of high-throughput screening (HTS) assays to discover novel therapeutic agents. Repurposing of compounds already approved by the FDA for other uses, or discovery of new candidate compounds, may progress quickly through smaller-scale preclinical animal testing and then into clinical trials.\n\n\nData availability\n\nF1000Research: Dataset 1. Raw data for 'Reduced neuronal size and mTOR pathway activity in the Mecp2 A140V Rett syndrome mouse model', 10.5256/f1000research.8156.d13435256",
"appendix": "Author contributions\n\n\n\nVN and SR were involved in project design and writing. SR was involved in the performance of all experiments and data analysis. AH and BG helped in editing this manuscript. SO and SS were responsible for maintenance, breeding, and genotyping of animals. SO and BG were involved in the preparation of neuronal cultures, quantification of neuronal size and western blot experiments. AH was involved in sample preparation and performing the western blot experiments.\n\n\nCompeting interests\n\n\n\nThe authors have no competing interests.\n\n\nGrant information\n\nSupported by intramural funds from the Barrow Neurological Foundation and the Center for Rare Childhood Disorders at TGen to Vinodh Narayanan.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nAmir RE, Van den Veyver IB, Wan M, et al.: Rett syndrome is caused by mutations in X-linked MECP2, encoding methyl-CpG-binding protein 2. Nat Genet. 1999; 23(2): 185–8. PubMed Abstract | Publisher Full Text\n\nZappella M: The Rett girls with preserved speech. Brain Dev. 1992; 14(2): 98–101. PubMed Abstract | Publisher Full Text\n\nVan den Veyver IB, Zoghbi HY: Mutations in the gene encoding methyl-CpG-binding protein 2 cause Rett syndrome. Brain Dev. 2001; 23(Suppl 1): S147–51. PubMed Abstract | Publisher Full Text\n\nAmir RE, Van den Veyver IB, Schultz R, et al.: Influence of mutation type and X chromosome inactivation on Rett syndrome phenotypes. Ann Neurol. 2000; 47(5): 670–9. PubMed Abstract | Publisher Full Text\n\nZappella M, Meloni I, Longo I, et al.: Study of MECP2 gene in Rett syndrome variants and autistic girls. Am J Med Genet B Neuropsychiatr Genet. 2003; 119B(1): 102–7. PubMed Abstract | Publisher Full Text\n\nVenkateswaran S, McMillan HJ, Doja A, et al.: Adolescent onset cognitive regression and neuropsychiatric symptoms associated with the A140V MECP2 mutation. 
Dev Med Child Neurol. 2014; 56(1): 91–4. PubMed Abstract | Publisher Full Text\n\nKlauck SM, Lindsay S, Beyer KS, et al.: A mutation hot spot for nonspecific X-linked mental retardation in the MECP2 gene causes the PPM-X syndrome. Am J Hum Genet. 2002; 70(4): 1034–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCouvert P, Bienvenu T, Aquaviva C, et al.: MECP2 is highly mutated in X-linked mental retardation. Hum Mol Genet. 2001; 10(9): 941–6. PubMed Abstract | Publisher Full Text\n\nOrrico A, Lam C, Galli L, et al.: MECP2 mutation in male patients with non-specific X-linked mental retardation. FEBS Lett. 2000; 481(3): 285–8. PubMed Abstract | Publisher Full Text\n\nNan X, Hou J, Maclean A, et al.: Interaction between chromatin proteins MECP2 and ATRX is disrupted by mutations that cause inherited mental retardation. Proc Natl Acad Sci U S A. 2007; 104(8): 2709–14. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJentarra GM, Olfers SL, Rice SG, et al.: Abnormalities of cell packing density and dendritic complexity in the MeCP2 A140V mouse model of Rett syndrome/X-linked mental retardation. BMC Neurosci. 2010; 11: 19. PubMed Abstract | Publisher Full Text | Free Full Text\n\nArmstrong DD: Neuropathology of Rett syndrome. Ment Retard Dev Disabil Res Rev. 2002; 8(2): 72–6. PubMed Abstract | Publisher Full Text\n\nBauman ML, Kemper TL, Arin DM: Microscopic observations of the brain in Rett syndrome. Neuropediatrics. 1995; 26(2): 105–8. PubMed Abstract | Publisher Full Text\n\nWeng SM, Bailey ME, Cobb SR: Rett syndrome: from bed to bench. Pediatr Neonatol. 2011; 52(6): 309–16. PubMed Abstract | Publisher Full Text\n\nWenk GL: Rett syndrome: neurobiological changes underlying specific symptoms. Prog Neurobiol. 1997; 51(4): 383–91. PubMed Abstract | Publisher Full Text\n\nBauman ML, Kemper TL, Arin DM: Pervasive neuroanatomic abnormalities of the brain in three cases of Rett’s syndrome. Neurology. 1995; 45(8): 1581–6. 
PubMed Abstract | Publisher Full Text\n\nArmstrong DD, Dunn K, Antalffy B: Decreased dendritic branching in frontal, motor and limbic cortex in Rett syndrome compared with trisomy 21. J Neuropathol Exp Neurol. 1998; 57(11): 1013–7. PubMed Abstract | Publisher Full Text\n\nArmstrong D, Dunn JK, Antalffy B, et al.: Selective dendritic alterations in the cortex of Rett syndrome. J Neuropathol Exp Neurol. 1995; 54(2): 195–201. PubMed Abstract | Publisher Full Text\n\nChen RZ, Akbarian S, Tudor M, et al.: Deficiency of methyl-CpG binding protein-2 in CNS neurons results in a Rett-like phenotype in mice. Nat Genet. 2001; 27(3): 327–31. PubMed Abstract | Publisher Full Text\n\nKishi N, Macklis JD: MECP2 is progressively expressed in post-migratory neurons and is involved in neuronal maturation rather than cell fate decisions. Mol Cell Neurosci. 2004; 27(3): 306–21. PubMed Abstract | Publisher Full Text\n\nStearns NA, Schaevitz LR, Bowling H, et al.: Behavioral and anatomical abnormalities in Mecp2 mutant mice: a model for Rett syndrome. Neuroscience. 2007; 146(3): 907–21. PubMed Abstract | Publisher Full Text\n\nSmrt RD, Eaves-Egenes J, Barkho BZ, et al.: Mecp2 deficiency leads to delayed maturation and altered gene expression in hippocampal neurons. Neurobiol Dis. 2007; 27(1): 77–89. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWang IT, Reyes AR, Zhou Z: Neuronal morphology in MeCP2 mouse models is intrinsically variable and depends on age, cell type, and Mecp2 mutation. Neurobiol Dis. 2013; 58: 3–12. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTumaneng K, Russell RC, Guan KL: Organ size control by Hippo and TOR pathways. Curr Biol. 2012; 22(9): R368–79. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCosta-Mattioli M, Monteggia LM: mTOR complexes in neurodevelopmental and neuropsychiatric disorders. Nat Neurosci. 2013; 16(11): 1537–43. 
PubMed Abstract | Publisher Full Text\n\nSarbassov DD, Ali SM, Kim DH, et al.: Rictor, a novel binding partner of mTOR, defines a rapamycin-insensitive and raptor-independent pathway that regulates the cytoskeleton. Curr Biol. 2004; 14(14): 1296–302. PubMed Abstract | Publisher Full Text\n\nUrbanska M, Gozdz A, Swiech LJ, et al.: Mammalian target of rapamycin complex 1 (mTORC1) and 2 (mTORC2) control the dendritic arbor morphology of hippocampal neurons. J Biol Chem. 2012; 287(36): 30240–56. PubMed Abstract | Publisher Full Text | Free Full Text\n\nThomanetz V, Angliker N, Cloëtta D, et al.: Ablation of the mTORC2 component rictor in brain or Purkinje cells affects size and neuron morphology. J Cell Biol. 2013; 201(2): 293–308. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKumar V, Zhang MX, Swank MW, et al.: Regulation of dendritic morphogenesis by Ras-PI3K-Akt-mTOR and Ras-MAPK signaling pathways. J Neurosci. 2005; 25(49): 11288–99. PubMed Abstract | Publisher Full Text\n\nLi Y, Wang H, Muffat J, et al.: Global transcriptional and translational repression in human-embryonic-stem-cell-derived Rett syndrome neurons. Cell Stem Cell. 2013; 13(4): 446–58. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRicciardi S, Boggio EM, Grosso S, et al.: Reduced AKT/mTOR signaling and protein synthesis dysregulation in a Rett syndrome animal model. Hum Mol Genet. 2011; 20(6): 1182–96. PubMed Abstract | Publisher Full Text\n\nKhwaja OS, Ho E, Barnes KV, et al.: Safety, pharmacokinetics, and preliminary assessment of efficacy of mecasermin (recombinant human IGF-1) for the treatment of Rett syndrome. Proc Natl Acad Sci USA. 2014; 111(12): 4596–601. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTropea D, Giacometti E, Wilson NR, et al.: Partial reversal of Rett Syndrome-like symptoms in MeCP2 mutant mice. Proc Natl Acad Sci USA. 2009; 106(6): 2029–34. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nLi W, Pozzo-Miller L: BDNF deregulation in Rett syndrome. Neuropharmacology. 2014; 76(Pt C): 737–46. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBray N: Neurodevelopmental disorders: righting Rett syndrome with IGF1. Nat Rev Drug Discov. 2014; 13(9): 653. PubMed Abstract | Publisher Full Text\n\nKaech S, Banker G: Culturing hippocampal neurons. Nat Protoc. 2006; 1(5): 2406–15. PubMed Abstract | Publisher Full Text\n\nYazdani M, Deogracias R, Guy J, et al.: Disease modeling using embryonic stem cells: MeCP2 regulates nuclear size and RNA synthesis in neurons. Stem Cells. 2012; 30(10): 2128–39. PubMed Abstract | Publisher Full Text\n\nRobinson L, Guy J, McKay L, et al.: Morphological and functional reversal of phenotypes in a mouse model of Rett syndrome. Brain. 2012; 135(Pt 9): 2699–710. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGuy J, Gan J, Selfridge J, et al.: Reversal of neurological defects in a mouse model of Rett syndrome. Science. 2007; 315(5815): 1143–7. PubMed Abstract | Publisher Full Text\n\nBallas N, Lioy DT, Grunseich C, et al.: Non-cell autonomous influence of MeCP2-deficient glia on neuronal dendritic morphology. Nat Neurosci. 2009; 12(3): 311–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTaneja P, Ogier M, Brooks-Harris G, et al.: Pathophysiology of locus ceruleus neurons in a mouse model of Rett syndrome. J Neurosci. 2009; 29(39): 12187–95. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRietveld L, Stuss DP, McPhee D, et al.: Genotype-specific effects of Mecp2 loss-of-function on morphology of Layer V pyramidal neurons in heterozygous female Rett syndrome model mice. Front Cell Neurosci. 2015; 9: 145. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGiacometti E, Luikenhuis S, Beard C, et al.: Partial rescue of MeCP2 deficiency by postnatal activation of MeCP2. Proc Natl Acad Sci USA. 2007; 104(6): 1931–6. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nKishi N, Macklis JD: MeCP2 functions largely cell-autonomously, but also non-cell-autonomously, in neuronal maturation and dendritic arborization of cortical pyramidal neurons. Exp Neurol. 2010; 222(1): 51–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBelichenko NP, Belichenko PV, Mobley WC: Evidence for both neuronal cell autonomous and nonautonomous effects of methyl-CpG-binding protein 2 in the cerebral cortex of female mice with Mecp2 mutation. Neurobiol Dis. 2009; 34(1): 71–7. PubMed Abstract | Publisher Full Text\n\nSamaco RC, Mandel-Brehm C, Chao HT, et al.: Loss of MeCP2 in aminergic neurons causes cell-autonomous defects in neurotransmitter synthesis and specific behavioral abnormalities. Proc Natl Acad Sci USA. 2009; 106(51): 21966–71. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMarshak S, Meynard MM, De Vries YA, et al.: Cell-autonomous alterations in dendritic arbor morphology and connectivity induced by overexpression of MeCP2 in Xenopus central neurons in vivo. PLoS One. 2012; 7(3): e33153. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBlackman MP, Djukic B, Nelson SB, et al.: A critical and cell-autonomous role for MeCP2 in synaptic scaling up. J Neurosci. 2012; 32(39): 13529–36. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLioy DT, Garg SK, Monaghan CE, et al.: A role for glia in the progression of Rett’s syndrome. Nature. 2011; 475(7357): 497–500. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKhwaja OS, Ho E, Barnes KV, et al.: Safety, pharmacokinetics, and preliminary assessment of efficacy of mecasermin (recombinant human IGF-1) for the treatment of Rett syndrome. Proc Natl Acad Sci USA. 2014; 111(12): 4596–601. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nCopp J, Manning G, Hunter T: TORC-specific phosphorylation of mammalian target of rapamycin (mTOR): phospho-Ser2481 is a marker for intact mTOR signaling complex 2. Cancer Res. 2009; 69(5): 1821–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFingar DC, Salama S, Tsou C, et al.: Mammalian cell size is controlled by mTOR and its downstream targets S6K1 and 4EBP1/eIF4E. Genes Dev. 2002; 16(12): 1472–87. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchipany K, Rosner M, Ionce L, et al.: eIF3 controls cell size independently of S6K1-activity. Oncotarget. 2015; 6(27): 24361–75. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSwinney DC: Phenotypic vs. target-based drug discovery for first-in-class medicines. Clin Pharmacol Ther. 2013; 93(4): 299–301. PubMed Abstract | Publisher Full Text\n\nSwinney DC, Anthony J: How were new medicines discovered? Nat Rev Drug Discov. 2011; 10(7): 507–19. PubMed Abstract | Publisher Full Text\n\nRangasamy S, Olfers S, Gerald B, et al.: Dataset 1 in: Reduced neuronal size and mTOR pathway activity in the Mecp2 A140V Rett syndrome mouse model. F1000Research. 2016. Data Source"
}
|
[
{
"id": "16351",
"date": "27 Sep 2016",
"name": "Lucas Pozzo-Miller",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nOne of the cellular features of Rett syndrome replicated in MeCP2-based mouse models is a smaller cell body of neurons, which, together with the lack of neuronal cell death, results in a higher packing density of neurons in all brain regions studied to date. The study by Rangasamy et al. extends these observations to the MECP2 A140V knock-in mouse model that was crossed with a GFP-based mouse reporter of X-chromosome inactivation. These mice, together with knock-in mice with a GFP sequence inserted into the Mecp2 gene, are a great tool to identify cells that express the mutant allele in the mosaic brain of heterozygous Mecp2 female mice, allowing the characterization of cell autonomous and non-cell autonomous consequences of MeCP2 loss. The authors replicate the observation that the mTOR pathway is downregulated in MECP2 A140V knock-in mice, as it is in Mecp2 null mice. Finally, the authors describe that IGF-1 increased the cell body size of cultured neurons from Mecp2 A140V male mice. Overall, these observations confirm that neuronal cell body size is a reliable biomarker of MeCP2 dysfunction, that IGF-1 improves cellular features in a Rett model, and that Mecp2 A140V knock-in mice replicate cellular features observed in Rett individuals and other Mecp2-based mouse models. The observations and approaches described by Rangasamy et al. represent a useful contribution to the field of translational and preclinical studies in Rett syndrome.",
"responses": []
},
{
"id": "16176",
"date": "27 Sep 2016",
"name": "Rajiv R. Ratan",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe manuscript uses the experimental leverage of in vitro cultures to document a change in soma size in neurons from a novel mouse mutant with a knock-in A140V mutation of the MECP2 gene linked to the X-chromosome. Phenotypes of hippocampal and cerebellar granule neurons from male and female mice were examined, and statistically significant reductions in soma size were noted. These reductions in soma size were associated with reductions in mTOR signaling and rescued by IGF-1, which drives mTOR signaling. The authors suggest that the reductions in neuronal size may be a good phenotype on which to screen for drugs or molecules that overcome MeCP2 mutant effects.\n\nOverall this is a nice paper, with generally good controls and an intriguing message with clear clinical implications. Although not absolutely necessary, it would be nice to see if molecular or pharmacological inhibition of the PI3K-AKT-mTOR signaling pathway abrogated the effects of IGF1. It would also be useful to look at a growth factor that drives Erk signaling to see if it can overcome the effects. Finally, forced expression of Rictor in vitro or in vivo might more directly link mTOR signaling with the Rett phenotype. Otherwise, a very interesting and important study.",
"responses": []
},
{
"id": "16174",
"date": "03 Oct 2016",
"name": "Subbiah Pugazhenthi",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nRett syndrome (RTT) is caused by mutations in the MECP2 gene, which encodes methyl-CpG-binding protein 2. One such mutation is MECP2 A140V. This group has previously generated a mouse model of RTT, expressing the Mecp2 A140V mutant. This mouse is extremely useful to characterize a complex disease like RTT, at the cellular and molecular level. This interesting study by Rangasamy et al. focuses on the characterization of the in vitro neuronal phenotype and reports several novel findings. Size distribution studies with hippocampal and cerebellar granule neurons suggest that the neurons from male mutant mice have smaller soma size. Following treatment with IGF-1, the neuronal soma size is rescued. This finding supports the potential therapeutic use of IGF-1 in RTT.\nThe authors have also generated a novel female mouse model with heterozygous Mecp2 A140V mutant, carrying an X-linked GFP allele on the normal X-chromosome. By this approach, GFP-positive wild-type neurons and GFP-negative mutant neurons can be compared within the mixed neuronal populations. With this model, the authors demonstrate the cell-autonomous nature of MeCP2 action on the neuronal size. In addition, MeCP2 appears to play a role in the regulation of the mTOR pathway. Defects in this pathway are observed in the mutant mice, as shown by decreases in the levels of phosphorylated mTOR (pS2481) and phosphorylated 4E-BP1. 
Overall, this is a well-designed study with significant findings that contribute to the understanding of a complex disease. Future studies are needed to further understand the mechanism linking mTOR down-regulation and neuronal dysfunction.",
"responses": []
},
{
"id": "16241",
"date": "19 Oct 2016",
"name": "Ganeshwaran H. Mochida",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nRett syndrome is a neurodevelopmental disorder caused by mutations in the MECP2 gene. The authors previously generated a knock-in mouse model of Rett syndrome (Mecp2 A140V). This paper describes in vitro studies of hippocampal and cerebellar granule neurons from the Mecp2 A140V mice. The paper demonstrates that the soma and nuclear size of these neurons in the Mecp2 A140V mice is decreased compared to the wild-type animals. The authors took advantage of a mouse line with an eGFP transgene on X chromosome to show that this size difference is a cell-autonomous effect. Further, they show that the small cell soma size phenotype can be rescued by IGF-1, and there are altered molecular signatures of the mTOR pathway in the mutant mouse brains.\n\nIt is a well-designed and well-executed study, and the data are clearly presented. This in vitro experimental model, with soma and nuclear size of neurons as biomarkers, could potentially be an attractive platform for high-throughput drug screening. 
In terms of the effect of A140V on the mTOR pathway, the reduction of Rictor, phosphorylated mTOR (S2481) and phosphorylated 4E-BP1 displayed on the Western blots is highly intriguing, but it would require further functional studies to link these molecular signatures to the phenotype of the A140V mutant neurons.\nMinor points:\nIn Figure 5C, only the results of IGF-1 treatment in the mutant neurons are shown, but it would be better to also show the results of IGF-1 treatment in the wild-type neurons. In Figure 6A, there are two lanes of Western blots shown for each of wild-type and A140V. There should be an explanation as to what these two lanes represent.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2269
|
https://f1000research.com/articles/5-1129/v1
|
02 Jun 16
|
{
"type": "Research Note",
"title": "Multiple statistical tests: lessons from a d20",
"authors": [
"Christopher R. Madan"
],
"abstract": "Statistical analyses are often conducted with α=.05. When multiple statistical tests are conducted, this procedure needs to be adjusted to compensate for the otherwise inflated Type I error. In tabletop gaming, it is sometimes desired to roll a 20-sided die (or `d20') twice and take the greater outcome. Here I draw from probability theory and the case of a d20, where the probability of obtaining any specific outcome is 1/20, to determine the probability of obtaining a specific outcome (Type-I error) at least once across repeated, independent statistical tests.",
"keywords": [
"statistical analysis",
"error",
"probability",
"statistical test"
],
"content": "Introduction\n\nIn scientific research, it is important to consider the issue of conducting multiple statistical tests and the likelihood of spuriously obtaining a ‘significant’ effect. Within a null-hypothesis significance testing (NHST) framework, statistical tests are usually conducted with α = .05, i.e., the likelihood of falsely rejecting the null hypothesis as .05. Interestingly, this value coincides with the probability of obtaining a specific outcome on a 20-sided dice (or ‘d20’), as 120 = .05. In the current (fifth) edition of Dungeons & Dragons, a tabletop game, many in-game events are determined based on the outcome of a d20. However, to make some events more likely, there are times when players roll a d20 ‘with advantage’, meaning that they roll the d20 twice and take the greater value1. (There are also instances where a d20 is rolled ‘with disadvantage’, where the lesser value is taken, but here I will only focus on the former case.) This parallels the use of NHST without any correction for multiple comparisons, as it is more likely to get a significant effect due to chance (i.e., Type-I error) if many tests are conducted without a correction for multiple comparisons.\n\nHere I wondered how much the probability of obtaining a 20, on a d20, would increase due to multiple tests–i.e., obtaining at least one 20 across n die. This approach assumes that each statistical test is wholly independent from each other, and thus is likely to over-estimate the effect related to conducting multiple statistical tests using variations in how the measures are calculated or the use of different, but correlated, measures. 
Nonetheless, this exploration is based in probability theory and mathematical derivations, rather than computational simulations, and can serve as a comprehensible primer in understanding the relationship between repeated statistical tests and probability distributions.\n\nDeveloping an intuition of statistics and probability distributions is of particular importance as most people, both laymen2 and scientists3,4, have misconceptions about NHST. This is further compounded by critics of NHST, who often over-emphasize the limitations of the approach, e.g., see 4–6. By providing a comprehensible example of how repeated statistical tests can inflate chance likelihoods, I hope that these demonstrations can improve researchers’ intuitions regarding NHST. This approach is not contrary to those suggested by the use of confidence intervals and Bayesian statistics—which have become increasingly adopted across the life sciences, from medicine to psychology7,8—but rather to improve comprehension of the characteristics of NHST.\n\n\nMathematical derivations\n\nThe probability of a specific outcome occurring at least once across n die, each with d sides, is:\n\np = 1 – ((d – 1)/d)^n\n\nThe probability of obtaining a specific outcome across n rolls of a d-sided die is listed in Table 1.\n\nFor intuition, several simple cases of this can be examined. For d = 2, i.e., a coin, the probability of obtaining a heads when flipping one coin (n = 1) is 1/2. The probability of obtaining a heads twice (with two coins, n = 2) is (1/2)^2 or 1/4. In contrast, the probability of obtaining at least one heads when flipping two coins is 3/4, as there are four possible outcomes ({HH, HT, TH, TT}) and three of them satisfy the criteria of ‘at least one heads’ ({HH, HT, TH}) and only one outcome does not ({TT}). 
This can more clearly be considered as the complementary event, where the probability is 1 – 1/4, which resolves to 3/4.\n\nFor d = 6, i.e., a ‘regular’ six-sided die, the probability of obtaining any specific outcome is 1/6. When considering multiple dice, it is again important to differentiate the probability of ‘obtaining the same specific outcome multiple times’, e.g., the probability of obtaining two sixes with two dice is (1/6)^2 = 1/36, from the case of ‘obtaining at least one specific outcome across multiple dice’. To determine the probability of obtaining a specific outcome on any of multiple dice, the complementary event should again be considered, i.e., the probability of not obtaining that outcome on any of the die. For n = 1, the probability of not obtaining a specific outcome is 5/6. Following from this, the probability of obtaining that specific outcome is 1 – 5/6 or 1/6. When n = 2, the probability of not obtaining a six on either of the dice is (5/6)^2, which resolves to 25/36. The complementary event of obtaining ‘at least one six’ is 1 – 25/36 or 11/36. Here we can see that with two dice, the probability of obtaining at least one six (or any other specific outcome) is nearly doubled, from 6/36 (i.e., 1/6 with a single die).\n\nFor d = 20, i.e., a 20-sided die, the probability of obtaining any specific outcome is 1/20 or .05. If n = 2 dice are rolled, the probability of obtaining at least one 20 is 39/400 or .0975. If n = 10 dice are rolled, the probability of obtaining at least one 20 is ≈ .4013. With n = 20 dice, this increases further to ≈ .6415.\n\nacross n die.\n\nWe can also consider a more general problem, the probability of obtaining an outcome of o or greater, on at least one of n d-sided die:\n\np = 1 – ((o – 1)/d)^n\n\nFor instance, when rolling a six-sided die, the probability of obtaining a five or higher is 2/6 (equivalent to 12/36). 
Following from the same approach of calculating the complementary event, the probability of obtaining at least one of the two specific outcomes across two dice is 1 – (4/6)^2, which resolves to 20/36. Figure 1 and Table 2 show the probability of obtaining at least o on a d = 20 die, across n dice.\n\n\nDiscussion\n\nWhile it is widely understood that multiple comparisons need to be corrected for, many would underestimate the degree of inflation in Type-I error associated with additional, uncorrected statistical tests. The mathematical derivations clearly illustrate the influence of multiple statistical tests on the likelihood of obtaining a specific outcome due to chance alone.",
"appendix": "Competing interests\n\n\n\nThe authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.\n\n\nGrant information\n\nThe author declared that no grants were involved in supporting this work.\n\n\nAcknowledgments\n\nI would like to thank Critical Role (http://www.geekandsundry.com/shows/critical-role/) for introducing me to the concept of rolling a die ‘with advantage’.\n\n\nReferences\n\nD&D Player’s Basic Rules, version 0.2. Last updated 11/03/2014. Reference Source\n\nTromovitch P: The lay public’s misinterpretation of the meaning of ‘significant’: A call for simple yet significant changes in scientific reporting. J Res Practice. 2015; 11(1): Article P1. Reference Source\n\nGliner JA, Leech NL, Morgan GA: Problems with null hypothesis significance testing (NHST): what do the textbooks say? J Exp Educ. 2002; 71(1): 83–92. Reference Source\n\nNickerson RS: Null hypothesis significance testing: A review of an old and continuing controversy. Psychol Methods. 2000; 5(2): 241–301. PubMed Abstract | Publisher Full Text\n\nAbelson RP: On the surprising longevity of flogged horses: Why there is a case for the significance test. Psychol Sci. 1997; 8(1): 12–15. Publisher Full Text\n\nCortina JM, Dunlap WP: On the logic and purpose of significance testing. Psychol Methods. 1997; 2(2): 161–172. Publisher Full Text\n\nFidler F, Cumming G, Burgman M, et al.: Statistical reform in medicine, psychology and ecology. J Socio Econ. 2004; 33(5): 615–630. Publisher Full Text\n\nFidler F: Ethics and statistical reform: Lessons from medicine. In A. Panter & S. Sterba (Eds.) Handbook of Ethics in Quantitative Methodology. Routledge: New York. 2011. Publisher Full Text"
}
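The complement rule derived in the research note above fits in a few lines of code. The sketch below is illustrative only (the function name and interface are mine, not from the article); it reproduces the worked values given in the text, such as 3/4 for at least one heads in two coin flips and 39/400 for at least one natural 20 on two d20s:

```python
def p_at_least_one(d, n, o=None):
    """Probability of rolling outcome o or greater (by default only the
    single top face, o = d) at least once across n independent d-sided dice.

    Uses the complement rule from the note: 1 - P(no die succeeds),
    i.e. 1 - ((o - 1)/d)**n.
    """
    if o is None:
        o = d
    p_single = (d - o + 1) / d       # per-die chance of success
    return 1 - (1 - p_single) ** n   # complement of n consecutive failures

# Worked values from the note:
assert abs(p_at_least_one(2, 2) - 3/4) < 1e-12      # at least one heads, two coins
assert abs(p_at_least_one(20, 2) - 39/400) < 1e-12  # at least one 20, two d20s
assert abs(p_at_least_one(6, 2, o=5) - 20/36) < 1e-12  # five or higher, two d6s
```

Mapping this back to NHST as the note does: with α = .05 per test (one face of a d20), ten uncorrected tests already give roughly a 40% chance of at least one spurious ‘significant’ result.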
|
[
{
"id": "14149",
"date": "03 Jun 2016",
"name": "Jens Foell",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe author provides a novel demonstration of the increasing probability of spurious research results when performing multiple tests: noticing the equivalence of a p = .05 statistical threshold and a d20 die (as used in popular games), the author goes on to describe changes in probabilities when using multiple dice, or when allowing multiple sides of the die to count as a correct result. The aim of this article, as I understand it, is to demonstrate that these changes have a surprisingly large influence on statistical hypothesis testing, and at the same time to provide a hands-on example that many readers might be able to relate to (in the form of the d20 die).\nThe article is well-written and in my opinion fulfills both of these goals. Its rationale and the mathematical derivations seem to be sound and correct. I can easily see this article being used by educators to teach the topic of spurious statistical results and to make the topic more accessible.\nI have some minor edits/recommendations to increase the overall clarity and readability of the article:\nWhen the game name \"Dungeons & Dragons\" is first mentioned in the introduction, it should be followed by a citation of the game's publisher, year of publication, and, if necessary/appropriate, copyright information.\n\nA d6 die is described as \"regular\" in the text. 
This term seems ambiguous to me and I recommend replacing it with a term such as \"cube-shaped,\" \"classical,\" or \"conventional.\"\n\nThe current version of the manuscript appears to contain a printing error: before the last paragraph of the \"mathematical derivations\" section, the sentence fragment \"across n die.\" is printed without context.\nI recommend the indexing of this article after these minor issues have been addressed, and I hope the author will continue to produce research notes that highlight statistical issues in an approachable manner.",
"responses": []
},
{
"id": "14150",
"date": "08 Jun 2016",
"name": "Matthew Wall",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis short report provides a simple and concise illustration of some of the issues surrounding multiple comparisons in statistical testing. I see nothing wrong with the logic or the mathematics, and can see that this would make a valuable contribution as an assigned text on an introductory statistics course.\nThe only suggestion I have for material that could be added would be to include a citation to additional material on the topic of multiple-comparisons correction for the interested reader. This would make the piece more valuable as a teaching aid. I'd suggest adding a couple of sentences to the discussion along those lines and citing a review paper on the topic, such as Curran-Everett, 2000 1\nCouple of minor points:\nThe paragraph after table 1 starts 'For intuition,'. This seems an odd phrase to me. Maybe replace with 'Intuitively' or 'For the purposes of familiarity...' or something like that.\n\nJust before the general equation the phrase 'across n die' appears on its own, as a separate paragraph. An error? Or is this supposed to be a subtitle?",
"responses": []
},
{
"id": "14147",
"date": "05 Jul 2016",
"name": "Steven R. Shaw",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe author provides an excellent foundation for developing an intuitive understanding of null hypothesis significant testing. The concept of using a 20 sided die to assist graduate students and new researchers to better understand what exactly is meant by .05 and how multiple comparisons have a dramatic influence on interpretation is an interesting one — and I believe novel. This intuitive approach can also be used to improve understanding for the general public and overall science communication. Although this report includes a mathematical derivation, which may be too advanced for new scholars or science communication, it provides an excellent rationale for the use of 20 sided die.\nWhen can also easily imagine that this rationale can be used better understand robustness of the outcomes of studies that may be influenced by experiment-wise error rates, such as in the cases of multiple attempts at replication or multiple trials of a specific experiment. The 20 sided die provides a concrete and real world method of communicating the complexities of multiple comparisons that is far more user-friendly than random number generators and variations of Monte Carlo studies.\nI am curious as to whether the exactitude in engineering and manufacturing a 20 sided die will result in exactly equal probability of each number appearing. A six sided die is created with right angles and is relatively easy to create an equal probability of landing on each side. 
Obviously, this makes no difference or changes the point of the paper; yet, it may add error should anyone actually attempt to roll the 20 sided dice multiple times. Just a thought as I am unsure on this issue. I suppose should a 20 sided die be found to contain significant error or even bias, then there would be scandal in the Dungeons & Dragons world.\nThe author also deserves credit for establishing high levels of nerd credibility. When Dungeons & Dragons, mathematical derivations, and useful statistical communication methods are combined into a single published scientific paper, the trifecta of nerd credibility has been achieved.\nOverall, this is a well-written report and the mathematics is correct. I am hopeful that the author continues to elaborate on this concept and develops other uses of a 20 sided die for communication and lesson plans in courses on research design and basic applied statistics. This will also be helpful in explaining experiment wise error rate/multiple comparisons in the cases of science communication as well.
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-1129
|
https://f1000research.com/articles/5-2255/v1
|
07 Sep 16
|
{
"type": "Opinion Article",
"title": "Deep brain stimulation in Gilles de la Tourette syndrome: killing several birds with one stone?",
"authors": [
"Andreas Hartmann"
],
"abstract": "In patients with severe, treatment-refractory Gilles de la Tourette syndrome (GTS), deep brain stimulation (DBS) of various targets has been increasingly explored over the past 15 years. The multiplicity of surgical targets is intriguing and may be partly due to the complexity of GTS, specifically the various and frequent associated psychiatric comorbidities in this disorder. Thus, the target choice may not only be aimed at reducing tics but also comorbidities. While this approach is laudable, it also carries the risk to increase confounding factors in DBS trials and patient evaluation. Moreover, I question whether DBS should really be expected to alleviate multiple symptoms at a time. Rather, I argue that tic reduction should remain our primary objective in severe GTS patients and that this intervention may subsequently allow an improved psychotherapeutic and/or pharmacological treatment of comorbidities. Thus, I consider DBS in GTS not as a single solution for all our patients’ ailments but as a stepping stone to improved holistic care made possible by tic reduction.",
"keywords": [
"Tics",
"Tourette",
"deep brain stimulation (DBS)",
"comorbidities"
],
"content": "\n\nDeep brain stimulation (DBS) has been used for over 15 years to treat severe forms of Gilles de la Tourette syndrome (GTS) refractory to pharmacological and, more recently, cognitive-behavioral therapies (CBT) (Schrock et al., 2015). Despite the relatively small numbers of patients operated so far, the number of surgical targets is impressive (Porta et al., 2013). The available double-blind trials favor the thalamus and the globus pallidus internus (both anteromedial and posteroventral parts) but the debate if these two are the best targets or if other targets need to be explored remains open (Servello et al., 2016). As we have learnt from Parkinson disease, establishing just one or two consensual DBS targets is a long endeavour which requires time and a large number of patients (Lukins et al., 2014). Providing the latter will certainly be difficult in a comparatively rare disease like GTS.\n\nWhy so many potential targets in GTS? One of the main reasons appears to be the wish to diminish not only tics but also comorbidities (obsessive-compulsive disorder (OCD), impulsivity, attention deficit hyperactivity disorder (ADHD), anxiety, depression and others) which are present in almost 90% of patients meeting DSM criteria for GTS (Hirschtritt et al., 2015). Specifically, these patients fall into the category named « full-blown GTS » by Robertson (2015) and are also the most likely candidates to undergo surgery. Thus, a tailor-made, individualized approach might indeed make sense instead of including/randomizing patients into studies where a certain diagnostic uniformity is required or at least assumed.\n\nI will argue that in an admittedly complex situation, Occam’s razor is the way to go forward. First, there is no GTS without tics. Challenging DSM-5 criteria is understandable but unrealistic (Robertson & Eapen, 2014). 
In clinical practice, however, even if DSM-5 criteria for GTS are met, we do of course establish the predominant symptoms in terms of impairment. Then, we choose the surgical target which we believe will be best suited to counter the main burden on the patient’s quality of life. This may mean that a patient with severe tics but even more severe OCD might actually be operated predominantly for the latter, targeting the subthalamic nucleus, for instance, which is not a usual target in GTS (Mallet et al., 2008). However, if tics are the main problem, then these should be treated first and foremost, which does not prevent us from evaluating comorbidities pre- and post-op by appropriate scales, as is done anyway in most current trials (Kefalopoulou et al., 2015). But we should be clear, for the time being, that obtaining a direct, surgically-induced effect on comorbidities will be the cherry on the cake, not something that can be systematically expected, at least based on our current knowledge of basal ganglia circuitry. That, for instance, was the rationale of the Paris group to implant electrodes into the limbic portions of the GPi, hoping to also reduce behavioral manifestations of GTS (Houeto et al., 2005; Welter et al., 2008). In a similar vein, I am doubtful of implanting multiple electrodes in multiple sites in the hope of alleviating surgically a host of neuropsychiatric symptoms; although I admit that in rare, very debilitating cases, this might be an option to consider.\n\nMy take is rather this: having severe, relentless and debilitating tics tends to cloud comorbidities. In case of successful DBS, other symptoms, rather than being co-treated by electrode implantation, may actually re-emerge. However, the patient is now free to pursue other forms of treatment for these symptoms, for instance psychostimulants for the treatment of ADHD if these previously aggravated tics. 
Even more importantly, psychotherapeutic approaches thus far impossible, notably cognitive behavioural therapy (CBT), can become feasible. An example from the OCD world concerns patients who underwent a 24 week CBT treatment programme after DBS of the nucleus accumbens (Mantione et al., 2014). Not only did CBT offer further symptom improvement: rather, as the authors note, all patients (n=16) had undergone previous CBT trials (between 1 and 9) which were not only unsuccessful but sometimes counterproductive because they majored anxiety and fear. DBS appeared to alleviate these symptoms and thereby made successful CBT possible. In a similar vein, CBT aimed at further tic reduction could be tried post-op where, pre-op, it was unfeasible. The same applies for psychotherapeutical approaches aiming to improve OCD, depression, anxiety and behavioral problems.\n\nTherefore, and in conclusion, I suggest viewing DBS in GTS as a window or a stepping stone to a more holistic treatment rather than a single solution for all our patients’ ailments.",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nHirschtritt ME, Lee PC, Pauls DL, et al.: Lifetime prevalence, age of risk, and genetic relationships of comorbid psychiatric disorders in Tourette syndrome. JAMA Psychiatry. 2015; 72(4): 325–33. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHoueto JL, Karachi C, Mallet L, et al.: Tourette's syndrome and deep brain stimulation. J Neurol Neurosurg Psychiatry. 2005; 76(7): 992–5. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKefalopoulou Z, Zrinzo L, Jahanshahi M, et al.: Bilateral globus pallidus stimulation for severe Tourette's syndrome: a double-blind, randomised crossover trial. Lancet Neurol. 2015; 14(6): 595–605. PubMed Abstract | Publisher Full Text\n\nLukins TR, Tisch S, Jonker B: The latest evidence on target selection in deep brain stimulation for Parkinson's disease. J Clin Neurosci. 2014; 21(1): 22–7. PubMed Abstract | Publisher Full Text\n\nMallet L, Polosan M, Jaafari N, et al.: Subthalamic nucleus stimulation in severe obsessive-compulsive disorder. N Engl J Med. 2008; 359(20): 2121–34. PubMed Abstract | Publisher Full Text\n\nMantione M, Nieman DH, Figee M, et al.: Cognitive-behavioural therapy augments the effects of deep brain stimulation in obsessive-compulsive disorder. Psychol Med. 2014; 44(16): 3515–22. PubMed Abstract | Publisher Full Text\n\nPorta M, Cavanna AE, Zekaj E, et al.: Selection of patients with Tourette syndrome for deep brain stimulation surgery. Behav Neurol. 2013; 27(1): 125–31. PubMed Abstract | Publisher Full Text\n\nRobertson MM: A personal 35 year perspective on Gilles de la Tourette syndrome: prevalence, phenomenology, comorbidities, and coexistent psychopathologies. Lancet Psychiatry. 2015; 2(1): 68–87. 
PubMed Abstract | Publisher Full Text\n\nRobertson MM, Eapen V: Tourette's: syndrome, disorder or spectrum? Classificatory challenges and an appraisal of the DSM criteria. Asian J Psychiatr. 2014; 11: 106–13. PubMed Abstract | Publisher Full Text\n\nSchrock LE, Mink JW, Woods DW, et al.: Tourette syndrome deep brain stimulation: a review and updated recommendations. Mov Disord. 2015; 30(4): 448–71. PubMed Abstract | Publisher Full Text\n\nServello D, Zekaj E, Saleh C, et al.: Deep Brain Stimulation in Gilles de la Tourette Syndrome: What Does the Future Hold? A Cohort of 48 Patients. Neurosurgery. 2016; 78(1): 91–100. PubMed Abstract\n\nWelter ML, Mallet L, Houeto JL, et al.: Internal pallidal and thalamic stimulation in patients with Tourette syndrome. Arch Neurol. 2008; 65(7): 952–7. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "16164",
"date": "19 Sep 2016",
"name": "Andrea Cavanna",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nI found Andreas Hartmann’s considerations on the use of Deep Brain Stimulation (DBS) for patients with Tourette syndrome (TS) both clinically sensible and thought-provoking. The article is clearly written and the take-home message is convincingly argued: patients with TS who are candidates to DBS present by definition with a clinical picture characterised by highly severe and refractory tics. Rather than considering DBS as a panacea for the multifaceted neurobehavioural spectrum complicating patients’ presentations, the focus (and expectations) of DBS should remain anchored to tic alleviation. From a practical point of view, it has been observed that the DBS procedure can have wide-ranging effects, however the approach of a priori targeting multiple symptoms at the same time (“killing several birds with one stone”) can be prone to theoretical and clinical fallacies. Conversely, it would be interesting to test the sequential approach proposed by the author by systematically assessing changes in health-related quality of life in patients undergoing tic-focused neuromodulation, followed by specific therapeutic interventions for the residual behavioural co-morbidities.",
"responses": []
},
{
"id": "16161",
"date": "23 Sep 2016",
"name": "Jeremy S. Stern",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nProfessor Hartmann's opinion article is a clear-thinking and original critique of the directions that DBS for GTS is taking. The field is perhaps expanding faster than uncertainties are being addressed, partly due to the heterogenous nature of the condition and the possibility of several different surgical targets- variables that he surveys from an interesting perspective. I agree with him and Professor Cavanna that the practice of selecting modified surgical targets for tics based on comorbidites may lack a good evidence base, but the fundamental problem is establishing how to best to treat tics with DBS, including the prediction of beneficial effect for individual patients.The notion of DBS in GTS as an enabling therapy to allow conventional management strategies to be more successful needs further exploration and may prove an important principle.",
"responses": []
},
{
"id": "16159",
"date": "28 Sep 2016",
"name": "Kevin J. Black",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is an interesting and thoughtful take on the approach to DBS in GTS. Because GTS is a complex neuropsychiatric syndrome with a variety of different symptoms (or comorbidities), it may not be reasonable to assume that all of these symptoms will be alleviated by DBS; to the contrary, they may actually appear to be worsening because they were previously being masked by the tics. As the author suggests, it is impossible to make accurate assessments of a treatment's effect without first appropriately classifying the phenomenology of the illness and/or symptoms being targeted.\n\nThis article shows familiarity with the literature and clear writing, and is entirely appropriate for an Opinion Article.\n\nWe add a few comments, hopefully to further the discussion. The author argues well for focusing clinical trials for DBS in GTS on one problem at a time, namely tics. However, as he notes, the great majority of patients with GTS have clinical features other than tics. Gilbert and Buncher (2005)1 include this observation of multiple symptoms as one of several features that complicate performing and interpreting clinical trials in GTS. Focusing DBS on the most problematic symptom in each patient may even prove to produce better results than focusing DBS on tics. But we have insufficient data to make such a judgment.\n\nThe issues identified by Prof. 
Hartmann highlight the critical importance of further clinical trials and of registering all DBS experience in GTS2.\n\nOne trivial note: Substitute “exacerbated” for the word “majored” near the end of the article.\n\nShan H. Siddiqi Kevin J. Black",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2255
|
https://f1000research.com/articles/5-1539/v1
|
29 Jun 16
|
{
"type": "Research Article",
"title": "Evaluation of unique identifiers used for citation linking",
"authors": [
"Heidi Holst Madsen",
"Dicte Madsen",
"Marianne Gauffriau",
"Heidi Holst Madsen",
"Dicte Madsen"
],
"abstract": "Unique identifiers (UID) are seen as an effective tool to create links between identical publications in databases or identify duplicates in a database. The purpose of the present study is to investigate how well UIDs work for citation linking. We have two objectives:\nExplore the coverage, precision, and characteristics of publications matched versus not matched with UIDs as the match key.\n\nIllustrate how publication sets formed by using UIDs as the match key may affect the bibliometric indicators: Number of publications, number of citations and the average number of citations per publication.\n\nThe objectives are addressed in a literature review and a case study. The literature review shows that only a few studies evaluate how well UIDs work as a match key. From the literature we identify four error types: Duplicate digital object identifiers (DOI), incorrect DOIs in reference lists and databases, DOIs not registered by the database where a bibliometric analysis is performed, and erroneous optical or special character recognition.\n\nThe case study explores the use of UIDs in the integration between the databases Pure and SciVal. Specifically journal publications in English are matched between the two databases. We find all error types except erroneous optical or special character recognition in our publication sets. In particular the duplicate DOIs constitute a problem for the calculation of bibliometric indicators as both keeping the duplicates to improve the reliability of citation counts and deleting them to improve the reliability of publication counts will distort the calculation of average number of citations per publication.\n\nThe use of UIDs as a match key in citation linking is implemented in many settings, and the availability of UIDs may become critical for the inclusion of a publication or a database in a bibliometric analysis.",
"keywords": [
"Citation linking",
"match key",
"unique identifiers",
"DOI",
"Scopus ID",
"Pure",
"SciVal",
"bibliometric analysis"
],
"content": "Introduction\n\nUnique identifiers (UIDs) have been introduced for more and more entities, e.g. Open Researcher and Contributor ID (ORCID) for researchers, and digital object identifiers (DOI) for research publications etc. One advantage of UIDs is that integrations between databases, e.g. citation linking, potentially can be done much more efficiently. This is stressed in a recent evaluation of metrics in research evaluations (Wilsdon et al., 2015) p. 15–22, 145).\n\nOnly a few studies discuss the consequences of using UIDs as a match key in citation linking. We give an overview of these in the literature review. The purpose of the present study is to find out how well UIDs work for citation linking and thus to create publication sets for bibliometric analysis. In citation linking representations of a publication are identified in different databases. Traditionally, this is done via a match key based on bibliographic information such as author, title, etc. The exact method is rarely described. An exception is the evaluation of the Danish Council for Independent Research (Schneider et al., 2014, p. 36–38).\n\nUIDs are simple match keys compared to the traditional method (e.g. Olensky et al., 2015). We explore how the method works in the integration between the current research information system (CRIS), Pure, and the bibliometric research evaluation tool, SciVal, (Elsevier, 2014). SciVal builds on data from the citation index Scopus, and Pure provides a uniform identification of researchers and the organizational structure at a university. UIDs make it easy to export a publication set from Pure to SciVal for bibliometric analysis (Figure 1). An alternative is to define the publication set, e.g. the publications from a department, in Scopus or Web of Science (WoS). This is often a resource-demanding task as researchers do not always register their affiliations correctly and consistently in publications (e.g. Moed et al., 1995, p. 
390).\n\nPublication set automatically analyzed for UIDs before export to SciVal, August 2015. Source: Pure from Elsevier, version 4.23.1, local installation at University of Copenhagen.\n\nA widely used UID for research publications is the DOI. It was launched in 2000 (International DOI Foundation, 2015, sec. 1.2) and is by now assigned to publications by more than 5,000 publishers, from the big players, e.g. Elsevier, to small societies, e.g. the Danish Chemical Society (CrossRef, 2015b). Other UIDs for research publications, such as the arXiv ID from 1991 (arXiv.org, 2015), the PubMed ID (PMID) from 1997 (Ostell et al., 1998, p. 27), and the Scopus ID (EID) from 2004 (Elsevier, 2004), are not as prevalent as the DOI. Unlike the other UIDs, the DOI is assigned to the publication itself and not merely to the publication’s representation in a database. In the integration between Pure and SciVal, DOI, PMID and EID are used as match keys for citation linking. From March 2016, the integration between Pure and SciVal is based on the EID alone. This will not affect the present study as we analyze publication sets downloaded in August and December 2015.\n\n\nObjectives\n\nThe purpose of the present study is to investigate how well UIDs work for citation linking. We have two objectives:\n\n1) Explore the coverage, precision, and characteristics of publications matched versus not matched with UIDs as the match key.\n\n2) Illustrate how publication sets formed by using UIDs as the match key may affect the bibliometric indicators: Number of publications, number of citations and the average number of citations per publication.\n\nWe describe the characteristics of two publication sets from Pure feeding into the citation linking process, but it is beyond the scope of this paper to do a thorough analysis of the quality of the Pure publication sets. We also limit the examples of bibliometric indicators to three basic bibliometric indicators. 
More advanced indicators and their construction in SciVal are not discussed.\n\n\nMethods\n\nThe two objectives were addressed in a literature review and a case study. The literature review gave us an indication of the use of UIDs as match keys in citation linking, and an overview of the precision of the method. As the integration between Pure and SciVal is relatively new and evaluations are not yet reported in the literature, we conducted our own case study to see if the implementation of UIDs as match keys between SciVal and Pure confirms what other studies have found.\n\nEvaluations of UIDs as match keys in citation linking were identified. Information on search terms, search strategy and databases are given below. UIDs as match keys have been used for many applications, but our focus was on research publications, with a particular interest in how the method may affect bibliometric analysis. Thus, the search was limited to studies where UIDs of publications are used as the match key or part of the match key, and in which the method is analyzed and discussed in some detail.\n\nAn exploratory search showed that the terminology for citation linking is not consistent. The matching of identical publications in different databases is called citation linking or reference linking. Matching within the same database is called deduplication. The term citation matching is also used, but often for the more specific purposes where citing and cited publications are matched. We also saw examples of more general terminology, namely integration or interoperability between databases or retrieval strategy. In our subsequent searches the different terms for citation linking were combined (Boolean AND) with different terms for UID: unique, identifier, DOI, PMID. This gave us an idea of which databases use UIDs as match keys, e.g. CrossRef, Mendeley, and Altmetric.com. 
We also included these databases as search terms and combined them with the different terms for UIDs.\n\nThe searches were conducted in WoS (https://login.webofknowledge.com/), Scopus (https://www.scopus.com/), and Google Scholar (https://scholar.google.dk/). No range of years was specified. If no relevant publications were found in WoS and Scopus, we continued the search in Google Scholar. This means that not only peer-reviewed research but also preprints and reports were included in the literature review. In relevant publications, we manually scanned references and citations for other relevant publications. The searches were done in August and September 2015 followed by later supplementary searches based on the references found in August and September.\n\nIn the case study we explored the coverage, precision, and characteristics of publications matched versus not matched in the integration between Pure and SciVal. Our publication set is from the Department of Clinical Medicine (DoCM) at University of Copenhagen (UCPH). DoCM registers approx. 2,000 research publications in the UCPH Pure database per year. The majority are peer-reviewed journal publications in English. As this type of publication and the health sciences are well-covered in Scopus/SciVal (Mongeon & Paul-Hus, 2016, p. 218–219+222; Valderrama-Zurián et al., 2015, p. 570–571), we expected the DoCM publication set to be well-fitted for our purpose, namely to explore the citation linking process, rather than how well SciVal covers publications from a department.\n\nThe publication set was limited to research publications published in 2014, registered and validated in Pure. Publications published before 2014 were not included as these have been validated at department or group level and the data quality is not consistent as no common practice was in place. 
The validation of publications from 2014 was undertaken by the authors of this article and three information specialists from the University Library as a service for the Faculty of Health and Medical Sciences. As part of the validation process, language and publication type were determined according to the categories available in Pure. This information is utilized in the Results section. However, the focus of the validation was not citation linking, and fields for UIDs were not mandatory. If a PMID or EID was registered in Pure, it is most likely because the publication was imported from PubMed or Scopus. A publication in Pure without a UID may not have a UID, or the UID is simply not registered in Pure. It should be noted that from November 2015 Elsevier matches publications in Pure with Scopus and attributes EIDs both to new publications and retrospectively.\n\nOur choice of case implies some limitations. The publication sets have too few non-journal publications to draw conclusions on their coverage and the precision in publications matched versus not matched with UIDs as match keys. Furthermore, the publication year 2014 leaves too little time since publication to obtain robust citation counts.\n\nThe case study alone did not lead to generalizable results, but the results were compared to findings from the literature review to identify trends and compatibility with previous studies.\n\nBefore we analyzed the outcome of the citation linking based on UIDs, we downloaded, merged, and cleaned data from Pure and SciVal. This process was carried out in August 2015 (n=2068) and repeated in December 2015 (n=2066). 
It is possible for researchers and administrative staff to make retrospective changes to the registrations in Pure; this is the most plausible explanation for the lower number of publications in December.\n\nData software\n\nPure local installation at University of Copenhagen, version 4.22.1 for the August download and version 4.23.1 for the December download (data download)\n\nSciVal June 8, 2015, and September 30, 2015 releases (data analysis and download)\n\nMicrosoft Excel 2007 (data cleaning and analysis)\n\nData download, merging and cleaning\n\nRaw data was downloaded from Pure in August and December 2015 using the following filters:\n\nOrganisational unit = Department of Clinical Medicine\n\nPublication category = Research\n\nPublication statuses and dates > Latest > Date: Selected range = 2014\n\nWorkflow = Validated\n\nTo fit relevant data in just one worksheet in Excel and be able to create a .csv file, most of the data columns were deleted, and only the following kept:\n\nAccess to electronic version (full text) > DOI (Digital Object Identifier)-0\n\njournalAssociation.title\n\npages\n\npersons[0].lastName\n\ntypeClassification.typeClassification\n\ntitle\n\nid [=Pure ID]\n\nSource[sourceId]: PubMed [=PMID]\n\nSource[sourceId]: Scopus [=EID]\n\nlanguage.language\n\nDue to an error in the Copenhagen University Pure at the time, it was not possible to download a full data report of publications with the DOI column. Instead, first an ungrouped raw data report was downloaded, then the same report grouped on DOI. 
The two reports were matched on Pure ID to create one list with DOI data where available.\n\nThe Data set 1 DoCM Pure data August.csv and Data set 2 DoCM Pure data December.csv files comprise our \"raw\" Pure data – ever so slightly tidied to a) create one full data report with DOI where available, b) fit relevant columns in one worksheet to be able to create a .csv file.\n\nThe Pure \"raw\" data was furthermore cleaned by:\n\nRemoving superfluous spaces at the end of DOIs to be able to match DOIs in the Pure data with the DOIs in the SciVal data.\n\nAfter the Pure data was sent to SciVal for analysis, the resulting SciVal publication sets (August and December) were downloaded from SciVal with the following information:\n\nTitle\n\nAuthors\n\nJournal title\n\nCitations\n\nPages\n\nDOI\n\nPublication-type\n\nEID [=Scopus ID]\n\nPubMed ID [=PMID]\n\nThe Data set 3 DoCM SciVal data August.csv and Data set 4 DoCM SciVal data December.csv files comprise our raw SciVal data.\n\nThe SciVal raw data was furthermore cleaned by:\n\nRemoving \"2-s2.0-\" from the EIDs to be able to match with the EIDs in the Pure data.\n\nDuplicate DOIs were identified to remove superfluous/irrelevant publications:\n\nArticle vs. Article in Press (Article kept in data set)\n\nPublication duplicates (if one duplicate had a PMID, that is the one we kept; otherwise we randomly selected which duplicate to keep)\n\nPublication vs. publication attributed wrong ID in Scopus/SciVal and not occurring in the Pure data set (Publication in Pure data set kept)\n\nPublication registered as one publication type vs. same publication registered as another publication type (duplicate with same publication type as in the Pure data set was kept)\n\nAuthor's reply (not in Pure data set) having same DOI as the publication (in Pure data set) it relates to (publication in Pure data set kept).\n\nA note on some Article in Press occurrences in the SciVal data:\n\n1. 
Sometimes SciVal imports only the Article in Press instance of an article in Scopus (instead of the published article instance), or the article is registered in Scopus only as Article in Press, although it is published.\n\n2. During an automatic update in June 2015 of the UCPH Pure, a number of validated publications were changed from published in 2015 to published in 2014, although really they were published in 2015. As such, they should not have been part of our Pure publication set to begin with.\n\nGrouping publication types in data analysis\n\nJournal:\n\nJournal Article\n\nLetter\n\nReview\n\nArticle in Proceedings\n\nBook:\n\nAnthology\n\nBook\n\nContribution to book:\n\nBook chapter\n\nContribution to conference:\n\nConference abstract for conference\n\nConference abstract in journal\n\nPoster\n\nOther:\n\nComment/debate\n\nDoctoral thesis\n\nEditorial\n\nEncyclopedia chapter\n\nOther contribution\n\nPaper\n\nReport\n\nIn the Results section, characteristics for three groups of publications are shown: Publications with UID exported from Pure to SciVal and matched, publications with UID exported from Pure to SciVal and not matched, and publications without UID not exported from Pure to SciVal. The publications without UID (DOI, PMID or EID) were extracted from the cleaned Data set 1 and 2 with Pure data. To identify publications exported from Pure to SciVal and matched, we compared UIDs (DOI, PMID and/or EID) in the cleaned Data set 3 with UIDs in the cleaned Data set 1 (August download). Publications in Data set 1 with no corresponding UID in Data set 3 constitute publications with UID exported from Pure to SciVal and not matched. This was repeated for Data set 2 and 4 (December download). The EIDs attributed automatically to publications in Pure were not visible in our raw data. We found 32 publications in Data set 4 from SciVal which must have an EID in Pure and SciVal as no other UID was assigned to them. 
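The cleaning and matching steps described above can be sketched in Python. This is a minimal illustration, not the actual Pure/SciVal implementation: the record layout and field names (`doi`, `pmid`, `eid`) are hypothetical, and we assume a publication counts as matched if any one of its UIDs appears in the other data set.

```python
# Sketch of UID normalization and UID-based matching (hypothetical record layout).
# Each record may carry any subset of the three UIDs: DOI, PMID, EID.

def clean_uids(record):
    """Normalize UIDs: strip trailing spaces from DOIs and drop the
    '2-s2.0-' prefix that Scopus prepends to EIDs."""
    doi = (record.get("doi") or "").strip() or None
    pmid = record.get("pmid")
    eid = record.get("eid")
    if eid and eid.startswith("2-s2.0-"):
        eid = eid[len("2-s2.0-"):]
    return {"doi": doi, "pmid": pmid, "eid": eid}

def classify(pure_records, scival_records):
    """Split Pure records into three groups: matched in SciVal,
    unmatched but carrying a UID, and carrying no UID at all."""
    scival_uids = set()
    for rec in scival_records:
        scival_uids.update(v for v in clean_uids(rec).values() if v)
    matched, unmatched_with_uid, no_uid = [], [], []
    for rec in pure_records:
        uids = {v for v in clean_uids(rec).values() if v}
        if not uids:
            no_uid.append(rec)
        elif uids & scival_uids:
            matched.append(rec)
        else:
            unmatched_with_uid.append(rec)
    return matched, unmatched_with_uid, no_uid
```

Note that without the normalization step, a trailing space in a Pure DOI or the `2-s2.0-` prefix on a SciVal EID would silently turn a match into a non-match.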
Finally we compared UIDs in the cleaned Data set 3 and 4 to identify publications matched in SciVal in December but not in August.\n\n\nResults\n\nThe first objective was met via the literature review and case study, namely to explore coverage, precision, and characteristics of publications matched versus not matched with UIDs as the match key. The second objective was addressed in the last part of the case study.\n\nThe literature review shows two trends. Firstly, the publication year of relevant studies is 2011 or later. Older UIDs such as arXiv ID and PMID do not seem to have the same momentum as DOI. Secondly, the use of UIDs for citation linking in bibliometric studies and citation indexes seems under-reported. A possible explanation is that the commercial players do not publish their methodologies in full detail (Olensky, 2014, p. 3). However, in a study from 2015, two bibliometric research groups provide documentation for how they use DOI as part of their match keys (Olensky et al., 2015, p. 7–9).\n\nIf we do not focus on citation linking in citation indexes and bibliometric analysis alone, we find an increasing number of tools for handling and analyzing research publications, e.g. CrossRef’s cited-by links (CrossRef, 2015a) and Altmetric.com’s embeddable badges (Altmetric.com, n.d.). Evaluations of these databases were also included in our literature review. But also for these tools evaluations of UIDs as match keys are rare.\n\nWe included sixteen studies in the literature review. Eight studies discuss the coverage of UIDs. They apply UIDs as match keys for very different purposes (e.g. evaluations of Mendeley, deduplication, interoperability between CRISs at UK universities and Researchfish), but all conclude that UIDs do not cover all records in the databases. Several of the studies do apply other match keys in addition to UIDs (Hammerton et al., 2012, p. 3–4; Haunschild & Bornmann, 2016, p. 63+68; HEFCE, 2015, p. 
1–5; Jiang et al., 2014; Kim & Kim, 2013, p. 72–73; Kraker et al., 2015; Nuredini & Peters, 2015, p. 3–5; Research Councils UK, 2015).\n\nEight studies, in addition to the coverage of UIDs, also address the precision or types of errors when UIDs are used as match keys. The types of errors are summarized in Table 1.\n\nIn the case study we analyzed research publications (co-)authored by the Department of Clinical Medicine (DoCM) at the University of Copenhagen, published in 2014, and registered and validated in Pure. The share of publications matched between Pure and SciVal, or the coverage, is 85.6% in August and 89.3% in December. There are precision issues for a minor part of the publication sets. Three of the error types from Table 1 are also present in our publication sets.\n\nDuplicate DOIs (Table 1): An automatic report from SciVal states that 1837 publications (August) and 1876 publications (December) are matched with our Pure publication sets. These numbers are inflated due to DOI duplicates (Table 2).\n\nFrom August to December the number of duplicates decreases partly due to Scopus’s automatic cleaning process, where an Article in Press is deleted after the published version is registered in Scopus. We have discussed our results with consultants from Elsevier’s SciVal team and this has led to a correction of some of the other duplicates. Elsevier’s new routine for adding EIDs to publication records in Pure may also have had an effect.\n\nIncorrect DOIs in reference lists and databases & DOIs not registered by the database where a bibliometric analysis is performed (Table 1): In the August and December publication sets, respectively 5 and 2 of the DOI duplicates are examples of publications assigned a wrong DOI in Scopus (Table 2). For a 10% sample of the remaining matched publications in the August and December publication sets we verified the DOIs. 
The publications were sorted by DOI and every tenth publication was searched in Scopus, PubMed, and CrossRef where title, authors, journal, and start page were compared. One error was identified: the DOI is not registered in Scopus. Furthermore, we checked the 77 publications not matched in SciVal in August but matched in December. Of these, 36 publications have a DOI in our Pure publication set. No errors were found in Scopus. But as the publications were unmatched in August, a DOI or other UID must have been missing or been incorrect in Scopus in August.\n\nWe now turn to the characteristics of the publications in our publication sets. In Table 3–Table 7, the general characteristics of publications matched versus not matched in the integration between Pure and SciVal are presented. We have a particular interest in the publications’ UIDs as these are essential for a possible match. Publication type and language can give us an indication of whether all potential matches are made. We expected journal publications in English to be matched because they are well-covered in Scopus. Table 3 gives an overview of how many publications were matched and unmatched. For the unmatched publications we also show how many have a UID.\n\nDownload from August and December 2015.\n\nIn Table 4a & Table 4b we focus on the types of UIDs for the matched and the unmatched publications.\n\nDownload from August and December 2015.\n\nDOI is the most common UID (Table 4a) but nearly as many publications have a PMID. This was expected as the majority of the publications were imported from PubMed to Pure in our specific publication set. In the August publication set, very few publications had an EID most likely because Scopus is not commonly used for import to Pure by DoCM. In the December set we could not analyze the EIDs as automatically attributed EIDs are not shown in our Pure reports of raw data. According to this report, 10 publications had an EID. 
But at least 32 additional publications in our Pure publication set from December had an EID, as no other UID is assigned to them in our Pure raw data and they were matched in SciVal.\n\nDownload from August and December 2015.\n\nThe unmatched publications with a UID are shown in Table 4b. PMID is the most common UID, up to 90%. Close to 40% of the publications have a DOI. For the December publication set, we assume that the unmatched publications have no EID, otherwise they should have been matched.\n\nIn the following three tables we analyzed publication type. Notable, but not surprising, is that 97% of matched publications are journal contributions (Table 5a), as these are usually well-represented in SciVal/Scopus. What is surprising, however, is that practically the same percentage of unmatched publications with a UID are journal contributions (Table 5b). For the publications without a UID (Table 5c) there are still many journal publications, approximately 50%, but a much lower share than for the publications with a UID. The distributions among publication types do not differ substantially between the August and December publication sets. All publication sets include very few non-journal publications.\n\nDownload from August and December 2015.\n\nDownload from August and December 2015.\n\nDownload from August and December 2015.\n\nWe also analyzed the language of the publications. Concerning the matched publications, 99% are written in English. Interestingly, the absolute number of matched publications in other languages increased from 4 to 27 between August and December (Table 6a). Elsevier’s automatic assignment of EIDs may improve the match for these publications in our specific setting. However, our publication set is far too small to draw any conclusions from. 
For the unmatched publications with and without UID in the August and December publication sets, the ratios between English and other languages are close to fifty-fifty (Table 6b and Table 6c).\n\nDownload from August and December 2015.\n\nDownload from August and December 2015.\n\nDownload from August and December 2015.\n\nOur analysis reveals more journal publications in English not matched in SciVal than we expected. Therefore we extracted lists of the top journals according to number of publications from our publication sets. For the unmatched publications a large share is published in the two journals of the Danish Medical Association (Ugeskrift for Læger and Danish Medical Journal). Both are indexed by Scopus. Interestingly, we see Ugeskrift for Læger represented among the matched publications, the unmatched publications with UID, and the unmatched publications without UID. Also PLOS ONE publications are among both the matched and the unmatched publications, but not among publications without a UID. Three of the unmatched PLOS ONE publications from the August publication set are matched in December. The remaining three PLOS ONE publications were still not registered in Scopus in December. Table 7b and Table 7c include more journals which are indexed by Scopus but whose publications are not matched. For example, Clinical Nutrition (cf. Table 7c) with 299 publications from 2014 indexed in Scopus, and Clinical and Translational Allergy (cf. Table 7b) with only 4 publications from 2014 indexed in Scopus. This may indicate some shortcomings in the Scopus indexing procedures.\n\nDownload from August and December 2015.\n\nDownload from August and December 2015.\n\nDownload from August and December 2015.\n\nIn summary, our first objective, to explore the coverage, precision, and characteristics of publications matched versus not matched with UIDs as match keys, was answered by the literature review and case study. 
The literature review shows that only a few studies report findings on UIDs as match keys in citation linking. Results on coverage are reported, while errors in the matching procedure are less frequently addressed (Table 1). The findings from the case study show that the majority of the publications were matched (85.6% in August and 89.3% in December). Almost all the matched publications have a DOI and are journal publications in English. Among the matched publications, 67 (3.8%) in the publication set from August have a duplicate DOI, whereas 32 (1.7%) from December do. Other error types (Table 1) were observed which lowered the precision of the match between Pure and SciVal. Still, duplicate DOIs are the most prevalent problem. However, both coverage and precision have improved from August to December. This can be explained to some extent by Scopus’s automatic merging of the Article in Press and the published version. Elsevier’s procedure of adding EIDs to publications in Pure may correct other duplicates and improve the coverage. Finally, duplicates may have been corrected manually by Elsevier in Scopus.\n\nThe unmatched publications also include journal publications. Close to half of these are in Danish and published in the journal Ugeskrift for Læger, whose indexing in Scopus is highly irregular. Our analysis indicates that journals with publications in English also suffer from similar irregular indexing, but to a much lesser extent.\n\nWith regard to the second objective, our case study can be used to show that publication sets formed by using UIDs as match keys may affect the bibliometric indicators: Number of publications, number of citations, and the average number of citations per publication. This is, to our knowledge, only discussed briefly in two studies. They both conclude that duplicate DOIs can lead to errors in bibliometric analysis (Franceschini et al., 2015, p. 2186; Valderrama-Zurián et al., 2015, p. 
575).\n\nThe coverage can affect bibliometric indicators. Results from our case study indicated that the majority of the publications from Pure are matched correctly in SciVal. Yet, the difference between the August and the December publication sets and the analysis of top journals (Table 7a–Table 7c) show that coverage can be improved. This means that the number of publications and citations could be higher in a bibliometric analysis based on our publication set. Ugeskrift for Læger has over 100 publications that are not covered in SciVal/Scopus. The journal is not highly cited (Scopus 2014 IPP = 0.127, SNIP = 0.109) so inclusion of the missing publications would probably increase the number of citations a little, but lower the average number of citations per publication. However, inclusion of the missing publications for other journals could potentially have the opposite effect and increase the average number of citations per publication. An example is PLOS ONE (Scopus 2014 IPP = 3.270, SNIP = 1.034).\n\nThe precision of a bibliometric indicator is distorted by the fact that some DOIs are matched multiple times in SciVal. In most cases it is due to a duplicate of the same publication, but we also observed instances of publications in our Pure publication set with a DOI duplicate in SciVal not present in the Pure set (Table 2). The duplicates have several implications for the bibliometric indicators number of publications, number of citations, and the average number of citations per publication.\n\nThe number of publications becomes inflated by inclusion of duplicates. In our publication sets from August and December the publication count increased by 3.8% and 1.7%, respectively. 
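A small invented example illustrates how the three duplicate-handling policies pull the indicators in different directions (the citation counts below are made up for illustration; they are not from our data sets):

```python
# Toy illustration (invented numbers) of how a duplicate record distorts
# the number of publications, number of citations, and their ratio.

# Citation counts per database record; "a1" and "a2" are duplicate records
# of the same publication, with its citations split between them.
records = {"a1": 4, "a2": 3, "b": 10, "c": 0}
duplicates = {"a2": "a1"}  # a2 is a duplicate of a1

# Policy 1 - keep duplicates: publication count inflated (4 instead of 3),
# so the average citations per publication is too low.
naive_avg = sum(records.values()) / len(records)      # 17 / 4 = 4.25

# Policy 2 - delete duplicate records: the duplicate's citations are lost
# (14 instead of 17), so the average is again too low.
kept = {k: v for k, v in records.items() if k not in duplicates}
deleted_avg = sum(kept.values()) / len(kept)          # 14 / 3

# Policy 3 - merge: count each publication once but sum citations across
# its duplicate records.
merged = dict(kept)
for dup, main in duplicates.items():
    merged[main] += records[dup]
correct_avg = sum(merged.values()) / len(merged)      # 17 / 3
```

Both the keep-duplicates average (4.25) and the delete-duplicates average (about 4.67) fall below the merged figure (about 5.67), which is the point made in the text: either simple policy understates the average number of citations per publication.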
Therefore we recommend that when the number of publications is calculated, duplicates should be removed, whether the duplicate publication is in the original Pure publication set or not.\n\nBefore citations are counted, all duplicates not present in the Pure publication set must be deleted. For the remaining duplicate pairs we found that sometimes both duplicates were cited independently. In all instances except one there was no overlap between the citations. Citations divided between duplicates in Scopus are also reported in another study, where variations of a journal name result in duplicates in Scopus. It is suggested that databases like Scopus can improve verification of DOIs to solve the duplicate problem (Valderrama-Zurián et al., 2015).\n\nThe calculation of the average number of citations per publication should not include duplicates in counting publications but should include duplicates in counting citations. If duplicates are kept, the average number of citations per publication will be too low. If the duplicates are removed, some of the citations may also be discarded and again the average number of citations per publication will be too low.\n\n\nConclusion\n\nUIDs are seen as an effective tool to create links between identical publications in databases or to identify duplicates in a database. The use of UIDs as match keys in citation linking is well-implemented in many settings, but only a few studies evaluate how UIDs work as match keys. As the DOI is implemented in more and more settings, it also becomes increasingly interesting as a match key. Based on the publication years of the studies in our literature review, we suggest that this trend took off around 2010.\n\nOur case study confirms the findings of the literature review. UIDs as match keys do not return 100% coverage of a publication set, and errors occur for a small part of the matches. 
It is not possible to draw conclusions on when the coverage and precision is satisfactory as this should be discussed in relation to the purpose of a citation linking exercise, exemplified here as a bibliometric analysis.\n\nWe identified duplicate DOIs as a particular problem in citation linking. This type of error is easy to detect while other types of errors demand a more thorough analysis of the publication sets. This analysis could be done by using a traditional match key based on title, author name, etc. Other error types also present in our case study are: incorrect UID in reference lists and databases, and UIDs not registered by the database where a bibliometric analysis is performed.\n\nCitation linking is used for many purposes, but our focus is bibliometric indicators. Here the duplicate DOIs constitute a problem as both keeping them in the publication set to improve the reliability of citation counts and deleting them to improve the reliability of publication counts will distort the calculation of average number of citations per publication and the many other bibliometric indicators which combine publication and citation counts. Also the coverage of a publication set can affect bibliometric indicators. We have shown that failing to fully cover a low impact journal may also lead to imprecise bibliometric indicators.\n\n\nFuture implications\n\nOur purpose has been to contribute to the discussion on how well UIDs work for citation linking with a focus on preparing publication sets for bibliometric analysis. Compared to traditional citation linking where bibliographic information is used as the match key, UIDs are efficient, but they also have drawbacks.\n\nThe coverage of UIDs is fully dependent on whether a UID is assigned to a publication, and its representations in the publication lists and databases used for a particular citation linking exercise. 
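For contrast, a traditional match key is typically built by normalizing basic bibliographic fields. The sketch below is illustrative only; the normalization rules are our own, not a documented Pure, SciVal, or Scopus method:

```python
# Illustrative "traditional" match key built from basic bibliographic data
# (normalization rules are invented for this sketch, not a documented method).
import re

def traditional_key(title, first_author_lastname, year):
    """Build a normalized match key from title, first author, and year."""
    norm_title = re.sub(r"[^a-z0-9]", "", title.lower())
    norm_author = re.sub(r"[^a-z]", "", first_author_lastname.lower())
    return f"{norm_author}:{year}:{norm_title[:50]}"

# Two representations of the same publication with punctuation and casing
# differences still produce the same key:
k1 = traditional_key("Errors in DOI indexing, by bibliometric databases",
                     "Franceschini", 2015)
k2 = traditional_key("Errors in DOI Indexing by Bibliometric Databases",
                     "FRANCESCHINI", 2015)
assert k1 == k2
```

Because such a key combines several fields, a disagreement in any one of them flags a suspect match, which is the built-in crosscheck that a single UID lacks.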
Here the traditional match key has an advantage, as it is often dependent on basic bibliographic data and can be modified to fit different formats. The traditional match key will probably have a good chance of retrieving all publications with a UID if the representations of the publications have basic bibliographic data of a fair quality. In addition, the traditional match key can retrieve publications without UIDs.\n\nThe precision of UIDs depends on how carefully a UID is assigned to a publication and its representations in the publication lists and databases. Using a single UID as a match key can be fragile as no crosschecks are made on other data fields. Detection of errors requires an examination of the result of the citation linking. The traditional match key often relies on more data fields and thus has a built-in crosscheck. Neither of the match keys will solve the problem of duplicates of identical publications.\n\nWe recommend that more studies be done on the pros and cons of UIDs, because UIDs are increasingly being introduced for more entities and adopted as efficient match keys. The availability of UIDs may become critical for the inclusion of a publication or a database in a bibliometric analysis.\n\n\nData availability\n\nF1000Research: Dataset 1. Data of evaluation of unique identifiers used for citation linking, 10.5256/f1000research.8913.d126923 (Gauffriau et al., 2016).",
"appendix": "Author contributions\n\n\n\nThe authors have contributed to the work according to the four criteria for authorship in Recommendations for the Conduct, Reporting, Editing and Publication of Scholarly Work in Medical Journals. MG designed the work and performed the literature review. HHM and DM performed the experiments. All authors analyzed the data, drew conclusions, wrote and edited the paper. All authors agree on the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nThe authors wish to thank Guillaume Warnan and Floortje Flippo from Elsevier’s SciVal team for information on procedures in Pure, Scopus, and SciVal as well as manual matching of publications between Pure and SciVal. We are also grateful for the chance to present and discuss a preliminary version of our case study at the 20th Nordic Workshop on Bibliometrics and Research Policy 2015 in Oslo. And last but not least, thank you to Lorna Wildgaard from the Royal School of Library and Information Science, Denmark, for commenting on an earlier version of this paper.\n\n\nReferences\n\nAltmetric.com. (n.d.): Embeddable badges. Retrieved August 11, 2015. Reference Source\n\narXiv.org: Understanding the arXiv identifier. 2015. Reference Source\n\nCostas R, Zahedi Z, Wouters P: Do \"altmetrics\" correlate with citations? Extensive comparison of altmetric indicators with citations from a multidisciplinary perspective. J Assoc Inf Sci Technol. 2015; 66(10): 2003–2019. Publisher Full Text\n\nCrossRef: cited-by linking. Retrieved August 9, 2015. 2015a. Reference Source\n\nCrossRef: Publishers and societies. Retrieved December 6, 2015, 2015b. Reference Source\n\nDataCite. (n.d.): About DataCite. Retrieved February 13, 2016. Reference Source\n\nElsevier: Scopus comes of age. Retrieved December 6, 2015, 2004. 
Reference Source\n\nElsevier: Elsevier Enhances Pure, Providing New Research Analysis Functionalities through Direct Integration with SciVal. Retrieved August 8, 2015. 2014. Reference Source\n\nFranceschini F, Maisano D, Mastrogiacomo L: A novel approach for estimating the omitted-citation rate of bibliometric databases with an application to the field of bibliometrics. J Am Soc Inf Sci Technol. 2013; 64(10): 2149–2156. Publisher Full Text\n\nFranceschini F, Maisano D, Mastrogiacomo L: Scientific journal publishers and omitted citations in bibliometric databases: Any relationship? J Informetr. 2014; 8(3): 751–765. Publisher Full Text\n\nFranceschini F, Maisano D, Mastrogiacomo L: Errors in DOI indexing by bibliometric databases. Scientometrics. 2015; 102(3): 2181–2186. Publisher Full Text\n\nGauffriau M, Madsen HH, Madsen D: Dataset 1 in: Evaluation of unique identifiers used for citation linking. F1000Research. 2016. Data Source\n\nHammerton JA, Granitzer M, Harvey D, et al.: On generating large-scale ground truth datasets for the deduplication of bibliographic records. In Proceedings of the 2nd International Conference on Web Intelligence, Mining and Semantics - WIMS ’12. New York, USA: ACM Press, 2012; 18. Publisher Full Text\n\nHaunschild R, Bornmann L: Normalization of Mendeley reader counts for impact assessment. J Informetr. 2016; 10(1): 62–73. Publisher Full Text\n\nHaustein S, Bowman TD, Macaluso B, et al.: Measuring Twitter activity of arXiv e-prints and published papers. In altmetrics14: expanding impacts and metrics. 2014. Publisher Full Text\n\nHaustein S, Siebenlist T: Applying social bookmarking data to evaluate journal usage. J Informetr. 2011; 5(3): 446–457. Publisher Full Text\n\nHEFCE: The Metric Tide: Correlation analysis of REF2014 scores and metrics. 2015. Publisher Full Text\n\nInternational DOI Foundation: DOI® Handbook. Retrieved August 11, 2015, 2015. 
Reference Source\n\nJiang Y, Lin C, Meng W, et al.: Rule-based deduplication of article records from bibliographic databases. Database (Oxford). 2014; 2014(0): bat086. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKim KY, Kim HM: A Study on Developing and Refining a Large Citation Service System. International Journal of Knowledge Content Development & Technology. 2013; 3(1): 65–80. Publisher Full Text\n\nKraker P, Enkhbayar A, Lex E: Exploring Coverage and Distribution of Identifiers on the Scholarly Web. 2015. Reference Source\n\nMoed HF, Debruin RE, Vanleeuwen TN: New Bibliometric Tools for the Assessment of National Research Performance - Database Description, Overview of Indicators and First Applications. Scientometrics. 1995; 33(3): 381–422. Publisher Full Text\n\nMongeon P, Paul-Hus A: The journal coverage of Web of Science and Scopus: a comparative analysis. Scientometrics. 2016; 106(1): 213–228. Publisher Full Text\n\nNuredini K, Peters I: Economic and Business Studies Journals and Readership Information from Mendeley. In F. Pehar, C. Schloegl, & C. Wolff (Eds.), Re:inventing Information Science in the Networked Society: Proceedings of the 14th International Symposium on Information Science. Zadar. 2015. Reference Source\n\nOlensky M: Data accuracy in bibliometric data sources and its impact on citation matching. Humboldt-Universität zu Berlin. 2014. Reference Source\n\nOlensky M, Schmidt M, van Eck NJ: Evaluation of the citation matching algorithms of CWTS and iFQ in comparison to the Web of Science. J Assoc Inf Sci Technol. 2015. Publisher Full Text\n\nOstell JM, Wheelan SJ, Kans JA: The NCBI Data Model. In A. D. Baxevanis & B. F. F. Ouellette (Eds.), Bioinformatics (2nd ed.). New York, USA: John Wiley & Sons, Inc. 1998; 19–43. Publisher Full Text\n\nResearch Councils UK: Interoperability Pilot Phase 1 Report Purpose of the Pilot. 2015. 
Reference Source\n\nSchneider J, Bloch CW, Aagaard K, et al.: Analyses of the scholarly and scientific output from grants funded by the Danish Council for Independent Research from 2005–2008. 2014. Reference Source\n\nValderrama-Zurián JC, Aguilar-Moya R, Melero-Fuentes D, et al.: A systematic analysis of duplicate records in Scopus. J Informetr. 2015; 9(3): 570–576. Publisher Full Text\n\nWilsdon J, Allen L, Belfiore E, et al.: The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management. 2015. Publisher Full Text\n\nZahedi Z, Costas R, Wouters P: How well developed are altmetrics? A cross-disciplinary analysis of the presence of “alternative metrics” in scientific publications. Scientometrics. 2014; 101(2): 1491–1513. Publisher Full Text\n\nZahedi Z, Haustein S, Bowman TD: Exploring data quality and retrieval strategies for Mendeley reader counts. In ASIS&T Workshop on Informetric and Scientometric Research: Metrics14. 2014. Reference Source"
}
|
[
{
"id": "14675",
"date": "28 Jul 2016",
"name": "Marion Schmidt",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe sample is a convenience sample based only on one year and one subject field (health sciences), with a predominantly journal- and English-based publication culture. The authors describe their first objective as \"Explore the coverage, precision, and characteristics of publications matched versus not matched with UIDs as the match key\", after introducing the more general goal to investigate \"how well UIDs work for citation linking\" (p.3). In order to truly answer the goals and objectives, the hypothesis should be discussed and tested that implementation of UIDs may vary across disciplines, depending on the amount of e.g. smaller, regional journals and easy data import options for a system like Pure from, e.g. PubMed. In this perspective a careful sampling strategy representative for all subjects that are, to a varying extent, covered in the target database SciVal/Scopus as well as more publication years, would have been much more fruitful in order to assess the coverage and thus usability of UIDs in both systems. 
Thus, data from a university-wide implementation of Pure would make up a reasonable case study.\nThe authors deal inconsistently with this issue as they try to rectify their sample on the one hand \"As this type of publication and the health sciences are well-covered in Scopus/SciVal [...], we expected the DoCM publication set to be well-fitted for our purpose, namely to explore the citation linking process, rather than how well SciVal covers publications from a department\" (p.4) - which is a non-adequate argument as the coverage could be considered separately. On the other hand, they concede that their \"case study\" may not lead to generalizable results and that results will therefore be compared to those from a literature review (p.4). This claim, however, is not really fulfilled, as the studies mentioned in the literature review use UIDs for different purposes, constellations and databases. More importantly, no real comparison takes place, as the authors only recap \"but all conclude that UIDs do not cover all records in the databases\" (p.6). 
In order to contextualize their own results, concrete settings and results of other studies should be represented and discussed.\nWith regard to the second purpose \"Illustrate how publication sets formed by using UIDs as the match key may affect the bibliometric indicators: Number of publications, number of citations and the average number of citations per publication\", the authors do not actually calculate citation indicators including and excluding publications (which are covered, but could not be matched via UIDs), but discuss the problem mostly theoretically.\nI would strongly suggest expanding the initial sample to other fields.\nThus, title and abstract information do - in my opinion - not reflect the actual limits of the study adequately.\nMethods and data (with exceptions mentioned separately below) are sufficiently clearly documented, but the whole study lacks generalizability and, partly, elaborateness of analyses, as in case of the citation indicator perspective. Besides, given the rather manageable amounts of unmatched publications in this study, a comprehensive and more elaborate search and analysis of the causes of the missed match (not covered, missing UID in Scopus, ..) would be preferable. In larger corpora, this could be done via a random sample.\nIn the introduction and literature review, it should be made clearer that the authors use a specific definition of the term citation linking (linking items between databases) for their study and it should be clarified if other studies refer to the same or other scenarios (like reference matching or deduplication).\n\nThe authors write \"In the integration between Pure and SciVal DOI, PMID and EID are used as match keys for citation linking. From March 2016 the integration between Pure and SciVal is based on EID alone. 
This will not affect the present study as we analyze publication sets downloaded in August and December 2015\" (p.3).\nThe authors should discuss what this does mean with respect of the relevance of their results. Could it mean that the matching of current publications will be probably better?\nRegarding the publication type categorization: Has the categorization been informed by some available classification or, e.g. database scheme? In my opinion, some mappings are sub-optimal and particular; especially the assignment of Doctoral Thesis to \"Other\" instead of \"Book\", Encyclopedia chapter to \"Other\" instead of \"Contribution to Book\", Editorial to \"Other\" instead of \"Journal\", Article in Proceedings to \"Journal\" instead of \"Contribution to Book\" (please compare the WoS document type and publication type classification for the third and fourth case). Including so many and different publication types in a residual category is unprofitable for the later analysis.\nI did not understand the fact that EIDs seem to have been attributed automatically without being visible in the raw data. How?",
"responses": [
{
"c_id": "2159",
"date": "06 Sep 2016",
"name": "Marianne Gauffriau",
"role": "Author Response",
"response": "Response to Marion Schmidt and Keith G. Jeffery Thank you for the valuable reviews. Below we quote the issues raised in the reviews followed by our response and description of how we have edited the paper. Please note that we have sent the same reply to both of you so you have the same information on how we have edited the paper. Marion Schmidt: “In the introduction and literature review, it should be made clearer that the authors use a specific definition of the term citation linking (linking items between databases) for their study and it should be clarified if other studies refer to the same or other scenarios (like reference matching or deduplication).” Keith G. Jeffery: “The paper states (P4) \"The matching of identical publications in different databases is called citation linking or reference linking. Matching within the same database is called deduplication. The term citation matching is also used, but often for the more specific purposes where citing and cited publications are matched\". It is important that the paper clarifies the thinking here also. Citation has nothing to do with matching or de-duplication, although 'clean' data is required for effective and efficient citation. In fact the paper does not really discuss citation itself but only the utilisation of UIDs to achieve citation - and more particularly the difficulties of obtaining 'clean' data. The paper does not reference \"Citation Linking: Improving Access to Online Journals\" S. Hitchcock*, L. Carr, S. Harris, J. M. N. Hey and W. Hall Proceedings of the 2nd ACM International Conference on Digital Libraries, edited by Robert B. Allen and Edie Rasmussen, 1997 (New York, USA: Association for Computing Machinery), pp. 
115-122 available at http://journals.ecs.soton.ac.uk/acmdl97.htm which states clearly (and as early as 1997): \"The likely effect of citation linking can be gauged by recognising a direct parallel with citation indexing, which is sometimes referred to as 'forward' referencing (i.e. for a given article, all the subsequent papers that cite it), developed by Garfield (1955). Garfield's description of citation indexing as an 'association of ideas' bears remarkable similarity to Bush's 'association of thoughts' which anticipated modern hypertext. Citation linking combines the two approaches, mapping both reference data and citation index data on to the text in the form of links. Adding electronic links to the literature is introducing a new culture in many respects, but citation linking is likely to be acceptable to the academic community because it builds on practices established in other forms such as print. It also allows the community to exploit its own intellectual input in this process, recognised in Garfield's original rationale that \"by using authors' references in compiling the citation index, we are in reality utilizing an army of indexers\".\" Here citation linking is used - and with precedence - to mean something quite different to the use in this paper. […] I would suggest: […] 2. a paragraph explaining citation, citation linking, citation indexing;” The term citation linking has caused confusion as citation can be understood as the reference from one paper (citing document) to another paper (cited document). Also in the literature we see an inconsistent use of the term. By citation linking we have meant the linking of representations of identical publications across databases. We have had difficulties finding an unambiguous term for this and ended with citation linking. We now have minimized the use of the term citation linking. Instead we use match key and matching process. 
Marion Schmidt: “The sample is a convenience sample based only on one year and one subject field (health sciences), with a predominantly journal- and English-based publication culture. The authors describe their first objective as \"Explore the coverage, precision, and characteristics of publications matched versus not matched with UIDs as the match key\", after introducing the more general goal to investigate \"how well UIDs work for citation linking\" (p.3). In order to truly answer the goals and objectives, the hypothesis should be discussed and tested that implementation of UIDs may vary across disciplines, depending on the amount of e.g. smaller, regional journals and easy data import options for a system like Pure from, e.g. PubMed. In this perspective a careful sampling strategy representative for all subjects that are, to a varying extent, covered in the target database SciVal/Scopus as well as more publication years, would have been much more fruitful in order to assess the coverage and thus usability of UIDs in both systems. Thus, data from a university-wide implementation of Pure would make up a reasonable case study. The authors deal inconsistently with this issue as they try to rectify their sample on the one hand \"As this type of publication and the health sciences are well-covered in Scopus/SciVal [...], we expected the DoCM publication set to be well-fitted for our purpose, namely to explore the citation linking process, rather than how well SciVal covers publications from a department\" (p.4) - which is a non-adequate argument as the coverage could be considered separately. On the other hand, they concede that their \"case study\" may not lead to generalizable results and that results will therefore be compared to those from a literature review (p.4). […] I would strongly suggest expanding the initial sample to other fields.” Keith G. 
Jeffery: “The case study is limited in organisational unit, research discipline, time period; however the results are useful and could - with advantage - be compared with similar studies in other disciplines /organisational units and time periods.” The case we work with was deliberately chosen as the best scenario and not as a sample fit for generalization. We wanted to study how well UIDs work as match key in the integration between Pure and SciVal, so it seemed beneficial to work with a sample with as many UIDs registered as possible. Because of the attributes of publications from the Department of Clinical Medicine (journal publications in English), they seemed likely to have a high number of UIDs registered. Furthermore, the integration between Pure and SciVal is relatively new and we only had limited knowledge about how it worked and found no evaluations of the integration in the literature. We know that publication sets from health sciences are well-covered by Scopus. With the potential good coverage between Pure and SciVal we hoped that our publication set would allow us primarily to focus on the matching itself. The coverage of SciVal/Scopus is reported in the paper, but this is supporting information. In the selection of the case we also took local conditions into account. Working with Pure data validated by the University Library, that is publications from 2014, we felt confident that the sample would have fewer errors regarding registration/correct registration of UIDs than samples from previous years where no central validation routines were in place. It is not possible for us to repeat our analysis with a case covering more research areas and more publication years, as the matching process for the integration between Pure and SciVal has changed. Now only Scopus ID is used as match key. Before, DOI and PubMed ID also worked as match keys. 
However, we are doing a follow-up study to see how the new matching process works for our original publication set and a new larger publication set. Marion Schmidt: “Thus, title and abstract information do - in my opinion - not reflect the actual limits of the study adequately.” The objective was formulated somewhat broadly and could give the impression that the study included a thorough analysis of the coverage of SciVal/Scopus and the study yielded results which could be generalized to other research fields. As explained above, this is not the case. We have specified the objective to clearly show that we evaluate how well UIDs work as match key. Our focus is the matching process and any errors in this process. Furthermore, we do only work with one case – the integration between Pure and SciVal for a publication set from health sciences. The case cannot be generalized to other research fields or settings. As a consequence of the reformulated objective, we have changed the title and abstract accordingly. Keith G. Jeffery: “The paper usefully lists the common UIDs used for research publications, and in so doing illustrates the problem of disjoint UID sets (intersected by other attributes of the publication) and lack of universal uniqueness but without commenting upon it. An important facility in PURE (based on CERIF) is the ability to utilise 'federated IDs' so that there can be several 'unique identifiers' for the same object thus allowing crosswalking within the metadata for a given publication. The paper mentions UIDs for publications and persons but not for organisations; the use of such UIDs is becoming more common and is useful for comparative analysis of performance by research organisational units. The paper states (P3) \"Opposite to the other UIDs, the DOI is assigned to the publication itself and not merely to the publication’s representation in a database\". There is a lack of clarity of thinking here. 
The DOI is a digital identifier and thus is not assigned to the publication itself (the scholarly content) but to a digital representation (textual, diagrammatic, tabular...) of it. The representation in the database is of two parts: the metadata (traditionally the library catalog entry) and the publication content (possibly with added datasets, software or artifacts). Throughout the paper it should be made clear that the work is based on the metadata - although the citation is within the textual content of the publication referring to a reference at the end of the publication from where a link can/should be made to the full text of that (cited) article. In fact ideally the link should have a time period of validity and semantics of the kind of citation (such as negative or positive, explanatory...). […] I would suggest: 1. a paragraph clarifying the inter-relationships of metadata and content for one publication with that for another cited from the first; […] 3. the existing paragraph on UIDs being expanded to explain the usefulness of 'federated IDs' for crosswalking within the metadata of a given publication.” The description of a publication and its UIDs is elaborated. We did not analyze the full content of a publication but only parts of its metadata representations in Pure and SciVal. A publication can have more UIDs assigned to it in one or both of the databases. We assumed that each of the UIDs for a publication in a database represented the same publication. See Introduction and the subsection Limitations in the paper, version 2. Marion Schmidt: “On the other hand, they concede that their \"case study\" may not lead to generalizable results and that results will therefore be compared to those from a literature review (p.4). This claim, however, is not really fulfilled, as the studies mentioned in the literature review use UIDs for different purposes, constellations and databases. 
More importantly, no real comparison takes place, as the authors only recap \"but all conclude that UIDs do not cover all records in the databases\" (p.6). In order to contextualize their own results, concrete settings and results of other studies should be represented and discussed. […] In the introduction and literature review, it should be made clearer that the authors use a specific definition of the term citation linking (linking items between databases) for their study and it should be clarified if other studies refer to the same or other scenarios (like reference matching or deduplication).” Keith G. Jeffery: “The literature review is appropriate.” The literature review shows the big picture and now also includes more details on the studies included. None of the studies in the review were directly comparable to our study as we analyzed how well UIDs work as match key in the matching process between Pure and SciVal. We now show that the studies in the review used UIDs for many different matching processes. Only two of the studies focused on the evaluation of UIDs. Still, based on all the studies we summarized information on error types when UIDs are used as match keys, which is relevant for our study. See the subsection Results of the literature review in the paper, version 2. Marion Schmidt: “With regard to the second purpose \"Illustrate how publication sets formed by using UIDs as the match key may affect the bibliometric indicators: Number of publications, number of citations and the average number of citations per publication\", the authors do not actually calculate citation indicators including and excluding publications (which are covered, but could not be matched via UIDs), but discuss the problem mostly theoretically.” The number of citations was only calculated for the publications matched in SciVal at the time we did the exports from Pure to SciVal (August and December 2015). We did not have the number of citations for the other publications. 
This information is added to the article. For the matched publications we have added the number of citations including and excluding duplicates. See the subsection Results of the case study in the paper, version 2. Marion Schmidt: “Methods and data (with exceptions mentioned separately below) are sufficiently clearly documented, but the whole study lacks generalizability and, partly, elaborateness of analyses, as in case of the citation indicator perspective. Besides, given the rather manageable amounts of unmatched publications in this study, a comprehensive and more elaborate search and analysis of the causes of the missed match (not covered, missing UID in Scopus, ..) would be preferable. In larger corpora, this could be done via a random sample.” Causes for missed matches for publications with a UID were not investigated when we did the exports from Pure to SciVal at publication level but only at journal level (Table 7a-7c). We have looked into the question now, but many of the publications not matched in August and December 2015 are now matched. We added to the paper that from our analysis of journals (Table 7a-7c) it is likely that the publications not matched were not indexed in Scopus. We identified three PLOS ONE publications which were not indexed in Scopus, and Clinical and Translational Allergy had only four publications from 2014 indexed in Scopus. But the missed matches can also be due to missing UIDs in Scopus. For 35 publications we established that they were published in journals not indexed in Scopus. This is also added to the paper. See the subsection Results of the case study in the paper, version 2. Marion Schmidt: “The authors write \"In the integration between Pure and SciVal DOI, PMID and EID are used as match keys for citation linking. From March 2016 the integration between Pure and SciVal is based on EID alone. This will not affect the present study as we analyze publication sets downloaded in August and December 2015\" (p.3). 
The authors should discuss what this does mean with respect of the relevance of their results. Could it mean that the matching of current publications will be probably better?” The new matching process between Pure and SciVal means that our study is a documentation of the first matching processes between Pure and SciVal and can serve as background for new evaluations. We have added this point to the paper. We do not know if the new functionality is better than the ones we have evaluated but we are looking into that in a follow-up study. See the Introduction in the paper, version 2 Marion Schmidt: “Regarding the publication type categorization: Has the categorization been informed by some available classification or, e.g. database scheme? In my opinion, some mappings are sub-optimal and particular; especially the assignment of Doctoral Thesis to \"Other\" instead of \"Book\", Encyclopedia chapter to \"Other\" instead of \"Contribution to Book\", Editorial to \"Other\" instead of \"Journal\", Article in Proceedings to \"Journal\" instead of \"Contribution to Book\" (please compare the WoS document type and publication type classification for the third and fourth case). Including so many and different publication types in a residual category is unprofitable for the later analysis.” The categorization of publication types has been reorganized according to the categories in Pure. See Table 5a-c. Marion Schmidt: “I did not understand the fact that EIDs seem to have been attributed automatically without being visible in the raw data. How?” The EID automatically attributed to publications in Pure was introduced with Pure version 4.23.0. EIDs are added to publications in Pure as an automatic job. In the beginning the job ran every day with a sub-set of the publications in order not to overload the matching service. 90 days after installation of the new version of Pure the job had covered all publications. 
Now when new publications are added to Pure, they are included in the next match, hence within 14 days. The automatically attributed EIDs are not shown in our raw data (Excel files downloaded from Pure). Therefore it is not possible to detect which publications are matched on the automatically attributed EIDs. More publications are matched in the November dataset, and we assume they are matched on the automatically attributed EIDs but cannot prove it. Once again, thank you for the reviews. We hope that our response and the edited paper have addressed all your comments."
}
]
},
{
"id": "15523",
"date": "08 Aug 2016",
"name": "Keith G. Jeffery",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThis paper aims to contribute to the discussion on use of UIDs for citation linking. It states two objectives:\nExplore the coverage, precision, and characteristics of publications matched versus not matched with UIDs as the match key.\n\nIllustrate how publication sets formed by using UIDs as the match key may affect the bibliometric indicators: Number of publications, number of citations and the average number of citations per publication.\nThe paper usefully lists the common UIDs used for research publications, and in so doing illustrates the problem of disjoint UID sets (intersected by other attributes of the publication) and lack of universal uniqueness but without commenting upon it. An important facility in PURE (based on CERIF) is the ability to utilise 'federated IDs' so that there can be several 'unique identifiers' for the same object thus allowing crosswalking within the metadata for a given publication. The paper mentions UIDs for publications and persons but not for organisations; the use of such UIDs is becoming more common and is useful for comparative analysis of performance by research organisational units.\nThe paper states (P3) \"Opposite to the other UIDs, the DOI is assigned to the publication itself and not merely to the publication’s representation in a database\". There is a lack of clarity of thinking here. 
The DOI is a digital identifier and thus is not assigned to the publication itself (the scholarly content) but to a digital representation (textual, diagrammatic, tabular...) of it. The representation in the database is of two parts: the metadata (traditionally the library catalog entry) and the publication content (possibly with added datasets, software or artifacts). Throughout the paper it should be made clear that the work is based on the metadata - although the citation is within the textual content of the publication referring to a reference at the end of the publication from where a link can/should be made to the full text of that (cited) article. In fact ideally the link should have a time period of validity and semantics of the kind of citation (such as negative or positive, explanatory...).\nThe paper states (P4) \"The matching of identical publications in different databases is called citation linking or reference linking. Matching within the same database is called deduplication. The term citation matching is also used, but often for the more specific purposes where citing and cited publications are matched\". It is important that the paper clarifies the thinking here also. Citation has nothing to do with matching or de-duplication, although 'clean' data is required for effective and efficient citation. In fact the paper does not really discuss citation itself but only the utilisation of UIDs to achieve citation - and more particularly the difficulties of obtaining 'clean' data.\nThe paper does not reference \"Citation Linking: Improving Access to Online Journals\" S. Hitchcock*, L. Carr, S. Harris, J. M. N. Hey and W. Hall Proceedings of the 2nd ACM International Conference on Digital Libraries, edited by Robert B. Allen and Edie Rasmussen, 1997 (New York, USA: Association for Computing Machinery), pp. 
115-122 available at http://journals.ecs.soton.ac.uk/acmdl97.htm which states clearly (and as early as 1997):\n\"The likely effect of citation linking can be gauged by recognising a direct parallel with citation indexing, which is sometimes referred to as 'forward' referencing (i.e. for a given article, all the subsequent papers that cite it), developed by Garfield (1955). Garfield's description of citation indexing as an 'association of ideas' bears remarkable similarity to Bush's 'association of thoughts' which anticipated modern hypertext. Citation linking combines the two approaches, mapping both reference data and citation index data on to the text in the form of links. Adding electronic links to the literature is introducing a new culture in many respects, but citation linking is likely to be acceptable to the academic community because it builds on practices established in other forms such as print. It also allows the community to exploit its own intellectual input in this process, recognised in Garfield's original rationale that \"by using authors' references in compiling the citation index, we are in reality utilizing an army of indexers\".\"\n\nHere citation linking is used - and with precedence - to mean something quite different to the use in this paper.\n\nThe literature review is appropriate. The case study is limited in organisational unit, research discipline, time period; however the results are useful and could - with advantage - be compared with similar studies in other disciplines /organisational units and time periods.\nOverall the paper is a useful contribution and hopefully will stimulate similar studies in other disciplines and time ranges. but the lack of clarity of thinking should be addressed. 
I would suggest:\na paragraph clarifying the inter-relationships of metadata and content for one publication with that for another cited from the first;\n\na paragraph explaining citation, citation linking, citation indexing;\n\nthe existing paragraph on UIDs being expanded to explain the usefulness of 'federated IDs' for crosswalking within the metadata of a given publication.",
"responses": [
{
"c_id": "2160",
"date": "06 Sep 2016",
"name": "Marianne Gauffriau",
"role": "Author Response",
"response": "Response to Marion Schmidt and Keith G. Jeffery Thank you for the valuable reviews. Below we quote the issues raised in the reviews followed by our response and description of how we have edited the paper. Please note that we have sent the same reply to both of you so you have the same information on how we have edited the paper. Marion Schmidt: “In the introduction and literature review, it should be made clearer that the authors use a specific definition of the term citation linking (linking items between databases) for their study and it should be clarified if other studies refer to the same or other scenarios (like reference matching or deduplication).” Keith G. Jeffery: “The paper states (P4) \"The matching of identical publications in different databases is called citation linking or reference linking. Matching within the same database is called deduplication. The term citation matching is also used, but often for the more specific purposes where citing and cited publications are matched\". It is important that the paper clarifies the thinking here also. Citation has nothing to do with matching or de-duplication, although 'clean' data is required for effective and efficient citation. In fact the paper does not really discuss citation itself but only the utilisation of UIDs to achieve citation - and more particularly the difficulties of obtaining 'clean' data. The paper does not reference \"Citation Linking: Improving Access to Online Journals\" S. Hitchcock*, L. Carr, S. Harris, J. M. N. Hey and W. Hall Proceedings of the 2nd ACM International Conference on Digital Libraries, edited by Robert B. Allen and Edie Rasmussen, 1997 (New York, USA: Association for Computing Machinery), pp. 
115-122 available at http://journals.ecs.soton.ac.uk/acmdl97.htm which states clearly (and as early as 1997): \"The likely effect of citation linking can be gauged by recognising a direct parallel with citation indexing, which is sometimes referred to as 'forward' referencing (i.e. for a given article, all the subsequent papers that cite it), developed by Garfield (1955). Garfield's description of citation indexing as an 'association of ideas' bears remarkable similarity to Bush's 'association of thoughts' which anticipated modern hypertext. Citation linking combines the two approaches, mapping both reference data and citation index data on to the text in the form of links. Adding electronic links to the literature is introducing a new culture in many respects, but citation linking is likely to be acceptable to the academic community because it builds on practices established in other forms such as print. It also allows the community to exploit its own intellectual input in this process, recognised in Garfield's original rationale that \"by using authors' references in compiling the citation index, we are in reality utilizing an army of indexers\".\" Here citation linking is used - and with precedence - to mean something quite different to the use in this paper. […] I would suggest: […] 2. a paragraph explaining citation, citation linking, citation indexing;” The term citation linking has caused confusion as citation can be understood as the reference from one paper (citing document) to another paper (cited document). Also in the literature we see an inconsistent use of the term. By citation linking we have meant the linking of representations of identical publications across databases. We have had difficulties finding an unambiguous term for this and ended with citation linking. We now have minimized the use of the term citation linking. Instead we use match key and matching process. 
Marion Schmidt: “The sample is a convenience sample based only on one year and one subject field (health sciences), with a predominantly journal- and English-based publication culture. The authors describe their first objective as \"Explore the coverage, precision, and characteristics of publications matched versus not matched with UIDs as the match key\", after introducing the more general goal to investigate \"how well UIDs work for citation linking\" (p.3). In order to truly answer the goals and objectives, the hypothesis should be discussed and tested that implementation of UIDs may vary across disciplines, depending on the amount of e.g. smaller, regional journals and easy data import options for a system like Pure from, e.g. PubMed. In this perspective a careful sampling strategy representative for all subjects that are, to a varying extent, covered in the target database SciVal/Scopus as well as more publication years, would have been much more fruitful in order to assess the coverage and thus usability of UIDs in both systems. Thus, data from a university-wide implementation of Pure would make up a reasonable case study. The authors deal inconsistently with this issue as they try to rectify their sample on the one hand \"As this type of publication and the health sciences are well-covered in Scopus/SciVal [...], we expected the DoCM publication set to be well-fitted for our purpose, namely to explore the citation linking process, rather than how well SciVal covers publications from a department\" (p.4) - which is a non-adequate argument as the coverage could be considered separately. On the other hand, they concede that their \"case study\" may not lead to generalizable results and that results will therefore be compared to those from a literature review (p.4). […] I would strongly suggest expanding the initial sample to other fields.” Keith G. 
Jeffery: “The case study is limited in organisational unit, research discipline, time period; however the results are useful and could - with advantage - be compared with similar studies in other disciplines /organisational units and time periods.” The case we work with was deliberately chosen as the best scenario and not as a sample fit for generalization. We wanted to study how well UIDs work as match key in the integration between Pure and SciVal, so it seemed beneficial to work with a sample with as many UIDs registered as possible. Because of the attributes of publications from the Department of Clinical Medicine (journal publications in English), they seemed likely to have a high number of UIDs registered. Furthermore, the integration between Pure and SciVal is relatively new and we only had limited knowledge about how it worked and found no evaluations of the integration in the literature. We know that publication sets from health sciences are well-covered by Scopus. With the potential good coverage between Pure and SciVal we hoped that our publication set would allow us primarily to focus on the matching itself. The coverage of SciVal/Scopus is reported in the paper, but this is supporting information. In the selection of the case we also took local conditions into account. Working with Pure data validated by the University Library, that is publications from 2014, we felt confident that the sample would have fewer errors regarding registration/correct registration of UIDs than samples from previous years where no central validation routines were in place. It is not possible for us to repeat our analysis with a case covering more research areas and more publication years, as the matching process for the integration between Pure and SciVal has changed. Now only Scopus ID is used as match key. Before, DOI and PubMed ID also worked as match keys. 
However, we are doing a follow-up study to see how the new matching process works for our original publication set and a new larger publication set. Marion Schmidt: “Thus, title and abstract information do - in my opinion - not reflect the actual limits of the study adequately.” The objective was formulated somewhat broadly and could give the impression that the study included a thorough analysis of the coverage of SciVal/Scopus and the study yielded results which could be generalized to other research fields. As explained above, this is not the case. We have specified the objective to clearly show that we evaluate how well UIDs work as match key. Our focus is the matching process and any errors in this process. Furthermore, we do only work with one case – the integration between Pure and SciVal for a publication set from health sciences. The case cannot be generalized to other research fields or settings. As a consequence of the reformulated objective, we have changed the title and abstract accordingly. Keith G. Jeffery: “The paper usefully lists the common UIDs used for research publications, and in so doing illustrates the problem of disjoint UID sets (intersected by other attributes of the publication) and lack of universal uniqueness but without commenting upon it. An important facility in PURE (based on CERIF) is the ability to utilise 'federated IDs' so that there can be several 'unique identifiers' for the same object thus allowing crosswalking within the metadata for a given publication. The paper mentions UIDs for publications and persons but not for organisations; the use of such UIDs is becoming more common and is useful for comparative analysis of performance by research organisational units. The paper states (P3) \"Opposite to the other UIDs, the DOI is assigned to the publication itself and not merely to the publication’s representation in a database\". There is a lack of clarity of thinking here. 
The DOI is a digital identifier and thus is not assigned to the publication itself (the scholarly content) but to a digital representation (textual, diagrammatic, tabular...) of it. The representation in the database is of two parts: the metadata (traditionally the library catalog entry) and the publication content (possibly with added datasets, software or artifacts). Throughout the paper it should be made clear that the work is based on the metadata - although the citation is within the textual content of the publication referring to a reference at the end of the publication from where a link can/should be made to the full text of that (cited) article. In fact ideally the link should have a time period of validity and semantics of the kind of citation (such as negative or positive, explanatory...). […] I would suggest: 1. a paragraph clarifying the inter-relationships of metadata and content for one publication with that for another cited from the first; […] 3. the existing paragraph on UIDs being expanded to explain the usefulness of 'federated IDs' for crosswalking within the metadata of a given publication.” The description of a publication and its UIDs is elaborated. We did not analyze the full content of a publication but only parts of its metadata representations in Pure and SciVal. A publication can have more UIDs assigned to it in one or both of the databases. We assumed that each of the UIDs for a publication in a database represented the same publication. See Introduction and the subsection Limitations in the paper, version 2. Marion Schmidt: “On the other hand, they concede that their \"case study\" may not lead to generalizable results and that results will therefore be compared to those from a literature review (p.4). This claim, however, is not really fulfilled, as the studies mentioned in the literature review use UIDs for different purposes, constellations and databases. 
More importantly, no real comparison takes place, as the authors only recap \"but all conclude that UIDs do not cover all records in the databases\" (p.6). In order to contextualize their own results, concrete settings and results of other studies should be represented and discussed. […] In the introduction and literature review, it should be made clearer that the authors use a specific definition of the term citation linking (linking items between databases) for their study and it should be clarified if other studies refer to the same or other scenarios (like reference matching or deduplication).” Keith G. Jeffery: “The literature review is appropriate.” The literature review shows the big picture and now also includes more details on the studies included. None of the studies in the review were directly comparable to our study as we analyzed how well UIDs work as match key in the matching process between Pure and SciVal. We now show that the studies in the review used UIDs for many different matching processes. Only two of the studies focused on the evaluation of UIDs. Still, based on all the studies we summarized information on error types when UIDs are used as match keys, which is relevant for our study. See the subsection Results of the literature review in the paper, version 2. Marion Schmidt: “With regard to the second purpose \"Illustrate how publication sets formed by using UIDs as the match key may affect the bibliometric indicators: Number of publications, number of citations and the average number of citations per publication\", the authors do not actually calculate citation indicators including and excluding publications (which are covered, but could not be matched via UIDs), but discuss the problem mostly theoretically.” The number of citations was only calculated for the publications matched in SciVal at the time we did the exports from Pure to SciVal (August and December 2015). We did not have the number of citations for the other publications. 
This information is added to the article. For the matched publications we have added the number of citations including and excluding duplicates. See the subsection Results of the case study in the paper, version 2. Marion Schmidt: “Methods and data (with exceptions mentioned separately below) are sufficiently clearly documented, but the whole study lacks generalizability and, partly, elaborateness of analyses, as in case of the citation indicator perspective. Besides, given the rather manageable amounts of unmatched publications in this study, a comprehensive and more elaborate search and analysis of the causes of the missed match (not covered, missing UID in Scopus, ..) would be preferable. In larger corpora, this could be done via a random sample.” Causes for missed matches for publications with a UID were not investigated when we did the exports from Pure to SciVal at publication level but only at journal level (Table 7a-7c). We have looked into the question now, but many of the publications not matched in August and December 2015 are now matched. We added to the paper that from our analysis of journals (Table 7a-7c) it is likely that the publications not matched were not indexed in Scopus. We identified three PLOS ONE publications which were not indexed in Scopus, and Clinical and Translational Allergy had only four publications from 2014 indexed in Scopus. But the missed matches can also be due to missing UIDs in Scopus. For 35 publications we established that they were published in journals not indexed in Scopus. This is also added to the paper. See the subsection Results of the case study in the paper, version 2. Marion Schmidt: “The authors write \"In the integration between Pure and SciVal DOI, PMID and EID are used as match keys for citation linking. From March 2016 the integration between Pure and SciVal is based on EID alone. This will not affect the present study as we analyze publication sets downloaded in August and December 2015\" (p.3). 
The authors should discuss what this does mean with respect of the relevance of their results. Could it mean that the matching of current publications will be probably better?” The new matching process between Pure and SciVal means that our study is a documentation of the first matching processes between Pure and SciVal and can serve as background for new evaluations. We have added this point to the paper. We do not know if the new functionality is better than the ones we have evaluated but we are looking into that in a follow-up study. See the Introduction in the paper, version 2 Marion Schmidt: “Regarding the publication type categorization: Has the categorization been informed by some available classification or, e.g. database scheme? In my opinion, some mappings are sub-optimal and particular; especially the assignment of Doctoral Thesis to \"Other\" instead of \"Book\", Encyclopedia chapter to \"Other\" instead of \"Contribution to Book\", Editorial to \"Other\" instead of \"Journal\", Article in Proceedings to \"Journal\" instead of \"Contribution to Book\" (please compare the WoS document type and publication type classification for the third and fourth case). Including so many and different publication types in a residual category is unprofitable for the later analysis.” The categorization of publication types has been reorganized according to the categories in Pure. See Table 5a-c. Marion Schmidt: “I did not understand the fact that EIDs seem to have been attributed automatically without being visible in the raw data. How?” The EID automatically attributed to publications in Pure was introduced with Pure version 4.23.0. EIDs are added to publications in Pure as an automatic job. In the beginning the job ran every day with a sub-set of the publications in order not to overload the matching service. 90 days after installation of the new version of Pure the job had covered all publications. 
Now when new publications are added to Pure they will be included in the next match, hence within 14 days. The automatically attributed EIDs are not shown in our raw data (Excel files downloaded from Pure). Therefore it is not possible to detect which publications are matched on the automatically attributed EIDs. More publications are matched in the November dataset, and we assume they are matched on the automatically attributed EID but cannot prove it. Once again thank you for the reviews. We hope that our response and the edited paper have addressed all your comments."
}
]
},
{
"id": "15520",
"date": "02 Sep 2016",
"name": "Sarah L. Shreeves",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThis article describes a small study to better understand how unique identifiers (in this case DOIs) assigned to publications work as a match key for what the authors call 'citation linking' or matching identical publications in different databases in order to form a publication set that can then be used for bibliometric analysis. The study's goals were to lay out the characteristics of publications that were matched versus those that were unmatched, and to illustrate how the publications set formed might be affected by problems in the matching.\nI found the discussion of the citation linking throughout the article confusing, in part because my expectation of what that means is quite different than what is described in the article. My understanding of citation linking is that one is following works cited in a publication via the UID. It took several reads to clarify this; it may be something that the authors wish to clarify further in the abstract or introduction.\n\nI found the study design, methods, and conclusions to be adequate for the study particularly for the first objective (given the acknowledgement that this is not generalizable). It would be useful to expand this study to other disciplines to see whether there are similar issues and characteristics. 
In order to replicate this study in another discipline, however, more attention may need to be paid to coverage in Pure and SciVal, or use of other databases may need to be deployed.\n\nThe second objective- to better understand how bibliometric analysis of the publication set formed might be skewed by issues in matching- was undermined by the size of the dataset- there seemed to be too little data from which to draw firm conclusions. However, given that, the description of the results are fine.\n\nI believe that the Future Implications section provides the clearest implications of the study - essentially that UIDs aren't panaceas for deduplication/ citation matching/ etc.- and that use of UIDs as match points should be informed by potential points of failure. This description could have clarified other pieces of the article.\n\nI wavered between 'Approved' and 'Approved with Reservations', and settled on 'Approved'. I believe that the revisions necessary are really clarity of language and noting throughout that the size and scope of the case study provide a limited window.",
"responses": [
{
"c_id": "2192",
"date": "19 Sep 2016",
"name": "Marianne Gauffriau",
"role": "Author Response",
"response": "Thank you for your review. You touch upon important points. Please see our response below. Sarah L. Shreeves: “I found the discussion of the citation linking throughout the article confusing, in part because my expectation of what that means is quite different than what is described in the article. My understanding of citation linking is that one is following works cited in a publication via the UID. It took several reads to clarify this; it may be something that the authors wish to clarify further in the abstract or introduction.” The term citation linking: In the literature we saw an inconsistent use of terms, but settled on \"citation linking\" to mean the matching of representations of identical publications across databases. We got similar comments from other reviewers and have edited our article, minimizing the term \"citation linking\", instead using \"match key\" and \"matching process\". Sarah L. Shreeves: “I found the study design, methods, and conclusions to be adequate for the study particularly for the first objective (given the acknowledgement that this is not generalizable). It would be useful to expand this study to other disciplines to see whether there are similar issues and characteristics. In order to replicate this study in another discipline, however, more attention may need to be paid to coverage in Pure and SciVal, or use of other databases may need to be deployed.” Generalization: We have focused on Pure and SciVal in this study and plan to do the same in a follow-up study where we expand the data sets to cover more research fields within health sciences and investigate a new matching process introduced by Elsevier. We did not consider including other databases as the main focus in the present study is the matching process. Thus we have selected a case where we expected the coverage to be good. 
If other disciplines with a low coverage in SciVal were studied, we agree that it would be important to analyze and discuss the coverage in more detail. As pointed out in your review, additional databases could be considered. Sarah L. Shreeves: “The second objective- to better understand how bibliometric analysis of the publication set formed might be skewed by issues in matching- was undermined by the size of the dataset- there seemed to be too little data from which to draw firm conclusions. However, given that, the description of the results are fine.“ Bibliometric indicators and the discussion of them are based on a limited data set. We agree: Our results apply to our case alone and may serve as inspiration for other similar studies. In version 2 of the article the part on bibliometric indicators is elaborated upon. See the subsection Results of the case study in the paper, version 2. Sarah L. Shreeves: “I believe that the Future Implications section provides the clearest implications of the study - essentially that UIDs aren't panaceas for deduplication/ citation matching/ etc.- and that use of UIDs as match points should be informed by potential points of failure. This description could have clarified other pieces of the article.” Future Implications: In the section Future Implications we discuss UIDs as match key compared to a traditional match key. We hope to address this in a follow-up study, following Elsevier changing the way publications are matched between (Scopus and) Pure and SciVal: Publications in Pure are matched with publications in Scopus using traditional match keys and assigned a Scopus ID of the matched publication. The match between Pure and SciVal now happens using Scopus ID as only match key. Sarah L. Shreeves: “I wavered between 'Approved' and 'Approved with Reservations', and settled on 'Approved'. 
I believe that the revisions necessary are really clarity of language and noting throughout that the size and scope of the case study provide a limited window.” Again thank you for your review. We hope that our response and the edited paper have addressed your comments."
}
]
}
] | 1
|
https://f1000research.com/articles/5-1539
|