Dataset schema (column: type, length or value range):
  id              string, length 9-13
  submitter       string, length 4-48
  authors         string, length 4-9.62k
  title           string, length 4-343
  comments        string, length 2-480
  journal-ref     string, length 9-309
  doi             string, length 12-138
  report-no       string, 277 distinct values
  categories      string, length 8-87
  license         string, 9 distinct values
  orig_abstract   string, length 27-3.76k
  versions        list, length 1-15
  update_date     string, length 10-10
  authors_parsed  list, length 1-147
  abstract        string, length 24-3.75k
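A minimal sketch of how rows with this schema could be loaded and inspected using the Hugging Face datasets library; the repository name "user/arxiv-q-bio-metadata" and the "train" split are placeholders for illustration only, not names confirmed by this card.

from datasets import load_dataset

# Hypothetical dataset path; replace with the actual repository name.
ds = load_dataset("user/arxiv-q-bio-metadata", split="train")

for row in ds.select(range(3)):
    # Each row holds one arXiv record with the fields listed in the schema above.
    print(row["id"], "|", row["title"])
    print("  categories:", row["categories"])
    print("  versions:", len(row["versions"]), "| last update:", row["update_date"])
    print("  first author (parsed):", row["authors_parsed"][0])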
2111.10456
Egor Lappo
Egor Lappo, Noah A. Rosenberg
A lattice structure for ancestral configurations arising from the relationship between gene trees and species trees
21 pages, 15 figures. This version contains reference updates and minor changes to the text
null
null
null
q-bio.PE math.CO
http://creativecommons.org/licenses/by/4.0/
To a given gene tree topology $G$ and species tree topology $S$ with leaves labeled bijectively from a fixed set $X$, one can associate a set of ancestral configurations, each of which encodes a set of gene lineages that can be found at a given node of the species tree. We introduce a lattice structure on ancestral configurations, studying the directed graphs that provide graphical representations of lattices of ancestral configurations. For a matching gene tree topology and species tree topology $G=S$, we present a method for defining the digraph of ancestral configurations from the tree topology by using iterated cartesian products of graphs. We show that a specific set of paths on the digraph of ancestral configurations is in bijection with the set of labeled histories -- a well-known phylogenetic object that enumerates possible temporal orderings of the coalescences of a tree. For each of a series of tree families, we obtain closed-form expressions for the number of labeled histories by using this bijection to count paths on associated digraphs. Finally, we prove that our lattice construction extends to nonmatching tree pairs, and we use it to characterize pairs $(G,S)$ having the maximal number of ancestral configurations for a fixed $G$. We discuss how the construction provides new methods for performing enumerations of combinatorial aspects of gene and species trees.
[ { "created": "Fri, 19 Nov 2021 22:08:36 GMT", "version": "v1" }, { "created": "Fri, 9 Sep 2022 22:16:42 GMT", "version": "v2" }, { "created": "Wed, 6 Sep 2023 05:03:45 GMT", "version": "v3" } ]
2023-09-07
[ [ "Lappo", "Egor", "" ], [ "Rosenberg", "Noah A.", "" ] ]
To a given gene tree topology $G$ and species tree topology $S$ with leaves labeled bijectively from a fixed set $X$, one can associate a set of ancestral configurations, each of which encodes a set of gene lineages that can be found at a given node of the species tree. We introduce a lattice structure on ancestral configurations, studying the directed graphs that provide graphical representations of lattices of ancestral configurations. For a matching gene tree topology and species tree topology $G=S$, we present a method for defining the digraph of ancestral configurations from the tree topology by using iterated cartesian products of graphs. We show that a specific set of paths on the digraph of ancestral configurations is in bijection with the set of labeled histories -- a well-known phylogenetic object that enumerates possible temporal orderings of the coalescences of a tree. For each of a series of tree families, we obtain closed-form expressions for the number of labeled histories by using this bijection to count paths on associated digraphs. Finally, we prove that our lattice construction extends to nonmatching tree pairs, and we use it to characterize pairs $(G,S)$ having the maximal number of ancestral configurations for a fixed $G$. We discuss how the construction provides new methods for performing enumerations of combinatorial aspects of gene and species trees.
1705.05935
Ricard Sole
Ricard Sole
Rise of the humanbot
3 figures
null
null
null
q-bio.NC cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The accelerated path of technological development, particularly at the interface between hardware and biology, has been suggested as evidence for future major technological breakthroughs associated with our potential to overcome biological constraints. This includes the potential of becoming immortal, having expanded cognitive capacities thanks to hardware implants, or the creation of intelligent machines. Here I argue that several relevant evolutionary and structural constraints might prevent achieving most (if not all) of these innovations. Instead, the coming future will bring novelties that will challenge many other aspects of our life and that can be seen as other feasible singularities. One particularly important one has to do with the evolving interactions between humans and non-intelligent robots capable of learning and communication. Here I argue that a long-term interaction can lead to a new class of "agent" (the humanbot). The way shared memories get tangled over time will inevitably have important consequences for both sides of the pair, whose identity as separate entities might become blurred and ultimately vanish. Understanding such hybrid systems requires a second-order neuroscience approach while posing serious conceptual challenges, including the definition of consciousness.
[ { "created": "Tue, 16 May 2017 21:46:17 GMT", "version": "v1" } ]
2017-05-18
[ [ "Sole", "Ricard", "" ] ]
The accelerated path of technological development, particularly at the interface between hardware and biology, has been suggested as evidence for future major technological breakthroughs associated with our potential to overcome biological constraints. This includes the potential of becoming immortal, having expanded cognitive capacities thanks to hardware implants, or the creation of intelligent machines. Here I argue that several relevant evolutionary and structural constraints might prevent achieving most (if not all) of these innovations. Instead, the coming future will bring novelties that will challenge many other aspects of our life and that can be seen as other feasible singularities. One particularly important one has to do with the evolving interactions between humans and non-intelligent robots capable of learning and communication. Here I argue that a long-term interaction can lead to a new class of "agent" (the humanbot). The way shared memories get tangled over time will inevitably have important consequences for both sides of the pair, whose identity as separate entities might become blurred and ultimately vanish. Understanding such hybrid systems requires a second-order neuroscience approach while posing serious conceptual challenges, including the definition of consciousness.
0904.4360
Arnab Bhattacharyya
Arnab Bhattacharyya and Bernhard Haeupler
Robust Regulatory Networks
null
null
null
null
q-bio.MN cs.CC q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the characteristic features of genetic networks is their inherent robustness, that is, their ability to retain functionality in spite of the introduction of random errors. In this paper, we seek to better understand how robustness is achieved and what functionalities can be maintained robustly. Our goal is to formalize some of the language used in biological discussions in a reasonable mathematical framework, where questions can be answered in a rigorous fashion. These results provide basic conceptual understanding of robust regulatory networks that should be valuable independent of the details of the formalism. We model the gene regulatory network as a boolean network, a general and well-established model introduced by Stuart Kauffman. A boolean network is said to be in a viable configuration if the node states of the network at its fixpoint satisfy some given constraint. We specify how mutations affect the behavior of the boolean network. A network is then said to be robust if most random mutations to the network reach a viable configuration. The main question investigated in our study is: given a constraint on the fixpoint configuration, does there exist a network that is robust with respect to it and, if so, what is its structure? We demonstrate both explicit constructions of robust networks as well as negative results disproving their existence.
[ { "created": "Tue, 28 Apr 2009 10:46:26 GMT", "version": "v1" } ]
2009-04-29
[ [ "Bhattacharyya", "Arnab", "" ], [ "Haeupler", "Bernhard", "" ] ]
One of the characteristic features of genetic networks is their inherent robustness, that is, their ability to retain functionality in spite of the introduction of random errors. In this paper, we seek to better understand how robustness is achieved and what functionalities can be maintained robustly. Our goal is to formalize some of the language used in biological discussions in a reasonable mathematical framework, where questions can be answered in a rigorous fashion. These results provide basic conceptual understanding of robust regulatory networks that should be valuable independent of the details of the formalism. We model the gene regulatory network as a boolean network, a general and well-established model introduced by Stuart Kauffman. A boolean network is said to be in a viable configuration if the node states of the network at its fixpoint satisfy some given constraint. We specify how mutations affect the behavior of the boolean network. A network is then said to be robust if most random mutations to the network reach a viable configuration. The main question investigated in our study is: given a constraint on the fixpoint configuration, does there exist a network that is robust with respect to it and, if so, what is its structure? We demonstrate both explicit constructions of robust networks as well as negative results disproving their existence.
2102.03961
Diego D\'iaz-Dom\'inguez
Diego Diaz-Dominguez and Gonzalo Navarro
Efficient construction of the extended BWT from grammar-compressed DNA sequencing reads
null
null
null
null
q-bio.GN cs.DS
http://creativecommons.org/licenses/by/4.0/
We present an algorithm for building the extended BWT (eBWT) of a string collection from its grammar-compressed representation. Our technique exploits the string repetitions captured by the grammar to boost the computation of the eBWT. Thus, the more repetitive the collection is, the lower are the resources we use per input symbol. We rely on a new grammar recently proposed at DCC'21 whose nonterminals serve as building blocks for inducing the eBWT. A relevant application for this idea is the construction of self-indexes for analyzing sequencing reads -- massive and repetitive string collections of raw genomic data. Self-indexes have become increasingly popular in Bioinformatics as they can encode more information in less space. Our efficient eBWT construction opens the door to perform accurate bioinformatic analyses on more massive sequence datasets, which are not tractable with current eBWT construction techniques.
[ { "created": "Mon, 8 Feb 2021 02:10:34 GMT", "version": "v1" } ]
2021-02-10
[ [ "Navarro", "Diego Diaz-Dominguez annd Gonzalo", "" ] ]
We present an algorithm for building the extended BWT (eBWT) of a string collection from its grammar-compressed representation. Our technique exploits the string repetitions captured by the grammar to boost the computation of the eBWT. Thus, the more repetitive the collection is, the lower are the resources we use per input symbol. We rely on a new grammar recently proposed at DCC'21 whose nonterminals serve as building blocks for inducing the eBWT. A relevant application for this idea is the construction of self-indexes for analyzing sequencing reads -- massive and repetitive string collections of raw genomic data. Self-indexes have become increasingly popular in Bioinformatics as they can encode more information in less space. Our efficient eBWT construction opens the door to perform accurate bioinformatic analyses on more massive sequence datasets, which are not tractable with current eBWT construction techniques.
1703.10948
William Jacobs
William M Jacobs, Eugene I Shakhnovich
Evidence of evolutionary selection for co-translational folding
null
null
10.1073/pnas.1705772114
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent experiments and simulations have demonstrated that proteins can fold on the ribosome. However, the extent and generality of fitness effects resulting from co-translational folding remain open questions. Here we report a genome-wide analysis that uncovers evidence of evolutionary selection for co-translational folding. We describe a robust statistical approach to identify loci within genes that are both significantly enriched in slowly translated codons and evolutionarily conserved. Surprisingly, we find that domain boundaries can explain only a small fraction of these conserved loci. Instead, we propose that regions enriched in slowly translated codons are associated with co-translational folding intermediates, which may be smaller than a single domain. We show that the intermediates predicted by a native-centric model of co-translational folding account for the majority of these loci across more than 500 E. coli proteins. By making a direct connection to protein folding, this analysis provides strong evidence that many synonymous substitutions have been selected to optimize translation rates at specific locations within genes. More generally, our results indicate that kinetics, and not just thermodynamics, can significantly alter the efficiency of self-assembly in a biological context.
[ { "created": "Fri, 31 Mar 2017 15:39:13 GMT", "version": "v1" }, { "created": "Tue, 10 Oct 2017 19:36:39 GMT", "version": "v2" } ]
2017-10-12
[ [ "Jacobs", "William M", "" ], [ "Shakhnovich", "Eugene I", "" ] ]
Recent experiments and simulations have demonstrated that proteins can fold on the ribosome. However, the extent and generality of fitness effects resulting from co-translational folding remain open questions. Here we report a genome-wide analysis that uncovers evidence of evolutionary selection for co-translational folding. We describe a robust statistical approach to identify loci within genes that are both significantly enriched in slowly translated codons and evolutionarily conserved. Surprisingly, we find that domain boundaries can explain only a small fraction of these conserved loci. Instead, we propose that regions enriched in slowly translated codons are associated with co-translational folding intermediates, which may be smaller than a single domain. We show that the intermediates predicted by a native-centric model of co-translational folding account for the majority of these loci across more than 500 E. coli proteins. By making a direct connection to protein folding, this analysis provides strong evidence that many synonymous substitutions have been selected to optimize translation rates at specific locations within genes. More generally, our results indicate that kinetics, and not just thermodynamics, can significantly alter the efficiency of self-assembly in a biological context.
0909.1582
Tobias Galla
Tobias Galla
Independence and interdependence in the nest-site choice by honeybee swarms: agent-based models, analytical approaches and pattern formation
13 pages, 9 figures; accepted in revised form by Journal of Theoretical Biology
null
null
null
q-bio.PE cond-mat.stat-mech physics.soc-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a recent paper, List, Elsholtz and Seeley [Phil. Trans. Roy. Soc. B. 364 (2009) 755] have devised an agent-based model of the nest-choice dynamics in swarms of honeybees, and have concluded that both interdependence and independence are needed for the bees to reach a consensus on the best nest site. We here present a simplified version of the model which can be treated analytically with the tools of statistical physics and which largely has the same features as the original dynamics. Based on our analytical approaches, it is possible to characterize the co-ordination outcome exactly on the deterministic level, and to a good approximation if stochastic effects are taken into account, reducing the need for computer simulations on the agent-based level. In the second part of the paper we present a spatial extension, and show that transient non-trivial patterns emerge before consensus is reached. Approaches in terms of Langevin equations for continuous field variables are discussed.
[ { "created": "Tue, 8 Sep 2009 21:22:17 GMT", "version": "v1" } ]
2009-09-10
[ [ "Galla", "Tobias", "" ] ]
In a recent paper, List, Elsholtz and Seeley [Phil. Trans. Roy. Soc. B. 364 (2009) 755] have devised an agent-based model of the nest-choice dynamics in swarms of honeybees, and have concluded that both interdependence and independence are needed for the bees to reach a consensus on the best nest site. We here present a simplified version of the model which can be treated analytically with the tools of statistical physics and which largely has the same features as the original dynamics. Based on our analytical approaches, it is possible to characterize the co-ordination outcome exactly on the deterministic level, and to a good approximation if stochastic effects are taken into account, reducing the need for computer simulations on the agent-based level. In the second part of the paper we present a spatial extension, and show that transient non-trivial patterns emerge before consensus is reached. Approaches in terms of Langevin equations for continuous field variables are discussed.
1403.1614
Jae Kyoung Kim
Jae Kyoung Kim, Zachary P. Kilpatrick, Matthew R. Bennett, Kre\v{s}imir Josi\'c
Molecular mechanisms that regulate the coupled period of the mammalian circadian clock
21 pages, 16 figures
Biophysical Journal 106 (2014)
10.1016/j.bpj.2014.02.039
null
q-bio.MN physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In mammals, most cells in the brain and peripheral tissues generate circadian (~24hr) rhythms autonomously. These self-sustained rhythms are coordinated and entrained by a master circadian clock in the suprachiasmatic nucleus (SCN). Within the SCN, the individual rhythms of each neuron are synchronized through intercellular signaling. One important feature of the SCN is that the synchronized period is close to the cell population mean of intrinsic periods. In this way, the synchronized period of the SCN stays close to the periods of cells in peripheral tissues. This is important for the SCN to entrain cells throughout the body. However, the mechanism that drives the period of the coupled SCN cells to the population mean is not known. We use mathematical modeling and analysis to show that the mechanism of transcription repression plays a pivotal role in regulating the coupled period. Specifically, we use phase response curve analysis to show that the coupled period within the SCN stays near the population mean if transcriptional repression occurs via protein sequestration. In contrast, the coupled period is far from the mean if repression occurs through highly nonlinear Hill-type regulation (e.g. oligomer- or phosphorylation-based repression). Furthermore, we find that the timescale of intercellular coupling needs to be fast compared to that of intracellular feedback to maintain the mean period. These findings reveal the important relationship between the intracellular transcriptional feedback loop and intercellular coupling. This relationship explains why transcriptional repression appears to occur via protein sequestration in multicellular organisms, mammals and Drosophila, in contrast with the phosphorylation-based repression in unicellular organisms. That is, the transition to protein sequestration is essential for synchronizing multiple cells with a period close to the population mean (~24hr).
[ { "created": "Thu, 6 Mar 2014 22:36:46 GMT", "version": "v1" } ]
2014-06-10
[ [ "Kim", "Jae Kyoung", "" ], [ "Kilpatrick", "Zachary P.", "" ], [ "Bennett", "Matthew R.", "" ], [ "Josić", "Krešimir", "" ] ]
In mammals, most cells in the brain and peripheral tissues generate circadian (~24hr) rhythms autonomously. These self-sustained rhythms are coordinated and entrained by a master circadian clock in the suprachiasmatic nucleus (SCN). Within the SCN, the individual rhythms of each neuron are synchronized through intercellular signaling. One important feature of the SCN is that the synchronized period is close to the cell population mean of intrinsic periods. In this way, the synchronized period of the SCN stays close to the periods of cells in peripheral tissues. This is important for the SCN to entrain cells throughout the body. However, the mechanism that drives the period of the coupled SCN cells to the population mean is not known. We use mathematical modeling and analysis to show that the mechanism of transcription repression plays a pivotal role in regulating the coupled period. Specifically, we use phase response curve analysis to show that the coupled period within the SCN stays near the population mean if transcriptional repression occurs via protein sequestration. In contrast, the coupled period is far from the mean if repression occurs through highly nonlinear Hill-type regulation (e.g. oligomer- or phosphorylation-based repression). Furthermore, we find that the timescale of intercellular coupling needs to be fast compared to that of intracellular feedback to maintain the mean period. These findings reveal the important relationship between the intracellular transcriptional feedback loop and intercellular coupling. This relationship explains why transcriptional repression appears to occur via protein sequestration in multicellular organisms, mammals and Drosophila, in contrast with the phosphorylation-based repression in unicellular organisms. That is, the transition to protein sequestration is essential for synchronizing multiple cells with a period close to the population mean (~24hr).
1701.01219
Geoffrey Goodhill
Geoffrey J Goodhill
Is neuroscience facing up to statistical power?
5 pages
null
null
null
q-bio.NC stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It has been demonstrated that the statistical power of many neuroscience studies is very low, so that the results are unlikely to be robustly reproducible. How are neuroscientists and the journals in which they publish responding to this problem? Here I review the sample size justifications provided for all 15 papers published in one recent issue of the leading journal Nature Neuroscience. Of these, only one claimed it was adequately powered. The others mostly appealed to the sample sizes used in earlier studies, despite a lack of evidence that these earlier studies were adequately powered. Thus, concerns regarding statistical power in neuroscience have mostly not yet been addressed.
[ { "created": "Thu, 5 Jan 2017 06:07:48 GMT", "version": "v1" } ]
2017-01-06
[ [ "Goodhill", "Geoffrey J", "" ] ]
It has been demonstrated that the statistical power of many neuroscience studies is very low, so that the results are unlikely to be robustly reproducible. How are neuroscientists and the journals in which they publish responding to this problem? Here I review the sample size justifications provided for all 15 papers published in one recent issue of the leading journal Nature Neuroscience. Of these, only one claimed it was adequately powered. The others mostly appealed to the sample sizes used in earlier studies, despite a lack of evidence that these earlier studies were adequately powered. Thus, concerns regarding statistical power in neuroscience have mostly not yet been addressed.
1103.2206
Hugues Berry
Hugues Berry (Insa Lyon / INRIA Grenoble Rh\^one-Alpes / UCBL, LIRIS), Hugues Chat\'e (SPEC - URA 2464)
Anomalous diffusion due to hindering by mobile obstacles undergoing Brownian motion or Ornstein-Uhlenbeck processes
Physical Review E (2014)
null
null
null
q-bio.QM cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In vivo measurements of the passive movements of biomolecules or vesicles in cells consistently report "anomalous diffusion", where mean-squared displacements scale as a power law of time with exponent $\alpha < 1$ (subdiffusion). While the detailed mechanisms causing such behaviors are not always elucidated, movement hindrance by obstacles is often invoked. However, our understanding of how hindered diffusion leads to subdiffusion is based on diffusion amidst randomly-located \textit{immobile} obstacles. Here, we have used Monte-Carlo simulations to investigate transient subdiffusion due to \textit{mobile} obstacles with various modes of mobility. Our simulations confirm that the anomalous regimes rapidly disappear when the obstacles move by Brownian motion. By contrast, mobile obstacles with more confined displacements, e.g. Ornstein-Uhlenbeck (OU) motion, are shown to preserve subdiffusive regimes. The mean-squared displacement of the tracked protein displays convincing power laws with anomalous exponent $\alpha$ that varies with the density of OU obstacles or the relaxation time-scale of the OU process. In particular, some of the values we observed are significantly below the universal value predicted for immobile obstacles in 2d. Therefore, our results show that subdiffusion due to mobile obstacles with OU-type motion may account for the large variation range exhibited by experimental measurements in living cells and may explain why some experimental estimates are below the universal value predicted for immobile obstacles.
[ { "created": "Fri, 11 Mar 2011 07:58:11 GMT", "version": "v1" }, { "created": "Fri, 24 Jan 2014 07:27:42 GMT", "version": "v2" } ]
2014-01-27
[ [ "Berry", "Hugues", "", "Insa Lyon / INRIA Grenoble Rhône-Alpes / UCBL, LIRIS" ], [ "Chaté", "Hugues", "", "SPEC - URA 2464" ] ]
In vivo measurements of the passive movements of biomolecules or vesicles in cells consistently report "anomalous diffusion", where mean-squared displacements scale as a power law of time with exponent $\alpha < 1$ (subdiffusion). While the detailed mechanisms causing such behaviors are not always elucidated, movement hindrance by obstacles is often invoked. However, our understanding of how hindered diffusion leads to subdiffusion is based on diffusion amidst randomly-located \textit{immobile} obstacles. Here, we have used Monte-Carlo simulations to investigate transient subdiffusion due to \textit{mobile} obstacles with various modes of mobility. Our simulations confirm that the anomalous regimes rapidly disappear when the obstacles move by Brownian motion. By contrast, mobile obstacles with more confined displacements, e.g. Ornstein-Uhlenbeck (OU) motion, are shown to preserve subdiffusive regimes. The mean-squared displacement of the tracked protein displays convincing power laws with anomalous exponent $\alpha$ that varies with the density of OU obstacles or the relaxation time-scale of the OU process. In particular, some of the values we observed are significantly below the universal value predicted for immobile obstacles in 2d. Therefore, our results show that subdiffusion due to mobile obstacles with OU-type motion may account for the large variation range exhibited by experimental measurements in living cells and may explain why some experimental estimates are below the universal value predicted for immobile obstacles.
2104.01175
M. \c{C}a\u{g}atay Karakan
Rachael K. Jayne, M. \c{C}a\u{g}atay Karakan, Kehan Zhang, Noelle Pierce, Christos Michas, David J. Bishop, Christopher S. Chen, Kamil L. Ekinci and Alice E. White
Direct laser writing for cardiac tissue engineering: a microfluidic heart on a chip with integrated transducers
Main article 15 pages, 6 figures, 1 tables; supplementary 11 pages, 7 figures, 1 table, 6 movies
Lab on a Chip, 2021, 21, 1724 - 1737
10.1039/D0LC01078B
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We have designed and fabricated a microfluidic-based platform for sensing mechanical forces generated by cardiac microtissues in a highly-controlled microenvironment. Our fabrication approach combines Direct Laser Writing (DLW) lithography with soft lithography. At the center of our platform is a cylindrical volume, divided into two chambers by a cylindrical polydimethylsiloxane (PDMS) shell. Cells are seeded into the inner chamber from a top opening, and the microtissue assembles onto tailor-made attachment sites on the inner walls of the cylindrical shell. The outer chamber is electrically and fluidically isolated from the inner one by the cylindrical shell and is designed for actuation and sensing purposes. Externally applied pressure waves to the outer chamber deform parts of the cylindrical shell and thus allow us to exert time-dependent forces on the microtissue. Oscillatory forces generated by the microtissue similarly deform the cylindrical shell and change the volume of the outer chamber, resulting in measurable electrical conductance changes. We have used this platform to study the response of cardiac microtissues derived from human induced pluripotent stem cells (hiPSC) under prescribed mechanical loading and pacing.
[ { "created": "Fri, 2 Apr 2021 17:49:51 GMT", "version": "v1" } ]
2021-05-06
[ [ "Jayne", "Rachael K.", "" ], [ "Karakan", "M. Çağatay", "" ], [ "Zhang", "Kehan", "" ], [ "Pierce", "Noelle", "" ], [ "Michas", "Christos", "" ], [ "Bishop", "David J.", "" ], [ "Chen", "Christopher S.", "" ], [ "Ekinci", "Kamil L.", "" ], [ "White", "Alice E.", "" ] ]
We have designed and fabricated a microfluidic-based platform for sensing mechanical forces generated by cardiac microtissues in a highly-controlled microenvironment. Our fabrication approach combines Direct Laser Writing (DLW) lithography with soft lithography. At the center of our platform is a cylindrical volume, divided into two chambers by a cylindrical polydimethylsiloxane (PDMS) shell. Cells are seeded into the inner chamber from a top opening, and the microtissue assembles onto tailor-made attachment sites on the inner walls of the cylindrical shell. The outer chamber is electrically and fluidically isolated from the inner one by the cylindrical shell and is designed for actuation and sensing purposes. Externally applied pressure waves to the outer chamber deform parts of the cylindrical shell and thus allow us to exert time-dependent forces on the microtissue. Oscillatory forces generated by the microtissue similarly deform the cylindrical shell and change the volume of the outer chamber, resulting in measurable electrical conductance changes. We have used this platform to study the response of cardiac microtissues derived from human induced pluripotent stem cells (hiPSC) under prescribed mechanical loading and pacing.
1101.5371
Marco Morelli
Marco J. Morelli, Caroline F. Wright, Ga\"el Th\'ebaud, Nick J. Knowles, Pawel Herzyk, David J. Paton, Daniel T. Haydon, Donald P. King
Beyond the consensus: dissecting within-host viral population diversity of foot-and-mouth disease virus using next-generation genome sequencing
null
Journal of Virology, March 2011
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The sequence diversity of viral populations within individual hosts is the starting material for selection and subsequent evolution of RNA viruses such as foot-and-mouth disease virus (FMDV). Using next-generation sequencing (NGS) performed on a Genome Analyzer platform (Illumina), this study compared the viral populations within two bovine epithelial samples (foot lesions) from a single animal with the Inoculum used to initiate experimental infection. Genomic sequences were determined in duplicate sequencing runs, and the consensus sequence determined by NGS, for the Inoculum, was identical to that previously determined using the Sanger method. However, NGS reveals the fine polymorphic sub-structure of the viral population, from nucleotide variants present at just below 50% frequency to those present at fractions of 1%. Some of the higher frequency polymorphisms identified encoded changes within codons associated with heparan sulphate binding and were present in both foot lesions, revealing intermediate stages in the evolution of a tissue-culture adapted virus replicating within a mammalian host. We identified 2,622, 1,434 and 1,703 polymorphisms in the Inoculum and in the two foot lesions, respectively: most of the substitutions occurred only in a small fraction of the population and represent the progeny from recent cellular replication prior to onset of any selective pressures. We estimated an upper limit for the genome-wide mutation rate of the virus within a cell to be 7.8 x 10^-4 per nt. The greater depth of detection, achieved by NGS, demonstrates that this method is a powerful and valuable tool for the dissection of FMDV populations within hosts.
[ { "created": "Thu, 27 Jan 2011 19:56:16 GMT", "version": "v1" } ]
2011-01-28
[ [ "Morelli", "Marco J.", "" ], [ "Wright", "Caroline F.", "" ], [ "Thébaud", "Gaël", "" ], [ "Knowles", "Nick J.", "" ], [ "Herzyk", "Pawel", "" ], [ "Paton", "David J.", "" ], [ "Haydon", "Daniel T.", "" ], [ "King", "Donald P.", "" ] ]
The sequence diversity of viral populations within individual hosts is the starting material for selection and subsequent evolution of RNA viruses such as foot-and-mouth disease virus (FMDV). Using next-generation sequencing (NGS) performed on a Genome Analyzer platform (Illumina), this study compared the viral populations within two bovine epithelial samples (foot lesions) from a single animal with the Inoculum used to initiate experimental infection. Genomic sequences were determined in duplicate sequencing runs, and the consensus sequence determined by NGS, for the Inoculum, was identical to that previously determined using the Sanger method. However, NGS reveals the fine polymorphic sub-structure of the viral population, from nucleotide variants present at just below 50% frequency to those present at fractions of 1%. Some of the higher frequency polymorphisms identified encoded changes within codons associated with heparan sulphate binding and were present in both foot lesions, revealing intermediate stages in the evolution of a tissue-culture adapted virus replicating within a mammalian host. We identified 2,622, 1,434 and 1,703 polymorphisms in the Inoculum and in the two foot lesions, respectively: most of the substitutions occurred only in a small fraction of the population and represent the progeny from recent cellular replication prior to onset of any selective pressures. We estimated an upper limit for the genome-wide mutation rate of the virus within a cell to be 7.8 x 10^-4 per nt. The greater depth of detection, achieved by NGS, demonstrates that this method is a powerful and valuable tool for the dissection of FMDV populations within hosts.
1402.2820
Pascal Grange
Pascal Grange, Jason W. Bohland, Benjamin Okaty, Ken Sugino, Hemant Bokil, Sacha Nelson, Lydia Ng, Michael Hawrylycz and Partha P. Mitra
Cell-type-specific transcriptomes and the Allen Atlas (II): discussion of the linear model of brain-wide densities of cell types
178 pages, 207 figures, 7 tables; v2: typos corrected; v3: more typos corrected, missing pseudo-code in section 3.5 written, image attachment changed in Figure 6, misuse of notation $r^{signal}$ corrected, conclusions unchanged
null
null
null
q-bio.NC q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The voxelized Allen Atlas of the adult mouse brain (at a resolution of 200 microns) has been used in [arXiv:1303.0013] to estimate the region-specificity of 64 cell types whose transcriptional profile in the mouse brain has been measured in microarray experiments. In particular, the model yields estimates for the brain-wide density of each of these cell types. We conduct numerical experiments to estimate the errors in the estimated density profiles. First of all, we check that a simulated thalamic profile based on 200 well-chosen genes can transfer signal from cerebellar Purkinje cells to the thalamus. This inspires us to sub-sample the atlas of genes by repeatedly drawing random sets of 200 genes and refitting the model. This results in a random distribution of density profiles, which can be compared to the predictions of the model. This yields a ranking of cell types by the overlap between the original and sub-sampled density profiles. Cell types with high rank include medium spiny neurons, several samples of cortical pyramidal neurons, hippocampal pyramidal neurons, granule cells and cholinergic neurons from the brain stem. In some cases with lower rank, the average sub-sample can have better contrast properties than the original model (this is the case for amygdalar neurons and dopaminergic neurons from the ventral midbrain). Finally, we add some noise to the cell-type-specific transcriptomes by mixing them using a scalar parameter weighting a random matrix. After refitting the model, we observe that a mixing parameter of $5\%$ leads to modifications of density profiles that span the same interval as the ones resulting from sub-sampling.
[ { "created": "Wed, 12 Feb 2014 13:49:44 GMT", "version": "v1" }, { "created": "Mon, 17 Feb 2014 11:34:33 GMT", "version": "v2" }, { "created": "Tue, 18 Feb 2014 13:05:31 GMT", "version": "v3" } ]
2014-02-19
[ [ "Grange", "Pascal", "" ], [ "Bohland", "Jason W.", "" ], [ "Okaty", "Benjamin", "" ], [ "Sugino", "Ken", "" ], [ "Bokil", "Hemant", "" ], [ "Nelson", "Sacha", "" ], [ "Ng", "Lydia", "" ], [ "Hawrylycz", "Michael", "" ], [ "Mitra", "Partha P.", "" ] ]
The voxelized Allen Atlas of the adult mouse brain (at a resolution of 200 microns) has been used in [arXiv:1303.0013] to estimate the region-specificity of 64 cell types whose transcriptional profile in the mouse brain has been measured in microarray experiments. In particular, the model yields estimates for the brain-wide density of each of these cell types. We conduct numerical experiments to estimate the errors in the estimated density profiles. First of all, we check that a simulated thalamic profile based on 200 well-chosen genes can transfer signal from cerebellar Purkinje cells to the thalamus. This inspires us to sub-sample the atlas of genes by repeatedly drawing random sets of 200 genes and refitting the model. This results in a random distribution of density profiles, which can be compared to the predictions of the model. This yields a ranking of cell types by the overlap between the original and sub-sampled density profiles. Cell types with high rank include medium spiny neurons, several samples of cortical pyramidal neurons, hippocampal pyramidal neurons, granule cells and cholinergic neurons from the brain stem. In some cases with lower rank, the average sub-sample can have better contrast properties than the original model (this is the case for amygdalar neurons and dopaminergic neurons from the ventral midbrain). Finally, we add some noise to the cell-type-specific transcriptomes by mixing them using a scalar parameter weighting a random matrix. After refitting the model, we observe that a mixing parameter of $5\%$ leads to modifications of density profiles that span the same interval as the ones resulting from sub-sampling.
2202.05176
Yu Wang
Renquan Zhang, Yu Wang, Zheng Lv, Sen Pei
Evaluating the impact of quarantine measures on COVID-19 spread
13 pages, 5 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
During the early stage of the COVID-19 pandemic, many countries implemented non-pharmaceutical interventions (NPIs) to control the transmission of SARS-CoV-2, the causative pathogen of COVID-19. Among those NPIs, quarantine measures were widely adopted and enforced through stay-at-home and shelter-in-place orders. Understanding the effectiveness of quarantine measures can inform decision-making and control planning during the ongoing COVID-19 pandemic and for future disease outbreaks. In this study, we use mathematical models to evaluate the impact of quarantine measures on COVID-19 spread in four cities that experienced large-scale outbreaks in the spring of 2020: Wuhan, New York, Milan, and London. We develop a susceptible-exposed-infected-removed (SEIR)-type model with a component of quarantine and couple this disease transmission model with a data assimilation method. By calibrating the model to case data, we estimate key epidemiological parameters before lockdown in each city. We further examine the impact of quarantine rates on COVID-19 spread after lockdown using model simulations. Results indicate that quarantine of susceptible and exposed individuals and undetected infections is necessary to contain the outbreak; however, the quarantine rates for these populations can be reduced through faster isolation of confirmed cases. We generate counterfactual simulations to estimate effectiveness of quarantine measures. Without quarantine measures, the cumulative confirmed cases could be 73, 22, 43 and 93 times higher than reported numbers within 40 days after lockdown in Wuhan, New York, Milan, and London. Our findings underscore the essential role of quarantine during the early phase of the pandemic.
[ { "created": "Wed, 9 Feb 2022 06:35:23 GMT", "version": "v1" }, { "created": "Thu, 3 Mar 2022 09:11:48 GMT", "version": "v2" } ]
2022-03-04
[ [ "Zhang", "Renquan", "" ], [ "Wang", "Yu", "" ], [ "Lv", "Zheng", "" ], [ "Pei", "Sen", "" ] ]
During the early stage of the COVID-19 pandemic, many countries implemented non-pharmaceutical interventions (NPIs) to control the transmission of SARS-CoV-2, the causative pathogen of COVID-19. Among those NPIs, quarantine measures were widely adopted and enforced through stay-at-home and shelter-in-place orders. Understanding the effectiveness of quarantine measures can inform decision-making and control planning during the ongoing COVID-19 pandemic and for future disease outbreaks. In this study, we use mathematical models to evaluate the impact of quarantine measures on COVID-19 spread in four cities that experienced large-scale outbreaks in the spring of 2020: Wuhan, New York, Milan, and London. We develop a susceptible-exposed-infected-removed (SEIR)-type model with a component of quarantine and couple this disease transmission model with a data assimilation method. By calibrating the model to case data, we estimate key epidemiological parameters before lockdown in each city. We further examine the impact of quarantine rates on COVID-19 spread after lockdown using model simulations. Results indicate that quarantine of susceptible and exposed individuals and undetected infections is necessary to contain the outbreak; however, the quarantine rates for these populations can be reduced through faster isolation of confirmed cases. We generate counterfactual simulations to estimate effectiveness of quarantine measures. Without quarantine measures, the cumulative confirmed cases could be 73, 22, 43 and 93 times higher than reported numbers within 40 days after lockdown in Wuhan, New York, Milan, and London. Our findings underscore the essential role of quarantine during the early phase of the pandemic.
1908.03841
Carolyn Talcott
Akos Vertes, Albert-Baskar Arul, Peter Avar, Andrew R. Korte, Lida Parvin, Ziad J. Sahab, Deborah I. Bunin, Merrill Knapp, Denise Nishita, Andrew Poggio, Mark-Oliver Stehr, Carolyn L. Talcott, Brian M. Davis, Christine A. Morton, Christopher J. Sevinsky and Maria I. Zavodszky
Transcriptional Response of SK-N-AS Cells to Methamidophos
null
null
null
null
q-bio.GN cs.LG q-bio.CB stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The transcriptomic response of SK-N-AS cells to methamidophos (an acetylcholine esterase inhibitor) exposure was measured at 10 time points between 0.5 and 48 h. The data was analyzed using a combination of traditional statistical methods and novel machine learning algorithms for detecting anomalous behavior and inferring causal relations between time profiles. We identified several processes that appeared to be upregulated in cells treated with methamidophos, including: unfolded protein response, response to cAMP, calcium ion response, and cell-cell signaling. The data confirmed the expected consequence of acetylcholine buildup. In addition, transcripts with potentially key roles were identified and causal networks relating these transcripts were inferred using two different computational methods: Siamese convolutional networks and time warp causal inference. Two types of anomaly detection algorithms, one based on autoencoders and the other based on Generative Adversarial Networks (GANs), were applied to narrow down the set of relevant transcripts.
[ { "created": "Sun, 11 Aug 2019 02:53:56 GMT", "version": "v1" } ]
2019-08-13
[ [ "Vertes", "Akos", "" ], [ "Arul", "Albert-Baskar", "" ], [ "Avar", "Peter", "" ], [ "Korte", "Andrew R.", "" ], [ "Parvin", "Lida", "" ], [ "Sahab", "Ziad J.", "" ], [ "Bunin", "Deborah I.", "" ], [ "Knapp", "Merrill", "" ], [ "Nishita", "Denise", "" ], [ "Poggio", "Andrew", "" ], [ "Stehr", "Mark-Oliver", "" ], [ "Talcott", "Carolyn L.", "" ], [ "Davis", "Brian M.", "" ], [ "Morton", "Christine A.", "" ], [ "Sevinsky", "Christopher J.", "" ], [ "Zavodszky", "Maria I.", "" ] ]
The transcriptomic response of SK-N-AS cells to methamidophos (an acetylcholine esterase inhibitor) exposure was measured at 10 time points between 0.5 and 48 h. The data was analyzed using a combination of traditional statistical methods and novel machine learning algorithms for detecting anomalous behavior and inferring causal relations between time profiles. We identified several processes that appeared to be upregulated in cells treated with methamidophos, including: unfolded protein response, response to cAMP, calcium ion response, and cell-cell signaling. The data confirmed the expected consequence of acetylcholine buildup. In addition, transcripts with potentially key roles were identified and causal networks relating these transcripts were inferred using two different computational methods: Siamese convolutional networks and time warp causal inference. Two types of anomaly detection algorithms, one based on autoencoders and the other based on Generative Adversarial Networks (GANs), were applied to narrow down the set of relevant transcripts.
1908.08623
Surjyendu Ray
Surjyendu Ray, Bei Jia, Sam Safavi, Tim van Opijnen, Ralph Isberg, Jason Rosch and Jos\'e Bento
Exact inference under the perfect phylogeny model
null
null
null
null
q-bio.QM cs.DS cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Motivation: Many inference tools use the Perfect Phylogeny Model (PPM) to learn trees from noisy variant allele frequency (VAF) data. Learning in this setting is hard, and existing tools use approximate or heuristic algorithms. An algorithmic improvement is important to help disentangle the limitations of the PPM's assumptions from the limitations in our capacity to learn under it. Results: We make such an improvement in the scenario where the mutations that are relevant for evolution can be clustered into a small number of groups, and the trees to be reconstructed have a small number of nodes. We use a careful combination of algorithms, software, and hardware to develop EXACT: a tool that can explore the space of all possible phylogenetic trees and perform exact inference under the PPM with noisy data. EXACT allows users to obtain not just the most likely tree for some input data, but exact statistics about the distribution of trees that might explain the data. We show that EXACT outperforms several existing tools for this same task. Availability: https://github.com/surjray-repos/EXACT
[ { "created": "Thu, 22 Aug 2019 23:50:31 GMT", "version": "v1" } ]
2019-08-26
[ [ "Ray", "Surjyendu", "" ], [ "Jia", "Bei", "" ], [ "Safavi", "Sam", "" ], [ "van Opijnen", "Tim", "" ], [ "Isberg", "Ralph", "" ], [ "Rosch", "Jason", "" ], [ "Bento", "José", "" ] ]
Motivation: Many inference tools use the Perfect Phylogeny Model (PPM) to learn trees from noisy variant allele frequency (VAF) data. Learning in this setting is hard, and existing tools use approximate or heuristic algorithms. An algorithmic improvement is important to help disentangle the limitations of the PPM's assumptions from the limitations in our capacity to learn under it. Results: We make such an improvement in the scenario where the mutations that are relevant for evolution can be clustered into a small number of groups, and the trees to be reconstructed have a small number of nodes. We use a careful combination of algorithms, software, and hardware to develop EXACT: a tool that can explore the space of all possible phylogenetic trees and perform exact inference under the PPM with noisy data. EXACT allows users to obtain not just the most likely tree for some input data, but exact statistics about the distribution of trees that might explain the data. We show that EXACT outperforms several existing tools for this same task. Availability: https://github.com/surjray-repos/EXACT
2303.07508
Hoang Ngo
Hoang M. Ngo, My T. Thai, Tamer Kahveci
QuTIE: Quantum optimization for Target Identification by Enzymes
null
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Target Identification by Enzymes (TIE) problem aims to identify the set of enzymes in a given metabolic network such that their inhibition eliminates a given set of target compounds associated with a disease while incurring minimum damage to the rest of the compounds. This is an NP-complete problem, and thus optimal solutions using classical computers fail to scale to large metabolic networks. In this paper, we consider the TIE problem for identifying drug targets in metabolic networks. We develop the first quantum optimization solution, called QuTIE (Quantum optimization for Target Identification by Enzymes), to this NP-complete problem. We do that by developing an equivalent formulation of the TIE problem in Quadratic Unconstrained Binary Optimization (QUBO) form, then mapping it to a logical graph, which is then embedded on a hardware graph on a quantum computer. Our experimental results on 27 metabolic networks from Escherichia coli, Homo sapiens, and Mus musculus show that QuTIE yields solutions which are optimal or almost optimal. Our experiments also demonstrate that QuTIE can successfully identify enzyme targets already verified in wet-lab experiments for 14 major disease classes.
[ { "created": "Mon, 13 Mar 2023 22:39:11 GMT", "version": "v1" }, { "created": "Tue, 4 Jul 2023 15:40:32 GMT", "version": "v2" } ]
2023-07-06
[ [ "Ngo", "Hoang M.", "" ], [ "Thai", "My T.", "" ], [ "Kahveci", "Tamer", "" ] ]
The Target Identification by Enzymes (TIE) problem aims to identify the set of enzymes in a given metabolic network such that their inhibition eliminates a given set of target compounds associated with a disease while incurring minimum damage to the rest of the compounds. This is an NP-complete problem, and thus optimal solutions using classical computers fail to scale to large metabolic networks. In this paper, we consider the TIE problem for identifying drug targets in metabolic networks. We develop the first quantum optimization solution, called QuTIE (Quantum optimization for Target Identification by Enzymes), to this NP-complete problem. We do that by developing an equivalent formulation of the TIE problem in Quadratic Unconstrained Binary Optimization (QUBO) form, then mapping it to a logical graph, which is then embedded on a hardware graph on a quantum computer. Our experimental results on 27 metabolic networks from Escherichia coli, Homo sapiens, and Mus musculus show that QuTIE yields solutions which are optimal or almost optimal. Our experiments also demonstrate that QuTIE can successfully identify enzyme targets already verified in wet-lab experiments for 14 major disease classes.
2105.02386
Noah Ziems
Noah Ziems, Shaoen Wu, Jim Norman
Automated Primary Hyperparathyroidism Screening with Neural Networks
IEEE GLOBECOM
null
null
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by/4.0/
Primary Hyperparathyroidism (PHPT) is a relatively common disease, affecting about one in every 1,000 adults. However, screening for PHPT can be difficult, meaning it often goes undiagnosed for long periods of time. While looking at specific blood test results independently can help indicate whether a patient has PHPT, often these blood result levels can all be within their respective normal ranges despite the patient having PHPT. Based on real-world clinic data, in this work we propose a novel approach to screening PHPT with a neural network (NN) architecture, achieving over 97\% accuracy with common blood values as inputs. Further, we propose a second model achieving over 99\% accuracy with additional lab test values as inputs. Moreover, our NN models can reduce the false negatives of traditional PHPT screening methods by 99\%.
[ { "created": "Thu, 6 May 2021 01:14:27 GMT", "version": "v1" } ]
2021-05-07
[ [ "Ziems", "Noah", "" ], [ "Wu", "Shaoen", "" ], [ "Norman", "Jim", "" ] ]
Primary Hyperparathyroidism (PHPT) is a relatively common disease, affecting about one in every 1,000 adults. However, screening for PHPT can be difficult, meaning it often goes undiagnosed for long periods of time. While looking at specific blood test results independently can help indicate whether a patient has PHPT, often these blood result levels can all be within their respective normal ranges despite the patient having PHPT. Based on real-world clinic data, in this work we propose a novel approach to screening PHPT with a neural network (NN) architecture, achieving over 97\% accuracy with common blood values as inputs. Further, we propose a second model achieving over 99\% accuracy with additional lab test values as inputs. Moreover, our NN models can reduce the false negatives of traditional PHPT screening methods by 99\%.
0705.4634
Carlo Piermarocchi
Diego Calzolari, Giovanni Paternostro, Patrick L. Harrington Jr., Carlo Piermarocchi, and Phillip M. Duxbury
Selective control of the apoptosis signaling network in heterogeneous cell populations
14 pages, 16 figures. Accepted for publication in PLoS ONE
PLoS ONE 2(6): e547 (2007)
10.1371/journal.pone.0000547
null
q-bio.QM cond-mat.stat-mech
null
Selective control in a population is the ability to control a member of the population while leaving the other members relatively unaffected. The concept of selective control is developed using cell death or apoptosis in heterogeneous cell populations as an example. Apoptosis signaling in heterogeneous cells is described by an ensemble of gene networks with identical topology but different link strengths. Selective control depends on the statistics of signaling in the ensemble of networks and we analyse the effects of superposition, non-linearity and feedback on these statistics. Parallel pathways promote normal statistics while series pathways promote skew distributions which in the most extreme cases become log-normal. We also show that feedback and non-linearity can produce bimodal signaling statistics, as can discreteness and non-linearity. Two methods for optimizing selective control are presented. The first is an exhaustive search method and the second is a linear programming based approach. Though control of a single gene in the signaling network yields little selectivity, control of a few genes typically yields higher levels of selectivity. The statistics of gene combinations susceptible to selective control is studied and is used to identify general control strategies. We found that selectivity is promoted by acting on the least sensitive nodes in the case of weak populations, while selective control of robust populations is optimized through perturbations of more sensitive nodes. High throughput experiments with heterogeneous cell lines could be designed in an analogous manner, with the further possibility of incorporating the selectivity optimization process into a closed-loop control system.
[ { "created": "Thu, 31 May 2007 15:51:56 GMT", "version": "v1" } ]
2014-07-29
[ [ "Calzolari", "Diego", "" ], [ "Paternostro", "Giovanni", "" ], [ "Harrington", "Patrick L.", "Jr." ], [ "Piermarocchi", "Carlo", "" ], [ "Duxbury", "Phillip M.", "" ] ]
Selective control in a population is the ability to control a member of the population while leaving the other members relatively unaffected. The concept of selective control is developed using cell death or apoptosis in heterogeneous cell populations as an example. Apoptosis signaling in heterogeneous cells is described by an ensemble of gene networks with identical topology but different link strengths. Selective control depends on the statistics of signaling in the ensemble of networks and we analyse the effects of superposition, non-linearity and feedback on these statistics. Parallel pathways promote normal statistics while series pathways promote skew distributions which in the most extreme cases become log-normal. We also show that feedback and non-linearity can produce bimodal signaling statistics, as can discreteness and non-linearity. Two methods for optimizing selective control are presented. The first is an exhaustive search method and the second is a linear programming based approach. Though control of a single gene in the signaling network yields little selectivity, control of a few genes typically yields higher levels of selectivity. The statistics of gene combinations susceptible to selective control is studied and is used to identify general control strategies. We found that selectivity is promoted by acting on the least sensitive nodes in the case of weak populations, while selective control of robust populations is optimized through perturbations of more sensitive nodes. High throughput experiments with heterogeneous cell lines could be designed in an analogous manner, with the further possibility of incorporating the selectivity optimization process into a closed-loop control system.
2004.12420
Markus List
Sepideh Sadegh, Julian Matschinske, David B. Blumenthal, Gihanna Galindez, Tim Kacprowski, Markus List, Reza Nasirigerdeh, Mhaned Oubounyt, Andreas Pichlmair, Tim Daniel Rose, Marisol Salgado-Albarr\'an, Julian Sp\"ath, Alexey Stukalov, Nina K. Wenke, Kevin Yuan, Josch K. Pauling, Jan Baumbach
Exploring the SARS-CoV-2 virus-host-drug interactome for drug repurposing
15 pages, 4 figures
Nat Commun 11, 3518 (2020)
10.1038/s41467-020-17189-2
null
q-bio.MN
http://creativecommons.org/licenses/by/4.0/
Coronavirus Disease-2019 (COVID-19) is an infectious disease caused by the SARS-CoV-2 virus. It was first identified in Wuhan, China, and has since spread causing a global pandemic. Various studies have been performed to understand the molecular mechanisms of viral infection for predicting drug repurposing candidates. However, such information is spread across many publications and it is very time-consuming to access, integrate, explore, and exploit. We developed CoVex, the first interactive online platform for SARS-CoV-2 and SARS-CoV-1 host interactome exploration and drug (target) identification. CoVex integrates 1) experimentally validated virus-human protein interactions, 2) human protein-protein interactions and 3) drug-target interactions. The web interface allows user-friendly visual exploration of the virus-host interactome and implements systems medicine algorithms for network-based prediction of drugs. Thus, CoVex is an important resource, not only to understand the molecular mechanisms involved in SARS-CoV-2 and SARS-CoV-1 pathogenicity, but also in clinical research for the identification and prioritization of candidate therapeutics. We apply CoVex to investigate recent hypotheses on a systems biology level and to systematically explore the molecular mechanisms driving the virus life cycle. Furthermore, we extract and discuss drug repurposing candidates involved in these mechanisms. CoVex renders COVID-19 drug research systems-medicine-ready by giving the scientific community direct access to network medicine algorithms integrating virus-host-drug interactions. It is available at https://exbio.wzw.tum.de/covex/.
[ { "created": "Sun, 26 Apr 2020 15:46:10 GMT", "version": "v1" } ]
2020-07-15
[ [ "Sadegh", "Sepideh", "" ], [ "Matschinske", "Julian", "" ], [ "Blumenthal", "David B.", "" ], [ "Galindez", "Gihanna", "" ], [ "Kacprowski", "Tim", "" ], [ "List", "Markus", "" ], [ "Nasirigerdeh", "Reza", "" ], [ "Oubounyt", "Mhaned", "" ], [ "Pichlmair", "Andreas", "" ], [ "Rose", "Tim Daniel", "" ], [ "Salgado-Albarrán", "Marisol", "" ], [ "Späth", "Julian", "" ], [ "Stukalov", "Alexey", "" ], [ "Wenke", "Nina K.", "" ], [ "Yuan", "Kevin", "" ], [ "Pauling", "Josch K.", "" ], [ "Baumbach", "Jan", "" ] ]
Coronavirus Disease-2019 (COVID-19) is an infectious disease caused by the SARS-CoV-2 virus. It was first identified in Wuhan, China, and has since spread causing a global pandemic. Various studies have been performed to understand the molecular mechanisms of viral infection for predicting drug repurposing candidates. However, such information is spread across many publications and it is very time-consuming to access, integrate, explore, and exploit. We developed CoVex, the first interactive online platform for SARS-CoV-2 and SARS-CoV-1 host interactome exploration and drug (target) identification. CoVex integrates 1) experimentally validated virus-human protein interactions, 2) human protein-protein interactions and 3) drug-target interactions. The web interface allows user-friendly visual exploration of the virus-host interactome and implements systems medicine algorithms for network-based prediction of drugs. Thus, CoVex is an important resource, not only to understand the molecular mechanisms involved in SARS-CoV-2 and SARS-CoV-1 pathogenicity, but also in clinical research for the identification and prioritization of candidate therapeutics. We apply CoVex to investigate recent hypotheses on a systems biology level and to systematically explore the molecular mechanisms driving the virus life cycle. Furthermore, we extract and discuss drug repurposing candidates involved in these mechanisms. CoVex renders COVID-19 drug research systems-medicine-ready by giving the scientific community direct access to network medicine algorithms integrating virus-host-drug interactions. It is available at https://exbio.wzw.tum.de/covex/.
2004.01574
Babacar Mbaye Ndiaye
Babacar Mbaye Ndiaye, Lena Tendeng, Diaraf Seck
Analysis of the COVID-19 pandemic by SIR model and machine learning technics for forecasting
null
null
null
null
q-bio.PE math.OC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work is a trial in which we propose an SIR model and machine learning tools to analyze the coronavirus pandemic in the real world. Based on the public data from \cite{datahub}, we estimate the main pandemic parameters and make predictions on the inflection point and possible ending time for the world as a whole and specifically for Senegal. The coronavirus disease 2019, as named by the World Health Organization, rapidly spread throughout China and then across the whole world. Under optimistic estimates, the pandemic in some countries will end soon, while for most countries in the world (US, Italy, etc.), the turning point of the anti-pandemic effort will come no later than the end of April.
[ { "created": "Fri, 3 Apr 2020 13:56:54 GMT", "version": "v1" } ]
2020-04-06
[ [ "Ndiaye", "Babacar Mbaye", "" ], [ "Tendeng", "Lena", "" ], [ "Seck", "Diaraf", "" ] ]
This work is a trial in which we propose an SIR model and machine learning tools to analyze the coronavirus pandemic in the real world. Based on the public data from \cite{datahub}, we estimate the main pandemic parameters and make predictions on the inflection point and possible ending time for the world as a whole and specifically for Senegal. The coronavirus disease 2019, as named by the World Health Organization, rapidly spread throughout China and then across the whole world. Under optimistic estimates, the pandemic in some countries will end soon, while for most countries in the world (US, Italy, etc.), the turning point of the anti-pandemic effort will come no later than the end of April.
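For reference, a minimal sketch of the standard SIR system that analyses of this kind are built on; the parameter values, population size and time horizon below are placeholders, not the estimates reported in the paper.

```python
# Minimal SIR sketch of the kind the abstract refers to; beta, gamma and the
# initial conditions are placeholder values, not estimates from the paper.
import numpy as np
from scipy.integrate import solve_ivp

N = 16_000_000            # hypothetical population size
beta, gamma = 0.30, 0.10  # transmission and recovery rates (assumed)

def sir(t, y):
    S, I, R = y
    dS = -beta * S * I / N
    dI = beta * S * I / N - gamma * I
    dR = gamma * I
    return [dS, dI, dR]

sol = solve_ivp(sir, (0, 180), [N - 10, 10, 0], dense_output=True)
I = sol.sol(np.linspace(0, 180, 181))[1]
print(f"R0 = {beta / gamma:.1f}, peak infections ~ day {int(I.argmax())}")
```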
2104.08175
Maryam Ghanbari
M. Ghanbari, Z. Zhou, L-M. Hsu, Y. Han, Y. Sun, P-T. Yap, H. Zhang, D. Shen
Altered connectedness of the brain chronnectome during the progression of Alzheimer's disease
21 pages, 4 figures
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Graph theory has been extensively used to investigate brain network topology and its changes in disease cohorts. However, many graph-theoretic brain network studies have focused on the shortest paths or, more generally, cost-efficiency. In this work, we use two new concepts, connectedness and 2-connectedness, to measure global properties different from those previously and widely adopted.
[ { "created": "Tue, 30 Mar 2021 19:29:41 GMT", "version": "v1" } ]
2021-04-19
[ [ "Ghanbari", "M.", "" ], [ "Zhou", "Z.", "" ], [ "Hsu", "L-M.", "" ], [ "Han", "Y.", "" ], [ "Sun", "Y.", "" ], [ "Yap", "P-T.", "" ], [ "Zhang", "H.", "" ], [ "Shen", "D.", "" ] ]
Graph theory has been extensively used to investigate brain network topology and its changes in disease cohorts. However, many graph-theoretic brain network studies have focused on the shortest paths or, more generally, cost-efficiency. In this work, we use two new concepts, connectedness and 2-connectedness, to measure global properties different from those previously and widely adopted.
1007.3567
Subhadip Raychaudhuri
Joanna Skommer, Tom Brittain, Subhadip Raychaudhuri
Bcl-2 inhibits apoptosis by increasing the time-to-death and intrinsic cell-to-cell variations in the mitochondrial pathway of cell death
11 pages, In press
Apoptosis (2010)
10.1007/s10495-010-0515-7
null
q-bio.MN q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
BH3 mimetics have been proposed as new anticancer therapeutics. They target anti-apoptotic Bcl-2 proteins, up-regulation of which has been implicated in the resistance of many cancer cells, particularly leukemia and lymphoma cells, to apoptosis. Using probabilistic computational modeling of the mitochondrial pathway of apoptosis, verified by single-cell experimental observations, we develop a model of Bcl-2 inhibition of apoptosis. Our results clarify how Bcl-2 imparts its anti-apoptotic role by increasing the time-to-death and cell-to-cell variability. We also show that although the commitment to death is highly impacted by differences in protein levels at the time of stimulation, inherent stochastic fluctuations in apoptotic signaling are sufficient to induce cell-to-cell variability and to allow single cells to escape death. This study suggests that intrinsic cell-to-cell stochastic variability in apoptotic signaling is sufficient to cause fractional killing of cancer cells after exposure to BH3 mimetics. This is an unanticipated facet of cancer chemoresistance.
[ { "created": "Wed, 21 Jul 2010 06:40:57 GMT", "version": "v1" } ]
2010-07-22
[ [ "Skommer", "Joanna", "" ], [ "Brittain", "Tom", "" ], [ "Raychaudhuri", "Subhadip", "" ] ]
BH3 mimetics have been proposed as new anticancer therapeutics. They target anti-apoptotic Bcl-2 proteins, up-regulation of which has been implicated in the resistance of many cancer cells, particularly leukemia and lymphoma cells, to apoptosis. Using probabilistic computational modeling of the mitochondrial pathway of apoptosis, verified by single-cell experimental observations, we develop a model of Bcl-2 inhibition of apoptosis. Our results clarify how Bcl-2 imparts its anti-apoptotic role by increasing the time-to-death and cell-to-cell variability. We also show that although the commitment to death is highly impacted by differences in protein levels at the time of stimulation, inherent stochastic fluctuations in apoptotic signaling are sufficient to induce cell-to-cell variability and to allow single cells to escape death. This study suggests that intrinsic cell-to-cell stochastic variability in apoptotic signaling is sufficient to cause fractional killing of cancer cells after exposure to BH3 mimetics. This is an unanticipated facet of cancer chemoresistance.
2211.02168
Mathieu Fourment
Mathieu Fourment, Christiaan J. Swanepoel, Jared G. Galloway, Xiang Ji, Karthik Gangavarapu, Marc A. Suchard, Frederick A. Matsen IV
Automatic differentiation is no panacea for phylogenetic gradient computation
17 pages and 2 figures in main text, plus supplementary materials
null
10.1093/gbe/evad099
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gradients of probabilistic model likelihoods with respect to their parameters are essential for modern computational statistics and machine learning. These calculations are readily available for arbitrary models via automatic differentiation implemented in general-purpose machine-learning libraries such as TensorFlow and PyTorch. Although these libraries are highly optimized, it is not clear if their general-purpose nature will limit their algorithmic complexity or implementation speed for the phylogenetic case compared to phylogenetics-specific code. In this paper, we compare six gradient implementations of the phylogenetic likelihood functions, in isolation and also as part of a variational inference procedure. We find that although automatic differentiation can scale approximately linearly in tree size, it is much slower than the carefully-implemented gradient calculation for tree likelihood and ratio transformation operations. We conclude that a mixed approach combining phylogenetic libraries with machine learning libraries will provide the optimal combination of speed and model flexibility moving forward.
[ { "created": "Thu, 3 Nov 2022 22:43:27 GMT", "version": "v1" }, { "created": "Mon, 5 Jun 2023 02:19:32 GMT", "version": "v2" } ]
2023-06-06
[ [ "Fourment", "Mathieu", "" ], [ "Swanepoel", "Christiaan J.", "" ], [ "Galloway", "Jared G.", "" ], [ "Ji", "Xiang", "" ], [ "Gangavarapu", "Karthik", "" ], [ "Suchard", "Marc A.", "" ], [ "Matsen", "Frederick A.", "IV" ] ]
Gradients of probabilistic model likelihoods with respect to their parameters are essential for modern computational statistics and machine learning. These calculations are readily available for arbitrary models via automatic differentiation implemented in general-purpose machine-learning libraries such as TensorFlow and PyTorch. Although these libraries are highly optimized, it is not clear if their general-purpose nature will limit their algorithmic complexity or implementation speed for the phylogenetic case compared to phylogenetics-specific code. In this paper, we compare six gradient implementations of the phylogenetic likelihood functions, in isolation and also as part of a variational inference procedure. We find that although automatic differentiation can scale approximately linearly in tree size, it is much slower than the carefully-implemented gradient calculation for tree likelihood and ratio transformation operations. We conclude that a mixed approach combining phylogenetic libraries with machine learning libraries will provide the optimal combination of speed and model flexibility moving forward.
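As a toy version of the comparison the abstract describes, one can check reverse-mode automatic differentiation against an analytic gradient on the smallest possible phylogenetic likelihood: a two-taxon Jukes-Cantor model with a single branch length. This is only an illustration of the idea, not the authors' benchmark, and the data below are made up.

```python
# Tiny sanity check comparing autodiff against an analytic gradient for a
# two-taxon Jukes-Cantor log-likelihood; toy data, not the paper's benchmark.
import math
import torch

n_sites, n_diff, t0 = 1000, 120, 0.15   # invented alignment summary and branch length

def log_lik(t):
    e = torch.exp(-4.0 * t / 3.0)
    p_same = 0.25 + 0.75 * e
    p_diff = 0.25 - 0.25 * e
    return (n_sites - n_diff) * torch.log(p_same) + n_diff * torch.log(p_diff)

t = torch.tensor(t0, requires_grad=True)
log_lik(t).backward()                   # reverse-mode automatic differentiation

e = math.exp(-4.0 * t0 / 3.0)
analytic = (n_sites - n_diff) * (-e) / (0.25 + 0.75 * e) \
         + n_diff * (e / 3.0) / (0.25 - 0.25 * e)
print(f"autodiff  d/dt logL = {t.grad.item():.6f}")
print(f"analytic  d/dt logL = {analytic:.6f}")
```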
2105.00719
Mareike Fischer
Sophie J. Kersting and Mareike Fischer
Measuring tree balance using symmetry nodes -- a new balance index and its extremal properties
null
null
null
null
q-bio.PE math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Effects like selection in evolution as well as fertility inheritance in the development of populations can lead to a higher degree of asymmetry in evolutionary trees than expected under a null hypothesis. To identify and quantify such influences, various balance indices were proposed in the phylogenetic literature and have been in use for decades. However, so far no balance index was based on the number of \emph{symmetry nodes}, even though symmetry nodes play an important role in other areas of mathematical phylogenetics and despite the fact that symmetry nodes are a quite natural way to measure balance or symmetry of a given tree. The aim of this manuscript is thus twofold: First, we will introduce the \emph{symmetry nodes index} as an index for measuring balance of phylogenetic trees and analyze its extremal properties. We also show that this index can be calculated in linear time. This new index turns out to be a generalization of a simple and well-known balance index, namely the \emph{cherry index}, as well as a specialization of another, less established, balance index, namely \emph{Rogers' $J$ index}. Thus, it is the second objective of the present manuscript to compare the new symmetry nodes index to these two indices and to underline its advantages. In order to do so, we will derive some extremal properties of the cherry index and Rogers' $J$ index along the way and thus complement existing studies on these indices. Moreover, we used the programming language \textsf{R} to implement all three indices in the software package \textsf{symmeTree}, which has been made publicly available.
[ { "created": "Mon, 3 May 2021 09:55:27 GMT", "version": "v1" }, { "created": "Wed, 4 Aug 2021 10:21:08 GMT", "version": "v2" } ]
2021-08-05
[ [ "Kersting", "Sophie J.", "" ], [ "Fischer", "Mareike", "" ] ]
Effects like selection in evolution as well as fertility inheritance in the development of populations can lead to a higher degree of asymmetry in evolutionary trees than expected under a null hypothesis. To identify and quantify such influences, various balance indices were proposed in the phylogenetic literature and have been in use for decades. However, so far no balance index was based on the number of \emph{symmetry nodes}, even though symmetry nodes play an important role in other areas of mathematical phylogenetics and despite the fact that symmetry nodes are a quite natural way to measure balance or symmetry of a given tree. The aim of this manuscript is thus twofold: First, we will introduce the \emph{symmetry nodes index} as an index for measuring balance of phylogenetic trees and analyze its extremal properties. We also show that this index can be calculated in linear time. This new index turns out to be a generalization of a simple and well-known balance index, namely the \emph{cherry index}, as well as a specialization of another, less established, balance index, namely \emph{Rogers' $J$ index}. Thus, it is the second objective of the present manuscript to compare the new symmetry nodes index to these two indices and to underline its advantages. In order to do so, we will derive some extremal properties of the cherry index and Rogers' $J$ index along the way and thus complement existing studies on these indices. Moreover, we used the programming language \textsf{R} to implement all three indices in the software package \textsf{symmeTree}, which has been made publicly available.
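A small sketch can make the counting concrete. The paper's own implementation (symmeTree) is an R package; the Python sketch below only counts cherries and symmetry nodes of a rooted binary tree given as nested tuples, taking a symmetry node to be an interior vertex whose two pendant subtrees have the same unlabeled shape, and it does not reproduce how the paper turns these counts into its balance index.

```python
# Count cherries and symmetry nodes of a rooted binary tree given as nested
# tuples, e.g. ((("a","b"),"c"),("d","e")).  Illustrative sketch only.
def shape(tree):
    """Canonical form of the unlabeled shape of a (sub)tree."""
    if not isinstance(tree, tuple):          # a leaf
        return "L"
    return "(" + ",".join(sorted(shape(c) for c in tree)) + ")"

def count(tree):
    """Return (#cherries, #symmetry nodes) of a rooted binary tree."""
    if not isinstance(tree, tuple):
        return 0, 0
    left, right = tree
    is_cherry = not isinstance(left, tuple) and not isinstance(right, tuple)
    is_sym = shape(left) == shape(right)
    cl, sl = count(left)
    cr, sr = count(right)
    return cl + cr + int(is_cherry), sl + sr + int(is_sym)

caterpillar = (((("a", "b"), "c"), "d"), "e")
balanced = ((("a", "b"), ("c", "d")), (("e", "f"), ("g", "h")))
print(count(caterpillar))  # (1, 1): one cherry, which is the only symmetry node
print(count(balanced))     # (4, 7): fully balanced, every interior node is a symmetry node
```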
1806.06894
William Jacobs
William M. Jacobs and Eugene I. Shakhnovich
Accurate protein-folding transition-path statistics from a simple free-energy landscape
null
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A central goal of protein-folding theory is to predict the stochastic dynamics of transition paths --- the rare trajectories that transit between the folded and unfolded ensembles --- using only thermodynamic information, such as a low-dimensional equilibrium free-energy landscape. However, commonly used one-dimensional landscapes typically fall short of this aim, because an empirical coordinate-dependent diffusion coefficient has to be fit to transition-path trajectory data in order to reproduce the transition-path dynamics. We show that an alternative, first-principles free-energy landscape predicts transition-path statistics that agree well with simulations and single-molecule experiments without requiring dynamical data as an input. This 'topological configuration' model assumes that distinct, native-like substructures assemble on a timescale that is slower than native-contact formation but faster than the folding of the entire protein. Using only equilibrium simulation data to determine the free energies of these coarse-grained intermediate states, we predict a broad distribution of transition-path transit times that agrees well with the transition-path durations observed in simulations. We further show that both the distribution of finite-time displacements on a one-dimensional order parameter and the ensemble of transition-path trajectories generated by the model are consistent with the simulated transition paths. These results indicate that a landscape based on transient folding intermediates, which are often hidden by one-dimensional projections, can form the basis of a predictive model of protein-folding transition-path dynamics.
[ { "created": "Mon, 18 Jun 2018 18:58:26 GMT", "version": "v1" }, { "created": "Tue, 7 Aug 2018 18:35:27 GMT", "version": "v2" } ]
2018-08-09
[ [ "Jacobs", "William M.", "" ], [ "Shakhnovich", "Eugene I.", "" ] ]
A central goal of protein-folding theory is to predict the stochastic dynamics of transition paths --- the rare trajectories that transit between the folded and unfolded ensembles --- using only thermodynamic information, such as a low-dimensional equilibrium free-energy landscape. However, commonly used one-dimensional landscapes typically fall short of this aim, because an empirical coordinate-dependent diffusion coefficient has to be fit to transition-path trajectory data in order to reproduce the transition-path dynamics. We show that an alternative, first-principles free-energy landscape predicts transition-path statistics that agree well with simulations and single-molecule experiments without requiring dynamical data as an input. This 'topological configuration' model assumes that distinct, native-like substructures assemble on a timescale that is slower than native-contact formation but faster than the folding of the entire protein. Using only equilibrium simulation data to determine the free energies of these coarse-grained intermediate states, we predict a broad distribution of transition-path transit times that agrees well with the transition-path durations observed in simulations. We further show that both the distribution of finite-time displacements on a one-dimensional order parameter and the ensemble of transition-path trajectories generated by the model are consistent with the simulated transition paths. These results indicate that a landscape based on transient folding intermediates, which are often hidden by one-dimensional projections, can form the basis of a predictive model of protein-folding transition-path dynamics.
1407.6534
Maria Pires Pacheco
Maria Pires Pacheco and Thomas Sauter
Fast reconstruction of compact context-specific metabolic networks via integration of microarray data
9 pages, 1 figure
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently we proposed an algorithm for the fast reconstruction of compact context-specific metabolic networks (FASTCORE) that reduced the reconstruction time to the order of seconds (Vlassis et al., 2014). This extremely low computational demand opens new possibilities for improving the quality of the models. Several rounds of model reconstruction, testing of the model's predictions against real experimental data, curation steps of the input model and the set of core reactions, as well as cross-validation assays, are required to reconstruct high-quality models. Such semi-automated model curation steps are not possible to this extent with competing algorithms due to their high computational demands. To adapt FASTCORE for the integration of microarray data, we therefore propose a new workflow: FASTCORMICS. FASTCORMICS requires as input microarray data and a genome-scale reconstruction. FASTCORMICS is devoid of heuristic parameter settings and has a low computational demand, with overall building times on the order of a few minutes. FASTCORMICS preprocesses the microarray data with the discretization tool Barcode (Zillox et al, 2007). Barcode uses prior knowledge on the intensity distribution of each probe set for a given microarray platform to segregate expressed genes from non-expressed genes. This preprocessing step circumvents the need to set a heuristic expression threshold, which is critical for the output models, as in response to this threshold alternative pathways or subsystems might be included or excluded, thereby heavily changing the functionalities of the model. In general, FASTCORMICS outperforms competing algorithms and allows obtaining high-quality, robust models in a high-throughput manner. This will allow the use of metabolic modelling as a routine process for the analysis of microarray data, e.g. in the field of personalized medicine.
[ { "created": "Thu, 24 Jul 2014 11:34:28 GMT", "version": "v1" } ]
2014-07-25
[ [ "Pacheco", "Maria Pires", "" ], [ "Sauter", "Thomas", "" ] ]
Recently we proposed an algorithm for the fast reconstruction of compact context-specific metabolic networks (FASTCORE) that reduced the reconstruction time to the order of seconds (Vlassis et al., 2014). This extremely low computational demand opens new possibilities for improving the quality of the models. Several rounds of model reconstruction, testing of the model's predictions against real experimental data, curation steps of the input model and the set of core reactions, as well as cross-validation assays, are required to reconstruct high-quality models. Such semi-automated model curation steps are not possible to this extent with competing algorithms due to their high computational demands. To adapt FASTCORE for the integration of microarray data, we therefore propose a new workflow: FASTCORMICS. FASTCORMICS requires as input microarray data and a genome-scale reconstruction. FASTCORMICS is devoid of heuristic parameter settings and has a low computational demand, with overall building times on the order of a few minutes. FASTCORMICS preprocesses the microarray data with the discretization tool Barcode (Zillox et al, 2007). Barcode uses prior knowledge on the intensity distribution of each probe set for a given microarray platform to segregate expressed genes from non-expressed genes. This preprocessing step circumvents the need to set a heuristic expression threshold, which is critical for the output models, as in response to this threshold alternative pathways or subsystems might be included or excluded, thereby heavily changing the functionalities of the model. In general, FASTCORMICS outperforms competing algorithms and allows obtaining high-quality, robust models in a high-throughput manner. This will allow the use of metabolic modelling as a routine process for the analysis of microarray data, e.g. in the field of personalized medicine.
2201.13373
Antonio Jim\'enez-Pastor
Antonio Jim\'enez-Pastor, Joshua Paul Jacob, Gleb Pogudin
Exact linear reduction for rational dynamical systems
19 pages, 4 algorithms, 4 tables, 1 figure
null
null
null
q-bio.QM cs.SC cs.SY eess.SY math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Detailed dynamical systems models used in life sciences may include dozens or even hundreds of state variables. Models of large dimension are not only harder from the numerical perspective (e.g., for parameter estimation or simulation), but it is also becoming challenging to derive mechanistic insights from such models. Exact model reduction is a way to address this issue by finding a self-consistent lower-dimensional projection of the corresponding dynamical system. A recent algorithm CLUE allows one to construct an exact linear reduction of the smallest possible dimension such that the fixed variables of interest are preserved. However, CLUE is restricted to systems with polynomial dynamics. Since rational dynamics occurs frequently in the life sciences (e.g., Michaelis-Menten or Hill kinetics), it is desirable to extend CLUE to the models with rational dynamics. In this paper, we present an extension of CLUE to the case of rational dynamics and demonstrate its applicability on examples from literature. Our implementation is available in version 1.5 of CLUE at https://github.com/pogudingleb/CLUE.
[ { "created": "Mon, 31 Jan 2022 17:34:04 GMT", "version": "v1" }, { "created": "Fri, 13 May 2022 13:45:06 GMT", "version": "v2" }, { "created": "Mon, 4 Jul 2022 20:07:05 GMT", "version": "v3" } ]
2022-07-06
[ [ "Jiménez-Pastor", "Antonio", "" ], [ "Jacob", "Joshua Paul", "" ], [ "Pogudin", "Gleb", "" ] ]
Detailed dynamical systems models used in life sciences may include dozens or even hundreds of state variables. Models of large dimension are not only harder from the numerical perspective (e.g., for parameter estimation or simulation), but it is also becoming challenging to derive mechanistic insights from such models. Exact model reduction is a way to address this issue by finding a self-consistent lower-dimensional projection of the corresponding dynamical system. A recent algorithm CLUE allows one to construct an exact linear reduction of the smallest possible dimension such that the fixed variables of interest are preserved. However, CLUE is restricted to systems with polynomial dynamics. Since rational dynamics occurs frequently in the life sciences (e.g., Michaelis-Menten or Hill kinetics), it is desirable to extend CLUE to the models with rational dynamics. In this paper, we present an extension of CLUE to the case of rational dynamics and demonstrate its applicability on examples from literature. Our implementation is available in version 1.5 of CLUE at https://github.com/pogudingleb/CLUE.
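The notion of exact linear reduction can be illustrated in the simplest, purely linear case x' = Ax, which is not the rational setting the paper targets: a lumping y = Lx is exact precisely when LA = BL for some matrix B, and the reduced system is then y' = By. The matrices below are invented for illustration.

```python
# Illustration of exact linear reduction ("lumping") in the purely linear case
# x' = A x: the candidate observables y = L x give an exact reduction iff
# L A = B L for some B, with reduced system y' = B y.  Matrices are made up.
import numpy as np

A = np.array([[-2.0, 1.0, 1.0],
              [ 1.0, -2.0, 1.0],
              [ 1.0, 1.0, -2.0]])
L = np.array([[1.0, 1.0, 0.0],        # candidate observables y1 = x1 + x2,
              [0.0, 0.0, 1.0]])       #                       y2 = x3

# Solve L A = B L in the least-squares sense and check it holds exactly.
B, *_ = np.linalg.lstsq(L.T, (L @ A).T, rcond=None)
B = B.T
exact = np.allclose(B @ L, L @ A)
print("reduced matrix B =\n", B, "\nexact reduction:", exact)
```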
1702.00687
Alejandro Jimenez Rodriguez
Alejandro Jimenez Rodriguez, Juan Carlos Cordero Ceballos and Nestor E. Sanchez
Heterogeneous gain distributions in neural networks I:The stationary case
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study heterogeneous distributions of gains in neural fields using techniques of quantum mechanics, exploiting a relationship between our model and the time-independent Schr\"{o}dinger equation. We show that specific relationships between the connectivity kernel and the gain of the population can explain the behavior of the neural field in simulations. In particular, we show these relationships for the gating of activity between two regions (step potential), the propagation of activity throughout another region (barrier) and, most importantly, the existence of bumps in gain-contained regions (gain well). Our results constitute specific predictions that can be tested in vivo or in vitro.
[ { "created": "Thu, 2 Feb 2017 14:03:36 GMT", "version": "v1" } ]
2017-02-03
[ [ "Rodriguez", "Alejandro Jimenez", "" ], [ "Ceballos", "Juan Carlos Cordero", "" ], [ "Sanchez", "Nestor E.", "" ] ]
We study heterogeneous distributions of gains in neural fields using techniques of quantum mechanics, exploiting a relationship between our model and the time-independent Schr\"{o}dinger equation. We show that specific relationships between the connectivity kernel and the gain of the population can explain the behavior of the neural field in simulations. In particular, we show these relationships for the gating of activity between two regions (step potential), the propagation of activity throughout another region (barrier) and, most importantly, the existence of bumps in gain-contained regions (gain well). Our results constitute specific predictions that can be tested in vivo or in vitro.
2110.03410
Alex Zabeo
Alex Zabeo, Gianpietro Basei, Georgia Tsiliki, Willie Peijnenburg, Danail Hristozov
Ordered Weighted Average Based grouping of nanomaterials with Arsinh and Dose Response similarity models
Accepted for publication
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
In the context of the EU GRACIOUS project, we propose a novel procedure for similarity assessment and grouping of nanomaterials. This methodology is based on the (1) Arsinh transformation function for scalar properties, (2) full curve shape comparison by application of a modified Kolmogorov-Smirnov metric for bivariate properties, (3) Ordered Weighted Average (OWA) aggregation-based grouping distance, and (4) hierarchical clustering. The approach allows for grouping of nanomaterials that is not affected by the dataset, so that group membership will not change when new candidates are included in the set of assessed materials. To facilitate the application of the proposed methodology, a software script was developed by using the R programming language which is currently under migration to a web tool. The presented approach was tested against a dataset, derived from literature review, related to immobilisation of Daphnia magna and reporting information on several nanomaterials.
[ { "created": "Thu, 7 Oct 2021 12:53:33 GMT", "version": "v1" }, { "created": "Thu, 25 Nov 2021 16:25:18 GMT", "version": "v2" } ]
2021-11-29
[ [ "Zabeo", "Alex", "" ], [ "Basei", "Gianpietro", "" ], [ "Tsiliki", "Georgia", "" ], [ "Peijnenburg", "Willie", "" ], [ "Hristozov", "Danail", "" ] ]
In the context of the EU GRACIOUS project, we propose a novel procedure for similarity assessment and grouping of nanomaterials. This methodology is based on the (1) Arsinh transformation function for scalar properties, (2) full curve shape comparison by application of a modified Kolmogorov-Smirnov metric for bivariate properties, (3) Ordered Weighted Average (OWA) aggregation-based grouping distance, and (4) hierarchical clustering. The approach allows for grouping of nanomaterials that is not affected by the dataset, so that group membership will not change when new candidates are included in the set of assessed materials. To facilitate the application of the proposed methodology, a software script was developed by using the R programming language which is currently under migration to a web tool. The presented approach was tested against a dataset, derived from literature review, related to immobilisation of Daphnia magna and reporting information on several nanomaterials.
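A rough sketch of the pipeline structure described above, in Python rather than the paper's R implementation: arsinh-transform scalar properties, score pairwise dissimilarity per property, aggregate with an Ordered Weighted Average, and cluster hierarchically. The data, properties and OWA weights are invented, and the paper's modified Kolmogorov-Smirnov treatment of dose-response curves is not reproduced.

```python
# Hedged sketch of an arsinh + OWA + hierarchical-clustering pipeline; all data
# and weights are invented and the dose-response similarity step is omitted.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(1)
materials = ["NM-A", "NM-B", "NM-C", "NM-D"]
props = np.arcsinh(rng.lognormal(mean=2.0, sigma=1.0, size=(4, 3)))  # 3 scalar properties
owa_weights = np.array([0.5, 0.3, 0.2])        # emphasise the most dissimilar property

def owa_distance(x, y):
    diffs = np.sort(np.abs(x - y))[::-1]       # order per-property differences, largest first
    return float(owa_weights @ diffs)

D = np.array([[owa_distance(props[i], props[j]) for j in range(4)] for i in range(4)])
Z = linkage(squareform(D, checks=False), method="average")
print(dict(zip(materials, fcluster(Z, t=2, criterion="maxclust"))))
```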
1206.1871
L. E. Jones
Laura E. Jones, Lutz Becks, Stephen P. Ellner, Nelson G. Hairston Jr, Takehito Yoshida, and Gregor F. Fussmann
Rapid contemporary evolution and clonal food web dynamics
30 pages, 6 Figures
Phil. Trans. R. Soc. B 364, 1579-1591 (2009)
10.1098/rstb.2009.0004
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Character evolution that affects ecological community interactions often occurs contemporaneously with temporal changes in population size, potentially altering the very nature of those dynamics. Such eco-evolutionary processes may be most readily explored in systems with short generations and simple genetics. Asexual and cyclically parthenogenetic organisms such as microalgae, cladocerans, and rotifers, which frequently dominate freshwater plankton communities, meet these requirements. Multiple clonal lines can coexist within each species over extended periods, until either fixation occurs or a sexual phase reshuffles the genetic material. When clones differ in traits affecting interspecific interactions, within-species clonal dynamics can have major effects on the population dynamics. We first consider a simple predator-prey system with two prey genotypes, parameterized with data on a well-studied experimental system, and explore how the extent of differences in defense against predation within the prey population determine dynamic stability versus instability of the system. We then explore how increased potential for evolution affects the community dynamics in a more general community model with multiple predator and multiple prey genotypes. These examples illustrate how microevolutionary "details" that enhance or limit the potential for heritable phenotypic change can have significant effects on contemporaneous community-level dynamics and the persistence and coexistence of species.
[ { "created": "Fri, 8 Jun 2012 20:26:02 GMT", "version": "v1" } ]
2012-06-12
[ [ "Jones", "Laura E.", "" ], [ "Becks", "Lutz", "" ], [ "Ellner", "Stephen P.", "" ], [ "Hairston", "Nelson G.", "Jr" ], [ "Yoshida", "Takehito", "" ], [ "Fussmann", "Gregor F.", "" ] ]
Character evolution that affects ecological community interactions often occurs contemporaneously with temporal changes in population size, potentially altering the very nature of those dynamics. Such eco-evolutionary processes may be most readily explored in systems with short generations and simple genetics. Asexual and cyclically parthenogenetic organisms such as microalgae, cladocerans, and rotifers, which frequently dominate freshwater plankton communities, meet these requirements. Multiple clonal lines can coexist within each species over extended periods, until either fixation occurs or a sexual phase reshuffles the genetic material. When clones differ in traits affecting interspecific interactions, within-species clonal dynamics can have major effects on the population dynamics. We first consider a simple predator-prey system with two prey genotypes, parameterized with data on a well-studied experimental system, and explore how the extent of differences in defense against predation within the prey population determine dynamic stability versus instability of the system. We then explore how increased potential for evolution affects the community dynamics in a more general community model with multiple predator and multiple prey genotypes. These examples illustrate how microevolutionary "details" that enhance or limit the potential for heritable phenotypic change can have significant effects on contemporaneous community-level dynamics and the persistence and coexistence of species.
1510.00675
Andrew Mugler
Andrew Mugler, Sean Fancher
Stochastic modeling of gene expression, protein modification, and polymerization
10 pages, 2 figures. To appear in the q-bio Methods Textbook
null
null
null
q-bio.MN physics.bio-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many fundamental cellular processes involve small numbers of molecules. When numbers are small, fluctuations dominate, and stochastic models, which account for these fluctuations, are required. In this chapter, we describe minimal stochastic models of three fundamental cellular processes: gene expression, protein modification, and polymerization. We introduce key analytic tools for solving each model, including the generating function, eigenfunction expansion, and operator methods, and we discuss how these tools are extended to more complicated models. These analytic tools provide an elegant, efficient, and often insightful alternative to stochastic simulation.
[ { "created": "Fri, 2 Oct 2015 18:31:32 GMT", "version": "v1" } ]
2015-10-05
[ [ "Mugler", "Andrew", "" ], [ "Fancher", "Sean", "" ] ]
Many fundamental cellular processes involve small numbers of molecules. When numbers are small, fluctuations dominate, and stochastic models, which account for these fluctuations, are required. In this chapter, we describe minimal stochastic models of three fundamental cellular processes: gene expression, protein modification, and polymerization. We introduce key analytic tools for solving each model, including the generating function, eigenfunction expansion, and operator methods, and we discuss how these tools are extended to more complicated models. These analytic tools provide an elegant, efficient, and often insightful alternative to stochastic simulation.
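The simplest of the models mentioned above, constant production at rate k and first-order degradation at rate gamma, has a Poisson stationary distribution with mean k/gamma; a short Gillespie simulation can be checked against that mean. The rates below are arbitrary illustrative values.

```python
# Gillespie simulation of constant production (rate k) and per-molecule
# degradation (rate gamma); stationary mean is k/gamma.  Rates are placeholders.
import numpy as np

rng = np.random.default_rng(2)
k, gamma = 5.0, 0.5
t, t_end, burn_in, n = 0.0, 2000.0, 200.0, 0
weighted_sum = time_obs = 0.0

while t < t_end:
    birth, death = k, gamma * n
    total = birth + death
    dt = rng.exponential(1.0 / total)
    if t > burn_in:              # time-weighted average after discarding the transient
        weighted_sum += n * dt
        time_obs += dt
    t += dt
    n += 1 if rng.random() < birth / total else -1

print(f"time-averaged copy number {weighted_sum / time_obs:.2f}  vs  k/gamma = {k / gamma:.2f}")
```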
1706.02758
Saba Adabi
Saba Adabi, Matin Hosseinzadeh, Shahryar Noei, Steven Daveluy, Anne Clayton, Darius Mehregan, Silvia Conforto, Mohammadreza Nasiriavanaki
Universal in vivo Textural Model for Human Skin based on Optical Coherence Tomograms
null
Nature Scientific Reports 17912 (2017)
10.1038/s41598-017-17398-8
null
q-bio.TO physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Currently, diagnosis of skin diseases is based primarily on visual pattern recognition skills and expertise of the physician observing the lesion. Even though dermatologists are trained to recognize patterns of morphology, it is still a subjective visual assessment. Tools for automated pattern recognition can provide objective information to support clinical decision-making. Noninvasive skin imaging techniques provide complementary information to the clinician. In recent years, optical coherence tomography has become a powerful skin imaging technique. According to specific functional needs, skin architecture varies across different parts of the body, as do the textural characteristics in OCT images. There is, therefore, a critical need to systematically analyze OCT images from different body sites, to identify their significant qualitative and quantitative differences. Sixty-three optical and textural features extracted from OCT images of healthy and diseased skin are analyzed and in conjunction with decision-theoretic approaches used to create computational models of the diseases. We demonstrate that these models provide objective information to the clinician to assist in the diagnosis of abnormalities of cutaneous microstructure, and hence, aid in the determination of treatment. Specifically, we demonstrate the performance of this methodology on differentiating basal cell carcinoma (BCC) and squamous cell carcinoma (SCC) from healthy tissue.
[ { "created": "Thu, 8 Jun 2017 20:33:54 GMT", "version": "v1" }, { "created": "Fri, 18 Aug 2017 05:50:56 GMT", "version": "v2" } ]
2018-01-30
[ [ "Adabi", "Saba", "" ], [ "Hosseinzadeh", "Matin", "" ], [ "Noei", "Shahryar", "" ], [ "Daveluy", "Steven", "" ], [ "Clayton", "Anne", "" ], [ "Mehregan", "Darius", "" ], [ "Conforto", "Silvia", "" ], [ "Nasiriavanaki", "Mohammadreza", "" ] ]
Currently, diagnosis of skin diseases is based primarily on visual pattern recognition skills and expertise of the physician observing the lesion. Even though dermatologists are trained to recognize patterns of morphology, it is still a subjective visual assessment. Tools for automated pattern recognition can provide objective information to support clinical decision-making. Noninvasive skin imaging techniques provide complementary information to the clinician. In recent years, optical coherence tomography has become a powerful skin imaging technique. According to specific functional needs, skin architecture varies across different parts of the body, as do the textural characteristics in OCT images. There is, therefore, a critical need to systematically analyze OCT images from different body sites, to identify their significant qualitative and quantitative differences. Sixty-three optical and textural features extracted from OCT images of healthy and diseased skin are analyzed and in conjunction with decision-theoretic approaches used to create computational models of the diseases. We demonstrate that these models provide objective information to the clinician to assist in the diagnosis of abnormalities of cutaneous microstructure, and hence, aid in the determination of treatment. Specifically, we demonstrate the performance of this methodology on differentiating basal cell carcinoma (BCC) and squamous cell carcinoma (SCC) from healthy tissue.
2008.00865
Matheus Lazo Lazo
Matheus Jatkoske Lazo and Adriano De Cezaro
Why can we observe a plateau even in an out of control epidemic outbreak? A SEIR model with the interaction of $n$ distinct populations for Covid-19 in Brazil
submitted to Trends in Applied and Computational Mathematics
null
null
null
q-bio.PE math.DS physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This manuscript proposes an SEIR model structured by the interaction of $n$ distinct populations to describe the spread of the COVID-19 pandemic. The proposed model has the flexibility to include geographically separated communities as well as to take into account aging population groups and their interactions. We show that distinct assumptions on the dynamics of the proposed model lead to a plateau-like curve of the infected population, reflecting collected data from large countries like Brazil. Such observations point to the following conjecture: "The diffusion of Covid-19 from the capitals to the interior of Brazil, as reflected by the proposed model, is responsible for the plateau-like curve of reported cases in the country". We present numerical simulations of some scenarios and comparisons with reported data from Brazil that corroborate the aforementioned conclusions.
[ { "created": "Mon, 3 Aug 2020 13:34:01 GMT", "version": "v1" } ]
2020-08-04
[ [ "Lazo", "Matheus Jatkoske", "" ], [ "De Cezaro", "Adriano", "" ] ]
This manuscript proposes an SEIR model structured by the interaction of $n$ distinct populations to describe the spread of the COVID-19 pandemic. The proposed model has the flexibility to include geographically separated communities as well as to take into account aging population groups and their interactions. We show that distinct assumptions on the dynamics of the proposed model lead to a plateau-like curve of the infected population, reflecting collected data from large countries like Brazil. Such observations point to the following conjecture: "The diffusion of Covid-19 from the capitals to the interior of Brazil, as reflected by the proposed model, is responsible for the plateau-like curve of reported cases in the country". We present numerical simulations of some scenarios and comparisons with reported data from Brazil that corroborate the aforementioned conclusions.
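The structure of such a model can be sketched for a small number of groups: each population has its own SEIR compartments and the groups are coupled through a contact/mobility matrix. All parameter values and the coupling matrix below are invented placeholders; the sketch only shows the model structure and does not verify the paper's conjecture.

```python
# Hedged sketch of an SEIR model for n interacting populations coupled through
# a contact matrix C (here n = 3: a "capital" seeding two "interior" groups).
import numpy as np
from scipy.integrate import solve_ivp

n = 3
N = np.array([12e6, 3e6, 1e6])                   # population sizes (invented)
beta, sigma, gamma = 0.4, 1/5.1, 1/10            # transmission, incubation, recovery (assumed)
C = np.array([[1.0, 0.05, 0.02],                 # contact/mobility coupling (assumed)
              [0.05, 1.0, 0.01],
              [0.02, 0.01, 1.0]])

def seir(t, y):
    S, E, I, R = y.reshape(4, n)
    force = beta * (C @ (I / N))                 # force of infection on each group
    dS = -S * force
    dE = S * force - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    return np.concatenate([dS, dE, dI, dR])

y0 = np.concatenate([N - [100, 0, 0], [0] * n, [100, 0, 0], [0] * n])
sol = solve_ivp(seir, (0, 400), y0, max_step=1.0)
I_total = sol.y[2 * n:3 * n].sum(axis=0)
print("peak of aggregate infections at day", int(sol.t[I_total.argmax()]))
```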
1404.7715
Tamar Friedlander
Tamar Friedlander, Avraham E. Mayo, Tsvi Tlusty and Uri Alon
Evolution of bow-tie architectures in biology
null
PLoS Comput Biol 11(3): e1004055
10.1371/journal.pcbi.1004055
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bow-tie or hourglass structure is a common architectural feature found in biological and technological networks. A bow-tie in a multi-layered structure occurs when intermediate layers have much fewer components than the input and output layers. Examples include metabolism where a handful of building blocks mediate between multiple input nutrients and multiple output biomass components, and signaling networks where information from numerous receptor types passes through a small set of signaling pathways to regulate multiple output genes. Little is known, however, about how bow-tie architectures evolve. Here, we address the evolution of bow-tie architectures using simulations of multi-layered systems evolving to fulfill a given input-output goal. We find that bow-ties spontaneously evolve when two conditions are met: (i) the evolutionary goal is rank deficient, where the rank corresponds to the minimal number of input features on which the outputs depend, and (ii) The effects of mutations on interaction intensities between components are described by product rule - namely the mutated element is multiplied by a random number. Product-rule mutations are more biologically realistic than the commonly used sum-rule mutations that add a random number to the mutated element. These conditions robustly lead to bow-tie structures. The minimal width of the intermediate network layers (the waist or knot of the bow-tie) equals the rank of the evolutionary goal. These findings can help explain the presence of bow-ties in diverse biological systems, and can also be relevant for machine learning applications that employ multi-layered networks.
[ { "created": "Wed, 30 Apr 2014 13:11:41 GMT", "version": "v1" } ]
2015-03-26
[ [ "Friedlander", "Tamar", "" ], [ "Mayo", "Avraham E.", "" ], [ "Tlusty", "Tsvi", "" ], [ "Alon", "Uri", "" ] ]
Bow-tie or hourglass structure is a common architectural feature found in biological and technological networks. A bow-tie in a multi-layered structure occurs when intermediate layers have much fewer components than the input and output layers. Examples include metabolism where a handful of building blocks mediate between multiple input nutrients and multiple output biomass components, and signaling networks where information from numerous receptor types passes through a small set of signaling pathways to regulate multiple output genes. Little is known, however, about how bow-tie architectures evolve. Here, we address the evolution of bow-tie architectures using simulations of multi-layered systems evolving to fulfill a given input-output goal. We find that bow-ties spontaneously evolve when two conditions are met: (i) the evolutionary goal is rank deficient, where the rank corresponds to the minimal number of input features on which the outputs depend, and (ii) The effects of mutations on interaction intensities between components are described by product rule - namely the mutated element is multiplied by a random number. Product-rule mutations are more biologically realistic than the commonly used sum-rule mutations that add a random number to the mutated element. These conditions robustly lead to bow-tie structures. The minimal width of the intermediate network layers (the waist or knot of the bow-tie) equals the rank of the evolutionary goal. These findings can help explain the presence of bow-ties in diverse biological systems, and can also be relevant for machine learning applications that employ multi-layered networks.
2009.11568
Bhanu Prakash
Bhanu Prakash D, D. K. K. Vamsi, D. Bangaru Rajesh, Carani B Sanjeevi
Control Intervention Strategies for Within-Host, Between-Host and their Efficacy in the Treatment, Spread of COVID-19 : A Multi Scale Modeling Approach
null
null
10.1515/cmb-2020-0111
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The COVID-19 pandemic has resulted in more than 14.5 million infections and 604,917 deaths in 212 countries over the last few months. Different drug interventions acting at multiple stages of the pathogenesis of COVID-19 can substantially reduce the induced infection, thereby decreasing mortality. Population-level control strategies can also substantially reduce the spread of COVID-19. Motivated by these observations, in this work we propose and study a multi-scale model linking the within-host and between-host dynamics of COVID-19. Initially, the natural history of the disease dynamics is studied. Later, a comparative effectiveness analysis is performed to understand the efficacy of both the within-host and population-level interventions. The findings of this study suggest that a combined strategy involving treatment with drugs such as Arbidol, remdesivir, and Lopinavir/Ritonavir, which inhibit viral replication, and immunotherapies such as monoclonal antibodies, along with environmental hygiene and generalized social distancing, proves to be optimal in reducing the basic reproduction number and the environmental spread of the virus at the population level.
[ { "created": "Thu, 24 Sep 2020 09:33:45 GMT", "version": "v1" } ]
2023-09-01
[ [ "D", "Bhanu Prakash", "" ], [ "Vamsi", "D. K. K.", "" ], [ "Rajesh", "D. Bangaru", "" ], [ "Sanjeevi", "Carani B", "" ] ]
The COVID-19 pandemic has resulted in more than 14.5 million infections and 604,917 deaths in 212 countries over the last few months. Different drug interventions acting at multiple stages of the pathogenesis of COVID-19 can substantially reduce the induced infection, thereby decreasing mortality. Population-level control strategies can also substantially reduce the spread of COVID-19. Motivated by these observations, in this work we propose and study a multi-scale model linking the within-host and between-host dynamics of COVID-19. Initially, the natural history of the disease dynamics is studied. Later, a comparative effectiveness analysis is performed to understand the efficacy of both the within-host and population-level interventions. The findings of this study suggest that a combined strategy involving treatment with drugs such as Arbidol, remdesivir, and Lopinavir/Ritonavir, which inhibit viral replication, and immunotherapies such as monoclonal antibodies, along with environmental hygiene and generalized social distancing, proves to be optimal in reducing the basic reproduction number and the environmental spread of the virus at the population level.
2106.13146
Akiva Bruno Melka
Akiva Bruno Melka and Yoram Louzoun
High fraction of silent recombination in a finite population two-locus neutral birth-death-mutation model
12 pages, 7 figures
Phys. Rev. E 106, 024409 (2022)
10.1103/PhysRevE.106.024409
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A precise estimate of allele and haplotype polymorphism is of great interest in theoretical population genetics, but also has practical applications, such as bone marrow registries management. Allele polymorphism is driven mainly by point mutations, while haplotype polymorphism is also affected by recombination. Current estimates treat recombination as mutations in an infinite site model. We here show that even in the simple case of two loci in a haploid individual, for a finite population, most recombination events produce existing haplotypes, and as such are silent. Silent recombination considerably reduces the total number of haplotypes expected from the infinite site model for populations that are not much larger than one over the mutation rate. Moreover, in contrast with mutations, the number of haplotypes does not grow linearly with the population size. We hence propose a more accurate estimate of the total number of haplotypes that takes into account silent recombination. We study large-scale Human Leukocyte Antigen (HLA) haplotype frequencies from human populations to show that the current estimated recombination rate in the HLA region is underestimated.
[ { "created": "Thu, 24 Jun 2021 16:22:52 GMT", "version": "v1" }, { "created": "Sun, 10 Jul 2022 12:41:15 GMT", "version": "v2" } ]
2022-08-19
[ [ "Melka", "Akiva Bruno", "" ], [ "Louzoun", "Yoram", "" ] ]
A precise estimate of allele and haplotype polymorphism is of great interest in theoretical population genetics, but also has practical applications, such as bone marrow registries management. Allele polymorphism is driven mainly by point mutations, while haplotype polymorphism is also affected by recombination. Current estimates treat recombination as mutations in an infinite site model. We here show that even in the simple case of two loci in a haploid individual, for a finite population, most recombination events produce existing haplotypes, and as such are silent. Silent recombination considerably reduces the total number of haplotypes expected from the infinite site model for populations that are not much larger than one over the mutation rate. Moreover, in contrast with mutations, the number of haplotypes does not grow linearly with the population size. We hence propose a more accurate estimate of the total number of haplotypes that takes into account silent recombination. We study large-scale Human Leukocyte Antigen (HLA) haplotype frequencies from human populations to show that the current estimated recombination rate in the HLA region is underestimated.
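A toy forward simulation conveys what "silent" means here: in a finite two-locus haploid population with infinite-alleles mutation, one can track how often a recombination event produces a haplotype that already exists. The population size, mutation and recombination rates below are arbitrary, and the update scheme is a generic Wright-Fisher-style caricature, not the paper's birth-death-mutation model.

```python
# Toy two-locus haploid simulation with infinite-alleles mutation, tracking the
# fraction of recombination events that recreate an existing haplotype.
import numpy as np

rng = np.random.default_rng(3)
N, mu, r, gens = 500, 1e-3, 1e-2, 2000
pop = [(0, 0)] * N                       # haplotypes as (allele_at_locus1, allele_at_locus2)
next_allele = 1
recomb_events = silent = 0

for _ in range(gens):
    existing = set(pop)
    new_pop = []
    for _ in range(N):
        p1, p2 = pop[rng.integers(N)], pop[rng.integers(N)]
        if rng.random() < r:             # recombination between two random parents
            child = (p1[0], p2[1])
            recomb_events += 1
            silent += child in existing
        else:
            child = p1
        child = list(child)
        for locus in (0, 1):             # infinite-alleles mutation
            if rng.random() < mu:
                child[locus] = next_allele
                next_allele += 1
        new_pop.append(tuple(child))
    pop = new_pop

print(f"haplotypes now: {len(set(pop))}, "
      f"silent recombinations: {silent / recomb_events:.1%}")
```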
2209.07423
Ziqiao Zhang
Ziqiao Zhang, Yatao Bian, Ailin Xie, Pengju Han, Long-Kai Huang, Shuigeng Zhou
Can Pre-trained Models Really Learn Better Molecular Representations for AI-aided Drug Discovery?
null
null
null
null
q-bio.BM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Self-supervised pre-training is gaining increasing popularity in AI-aided drug discovery, leading to more and more pre-trained models with the promise that they can extract better feature representations for molecules. Yet, the quality of the learned representations has not been fully explored. In this work, inspired by the two phenomena of Activity Cliffs (ACs) and Scaffold Hopping (SH) in traditional Quantitative Structure-Activity Relationship (QSAR) analysis, we propose a method named Representation-Property Relationship Analysis (RePRA) to evaluate the quality of the representations extracted by the pre-trained model and visualize the relationship between the representations and properties. The concepts of ACs and SH are generalized from the structure-activity context to the representation-property context, and the underlying principles of RePRA are analyzed theoretically. Two scores are designed to measure the generalized ACs and SH detected by RePRA, so that the quality of representations can be evaluated. In experiments, representations of molecules from 10 target tasks generated by 7 pre-trained models are analyzed. The results indicate that the state-of-the-art pre-trained models can overcome some shortcomings of canonical Extended-Connectivity FingerPrints (ECFP), while the correlation between the basis of the representation space and specific molecular substructures is not explicit. Thus, some representations can be even worse than the canonical fingerprints. Our method enables researchers to evaluate the quality of molecular representations generated by their proposed self-supervised pre-trained models, and our findings can guide the community to develop better pre-training techniques to regularize the occurrence of ACs and SH.
[ { "created": "Sun, 21 Aug 2022 10:05:25 GMT", "version": "v1" } ]
2022-09-16
[ [ "Zhang", "Ziqiao", "" ], [ "Bian", "Yatao", "" ], [ "Xie", "Ailin", "" ], [ "Han", "Pengju", "" ], [ "Huang", "Long-Kai", "" ], [ "Zhou", "Shuigeng", "" ] ]
Self-supervised pre-training is gaining increasing popularity in AI-aided drug discovery, leading to more and more pre-trained models with the promise that they can extract better feature representations for molecules. Yet, the quality of the learned representations has not been fully explored. In this work, inspired by the two phenomena of Activity Cliffs (ACs) and Scaffold Hopping (SH) in traditional Quantitative Structure-Activity Relationship (QSAR) analysis, we propose a method named Representation-Property Relationship Analysis (RePRA) to evaluate the quality of the representations extracted by the pre-trained model and visualize the relationship between the representations and properties. The concepts of ACs and SH are generalized from the structure-activity context to the representation-property context, and the underlying principles of RePRA are analyzed theoretically. Two scores are designed to measure the generalized ACs and SH detected by RePRA, so that the quality of representations can be evaluated. In experiments, representations of molecules from 10 target tasks generated by 7 pre-trained models are analyzed. The results indicate that the state-of-the-art pre-trained models can overcome some shortcomings of canonical Extended-Connectivity FingerPrints (ECFP), while the correlation between the basis of the representation space and specific molecular substructures is not explicit. Thus, some representations can be even worse than the canonical fingerprints. Our method enables researchers to evaluate the quality of molecular representations generated by their proposed self-supervised pre-trained models, and our findings can guide the community to develop better pre-training techniques to regularize the occurrence of ACs and SH.
q-bio/0311024
S. A. Belbas
S. A. Belbas (Mathematics Dept, University of Alabama), Suhnghee Kim (Dept of Biological Sciences, University of Alabama)
Identification in models of biochemical reactions
39 pages, 8 figures
null
null
null
q-bio.QM
null
We introduce, analyze, and implement a new method for parameter identification for systems of ordinary differential equations that are used to model sets of biochemical reactions. Our method relies on the integral formulation of the ODE system and on linear least squares applied to the integral equations. Certain variants of this method are also introduced in this paper.
[ { "created": "Tue, 18 Nov 2003 17:02:37 GMT", "version": "v1" } ]
2007-05-23
[ [ "Belbas", "S. A.", "", "Mathematics Dept, University of Alabama" ], [ "Kim", "Suhnghee", "", "Dept of Biological Sciences, University of Alabama" ] ]
We introduce, analyze, and implement a new method for parameter identification for systems of ordinary differential equations that are used to model sets of biochemical reactions. Our method relies on the integral formulation of the ODE system and on linear least squares applied to the integral equations. Certain variants of this method are also introduced in this paper.
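The integral-formulation idea described above can be illustrated on the simplest possible case. The sketch below assumes a single first-order reaction A -> B with unknown rate k, so that the integrated equation is linear in k and ordinary least squares applies; the paper itself treats general reaction systems and several variants, which are not reproduced here.

```python
# Minimal sketch of the integral-formulation idea: for x'(t) = -k x(t),
# integrating gives x(t) - x(0) = -k * \int_0^t x(s) ds, which is linear in k,
# so k can be estimated by ordinary linear least squares from sampled data.
import numpy as np

k_true = 0.7
t = np.linspace(0.0, 5.0, 60)
x = np.exp(-k_true * t)                                              # true trajectory
x_obs = x + 0.01 * np.random.default_rng(1).normal(size=t.size)     # noisy samples

# Cumulative trapezoidal integral I(t) = \int_0^t x_obs(s) ds.
dt = np.diff(t)
I = np.concatenate(([0.0], np.cumsum(0.5 * dt * (x_obs[1:] + x_obs[:-1]))))

# Least squares: (x_obs(t) - x_obs(0)) ~ -k * I(t)
lhs = x_obs - x_obs[0]
k_hat = -np.linalg.lstsq(I.reshape(-1, 1), lhs, rcond=None)[0][0]
print(f"true k = {k_true}, estimated k = {k_hat:.3f}")
```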
2207.06003
Yang Tian
Yang Tian, Guoqi Li, Pei Sun
Bridging the information and dynamics attributes of neural activities
null
Physical Review Research, 3(4), 043085 (2021)
10.1103/PhysRevResearch.3.043085
null
q-bio.NC cond-mat.dis-nn nlin.CD physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
The brain works as a dynamic system to process information. Various challenges remain in understanding the connection between information and dynamics attributes in the brain. The present research explores how the characteristics of neural information functions are linked to neural dynamics. We attempt to bridge dynamics (e.g., Kolmogorov-Sinai entropy) and information (e.g., mutual information and Fisher information) metrics on the stimulus-triggered stochastic dynamics in neural populations. On the one hand, our unified analysis identifies various essential features of the information-processing-related neural dynamics. We discover spatiotemporal differences in the dynamic randomness and chaotic degrees of neural dynamics during neural information processing. On the other hand, our framework reveals the fundamental role of neural dynamics in shaping neural information processing. The neural dynamics creates an oppositely directed variation of encoding and decoding properties under specific conditions, and it determines the neural representation of the stimulus distribution. Overall, our findings demonstrate a potential direction to explain the emergence of neural information processing from neural dynamics and help understand the intrinsic connections between the informational and the physical brain.
[ { "created": "Wed, 13 Jul 2022 07:20:22 GMT", "version": "v1" } ]
2022-07-14
[ [ "Tian", "Yang", "" ], [ "Li", "Guoqi", "" ], [ "Sun", "Pei", "" ] ]
The brain works as a dynamic system to process information. Various challenges remain in understanding the connection between information and dynamics attributes in the brain. The present research explores how the characteristics of neural information functions are linked to neural dynamics. We attempt to bridge dynamics (e.g., Kolmogorov-Sinai entropy) and information (e.g., mutual information and Fisher information) metrics on the stimulus-triggered stochastic dynamics in neural populations. On the one hand, our unified analysis identifies various essential features of the information-processing-related neural dynamics. We discover spatiotemporal differences in the dynamic randomness and chaotic degrees of neural dynamics during neural information processing. On the other hand, our framework reveals the fundamental role of neural dynamics in shaping neural information processing. The neural dynamics creates an oppositely directed variation of encoding and decoding properties under specific conditions, and it determines the neural representation of the stimulus distribution. Overall, our findings demonstrate a potential direction to explain the emergence of neural information processing from neural dynamics and help understand the intrinsic connections between the informational and the physical brain.
0812.3523
Diego Garlaschelli
Tancredi Caruso, Diego Garlaschelli, Roberto Bargagli, Peter Convey
Testing metabolic scaling theory using intraspecific allometries in Antarctic microarthropods
Submitted to Oikos (Nordic Ecological Society and Blackwell Publishing Ltd.)
Oikos 119, 935-945 (2010)
10.1111/j.1600-0706.2009.17915.x
null
q-bio.PE physics.bio-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Quantitative scaling relationships among body mass, temperature and metabolic rate of organisms are still controversial, while resolution may be further complicated through the use of different and possibly inappropriate approaches to statistical analysis. We propose the application of a modelling strategy based on Akaike's information criteria and non-linear model fitting (nlm). Accordingly, we collated and modelled available data at intraspecific level on the individual standard metabolic rate of Antarctic microarthropods as a function of body mass (M), temperature (T), species identity (S) and high rank taxa to which species belong (G) and tested predictions from Metabolic Scaling Theory. We also performed allometric analysis based on logarithmic transformations (lm). Conclusions from lm and nlm approaches were different. Best-supported models from lm incorporated T, M and S. The estimates of the allometric scaling exponent b linking body mass and metabolic rate indicated no interspecific difference and resulted in a value of 0.696 +/- 0.105 (mean +/- 95% CI). In contrast, the four best-supported nlm models suggested that both the scaling exponent and activation energy significantly vary across the high rank taxa to which species belong, with mean values of b ranging from about 0.6 to 0.8. We therefore reached two conclusions: 1) published analyses of arthropod metabolism based on logarithmic data may be biased by data transformation; 2) non-linear models applied to Antarctic microarthropod metabolic rate suggest that intraspecific scaling of standard metabolic rate in Antarctic microarthropods is highly variable and can be characterised by scaling exponents that greatly vary within taxa, which may have biased previous interspecific comparisons that neglected intraspecific variability.
[ { "created": "Thu, 18 Dec 2008 13:07:49 GMT", "version": "v1" }, { "created": "Wed, 16 Sep 2009 12:52:21 GMT", "version": "v2" } ]
2012-10-18
[ [ "Caruso", "Tancredi", "" ], [ "Garlaschelli", "Diego", "" ], [ "Bargagli", "Roberto", "" ], [ "Convey", "Peter", "" ] ]
Quantitative scaling relationships among body mass, temperature and metabolic rate of organisms are still controversial, while resolution may be further complicated through the use of different and possibly inappropriate approaches to statistical analysis. We propose the application of a modelling strategy based on Akaike's information criteria and non-linear model fitting (nlm). Accordingly, we collated and modelled available data at intraspecific level on the individual standard metabolic rate of Antarctic microarthropods as a function of body mass (M), temperature (T), species identity (S) and high rank taxa to which species belong (G) and tested predictions from Metabolic Scaling Theory. We also performed allometric analysis based on logarithmic transformations (lm). Conclusions from lm and nlm approaches were different. Best-supported models from lm incorporated T, M and S. The estimates of the allometric scaling exponent b linking body mass and metabolic rate indicated no interspecific difference and resulted in a value of 0.696 +/- 0.105 (mean +/- 95% CI). In contrast, the four best-supported nlm models suggested that both the scaling exponent and activation energy significantly vary across the high rank taxa to which species belong, with mean values of b ranging from about 0.6 to 0.8. We therefore reached two conclusions: 1) published analyses of arthropod metabolism based on logarithmic data may be biased by data transformation; 2) non-linear models applied to Antarctic microarthropod metabolic rate suggest that intraspecific scaling of standard metabolic rate in Antarctic microarthropods is highly variable and can be characterised by scaling exponents that greatly vary within taxa, which may have biased previous interspecific comparisons that neglected intraspecific variability.
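The abstract above contrasts ordinary least squares on log-transformed data (lm) with direct non-linear model fitting (nlm) compared via Akaike's information criterion. The sketch below fits both to simulated data using a standard Metabolic Scaling Theory form with a reference temperature; the model form, parameter values, and variable names are assumptions rather than the paper's analysis, and note that AIC values computed on different response scales (log B versus B) are not directly comparable without a Jacobian correction, which is part of why the transformation matters.

```python
# Hedged sketch: lm (OLS on log-transformed data) versus nlm (non-linear fit)
# for B = B0 * M^b * exp(-(E/k_B) * (1/T - 1/T0)), on simulated data.
import numpy as np
from scipy.optimize import curve_fit

k_B = 8.617e-5                       # Boltzmann constant (eV/K)
T0 = 278.0                           # assumed reference temperature (K)
rng = np.random.default_rng(2)
M = rng.uniform(1.0, 100.0, 80)      # body mass (arbitrary units)
T = rng.uniform(268.0, 288.0, 80)    # body temperature (K)
B = 0.05 * M**0.72 * np.exp(-(0.6 / k_B) * (1.0 / T - 1.0 / T0)) \
    * rng.lognormal(0.0, 0.2, 80)    # simulated metabolic rates with noise

# lm: linear regression of log(B) on log(M) and the Arrhenius term
A = np.column_stack([np.ones_like(M), np.log(M), (1.0 / T0 - 1.0 / T) / k_B])
coef, *_ = np.linalg.lstsq(A, np.log(B), rcond=None)

# nlm: non-linear least squares on the original scale
def mst(X, B0, b, E):
    M, T = X
    return B0 * M**b * np.exp(-(E / k_B) * (1.0 / T - 1.0 / T0))

popt, _ = curve_fit(mst, (M, T), B, p0=[0.05, 0.7, 0.6])

print("lm : b = %.3f, E = %.3f eV" % (coef[1], coef[2]))
print("nlm: b = %.3f, E = %.3f eV" % (popt[1], popt[2]))
```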
1202.5048
Marcelo Gleiser
Marcelo Gleiser and Sara Imari Walker
Life's Chirality From Prebiotic Environments
12 pages, 7 figures. Plenary talk delivered at the S\~ao Paulo Advanced School of Astrobiology, S\~ao Paulo, December 2011. In press, Int. J. Astrobio. Added small clarification in conclusion section
null
10.1017/S1473550412000377
null
q-bio.BM astro-ph.EP physics.chem-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A key open question in the study of life is the origin of biomolecular homochirality: almost every life-form on Earth has exclusively levorotary amino acids and dextrorotary sugars. Will the same handedness be preferred if life is found elsewhere? We review some of the pertinent literature and discuss recent results suggesting that life's homochirality resulted from sequential chiral symmetry breaking triggered by environmental events. In one scenario, autocatalytic prebiotic reactions undergo stochastic fluctuations due to environmental disturbances. In another, chiral-selective polymerization reaction rates influenced by environmental effects lead to substantial chiral excess even in the absence of autocatalysis. Applying these arguments to other potentially life-bearing platforms has implications to the search for extraterrestrial life: we predict that a statistically representative sampling of extraterrestrial stereochemistry will be racemic (chirally neutral) on average.
[ { "created": "Wed, 22 Feb 2012 21:18:00 GMT", "version": "v1" }, { "created": "Fri, 2 Mar 2012 20:50:28 GMT", "version": "v2" } ]
2015-06-04
[ [ "Gleiser", "Marcelo", "" ], [ "Walker", "Sara Imari", "" ] ]
A key open question in the study of life is the origin of biomolecular homochirality: almost every life-form on Earth has exclusively levorotary amino acids and dextrorotary sugars. Will the same handedness be preferred if life is found elsewhere? We review some of the pertinent literature and discuss recent results suggesting that life's homochirality resulted from sequential chiral symmetry breaking triggered by environmental events. In one scenario, autocatalytic prebiotic reactions undergo stochastic fluctuations due to environmental disturbances. In another, chiral-selective polymerization reaction rates influenced by environmental effects lead to substantial chiral excess even in the absence of autocatalysis. Applying these arguments to other potentially life-bearing platforms has implications to the search for extraterrestrial life: we predict that a statistically representative sampling of extraterrestrial stereochemistry will be racemic (chirally neutral) on average.
1203.0937
Alexander Sch\"onhuth
Tobias Marschall, Ivan Costa, Stefan Canzar, Markus Bauer, Gunnar Klau, Alexander Schliep, Alexander Sch\"onhuth
CLEVER: Clique-Enumerating Variant Finder
30 pages, 8 figures
Bioinformatics, 28(22), 2875-2882, 2012
null
null
q-bio.GN stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Next-generation sequencing techniques have facilitated a large-scale analysis of human genetic variation. Despite the advances in sequencing speeds, the computational discovery of structural variants is not yet standard. It is likely that many variants have remained undiscovered in most sequenced individuals. Here we present a novel internal-segment-size-based approach, which organizes all reads, including concordant ones, into a read alignment graph in which max-cliques represent maximal contradiction-free groups of alignments. A specifically engineered algorithm then enumerates all max-cliques and statistically evaluates them for their potential to reflect insertions or deletions (indels). For the first time in the literature, we compare a large range of state-of-the-art approaches using simulated Illumina reads from a fully annotated genome and present various relevant performance statistics. We achieve superior performance rates in particular on indels of sizes 20--100, which have been exposed as a current major challenge in the SV discovery literature and where prior insert-size-based approaches have limitations. In that size range, we outperform even split-read aligners. We also achieve good results on real data, where we make a substantial number of correct predictions as the only tool, and these complement the predictions of split-read aligners. CLEVER is open source (GPL) and available from http://clever-sv.googlecode.com.
[ { "created": "Mon, 5 Mar 2012 14:51:41 GMT", "version": "v1" }, { "created": "Mon, 16 Jul 2012 00:18:50 GMT", "version": "v2" } ]
2015-01-14
[ [ "Marschall", "Tobias", "" ], [ "Costa", "Ivan", "" ], [ "Canzar", "Stefan", "" ], [ "Bauer", "Markus", "" ], [ "Klau", "Gunnar", "" ], [ "Schliep", "Alexander", "" ], [ "Schönhuth", "Alexander", "" ] ]
Next-generation sequencing techniques have facilitated a large-scale analysis of human genetic variation. Despite the advances in sequencing speeds, the computational discovery of structural variants is not yet standard. It is likely that many variants have remained undiscovered in most sequenced individuals. Here we present a novel internal-segment-size-based approach, which organizes all reads, including concordant ones, into a read alignment graph in which max-cliques represent maximal contradiction-free groups of alignments. A specifically engineered algorithm then enumerates all max-cliques and statistically evaluates them for their potential to reflect insertions or deletions (indels). For the first time in the literature, we compare a large range of state-of-the-art approaches using simulated Illumina reads from a fully annotated genome and present various relevant performance statistics. We achieve superior performance rates in particular on indels of sizes 20--100, which have been exposed as a current major challenge in the SV discovery literature and where prior insert-size-based approaches have limitations. In that size range, we outperform even split-read aligners. We also achieve good results on real data, where we make a substantial number of correct predictions as the only tool, and these complement the predictions of split-read aligners. CLEVER is open source (GPL) and available from http://clever-sv.googlecode.com.
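A toy illustration of the max-clique step described above, assuming a crude notion of alignment compatibility (overlapping indel-length intervals); CLEVER's actual graph construction and statistical evaluation are not reproduced here.

```python
# Toy illustration of the max-clique idea: paired-end alignments are nodes, and
# two nodes are joined when their implied indel-length intervals overlap, i.e.
# they could support the same indel.  Maximal cliques are then maximal
# contradiction-free groups of alignments.
import networkx as nx

# (read_id, lo, hi): interval of indel lengths compatible with that alignment
alignments = [("r1", -5, 30), ("r2", 10, 45), ("r3", 20, 60), ("r4", 80, 120)]

G = nx.Graph()
G.add_nodes_from(name for name, _, _ in alignments)
for i, (a, alo, ahi) in enumerate(alignments):
    for b, blo, bhi in alignments[i + 1:]:
        if max(alo, blo) <= min(ahi, bhi):      # intervals overlap -> compatible
            G.add_edge(a, b)

for clique in nx.find_cliques(G):               # enumerate maximal cliques
    print(sorted(clique))
```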
1410.8703
Benedikt Obermayer
Benedikt Obermayer and Erel Levine
Inverse Ising inference with correlated samples
18 pages, 6 figures; accepted at New J Phys
New J. Phys. 16:123017 (2014)
10.1088/1367-2630/16/12/123017
null
q-bio.PE cond-mat.dis-nn cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Correlations between two variables of a high-dimensional system can be indicative of an underlying interaction, but can also result from indirect effects. Inverse Ising inference is a method to distinguish one from the other. Essentially, the parameters of the least constrained statistical model are learned from the observed correlations such that direct interactions can be separated from indirect correlations. Among many other applications, this approach has been helpful for protein structure prediction, because residues which interact in the 3D structure often show correlated substitutions in a multiple sequence alignment. In this context, samples used for inference are not independent but share an evolutionary history on a phylogenetic tree. Here, we discuss the effects of correlations between samples on global inference. Such correlations could arise due to phylogeny but also via other slow dynamical processes. We present a simple analytical model to address the resulting inference biases, and develop an exact method accounting for background correlations in alignment data by combining phylogenetic modeling with an adaptive cluster expansion algorithm. We find that popular reweighting schemes are only marginally effective at removing phylogenetic bias, suggest a rescaling strategy that yields better results, and provide evidence that our conclusions carry over to the frequently used mean-field approach to the inverse Ising problem.
[ { "created": "Fri, 31 Oct 2014 11:02:29 GMT", "version": "v1" } ]
2014-12-10
[ [ "Obermayer", "Benedikt", "" ], [ "Levine", "Erel", "" ] ]
Correlations between two variables of a high-dimensional system can be indicative of an underlying interaction, but can also result from indirect effects. Inverse Ising inference is a method to distinguish one from the other. Essentially, the parameters of the least constrained statistical model are learned from the observed correlations such that direct interactions can be separated from indirect correlations. Among many other applications, this approach has been helpful for protein structure prediction, because residues which interact in the 3D structure often show correlated substitutions in a multiple sequence alignment. In this context, samples used for inference are not independent but share an evolutionary history on a phylogenetic tree. Here, we discuss the effects of correlations between samples on global inference. Such correlations could arise due to phylogeny but also via other slow dynamical processes. We present a simple analytical model to address the resulting inference biases, and develop an exact method accounting for background correlations in alignment data by combining phylogenetic modeling with an adaptive cluster expansion algorithm. We find that popular reweighting schemes are only marginally effective at removing phylogenetic bias, suggest a rescaling strategy that yields better results, and provide evidence that our conclusions carry over to the frequently used mean-field approach to the inverse Ising problem.
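For reference, the mean-field approximation to the inverse Ising problem mentioned at the end of the abstract, together with the popular similarity-based reweighting scheme, can be sketched as follows on synthetic spin data; the paper's phylogenetic correction and adaptive cluster expansion are not reproduced.

```python
# Mean-field inverse Ising sketch: couplings are approximated by minus the
# inverse of the (reweighted) connected correlation matrix.  The reweighting
# below is the common similarity-based scheme, not the paper's method.
import numpy as np

def mean_field_couplings(samples, theta=0.8, pseudocount=0.1):
    """samples: (B, N) array of +/-1 spins (or binary-encoded sequences)."""
    B, N = samples.shape
    # Downweight near-duplicate samples: weight = 1 / (number of samples whose
    # overlap with this one exceeds the similarity threshold theta).
    overlap = (samples @ samples.T) / N
    weights = 1.0 / (overlap > theta).sum(axis=1)
    weights /= weights.sum()
    mean = weights @ samples
    cov = (samples * weights[:, None]).T @ samples - np.outer(mean, mean)
    cov += pseudocount * np.eye(N)              # regularize before inversion
    J = -np.linalg.inv(cov)
    np.fill_diagonal(J, 0.0)
    return J

spins = np.sign(np.random.default_rng(3).normal(size=(500, 20)))
print(mean_field_couplings(spins).shape)
```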
2104.12021
Debaditya Chakraborty
Debaditya Chakraborty, Cristina Ivan, Paola Amero, Maliha Khan, Cristian Rodriguez-Aguayo, Hakan Ba\c{s}a\u{g}ao\u{g}lu, and Gabriel Lopez-Berestein
Explainable Artificial Intelligence Reveals Novel Insight into Tumor Microenvironment Conditions Linked with Better Prognosis in Patients with Breast Cancer
14 pages, 5 figures, preprint
null
null
null
q-bio.GN cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
We investigated the data-driven relationship between features in the tumor microenvironment (TME) and the overall and 5-year survival in triple-negative breast cancer (TNBC) and non-TNBC (NTNBC) patients by using Explainable Artificial Intelligence (XAI) models. We used clinical information from patients with invasive breast carcinoma from The Cancer Genome Atlas and from two studies from the cbioPortal, the PanCanAtlas project and the GDAC Firehose study. In this study, we used a normalized RNA sequencing data-driven cohort from 1,015 breast cancer patients, alive or deceased, from the UCSC Xena data set and performed integrated deconvolution with the EPIC method to estimate the percentage of seven different immune and stromal cells from RNA sequencing data. Novel insights derived from our XAI model showed that CD4+ T cells and B cells are more critical than other TME features for enhanced prognosis for both TNBC and NTNBC patients. Our XAI model revealed the critical inflection points (i.e., threshold fractions) of CD4+ T cells and B cells above or below which 5-year survival rates improve. Subsequently, we ascertained the conditional probabilities of $\geq$ 5-year survival in both TNBC and NTNBC patients under specific conditions inferred from the inflection points. In particular, the XAI models revealed that a B-cell fraction exceeding 0.018 in the TME could ensure 100% 5-year survival for NTNBC patients. The findings from this research could lead to more accurate clinical predictions and enhanced immunotherapies and to the design of innovative strategies to reprogram the TME of breast cancer patients.
[ { "created": "Sat, 24 Apr 2021 20:50:41 GMT", "version": "v1" } ]
2021-04-27
[ [ "Chakraborty", "Debaditya", "" ], [ "Ivan", "Cristina", "" ], [ "Amero", "Paola", "" ], [ "Khan", "Maliha", "" ], [ "Rodriguez-Aguayo", "Cristian", "" ], [ "Başağaoğlu", "Hakan", "" ], [ "Lopez-Berestein", "Gabriel", "" ] ]
We investigated the data-driven relationship between features in the tumor microenvironment (TME) and the overall and 5-year survival in triple-negative breast cancer (TNBC) and non-TNBC (NTNBC) patients by using Explainable Artificial Intelligence (XAI) models. We used clinical information from patients with invasive breast carcinoma from The Cancer Genome Atlas and from two studies from the cbioPortal, the PanCanAtlas project and the GDAC Firehose study. In this study, we used a normalized RNA sequencing data-driven cohort from 1,015 breast cancer patients, alive or deceased, from the UCSC Xena data set and performed integrated deconvolution with the EPIC method to estimate the percentage of seven different immune and stromal cells from RNA sequencing data. Novel insights derived from our XAI model showed that CD4+ T cells and B cells are more critical than other TME features for enhanced prognosis for both TNBC and NTNBC patients. Our XAI model revealed the critical inflection points (i.e., threshold fractions) of CD4+ T cells and B cells above or below which 5-year survival rates improve. Subsequently, we ascertained the conditional probabilities of $\geq$ 5-year survival in both TNBC and NTNBC patients under specific conditions inferred from the inflection points. In particular, the XAI models revealed that a B-cell fraction exceeding 0.018 in the TME could ensure 100% 5-year survival for NTNBC patients. The findings from this research could lead to more accurate clinical predictions and enhanced immunotherapies and to the design of innovative strategies to reprogram the TME of breast cancer patients.
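A hedged sketch of the kind of threshold ("inflection point") and conditional survival probability reported above, implemented as a simple cut-point scan on synthetic data; the paper's actual XAI pipeline (EPIC deconvolution plus explainable models) is not reproduced.

```python
# Simple threshold scan (not the paper's XAI pipeline): given a tumour-
# microenvironment cell fraction and a binary 5-year-survival label, find the
# cut point that best separates survival rates and report the conditional
# survival probabilities on each side.  Data and the 0.018 effect are synthetic.
import numpy as np

rng = np.random.default_rng(4)
b_cell_fraction = rng.uniform(0.0, 0.05, 500)
# Synthetic labels: survival more likely when the fraction exceeds ~0.018.
survived_5y = (rng.random(500) < np.where(b_cell_fraction > 0.018, 0.9, 0.55)).astype(int)

best = None
for thr in np.quantile(b_cell_fraction, np.linspace(0.05, 0.95, 50)):
    hi, lo = survived_5y[b_cell_fraction > thr], survived_5y[b_cell_fraction <= thr]
    gap = hi.mean() - lo.mean()
    if best is None or gap > best[0]:
        best = (gap, thr, hi.mean(), lo.mean())

gap, thr, p_hi, p_lo = best
print(f"threshold ~ {thr:.4f}: P(>=5y | above) = {p_hi:.2f}, P(>=5y | below) = {p_lo:.2f}")
```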
2006.10584
Rui Wang
Jiahui Chen, Kaifu Gao, Rui Wang, Duc Duy Nguyen and Guo-Wei Wei
Review of COVID-19 Antibody Therapies
30 pages, 10 figures, 5 tables
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Under the global health emergency caused by coronavirus disease 2019 (COVID-19), efficient and specific therapies are urgently needed. Compared with traditional small-molecule drugs, antibody therapies are relatively easy to develop and are as specific as vaccines in targeting severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), and have thus attracted much attention in the past few months. This work reviews seven existing antibodies for the SARS-CoV-2 spike (S) protein with three-dimensional (3D) structures deposited in the Protein Data Bank. Five antibody structures associated with SARS-CoV are evaluated for their potential in neutralizing SARS-CoV-2. The interactions of these antibodies with the S protein receptor-binding domain (RBD) are compared with those of angiotensin-converting enzyme 2 (ACE2) and RBD complexes. Because reported experimental binding affinities differ by orders of magnitude, we introduce topological data analysis (TDA), a variety of network models, and deep learning to analyze the binding strength and therapeutic potential of the aforementioned fourteen antibody-antigen complexes. The current COVID-19 antibody clinical trials, which are not limited to the S protein target, are also reviewed.
[ { "created": "Thu, 18 Jun 2020 14:47:19 GMT", "version": "v1" } ]
2020-06-19
[ [ "Chen", "Jiahui", "" ], [ "Gao", "Kaifu", "" ], [ "Wang", "Rui", "" ], [ "Nguyen", "Duc Duy", "" ], [ "Wei", "Guo-Wei", "" ] ]
Under the global health emergency caused by coronavirus disease 2019 (COVID-19), efficient and specific therapies are urgently needed. Compared with traditional small-molecule drugs, antibody therapies are relatively easy to develop and are as specific as vaccines in targeting severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), and have thus attracted much attention in the past few months. This work reviews seven existing antibodies for the SARS-CoV-2 spike (S) protein with three-dimensional (3D) structures deposited in the Protein Data Bank. Five antibody structures associated with SARS-CoV are evaluated for their potential in neutralizing SARS-CoV-2. The interactions of these antibodies with the S protein receptor-binding domain (RBD) are compared with those of angiotensin-converting enzyme 2 (ACE2) and RBD complexes. Because reported experimental binding affinities differ by orders of magnitude, we introduce topological data analysis (TDA), a variety of network models, and deep learning to analyze the binding strength and therapeutic potential of the aforementioned fourteen antibody-antigen complexes. The current COVID-19 antibody clinical trials, which are not limited to the S protein target, are also reviewed.
2207.04351
Wayne Hayes
Tingyin Ding, Utsav Jain, Wayne B. Hayes
BLANT: Basic Local Alignment of Network Topology, Part 2: Topology-only Extension Beyond Graphlet Seeds
8 pages, 5 Figures, 3 Tables
null
null
null
q-bio.MN
http://creativecommons.org/licenses/by/4.0/
BLAST is a standard tool in bioinformatics for creating local sequence alignments using a "seed-and-extend" approach. Here we introduce an analogous seed-and-extend algorithm that produces local network alignments: BLANT (Basic Local Alignment of Network Topology). In Part 1, we introduced BLANT-seed, which generates graphlet-based seeds using only topological information. Here, in Part 2, we describe BLANT-extend, which "grows" seeds to larger local alignments using only topological information. We allow the user to specify bounds on several measures an alignment must satisfy, including the edge density, edge commonality (i.e., aligned edges), and node-pair similarity if such a measure is used; the latter allows the inclusion of sequence-based similarity, if desired, as well as local topological constraints. BLANT-extend is able to enumerate all possible alignments satisfying the bounds that can be grown from each seed, within a specified CPU time or number of generated alignments. While topology-driven local network alignment has a wide variety of potential applications outside bioinformatics, here we focus on the alignment of Protein-Protein Interaction (PPI) networks. We show that BLANT is capable of finding large, high-quality local alignments when the networks are known to have high topological similarity -- for example recovering hundreds of orthologs between networks of the recent Integrated Interaction Database (IID). Predictably, however, it performs less well when true topological similarity is absent, as is the case in most current experimental PPI networks that are noisy and have wide disparity in edge density which results in low common coverage.
[ { "created": "Sun, 10 Jul 2022 00:20:24 GMT", "version": "v1" } ]
2022-07-12
[ [ "Ding", "Tingyin", "" ], [ "Jain", "Utsav", "" ], [ "Hayes", "Wayne B.", "" ] ]
BLAST is a standard tool in bioinformatics for creating local sequence alignments using a "seed-and-extend" approach. Here we introduce an analogous seed-and-extend algorithm that produces local network alignments: BLANT (Basic Local Alignment of Network Topology). In Part 1, we introduced BLANT-seed, which generates graphlet-based seeds using only topological information. Here, in Part 2, we describe BLANT-extend, which "grows" seeds to larger local alignments using only topological information. We allow the user to specify bounds on several measures an alignment must satisfy, including the edge density, edge commonality (i.e., aligned edges), and node-pair similarity if such a measure is used; the latter allows the inclusion of sequence-based similarity, if desired, as well as local topological constraints. BLANT-extend is able to enumerate all possible alignments satisfying the bounds that can be grown from each seed, within a specified CPU time or number of generated alignments. While topology-driven local network alignment has a wide variety of potential applications outside bioinformatics, here we focus on the alignment of Protein-Protein Interaction (PPI) networks. We show that BLANT is capable of finding large, high-quality local alignments when the networks are known to have high topological similarity -- for example recovering hundreds of orthologs between networks of the recent Integrated Interaction Database (IID). Predictably, however, it performs less well when true topological similarity is absent, as is the case in most current experimental PPI networks that are noisy and have wide disparity in edge density which results in low common coverage.
1305.4206
Hern\'an Burbano Dr
Kentaro Yoshida, Verena J. Schuenemann, Liliana M. Cano, Marina Pais, Bagdevi Mishra, Rahul Sharma, Christa Lanz, Frank N. Martin, Sophien Kamoun, Johannes Krause, Marco Thines, Detlef Weigel and Hern\'an A. Burbano
The rise and fall of the Phytophthora infestans lineage that triggered the Irish potato famine
To be published in eLIFE
null
10.7554/eLife.00731
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Phytophthora infestans, the cause of potato late blight, is infamous for having triggered the Irish Great Famine in the 1840s. Until the late 1970s, P. infestans diversity outside of its Mexican center of origin was low, and one scenario held that a single strain, US-1, had dominated the global population for 150 years; this was later challenged based on DNA analysis of historical herbarium specimens. We have compared the genomes of 11 herbarium and 15 modern strains. We conclude that the nineteenth century epidemic was caused by a unique genotype, HERB-1, that persisted for over 50 years. HERB-1 is distinct from all examined modern strains, but it is a close relative of US-1, which replaced it outside of Mexico in the twentieth century. We propose that HERB-1 and US-1 emerged from a metapopulation that was established in the early 1800s outside of the species' center of diversity.
[ { "created": "Fri, 17 May 2013 23:22:29 GMT", "version": "v1" } ]
2013-06-14
[ [ "Yoshida", "Kentaro", "" ], [ "Schuenemann", "Verena J.", "" ], [ "Cano", "Liliana M.", "" ], [ "Pais", "Marina", "" ], [ "Mishra", "Bagdevi", "" ], [ "Sharma", "Rahul", "" ], [ "Lanz", "Christa", "" ], [ "Martin", "Frank N.", "" ], [ "Kamoun", "Sophien", "" ], [ "Krause", "Johannes", "" ], [ "Thines", "Marco", "" ], [ "Weigel", "Detlef", "" ], [ "Burbano", "Hernán A.", "" ] ]
Phytophthora infestans, the cause of potato late blight, is infamous for having triggered the Irish Great Famine in the 1840s. Until the late 1970s, P. infestans diversity outside of its Mexican center of origin was low, and one scenario held that a single strain, US-1, had dominated the global population for 150 years; this was later challenged based on DNA analysis of historical herbarium specimens. We have compared the genomes of 11 herbarium and 15 modern strains. We conclude that the nineteenth century epidemic was caused by a unique genotype, HERB-1, that persisted for over 50 years. HERB-1 is distinct from all examined modern strains, but it is a close relative of US-1, which replaced it outside of Mexico in the twentieth century. We propose that HERB-1 and US-1 emerged from a metapopulation that was established in the early 1800s outside of the species' center of diversity.
2402.17997
Xiang Li
Zhe Wang, Jianping Wu, Mengjun Zheng, Chenchen Geng, Borui Zhen, Wei Zhang, Hui Wu, Zhengyang Xu, Gang Xu, Si Chen, Xiang Li
StaPep: an open-source tool for the structure prediction and feature extraction of hydrocarbon-stapled peptides
26 pages, 6 figures
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Many tools exist for extracting structural and physicochemical descriptors from linear peptides to predict their properties, but similar tools for hydrocarbon-stapled peptides are lacking. Here, we present StaPep, a Python-based toolkit designed for generating 2D/3D structures and calculating 21 distinct features for hydrocarbon-stapled peptides. The current version supports hydrocarbon-stapled peptides containing 2 non-standard amino acids (norleucine and 2-aminoisobutyric acid) and 6 non-natural anchoring residues (S3, S5, S8, R3, R5 and R8). We then established a hand-curated dataset of 201 hydrocarbon-stapled peptides and 384 linear peptides with sequence information and experimental membrane permeability to showcase StaPep's application in artificial intelligence projects. A machine-learning-based predictor utilizing the above calculated features was developed, with an AUC of 0.85, for identifying cell-penetrating hydrocarbon-stapled peptides. StaPep's pipeline spans data retrieval, cleaning, structure generation, molecular feature calculation, and machine learning model construction for hydrocarbon-stapled peptides. The source code and dataset are freely available on GitHub: https://github.com/dahuilangda/stapep_package.
[ { "created": "Wed, 28 Feb 2024 02:23:16 GMT", "version": "v1" } ]
2024-02-29
[ [ "Wang", "Zhe", "" ], [ "Wu", "Jianping", "" ], [ "Zheng", "Mengjun", "" ], [ "Geng", "Chenchen", "" ], [ "Zhen", "Borui", "" ], [ "Zhang", "Wei", "" ], [ "Wu", "Hui", "" ], [ "Xu", "Zhengyang", "" ], [ "Xu", "Gang", "" ], [ "Chen", "Si", "" ], [ "Li", "Xiang", "" ] ]
Many tools exist for extracting structural and physicochemical descriptors from linear peptides to predict their properties, but similar tools for hydrocarbon-stapled peptides are lacking. Here, we present StaPep, a Python-based toolkit designed for generating 2D/3D structures and calculating 21 distinct features for hydrocarbon-stapled peptides. The current version supports hydrocarbon-stapled peptides containing 2 non-standard amino acids (norleucine and 2-aminoisobutyric acid) and 6 non-natural anchoring residues (S3, S5, S8, R3, R5 and R8). We then established a hand-curated dataset of 201 hydrocarbon-stapled peptides and 384 linear peptides with sequence information and experimental membrane permeability to showcase StaPep's application in artificial intelligence projects. A machine-learning-based predictor utilizing the above calculated features was developed, with an AUC of 0.85, for identifying cell-penetrating hydrocarbon-stapled peptides. StaPep's pipeline spans data retrieval, cleaning, structure generation, molecular feature calculation, and machine learning model construction for hydrocarbon-stapled peptides. The source code and dataset are freely available on GitHub: https://github.com/dahuilangda/stapep_package.
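The downstream modelling step described above (a feature-based predictor of cell penetration evaluated by AUC) can be sketched generically as follows. The feature matrix here is random and merely stands in for descriptors that a toolkit such as StaPep would compute; the StaPep API itself is not reproduced.

```python
# Hedged sketch of a feature-based permeability classifier evaluated by AUC.
# X stands in for a 21-descriptor matrix over 585 peptides (201 stapled + 384
# linear, matching the dataset sizes above); labels here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
X = rng.normal(size=(585, 21))                 # 585 peptides x 21 descriptors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 585) > 0).astype(int)  # "permeable?"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```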
2308.03896
Eleftheria Papadopoulou
Eleftheria Papadopoulou, Charlene Vance, Paloma S. Rozene Vallespin, Panagiotis Tsapekos, Irini Angelidaki
Saccharina latissima, candy-factory waste, and digestate from full-scale biogas plant as alternative carbohydrate and nutrient sources for lactic acid production
25 pages, 6 figures
Bioresource Technology, Volume 380, 2023
10.1016/j.biortech.2023.129078
null
q-bio.BM
http://creativecommons.org/licenses/by-nc-nd/4.0/
To substitute petroleum-based materials with bio-based alternatives, microbial fermentation combined with inexpensive biomass is suggested. In this study, Saccharina latissima hydrolysate, candy-factory waste, and digestate from a full-scale biogas plant were explored as substrates for lactic acid production. The lactic acid bacteria Enterococcus faecium, Lactobacillus plantarum, and Pediococcus pentosaceus were tested as starter cultures. Sugars released from seaweed hydrolysate and candy waste were successfully utilized by the studied bacterial strains. Additionally, seaweed hydrolysate and digestate served as nutrient supplements supporting microbial fermentation. Based on the highest achieved relative lactic acid production, a scaled-up co-fermentation of candy waste and digestate was performed. Lactic acid reached a concentration of 65.65 g/L, with 61.69% relative lactic acid production and a productivity of 1.37 g/L/hour. The findings indicate that lactic acid can be successfully produced from low-cost industrial residues.
[ { "created": "Mon, 7 Aug 2023 20:13:10 GMT", "version": "v1" } ]
2023-08-09
[ [ "Papadopoulou", "Eleftheria", "" ], [ "Vance", "Charlene", "" ], [ "Vallespin", "Paloma S. Rozene", "" ], [ "Tsapekos", "Panagiotis", "" ], [ "Angelidaki", "Irini", "" ] ]
To substitute petroleum-based materials with bio-based alternatives, microbial fermentation combined with inexpensive biomass is suggested. In this study, Saccharina latissima hydrolysate, candy-factory waste, and digestate from a full-scale biogas plant were explored as substrates for lactic acid production. The lactic acid bacteria Enterococcus faecium, Lactobacillus plantarum, and Pediococcus pentosaceus were tested as starter cultures. Sugars released from seaweed hydrolysate and candy waste were successfully utilized by the studied bacterial strains. Additionally, seaweed hydrolysate and digestate served as nutrient supplements supporting microbial fermentation. Based on the highest achieved relative lactic acid production, a scaled-up co-fermentation of candy waste and digestate was performed. Lactic acid reached a concentration of 65.65 g/L, with 61.69% relative lactic acid production and a productivity of 1.37 g/L/hour. The findings indicate that lactic acid can be successfully produced from low-cost industrial residues.
1703.07724
Helena Zacharias U.
Helena U. Zacharias, Thorsten Rehberg, Sebastian Mehrl, Daniel Richtmann, Tilo Wettig, Peter J. Oefner, Rainer Spang, Wolfram Gronwald, Michael Altenbuchinger
Scale-invariant biomarker discovery in urine and plasma metabolite fingerprints
null
null
10.1021/acs.jproteome.7b00325
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: Metabolomics data is typically scaled to a common reference like a constant volume of body fluid, a constant creatinine level, or a constant area under the spectrum. Such normalization of the data, however, may affect the selection of biomarkers and the biological interpretation of results in unforeseen ways. Results: First, we study how the outcome of hypothesis tests for differential metabolite concentration is affected by the choice of scale. Furthermore, we observe this interdependence also for different classification approaches. Second, to overcome this problem and establish a scale-invariant biomarker discovery algorithm, we extend linear zero-sum regression to the logistic regression framework and show in two applications to ${}^1$H NMR-based metabolomics data how this approach overcomes the scaling problem. Availability: Logistic zero-sum regression is available as an R package as well as a high-performance computing implementation that can be downloaded at https://github.com/rehbergT/zeroSum
[ { "created": "Wed, 22 Mar 2017 16:06:23 GMT", "version": "v1" } ]
2017-09-19
[ [ "Zacharias", "Helena U.", "" ], [ "Rehberg", "Thorsten", "" ], [ "Mehrl", "Sebastian", "" ], [ "Richtmann", "Daniel", "" ], [ "Wettig", "Tilo", "" ], [ "Oefner", "Peter J.", "" ], [ "Spang", "Rainer", "" ], [ "Gronwald", "Wolfram", "" ], [ "Altenbuchinger", "Michael", "" ] ]
Motivation: Metabolomics data is typically scaled to a common reference like a constant volume of body fluid, a constant creatinine level, or a constant area under the spectrum. Such normalization of the data, however, may affect the selection of biomarkers and the biological interpretation of results in unforeseen ways. Results: First, we study how the outcome of hypothesis tests for differential metabolite concentration is affected by the choice of scale. Furthermore, we observe this interdependence also for different classification approaches. Second, to overcome this problem and establish a scale-invariant biomarker discovery algorithm, we extend linear zero-sum regression to the logistic regression framework and show in two applications to ${}^1$H NMR-based metabolomics data how this approach overcomes the scaling problem. Availability: Logistic zero-sum regression is available as an R package as well as a high-performance computing implementation that can be downloaded at https://github.com/rehbergT/zeroSum
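The core property exploited by (logistic) zero-sum regression, as described above, is that coefficients summing to zero make the linear predictor invariant to sample-wise rescaling, i.e. to a per-sample constant shift of all log-scale features. A minimal numerical check of that property follows; it is an illustration, not the zeroSum package itself.

```python
# Numerical check of scale invariance under the zero-sum constraint: adding a
# sample-specific constant to every log-scale feature of that sample changes
# the linear predictor by constant * sum(beta), which is zero by construction.
import numpy as np

rng = np.random.default_rng(6)
log_x = rng.normal(size=(5, 8))                      # 5 samples, 8 log-metabolite features
beta = rng.normal(size=8)
beta -= beta.mean()                                   # enforce the zero-sum constraint
intercept = 0.3

eta = intercept + log_x @ beta
log_x_rescaled = log_x + rng.normal(size=(5, 1))      # per-sample scale factors (in log space)
eta_rescaled = intercept + log_x_rescaled @ beta

print(np.allclose(eta, eta_rescaled))                 # True: predictions are scale-invariant
```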
0704.2649
Mike Steel Prof.
Mike Steel, Aki Mimoto, Arne O. Mooers
Hedging our bets: the expected contribution of species to future phylogenetic diversity
19 pages, 2 figures
null
null
null
q-bio.PE
null
If predictions for species extinctions hold, then the `tree of life' today may be quite different to that in (say) 100 years. We describe a technique to quantify how much each species is likely to contribute to future biodiversity, as measured by its expected contribution to phylogenetic diversity. Our approach considers all possible scenarios for the set of species that will be extant at some future time, and weights them according to their likelihood under an independent (but not identical) distribution on species extinctions. Although the number of extinction scenarios can typically be very large, we show that there is a simple algorithm that will quickly compute this index. The method is implemented and applied to the prosimian primates as a test case, and the associated species ranking is compared to a related measure (the `Shapley index'). We describe indices for rooted and unrooted trees, and a modification that also includes the focal taxon's probability of extinction, making it directly comparable to some new conservation metrics.
[ { "created": "Fri, 20 Apr 2007 02:37:58 GMT", "version": "v1" } ]
2007-05-23
[ [ "Steel", "Mike", "" ], [ "Mimoto", "Aki", "" ], [ "Mooers", "Arne O.", "" ] ]
If predictions for species extinctions hold, then the `tree of life' today may be quite different to that in (say) 100 years. We describe a technique to quantify how much each species is likely to contribute to future biodiversity, as measured by its expected contribution to phylogenetic diversity. Our approach considers all possible scenarios for the set of species that will be extant at some future time, and weights them according to their likelihood under an independent (but not identical) distribution on species extinctions. Although the number of extinction scenarios can typically be very large, we show that there is a simple algorithm that will quickly compute this index. The method is implemented and applied to the prosimian primates as a test case, and the associated species ranking is compared to a related measure (the `Shapley index'). We describe indices for rooted and unrooted trees, and a modification that also includes the focal taxon's probability of extinction, making it directly comparable to some new conservation metrics.
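Under independent survival probabilities, expected future phylogenetic diversity (PD) can be computed edge by edge, and a species' contribution can be illustrated as the drop in expected PD when its survival probability is set to zero. The sketch below is only one simple way to quantify the idea; the paper defines its index precisely and gives an efficient algorithm, so the toy tree, probabilities, and drop-one definition here are assumptions.

```python
# Expected future PD = sum over edges of edge length times the probability
# that at least one descendant leaf survives, assuming independent survival.

# tree: node -> (list of children, length of the edge above the node)
tree = {
    "root": (["A", "n1"], 0.0),
    "n1":   (["B", "C"], 1.0),
    "A":    ([], 2.0),
    "B":    ([], 1.5),
    "C":    ([], 0.5),
}
p_survive = {"A": 0.9, "B": 0.4, "C": 0.7}   # per-species survival probabilities

def prob_some_descendant_survives(node, p):
    children, _ = tree[node]
    if not children:                          # leaf
        return p.get(node, 0.0)
    p_none = 1.0
    for child in children:
        p_none *= 1.0 - prob_some_descendant_survives(child, p)
    return 1.0 - p_none

def expected_pd(p):
    return sum(length * prob_some_descendant_survives(node, p)
               for node, (children, length) in tree.items() if node != "root")

for sp in p_survive:
    p_without = dict(p_survive, **{sp: 0.0})
    print(sp, "expected PD contribution ~",
          round(expected_pd(p_survive) - expected_pd(p_without), 3))
```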
1506.05185
Paolo Ribeca
{\L}ukasz Roguski, Paolo Ribeca
CARGO: Effective format-free compressed storage of genomic information
13 (Main) + 31 (Supplementary) + 88 (Manual) pages
null
10.1093/nar/gkw318
null
q-bio.GN cs.CE cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The recent super-exponential growth in the amount of sequencing data generated worldwide has brought techniques for compressed storage into focus. Most available solutions, however, are strictly tied to specific bioinformatics formats, sometimes inheriting from them suboptimal design choices; this hinders flexible and effective data sharing. Here we present CARGO (Compressed ARchiving for GenOmics), a high-level framework to automatically generate software systems optimized for the compressed storage of arbitrary types of large genomic data collections. Straightforward applications of our approach to FASTQ and SAM archives require a few lines of code, produce solutions that match and sometimes outperform specialized format-tailored compressors, and scale well to multi-TB datasets.
[ { "created": "Wed, 17 Jun 2015 02:11:42 GMT", "version": "v1" } ]
2021-11-01
[ [ "Roguski", "Łukasz", "" ], [ "Ribeca", "Paolo", "" ] ]
The recent super-exponential growth in the amount of sequencing data generated worldwide has brought techniques for compressed storage into focus. Most available solutions, however, are strictly tied to specific bioinformatics formats, sometimes inheriting from them suboptimal design choices; this hinders flexible and effective data sharing. Here we present CARGO (Compressed ARchiving for GenOmics), a high-level framework to automatically generate software systems optimized for the compressed storage of arbitrary types of large genomic data collections. Straightforward applications of our approach to FASTQ and SAM archives require a few lines of code, produce solutions that match and sometimes outperform specialized format-tailored compressors, and scale well to multi-TB datasets.
1307.1917
Pratha Sah
Pratha Sah, Sutirth Dey
Stabilizing spatially structured populations through Adaptive Limiter Control
null
null
10.1371/journal.pone.0105861
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stabilizing the dynamics of complex, non-linear systems is a major concern across several scientific disciplines including ecology and conservation biology. Unfortunately, most methods proposed to reduce the fluctuations in chaotic systems are not applicable for real, biological populations. This is because such methods typically require detailed knowledge of system specific parameters and the ability to manipulate them in real time; conditions often not met by most real populations. Moreover, real populations are often noisy and extinction-prone, which can sometimes render such methods ineffective. Here we investigate a control strategy, which works by perturbing the population size, and is robust to reasonable amounts of noise and extinction probability. This strategy, called the Adaptive Limiter Control (ALC), has been previously shown to increase constancy and persistence of laboratory populations and metapopulations of Drosophila melanogaster. Here we present a detailed numerical investigation of the effects of ALC on the fluctuations and persistence of metapopulations. We show that at high migration rates, application of ALC does not require a priori information about the population growth rates. We also show that ALC can stabilize metapopulations even when applied to as low as one-tenth of the total number of subpopulations. Moreover, ALC is effective even when the subpopulations have high extinction rates: conditions under which one other control algorithm has previously failed to attain stability. Importantly, ALC not only reduces the fluctuation in metapopulation sizes, but also the global extinction probability. Finally, the method is robust to moderate levels of noise in the dynamics and the carrying capacity of the environment. These results, coupled with our earlier empirical findings, establish ALC to be a strong candidate for stabilizing real biological metapopulations.
[ { "created": "Sun, 7 Jul 2013 21:09:23 GMT", "version": "v1" } ]
2015-06-16
[ [ "Sah", "Pratha", "" ], [ "Dey", "Sutirth", "" ] ]
Stabilizing the dynamics of complex, non-linear systems is a major concern across several scientific disciplines including ecology and conservation biology. Unfortunately, most methods proposed to reduce the fluctuations in chaotic systems are not applicable for real, biological populations. This is because such methods typically require detailed knowledge of system specific parameters and the ability to manipulate them in real time; conditions often not met by most real populations. Moreover, real populations are often noisy and extinction-prone, which can sometimes render such methods ineffective. Here we investigate a control strategy, which works by perturbing the population size, and is robust to reasonable amounts of noise and extinction probability. This strategy, called the Adaptive Limiter Control (ALC), has been previously shown to increase constancy and persistence of laboratory populations and metapopulations of Drosophila melanogaster. Here we present a detailed numerical investigation of the effects of ALC on the fluctuations and persistence of metapopulations. We show that at high migration rates, application of ALC does not require a priori information about the population growth rates. We also show that ALC can stabilize metapopulations even when applied to as low as one-tenth of the total number of subpopulations. Moreover, ALC is effective even when the subpopulations have high extinction rates: conditions under which one other control algorithm has previously failed to attain stability. Importantly, ALC not only reduces the fluctuation in metapopulation sizes, but also the global extinction probability. Finally, the method is robust to moderate levels of noise in the dynamics and the carrying capacity of the environment. These results, coupled with our earlier empirical findings, establish ALC to be a strong candidate for stabilizing real biological metapopulations.
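A toy simulation of the control strategy described above, assuming the commonly stated ALC rule (whenever a subpopulation falls below a fraction c of its own previous size, it is topped up to that level) applied to noisy Ricker subpopulations coupled by migration; parameter values are illustrative, not the paper's.

```python
# Toy Adaptive Limiter Control simulation on a two-patch Ricker metapopulation.
# The growth model, noise level, migration rate, and ALC fraction c are
# assumptions for illustration only.
import numpy as np

def ricker(n, r=3.5, k=60.0):
    return n * np.exp(r * (1.0 - n / k))

def simulate(c=0.0, m=0.1, steps=200, seed=7):
    rng = np.random.default_rng(seed)
    n = np.array([50.0, 55.0])
    sizes = []
    for _ in range(steps):
        prev = n.copy()
        n = ricker(n) * rng.lognormal(0.0, 0.1, size=2)   # noisy growth
        n = (1.0 - m) * n + m * n[::-1]                   # symmetric migration
        low = n < c * prev
        n[low] = c * prev[low]                            # ALC: top up to the limiter
        sizes.append(n.sum())
    sizes = np.array(sizes)
    return sizes.std() / sizes.mean()                     # fluctuation index (CV)

print("no control :", round(simulate(c=0.0), 2))
print("ALC c=0.3  :", round(simulate(c=0.3), 2))
```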
1912.06686
Claas Flint
Claas Flint, Micah Cearns, Nils Opel, Ronny Redlich, David M. A. Mehler, Daniel Emden, Nils R. Winter, Ramona Leenings, Simon B. Eickhoff, Tilo Kircher, Axel Krug, Igor Nenadic, Volker Arolt, Scott Clark, Bernhard T. Baune, Xiaoyi Jiang, Udo Dannlowski, Tim Hahn
Systematic Misestimation of Machine Learning Performance in Neuroimaging Studies of Depression
null
Neuropsychopharmacology 46 (2021) 1510-1517
10.1038/s41386-021-01020-7
null
q-bio.NC cs.CV eess.IV
http://creativecommons.org/licenses/by/4.0/
We currently observe a disconcerting phenomenon in machine learning studies in psychiatry: While we would expect larger samples to yield better results due to the availability of more data, larger machine learning studies consistently show much weaker performance than the numerous small-scale studies. Here, we systematically investigated this effect focusing on one of the most heavily studied questions in the field, namely the classification of patients suffering from major depressive disorder (MDD) and healthy control (HC) based on neuroimaging data. Drawing upon structural magnetic resonance imaging (MRI) data from a balanced sample of $N = 1,868$ MDD patients and HC from our recent international Predictive Analytics Competition (PAC), we first trained and tested a classification model on the full dataset which yielded an accuracy of $61\,\%$. Next, we mimicked the process by which researchers would draw samples of various sizes ($N = 4$ to $N = 150$) from the population and showed a strong risk of misestimation. Specifically, for small sample sizes ($N = 20$), we observe accuracies of up to $95\,\%$. For medium sample sizes ($N = 100$) accuracies up to $75\,\%$ were found. Importantly, further investigation showed that sufficiently large test sets effectively protect against performance misestimation whereas larger datasets per se do not. While these results question the validity of a substantial part of the current literature, we outline the relatively low-cost remedy of larger test sets, which is readily available in most cases.
[ { "created": "Fri, 13 Dec 2019 20:12:52 GMT", "version": "v1" }, { "created": "Mon, 3 May 2021 15:10:35 GMT", "version": "v2" } ]
2021-06-23
[ [ "Flint", "Claas", "" ], [ "Cearns", "Micah", "" ], [ "Opel", "Nils", "" ], [ "Redlich", "Ronny", "" ], [ "Mehler", "David M. A.", "" ], [ "Emden", "Daniel", "" ], [ "Winter", "Nils R.", "" ], [ "Leenings", "Ramona", "" ], [ "Eickhoff", "Simon B.", "" ], [ "Kircher", "Tilo", "" ], [ "Krug", "Axel", "" ], [ "Nenadic", "Igor", "" ], [ "Arolt", "Volker", "" ], [ "Clark", "Scott", "" ], [ "Baune", "Bernhard T.", "" ], [ "Jiang", "Xiaoyi", "" ], [ "Dannlowski", "Udo", "" ], [ "Hahn", "Tim", "" ] ]
We currently observe a disconcerting phenomenon in machine learning studies in psychiatry: While we would expect larger samples to yield better results due to the availability of more data, larger machine learning studies consistently show much weaker performance than the numerous small-scale studies. Here, we systematically investigated this effect focusing on one of the most heavily studied questions in the field, namely the classification of patients suffering from major depressive disorder (MDD) and healthy control (HC) based on neuroimaging data. Drawing upon structural magnetic resonance imaging (MRI) data from a balanced sample of $N = 1,868$ MDD patients and HC from our recent international Predictive Analytics Competition (PAC), we first trained and tested a classification model on the full dataset which yielded an accuracy of $61\,\%$. Next, we mimicked the process by which researchers would draw samples of various sizes ($N = 4$ to $N = 150$) from the population and showed a strong risk of misestimation. Specifically, for small sample sizes ($N = 20$), we observe accuracies of up to $95\,\%$. For medium sample sizes ($N = 100$) accuracies up to $75\,\%$ were found. Importantly, further investigation showed that sufficiently large test sets effectively protect against performance misestimation whereas larger datasets per se do not. While these results question the validity of a substantial part of the current literature, we outline the relatively low-cost remedy of larger test sets, which is readily available in most cases.
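The sampling argument above can be illustrated with a few lines of simulation: if a classifier's true accuracy is 61%, small test sets report widely varying accuracies purely by chance, whereas large test sets do not. The numbers below are simulation choices, not the study's data.

```python
# Spread of observed accuracies as a function of test-set size, assuming a
# fixed true accuracy of 0.61 and independent test cases.
import numpy as np

rng = np.random.default_rng(8)
true_accuracy = 0.61

for n_test in (20, 100, 1000):
    observed = rng.binomial(n_test, true_accuracy, size=10_000) / n_test
    print(f"n_test={n_test:5d}: 95% of observed accuracies fall in "
          f"[{np.quantile(observed, 0.025):.2f}, {np.quantile(observed, 0.975):.2f}]")
```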
2311.00652
Tom Zhao
Tom Y. Zhao, Jin-Tae Kim, Min Cho, Akhil Narang, John A. Rogers, Neelesh A. Patankar
The physical origin of aneurysm growth, dissection, and rupture
null
null
null
null
q-bio.TO physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Rupture of aortic aneurysms is by far the most fatal heart disease, with a mortality rate exceeding 80%. There are no reliable clinical protocols to predict growth, dissection, and rupture because the fundamental physics driving aneurysm progression is unknown. Here, via in-vitro experiments, we show that a blood-wall, fluttering instability manifests in synthetic arteries under pulsatile forcing. We establish a phase space to prove that the transition from stable flow to unstable aortic flutter is accurately predicted by a flutter instability parameter derived from first principles. Time resolved strain maps of the evolving system reveal the dynamical characteristics of aortic flutter that drive aneurysm progression. We show that low level instability can trigger permanent aortic growth, even in the absence of material remodeling. Sufficiently large flutter beyond a secondary threshold localizes strain in the walls to the length scale clinically observed in aortic dissection. Lastly, significant physical flutter beyond a tertiary threshold can ultimately induce aneurysm rupture via failure modes reported from necropsy. Resolving the fundamental physics of aneurysm progression directly leads to clinical protocols that forecast growth as well as intercept dissection and rupture by pinpointing their physical origin.
[ { "created": "Wed, 1 Nov 2023 16:58:06 GMT", "version": "v1" } ]
2023-11-02
[ [ "Zhao", "Tom Y.", "" ], [ "Kim", "Jin-Tae", "" ], [ "Cho", "Min", "" ], [ "Narang", "Akhil", "" ], [ "Rogers", "John A.", "" ], [ "Patankar", "Neelesh A.", "" ] ]
Rupture of aortic aneurysms is by far the most fatal heart disease, with a mortality rate exceeding 80%. There are no reliable clinical protocols to predict growth, dissection, and rupture because the fundamental physics driving aneurysm progression is unknown. Here, via in-vitro experiments, we show that a blood-wall, fluttering instability manifests in synthetic arteries under pulsatile forcing. We establish a phase space to prove that the transition from stable flow to unstable aortic flutter is accurately predicted by a flutter instability parameter derived from first principles. Time resolved strain maps of the evolving system reveal the dynamical characteristics of aortic flutter that drive aneurysm progression. We show that low level instability can trigger permanent aortic growth, even in the absence of material remodeling. Sufficiently large flutter beyond a secondary threshold localizes strain in the walls to the length scale clinically observed in aortic dissection. Lastly, significant physical flutter beyond a tertiary threshold can ultimately induce aneurysm rupture via failure modes reported from necropsy. Resolving the fundamental physics of aneurysm progression directly leads to clinical protocols that forecast growth as well as intercept dissection and rupture by pinpointing their physical origin.
2001.01852
Sheryl Chang
Sheryl L. Chang, Mahendra Piraveenan, Mikhail Prokopenko
Impact of network assortativity on epidemic and vaccination behaviour
17 pages, 13 figures
chaos, solitons & fractals,140:110143, 2020
10.1016/j.chaos.2020.110143
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The resurgence of measles is largely attributed to the decline in vaccine adoption and the increase in mobility. Although the vaccine for measles is readily available and highly successful, its current adoption is not adequate to prevent epidemics. Vaccine adoption is directly affected by individual vaccination decisions, and has a complex interplay with the spatial spread of disease shaped by an underlying mobility (travelling) network. In this paper, we model the travelling connectivity as a scale-free network, and investigate dependencies between the network's assortativity and the resultant epidemic and vaccination dynamics. In doing so we extend an SIR-network model with game-theoretic components, capturing the imitation dynamics under a voluntary vaccination scheme. Our results show a correlation between the epidemic dynamics and the network's assortativity, highlighting that networks with high assortativity tend to suppress epidemics under certain conditions. In highly assortative networks, the suppression is sustained producing an early convergence to equilibrium. In highly disassortative networks, however, the suppression effect diminishes over time due to scattering of non-vaccinating nodes, and frequent switching between the predominantly vaccinating and non-vaccinating phases of the dynamics.
[ { "created": "Tue, 7 Jan 2020 02:08:19 GMT", "version": "v1" }, { "created": "Mon, 1 Jun 2020 09:30:07 GMT", "version": "v2" } ]
2020-08-26
[ [ "Chang", "Sheryl L.", "" ], [ "Piraveenan", "Mahendra", "" ], [ "Prokopenko", "Mikhail", "" ] ]
The resurgence of measles is largely attributed to the decline in vaccine adoption and the increase in mobility. Although the vaccine for measles is readily available and highly successful, its current adoption is not adequate to prevent epidemics. Vaccine adoption is directly affected by individual vaccination decisions, and has a complex interplay with the spatial spread of disease shaped by an underlying mobility (travelling) network. In this paper, we model the travelling connectivity as a scale-free network, and investigate dependencies between the network's assortativity and the resultant epidemic and vaccination dynamics. In doing so we extend an SIR-network model with game-theoretic components, capturing the imitation dynamics under a voluntary vaccination scheme. Our results show a correlation between the epidemic dynamics and the network's assortativity, highlighting that networks with high assortativity tend to suppress epidemics under certain conditions. In highly assortative networks, the suppression is sustained producing an early convergence to equilibrium. In highly disassortative networks, however, the suppression effect diminishes over time due to scattering of non-vaccinating nodes, and frequent switching between the predominantly vaccinating and non-vaccinating phases of the dynamics.
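The abstract models travelling connectivity as a scale-free network and relates epidemic outcomes to its assortativity. The sketch below is not the paper's SIR-network model with imitation dynamics; it only shows how such a topology might be generated and its degree assortativity measured and perturbed with networkx, with arbitrary graph size and swap counts.

```python
import networkx as nx

# Scale-free topology as a stand-in for the travelling network (illustrative size).
G = nx.barabasi_albert_graph(n=1000, m=3, seed=42)

# Degree assortativity: > 0 means high-degree nodes tend to attach to high-degree nodes.
r = nx.degree_assortativity_coefficient(G)
print(f"degree assortativity r = {r:.3f}")

# Degree-preserving rewiring changes assortativity without changing the degree sequence;
# targeted swap schemes (not shown) can push it deliberately up or down.
H = nx.double_edge_swap(G.copy(), nswap=2000, max_tries=20000, seed=1)
print(f"after degree-preserving rewiring r = {nx.degree_assortativity_coefficient(H):.3f}")
```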
2401.00743
Kristina Crona
Kristina Crona and Devin Greene
Walsh coefficients and circuits for several alleles
13 pages
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Walsh coefficients have been applied extensively to biallelic systems for quantifying pairwise and higher order epistasis, in particular for demonstrating the empirical importance of higher order interactions. Circuits, or minimal dependence relations, and related approaches that use triangulations of polytopes have also been applied to biallelic systems. Here we provide biological interpretations of Walsh coefficients for several alleles, and discuss circuits in the same general setting.
[ { "created": "Mon, 1 Jan 2024 12:56:42 GMT", "version": "v1" }, { "created": "Tue, 2 Jan 2024 14:28:14 GMT", "version": "v2" } ]
2024-01-03
[ [ "Crona", "Kristina", "" ], [ "Greene", "Devin", "" ] ]
Walsh coefficients have been applied extensively to biallelic systems for quantifying pairwise and higher order epistasis, in particular for demonstrating the empirical importance of higher order interactions. Circuits, or minimal dependence relations, and related approaches that use triangulations of polytopes have also been applied to biallelic systems. Here we provide biological interpretations of Walsh coefficients for several alleles, and discuss circuits in the same general setting.
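For orientation, the standard biallelic case can be computed with a Hadamard transform; nonzero higher-order coefficients then signal epistasis. The sketch below is limited to that biallelic setting (sign and normalisation conventions vary) and is not the several-allele generalisation discussed in the abstract; the two-locus fitness values are invented.

```python
import numpy as np

def walsh_coefficients(fitness):
    """Walsh-Hadamard coefficients of a biallelic fitness landscape.

    `fitness` lists fitness values for genotypes ordered by their binary index
    (e.g. for 2 loci: 00, 01, 10, 11).  Coefficients are normalised by 2**n.
    """
    f = np.asarray(fitness, dtype=float)
    n = int(np.log2(f.size))
    H = np.array([[1.0]])
    for _ in range(n):                      # build the 2^n x 2^n Hadamard matrix
        H = np.kron(np.array([[1, 1], [1, -1]]), H)
    return H @ f / f.size

# Hypothetical 2-locus landscape: genotypes 00, 01, 10, 11.
f = [1.0, 1.2, 1.1, 1.9]
coeffs = walsh_coefficients(f)
print(coeffs)   # last entry reflects pairwise epistasis; it vanishes for additive landscapes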
1010.5758
Lykke Pedersen
Lykke Pedersen, Sandeep Krishna and Mogens H. Jensen
Dickkopf1 - a new player in modelling the Wnt pathway
null
null
10.1371/journal.pone.0025550
null
q-bio.CB physics.bio-ph q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Wnt signalling pathway transducing the stabilization of beta-catenin is essential for metazoan embryo development and is misregulated in many diseases such as cancers. In recent years, models have been proposed for the Wnt signalling pathway during the segmentation process in developing embryos. Many of these include negative feedback loops built around Axin2 regulation. In this article, we propose a new negative feedback model for the Wnt pathway with Dickkopf1 (Dkk1) at its core. Dkk1 is a negative regulator of Wnt signalling. In chicken and mouse embryos there is a gradient of Wnt in the presomitic mesoderm (PSM) decreasing from the posterior to the anterior end. The formation of somites and the oscillations of Wnt target genes are controlled by this gradient. Here we incorporate a Wnt gradient and show that synchronization of neighbouring cells in the PSM is important, in accordance with experimental observations.
[ { "created": "Wed, 27 Oct 2010 18:01:50 GMT", "version": "v1" } ]
2015-05-20
[ [ "Pedersen", "Lykke", "" ], [ "Krishna", "Sandeep", "" ], [ "Jensen", "Mogens H.", "" ] ]
The Wnt signalling pathway transducing the stabilization of beta-catenin is essential for metazoan embryo development and is misregulated in many diseases such as cancers. In recent years, models have been proposed for the Wnt signalling pathway during the segmentation process in developing embryos. Many of these include negative feedback loops built around Axin2 regulation. In this article, we propose a new negative feedback model for the Wnt pathway with Dickkopf1 (Dkk1) at its core. Dkk1 is a negative regulator of Wnt signalling. In chicken and mouse embryos there is a gradient of Wnt in the presomitic mesoderm (PSM) decreasing from the posterior to the anterior end. The formation of somites and the oscillations of Wnt target genes are controlled by this gradient. Here we incorporate a Wnt gradient and show that synchronization of neighbouring cells in the PSM is important, in accordance with experimental observations.
1701.01150
Maxwell Bertolero
M.A. Bertolero, B.T.T. Yeo, M. D'Esposito
The Diverse Club: The Integrative Core of Complex Networks
null
null
10.1038/s41467-017-01189-w
null
q-bio.NC physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A complex system can be represented and analyzed as a network, where nodes represent the units of the network and edges represent connections between those units. For example, a brain network represents neurons as nodes and axons between neurons as edges. In many networks, some nodes have a disproportionately high number of edges. These nodes also have many edges between each other, and are referred to as the rich club. In many different networks, the nodes of this club are assumed to support global network integration. However, another set of nodes potentially exhibits a connectivity structure that is more advantageous to global network integration. Here, in a myriad of different biological and man-made networks, we discover the diverse club--a set of nodes that have edges diversely distributed across the network. The diverse club exhibits, to a greater extent than the rich club, properties consistent with an integrative network function--these nodes are more highly interconnected and their edges are more critical for efficient global integration. Moreover, we present a generative evolutionary network model that produces networks with a diverse club but not a rich club, thus demonstrating that these two clubs potentially evolved via distinct selection pressures. Given the variety of different networks that we analyzed--the c. elegans, the macaque brain, the human brain, the United States power grid, and global air traffic--the diverse club appears to be ubiquitous in complex networks. These results warrant the distinction and analysis of two critical clubs of nodes in all complex systems.
[ { "created": "Wed, 4 Jan 2017 21:16:50 GMT", "version": "v1" }, { "created": "Tue, 27 Jun 2017 16:39:23 GMT", "version": "v2" } ]
2017-11-03
[ [ "Bertolero", "M. A.", "" ], [ "Yeo", "B. T. T.", "" ], [ "D'Esposito", "M.", "" ] ]
A complex system can be represented and analyzed as a network, where nodes represent the units of the network and edges represent connections between those units. For example, a brain network represents neurons as nodes and axons between neurons as edges. In many networks, some nodes have a disproportionately high number of edges. These nodes also have many edges between each other, and are referred to as the rich club. In many different networks, the nodes of this club are assumed to support global network integration. However, another set of nodes potentially exhibits a connectivity structure that is more advantageous to global network integration. Here, in a myriad of different biological and man-made networks, we discover the diverse club--a set of nodes that have edges diversely distributed across the network. The diverse club exhibits, to a greater extent than the rich club, properties consistent with an integrative network function--these nodes are more highly interconnected and their edges are more critical for efficient global integration. Moreover, we present a generative evolutionary network model that produces networks with a diverse club but not a rich club, thus demonstrating that these two clubs potentially evolved via distinct selection pressures. Given the variety of different networks that we analyzed--the c. elegans, the macaque brain, the human brain, the United States power grid, and global air traffic--the diverse club appears to be ubiquitous in complex networks. These results warrant the distinction and analysis of two critical clubs of nodes in all complex systems.
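The "diverse club" is defined by how evenly a node's edges are spread across the network. One common way to quantify this is the participation coefficient over a community partition; the sketch below uses that measure on a toy graph as an assumption-labelled illustration, not the paper's exact pipeline.

```python
import networkx as nx
from networkx.algorithms import community

def participation_coefficient(G, communities):
    """P_i = 1 - sum_s (k_is / k_i)^2, where k_is counts edges from i into community s."""
    node_to_comm = {n: s for s, comm in enumerate(communities) for n in comm}
    pc = {}
    for node in G:
        k = G.degree(node)
        if k == 0:
            pc[node] = 0.0
            continue
        counts = {}
        for nbr in G.neighbors(node):
            s = node_to_comm[nbr]
            counts[s] = counts.get(s, 0) + 1
        pc[node] = 1.0 - sum((c / k) ** 2 for c in counts.values())
    return pc

G = nx.karate_club_graph()                                     # small example network
partition = community.greedy_modularity_communities(G)         # a standard partition
pc = participation_coefficient(G, partition)
top = sorted(pc, key=pc.get, reverse=True)[:5]
print("nodes with the most diversely distributed edges:", top)
```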
2204.06802
Jordi Garcia-Ojalvo
Pablo Casani-Galdon and Jordi Garcia-Ojalvo
Signaling oscillations: molecular mechanisms and functional roles
6 pages, 4 figures
null
null
null
q-bio.CB physics.bio-ph q-bio.MN
http://creativecommons.org/licenses/by/4.0/
Mounting evidence shows that oscillatory activity is widespread in cell signaling. Here we review some of this recent evidence, focusing on both the molecular mechanisms that potentially underlie such dynamical behavior, and the potential advantages that signaling oscillations might have in cell function. The biological processes considered include tissue maintenance, embryonic development and wound healing. With the aid of mathematical modeling, we show that a common principle, namely delayed negative feedback, underpins this wide variety of phenomena.
[ { "created": "Thu, 14 Apr 2022 07:51:11 GMT", "version": "v1" } ]
2022-04-15
[ [ "Casani-Galdon", "Pablo", "" ], [ "Garcia-Ojalvo", "Jordi", "" ] ]
Mounting evidence shows that oscillatory activity is widespread in cell signaling. Here we review some of this recent evidence, focusing on both the molecular mechanisms that potentially underlie such dynamical behavior, and the potential advantages that signaling oscillations might have in cell function. The biological processes considered include tissue maintenance, embryonic development and wound healing. With the aid of mathematical modeling, we show that a common principle, namely delayed negative feedback, underpins this wide variety of phenomena.
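Delayed negative feedback, named here as the common principle, can be illustrated with a single-variable delay differential equation in which production is repressed by the delayed variable. The model form, parameters, and Euler-with-history integration below are illustrative choices, not taken from the paper.

```python
import numpy as np

# dx/dt = beta / (1 + (x(t - tau)/K)^n) - gamma * x(t)
beta, K, n_hill, gamma, tau = 10.0, 1.0, 4, 1.0, 2.0   # illustrative parameters
dt, t_end = 0.01, 60.0

steps = int(t_end / dt)
delay_steps = int(tau / dt)
x = np.zeros(steps + 1)
x[0] = 0.1                                   # constant history assumed for t < 0

for i in range(steps):
    x_delayed = x[i - delay_steps] if i >= delay_steps else x[0]
    dxdt = beta / (1.0 + (x_delayed / K) ** n_hill) - gamma * x[i]
    x[i + 1] = x[i] + dt * dxdt

# With a sufficiently long delay and steep repression, x(t) settles into sustained
# oscillations rather than a fixed point.
print("late-time min/max:", x[steps // 2:].min(), x[steps // 2:].max())
```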
1408.3044
Benjamin Albrecht
Benjamin Albrecht
Computing Hybridization Networks for Multiple Rooted Binary Phylogenetic Trees by Maximum Acyclic Agreement Forests
38 pages
null
null
null
q-bio.PE cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is a known fact that, given two rooted binary phylogenetic trees, the concept of maximum acyclic agreement forests is sufficient to compute hybridization networks with minimum hybridization number. In this work, we demonstrate, by first presenting an algorithm and then showing its correctness, that this concept is also sufficient in the case of multiple input trees. More precisely, we show that for computing minimum hybridization networks for multiple rooted binary phylogenetic trees on the same set of taxa, it suffices to take only maximum acyclic agreement forests into account. Moreover, this article contains a proof showing that the minimum hybridization number for a set of rooted binary phylogenetic trees on the same set of taxa can also be computed by solving subproblems referring to common clusters of the input trees.
[ { "created": "Wed, 13 Aug 2014 16:15:09 GMT", "version": "v1" }, { "created": "Thu, 16 Apr 2015 16:42:43 GMT", "version": "v2" }, { "created": "Thu, 17 Dec 2015 16:39:30 GMT", "version": "v3" } ]
2015-12-18
[ [ "Albrecht", "Benjamin", "" ] ]
It is a known fact that, given two rooted binary phylogenetic trees, the concept of maximum acyclic agreement forests is sufficient to compute hybridization networks with minimum hybridization number. In this work, we demonstrate, by first presenting an algorithm and then showing its correctness, that this concept is also sufficient in the case of multiple input trees. More precisely, we show that for computing minimum hybridization networks for multiple rooted binary phylogenetic trees on the same set of taxa, it suffices to take only maximum acyclic agreement forests into account. Moreover, this article contains a proof showing that the minimum hybridization number for a set of rooted binary phylogenetic trees on the same set of taxa can also be computed by solving subproblems referring to common clusters of the input trees.
0905.0732
Sidney Redner
T. Antal, P. L. Krapivsky, S. Redner
Shepherd Model for Knot-Limited Polymer Ejection from a Capsid
9 pages, 4 figures. version 2 has minor changes in response to referee comments
Journal Theoretical Biology 261, 488-493 (2009)
10.1016/j.jtbi.2009.08.021
null
q-bio.QM cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We construct a tractable model to describe the rate at which a knotted polymer is ejected from a spherical capsid via a small pore. Knots are too large to fit through the pore and must reptate to the end of the polymer for ejection to occur. The reptation of knots is described by symmetric exclusion on the line, with the internal capsid pressure represented by an additional biased particle that drives knots to the end of the chain. We compute the exact ejection speed for a finite number of knots L and find that it scales as 1/L. We also construct a continuum theory for many knots that matches the exact discrete theory for large L.
[ { "created": "Wed, 6 May 2009 19:21:45 GMT", "version": "v1" }, { "created": "Wed, 12 Aug 2009 00:21:49 GMT", "version": "v2" } ]
2009-10-22
[ [ "Antal", "T.", "" ], [ "Krapivsky", "P. L.", "" ], [ "Redner", "S.", "" ] ]
We construct a tractable model to describe the rate at which a knotted polymer is ejected from a spherical capsid via a small pore. Knots are too large to fit through the pore and must reptate to the end of the polymer for ejection to occur. The reptation of knots is described by symmetric exclusion on the line, with the internal capsid pressure represented by an additional biased particle that drives knots to the end of the chain. We compute the exact ejection speed for a finite number of knots L and find that it scales as 1/L. We also construct a continuum theory for many knots that matches the exact discrete theory for large L.
q-bio/0604006
Oliver Mason
Oliver Mason and Mark Verwoerd
Graph Theory and Networks in Biology
52 pages, 5 figures, Survey Paper
null
null
null
q-bio.MN q-bio.QM
null
In this paper, we present a survey of the use of graph theoretical techniques in Biology. In particular, we discuss recent work on identifying and modelling the structure of bio-molecular networks, as well as the application of centrality measures to interaction networks and research on the hierarchical structure of such networks and network motifs. Work on the link between structural network properties and dynamics is also described, with emphasis on synchronization and disease propagation.
[ { "created": "Thu, 6 Apr 2006 14:55:34 GMT", "version": "v1" } ]
2007-05-23
[ [ "Mason", "Oliver", "" ], [ "Verwoerd", "Mark", "" ] ]
In this paper, we present a survey of the use of graph theoretical techniques in Biology. In particular, we discuss recent work on identifying and modelling the structure of bio-molecular networks, as well as the application of centrality measures to interaction networks and research on the hierarchical structure of such networks and network motifs. Work on the link between structural network properties and dynamics is also described, with emphasis on synchronization and disease propagation.
2006.09530
Xun Jiao
C. A. K. Kwuimy, Foad Nazari, Xun Jiao, Pejman Rohani and C. Nataraj
Nonlinear dynamic analysis of an epidemiological model for COVID-19 including public behavior and government action
null
null
10.1007/s11071-020-05815-z
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper is concerned with nonlinear modeling and analysis of the COVID-19 pandemic currently ravaging the planet. There are two objectives: to arrive at an appropriate model that captures the collected data faithfully, and to use that as a basis to explore the nonlinear behavior. We use a nonlinear SEIR (Susceptible, Exposed, Infectious & Removed) transmission model with added behavioral and government policy dynamics. We develop a genetic algorithm technique to identify key model parameters, employing COVID-19 data from South Korea. Stability, bifurcations and dynamic behavior are analyzed. Parametric analysis reveals conditions for sustained epidemic equilibria to occur. This work points to the value of nonlinear dynamic analysis in pandemic modeling and demonstrates the dramatic influence of social and government behavior on disease dynamics.
[ { "created": "Tue, 16 Jun 2020 21:36:45 GMT", "version": "v1" }, { "created": "Wed, 14 Oct 2020 00:38:50 GMT", "version": "v2" } ]
2020-10-15
[ [ "Kwuimy", "C. A. K.", "" ], [ "Nazari", "Foad", "" ], [ "Jiao", "Xun", "" ], [ "Rohani", "Pejman", "" ], [ "Nataraj", "C.", "" ] ]
This paper is concerned with nonlinear modeling and analysis of the COVID-19 pandemic currently ravaging the planet. There are two objectives: to arrive at an appropriate model that captures the collected data faithfully, and to use that as a basis to explore the nonlinear behavior. We use a nonlinear SEIR (Susceptible, Exposed, Infectious & Removed) transmission model with added behavioral and government policy dynamics. We develop a genetic algorithm technique to identify key model parameters, employing COVID-19 data from South Korea. Stability, bifurcations and dynamic behavior are analyzed. Parametric analysis reveals conditions for sustained epidemic equilibria to occur. This work points to the value of nonlinear dynamic analysis in pandemic modeling and demonstrates the dramatic influence of social and government behavior on disease dynamics.
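The backbone of the model described above is a standard SEIR system. The sketch below integrates only that core with scipy, using made-up parameters and initial conditions; it omits the behavioral and government-policy terms and the genetic-algorithm fitting, and simply shows the structure being extended.

```python
import numpy as np
from scipy.integrate import solve_ivp

def seir(t, y, beta, sigma, gamma):
    S, E, I, R = y
    N = S + E + I + R
    dS = -beta * S * I / N
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    return [dS, dE, dI, dR]

# Illustrative parameters (not the fitted South Korea values).
beta, sigma, gamma = 0.5, 1 / 5.2, 1 / 10
y0 = [9_999_000, 500, 500, 0]
sol = solve_ivp(seir, (0, 180), y0, args=(beta, sigma, gamma),
                dense_output=True, max_step=1.0)

t = np.linspace(0, 180, 181)
I = sol.sol(t)[2]
print(f"peak infectious ~ {I.max():,.0f} on day {t[I.argmax()]:.0f}")
```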
2211.06428
Zichen Wang
Gil Sadeh, Zichen Wang, Jasleen Grewal, Huzefa Rangwala, Layne Price
Training self-supervised peptide sequence models on artificially chopped proteins
null
null
null
null
q-bio.QM cs.LG q-bio.BM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Representation learning for proteins has primarily focused on the global understanding of protein sequences regardless of their length. However, shorter proteins (known as peptides) take on distinct structures and functions compared to their longer counterparts. Unfortunately, there are not as many naturally occurring peptides available to be sequenced and therefore less peptide-specific data to train with. In this paper, we propose a new peptide data augmentation scheme, where we train peptide language models on artificially constructed peptides that are small contiguous subsets of longer, wild-type proteins; we refer to the training peptides as "chopped proteins". We evaluate the representation potential of models trained with chopped proteins versus natural peptides and find that training language models with chopped proteins results in more generalized embeddings for short protein sequences. These peptide-specific models also retain information about the original protein they were derived from better than language models trained on full-length proteins. We compare masked language model training objectives to three novel peptide-specific training objectives: next-peptide prediction, contrastive peptide selection and evolution-weighted MLM. We demonstrate improved zero-shot learning performance for a deep mutational scan peptides benchmark.
[ { "created": "Wed, 9 Nov 2022 22:22:17 GMT", "version": "v1" } ]
2022-11-15
[ [ "Sadeh", "Gil", "" ], [ "Wang", "Zichen", "" ], [ "Grewal", "Jasleen", "" ], [ "Rangwala", "Huzefa", "" ], [ "Price", "Layne", "" ] ]
Representation learning for proteins has primarily focused on the global understanding of protein sequences regardless of their length. However, shorter proteins (known as peptides) take on distinct structures and functions compared to their longer counterparts. Unfortunately, there are not as many naturally occurring peptides available to be sequenced and therefore less peptide-specific data to train with. In this paper, we propose a new peptide data augmentation scheme, where we train peptide language models on artificially constructed peptides that are small contiguous subsets of longer, wild-type proteins; we refer to the training peptides as "chopped proteins". We evaluate the representation potential of models trained with chopped proteins versus natural peptides and find that training language models with chopped proteins results in more generalized embeddings for short protein sequences. These peptide-specific models also retain information about the original protein they were derived from better than language models trained on full-length proteins. We compare masked language model training objectives to three novel peptide-specific training objectives: next-peptide prediction, contrastive peptide selection and evolution-weighted MLM. We demonstrate improved zero-shot learning performance for a deep mutational scan peptides benchmark.
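The "chopped proteins" augmentation, sampling small contiguous subsets of longer wild-type proteins, can be sketched directly. The peptide length range, number of samples per protein, and the example sequence below are assumptions, not the paper's settings.

```python
import random

def chop_protein(sequence, min_len=8, max_len=50, n_peptides=5, rng=None):
    """Sample contiguous substrings of a protein as artificial training peptides.

    Assumes len(sequence) >= min_len; lengths and counts are illustrative.
    """
    rng = rng or random.Random(0)
    peptides = []
    for _ in range(n_peptides):
        length = rng.randint(min_len, min(max_len, len(sequence)))
        start = rng.randint(0, len(sequence) - length)
        peptides.append(sequence[start:start + length])
    return peptides

# Hypothetical wild-type protein sequence used only for illustration.
protein = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQVKVKALPDAQFEVVHSLAKWKR"
for pep in chop_protein(protein):
    print(len(pep), pep)
```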
2205.06525
Oshrit Shtossel
Shtossel Oshrit, Isakov Haim, Turjeman Sondra, Koren Omry, Louzoun Yoram
Image and graph convolution networks improve microbiome-based machine learning accuracy
19 pages of manuscript, 3 figures, and 4 pages of Supp. Mat
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-sa/4.0/
The human gut microbiome is associated with a large number of disease etiologies. As such, it is a natural candidate for machine learning based biomarker development for multiple diseases and conditions. The microbiome is often analyzed using 16S rRNA gene sequencing. However, several properties of microbial 16S rRNA gene sequencing hinder machine learning, including non-uniform representation, a small number of samples compared with the dimension of each sample, and sparsity of the data, with the majority of bacteria present in a small subset of samples. We suggest two novel methods to combine information from different bacteria and improve data representation for machine learning using bacterial taxonomy. iMic and gMic translate the microbiome to images and graphs respectively, and convolutional neural networks are then applied to the graph or image. We show that both algorithms improve performance of static 16S rRNA gene sequence-based machine learning compared to the best state-of-the-art methods. Furthermore, these methods ease the interpretation of the classifiers. iMic is then extended to dynamic microbiome samples, and an iMic explainable AI algorithm is proposed to detect bacteria relevant to each condition.
[ { "created": "Fri, 13 May 2022 09:17:12 GMT", "version": "v1" } ]
2022-05-16
[ [ "Oshrit", "Shtossel", "" ], [ "Haim", "Isakov", "" ], [ "Sondra", "Turjeman", "" ], [ "Omry", "Koren", "" ], [ "Yoram", "Louzoun", "" ] ]
The human gut microbiome is associated with a large number of disease etiologies. As such, it is a natural candidate for machine learning based biomarker development for multiple diseases and conditions. The microbiome is often analyzed using 16S rRNA gene sequencing. However, several properties of microbial 16S rRNA gene sequencing hinder machine learning, including non-uniform representation, a small number of samples compared with the dimension of each sample, and sparsity of the data, with the majority of bacteria present in a small subset of samples. We suggest two novel methods to combine information from different bacteria and improve data representation for machine learning using bacterial taxonomy. iMic and gMic translate the microbiome to images and graphs respectively, and convolutional neural networks are then applied to the graph or image. We show that both algorithms improve performance of static 16S rRNA gene sequence-based machine learning compared to the best state-of-the-art methods. Furthermore, these methods ease the interpretation of the classifiers. iMic is then extended to dynamic microbiome samples, and an iMic explainable AI algorithm is proposed to detect bacteria relevant to each condition.
1610.02282
Bahram Houchmandzadeh
Bahram Houchmandzadeh
Neutral Aggregation in Finite Length Genotype space
null
null
10.1103/PhysRevE.95.012402
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The advent of modern genome sequencing techniques allows for a more stringent test of the neutrality hypothesis of Darwinian evolution, where all individuals have the same fitness. Using the individual-based model of Wright and Fisher, we compute the amplitude of neutral aggregation in the genome space, i.e., the probability of finding two individuals at genetic (Hamming) distance k as a function of genome size L, population size N and mutation probability per base $\nu$. In well-mixed populations, we show that for $N\nu<1/L$, neutral aggregation is the dominant force and most individuals are found at short genetic distances from each other. For $N\nu>1$, on the contrary, individuals are randomly dispersed in genome space. The results are extended to geographically dispersed populations, where the controlling parameter is shown to be a combination of mutation and migration probability. The theory we develop can be used to test the neutrality hypothesis in various ecological and evolutionary systems.
[ { "created": "Fri, 7 Oct 2016 13:44:28 GMT", "version": "v1" }, { "created": "Mon, 19 Dec 2016 14:31:02 GMT", "version": "v2" } ]
2017-02-01
[ [ "Houchmandzadeh", "Bahram", "" ] ]
The advent of modern genome sequencing techniques allows for a more stringent test of the neutrality hypothesis of Darwinian evolution, where all individuals have the same fitness. Using the individual-based model of Wright and Fisher, we compute the amplitude of neutral aggregation in the genome space, i.e., the probability of finding two individuals at genetic (Hamming) distance k as a function of genome size L, population size N and mutation probability per base $\nu$. In well-mixed populations, we show that for $N\nu<1/L$, neutral aggregation is the dominant force and most individuals are found at short genetic distances from each other. For $N\nu>1$, on the contrary, individuals are randomly dispersed in genome space. The results are extended to geographically dispersed populations, where the controlling parameter is shown to be a combination of mutation and migration probability. The theory we develop can be used to test the neutrality hypothesis in various ecological and evolutionary systems.
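A bare-bones neutral Wright-Fisher simulation on binary genomes of length L with per-base mutation probability $\nu$ produces the kind of pairwise Hamming-distance data the abstract's theory describes. Population size, L, $\nu$, and the number of generations below are illustrative, and no attempt is made to reproduce the analytical regimes.

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, nu, generations = 200, 50, 1e-3, 2000   # illustrative values

pop = np.zeros((N, L), dtype=np.uint8)        # start from a single genotype
for _ in range(generations):
    parents = rng.integers(0, N, size=N)      # neutral: parents chosen uniformly at random
    pop = pop[parents]
    mutations = rng.random((N, L)) < nu       # each base flips with probability nu
    pop = np.where(mutations, 1 - pop, pop)

# Pairwise Hamming distances between randomly sampled pairs of individuals.
idx = rng.choice(N, size=(500, 2))
distances = (pop[idx[:, 0]] != pop[idx[:, 1]]).sum(axis=1)
print("mean pairwise Hamming distance:", distances.mean())
```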
2308.01941
Hui Xiong
Hui Xiong, Congying Chu, Lingzhong Fan, Ming Song, Jiaqi Zhang, Yawei Ma, Ruonan Zheng, Junyang Zhang, Zhengyi Yang, Tianzi Jiang
Digital twin brain: a bridge between biological intelligence and artificial intelligence
null
Intell Comput. 2023;2:0055
10.34133/icomputing.0055
null
q-bio.NC cs.AI cs.NE
http://creativecommons.org/licenses/by-nc-nd/4.0/
In recent years, advances in neuroscience and artificial intelligence have paved the way for unprecedented opportunities for understanding the complexity of the brain and its emulation by computational systems. Cutting-edge advancements in neuroscience research have revealed the intricate relationship between brain structure and function, while the success of artificial neural networks highlights the importance of network architecture. Now is the time to bring them together to better unravel how intelligence emerges from the brain's multiscale repositories. In this review, we propose the Digital Twin Brain (DTB) as a transformative platform that bridges the gap between biological and artificial intelligence. It consists of three core elements: the brain structure that is fundamental to the twinning process, bottom-layer models to generate brain functions, and its wide spectrum of applications. Crucially, brain atlases provide a vital constraint, preserving the brain's network organization within the DTB. Furthermore, we highlight open questions that invite joint efforts from interdisciplinary fields and emphasize the far-reaching implications of the DTB. The DTB can offer unprecedented insights into the emergence of intelligence and neurological disorders, which holds tremendous promise for advancing our understanding of both biological and artificial intelligence, and ultimately propelling the development of artificial general intelligence and facilitating precision mental healthcare.
[ { "created": "Thu, 3 Aug 2023 03:36:22 GMT", "version": "v1" } ]
2024-04-19
[ [ "Xiong", "Hui", "" ], [ "Chu", "Congying", "" ], [ "Fan", "Lingzhong", "" ], [ "Song", "Ming", "" ], [ "Zhang", "Jiaqi", "" ], [ "Ma", "Yawei", "" ], [ "Zheng", "Ruonan", "" ], [ "Zhang", "Junyang", "" ], [ "Yang", "Zhengyi", "" ], [ "Jiang", "Tianzi", "" ] ]
In recent years, advances in neuroscience and artificial intelligence have paved the way for unprecedented opportunities for understanding the complexity of the brain and its emulation by computational systems. Cutting-edge advancements in neuroscience research have revealed the intricate relationship between brain structure and function, while the success of artificial neural networks highlights the importance of network architecture. Now is the time to bring them together to better unravel how intelligence emerges from the brain's multiscale repositories. In this review, we propose the Digital Twin Brain (DTB) as a transformative platform that bridges the gap between biological and artificial intelligence. It consists of three core elements: the brain structure that is fundamental to the twinning process, bottom-layer models to generate brain functions, and its wide spectrum of applications. Crucially, brain atlases provide a vital constraint, preserving the brain's network organization within the DTB. Furthermore, we highlight open questions that invite joint efforts from interdisciplinary fields and emphasize the far-reaching implications of the DTB. The DTB can offer unprecedented insights into the emergence of intelligence and neurological disorders, which holds tremendous promise for advancing our understanding of both biological and artificial intelligence, and ultimately propelling the development of artificial general intelligence and facilitating precision mental healthcare.
1809.01851
Richa Phogat
Richa Phogat and P. Parmananda
Provoking Predetermined Aperiodic Patterns in Human Brainwaves
This is the final manuscript after peer review. 8 pages and 10 figures in main text, 3 pages and 6 figures in supplementary text, all combined in a single pdf document
Chaos 28, 121105 (2018)
10.1063/1.5080971
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the present work, electroencephalographic recordings of healthy human participants were performed to study the entrainment of brainwaves using a variety of stimuli. First, periodic entrainment of the brainwaves was studied using two different stimuli in the form of periodic auditory and visual signals. The entrainment with the periodic visual stimulation was consistently observed, whereas the auditory entrainment was inconclusive. Hence, a photic (visual) stimulus, in which two frequencies were presented to the subject simultaneously, was used to further explore the bifrequency entrainment of human brainwaves. Subsequently, the evolution of brainwaves as a result of an aperiodic stimulation was explored, wherein an entrainment to the predetermined aperiodic pattern was observed. These results suggest that aperiodic entrainment could be used as a tool for guided modification of brainwaves. This could find possible applications in processes such as epilepsy suppression and biofeedback.
[ { "created": "Thu, 6 Sep 2018 07:26:06 GMT", "version": "v1" }, { "created": "Thu, 3 Jan 2019 09:32:57 GMT", "version": "v2" } ]
2019-01-04
[ [ "Phogat", "Richa", "" ], [ "Parmananda", "P.", "" ] ]
In the present work, electroencephalographic recordings of healthy human participants were performed to study the entrainment of brainwaves using a variety of stimuli. First, periodic entrainment of the brainwaves was studied using two different stimuli in the form of periodic auditory and visual signals. The entrainment with the periodic visual stimulation was consistently observed, whereas the auditory entrainment was inconclusive. Hence, a photic (visual) stimulus, in which two frequencies were presented to the subject simultaneously, was used to further explore the bifrequency entrainment of human brainwaves. Subsequently, the evolution of brainwaves as a result of an aperiodic stimulation was explored, wherein an entrainment to the predetermined aperiodic pattern was observed. These results suggest that aperiodic entrainment could be used as a tool for guided modification of brainwaves. This could find possible applications in processes such as epilepsy suppression and biofeedback.
1912.06066
Paolo Rissone
Aurelien Severino, Alvaro Martinez Monge, Paolo Rissone and Felix Ritort
Efficient methods for determining folding free energies in single-molecule pulling experiments
23 pages, 5 figures
Journal of Statistical Mechanics: Theory and Experiment 124001 (2019)
10.1088/1742-5468/ab4e91
null
q-bio.BM cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The remarkable accuracy and versatility of single-molecule techniques make possible new measurements that are not feasible in bulk assays. Among these, the precise estimation of folding free energies using fluctuation theorems in nonequilibrium pulling experiments has become a benchmark in modern biophysics. In practice, the use of fluctuation relations to determine free energies requires a thorough evaluation of the usually large energetic contributions caused by the elastic deformation of the different elements of the experimental setup (such as the optical trap, the molecular linkers and the stretched-unfolded polymer). We review and describe how to optimally estimate such elastic energy contributions to extract folding free energies, using DNA and RNA hairpins as model systems pulled by laser optical tweezers. The methodology is generally applicable to other force-spectroscopy techniques and molecular systems.
[ { "created": "Thu, 12 Dec 2019 16:36:57 GMT", "version": "v1" } ]
2019-12-19
[ [ "Severino", "Aurelien", "" ], [ "Monge", "Alvaro Martinez", "" ], [ "Rissone", "Paolo", "" ], [ "Ritort", "Felix", "" ] ]
The remarkable accuracy and versatility of single-molecule techniques make possible new measurements that are not feasible in bulk assays. Among these, the precise estimation of folding free energies using fluctuation theorems in nonequilibrium pulling experiments has become a benchmark in modern biophysics. In practice, the use of fluctuation relations to determine free energies requires a thorough evaluation of the usually large energetic contributions caused by the elastic deformation of the different elements of the experimental setup (such as the optical trap, the molecular linkers and the stretched-unfolded polymer). We review and describe how to optimally estimate such elastic energy contributions to extract folding free energies, using DNA and RNA hairpins as model systems pulled by laser optical tweezers. The methodology is generally applicable to other force-spectroscopy techniques and molecular systems.
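The simplest free-energy estimator based on a fluctuation theorem is the Jarzynski equality, $\Delta F = -k_B T \ln \langle e^{-W/k_B T} \rangle$, applied to nonequilibrium work values. The sketch below applies it to synthetic Gaussian work data (parameters chosen to be consistent with the fluctuation theorem); it omits the elastic-energy corrections for the trap, handles, and unfolded polymer that the review is actually about.

```python
import numpy as np

kT = 4.11  # pN*nm at roughly 298 K (approximately k_B * T)

# Synthetic irreversible work values (pN*nm) for illustration only:
# Gaussian work with mean dF + sigma^2/(2 kT), consistent with a true dF of 50 kT.
rng = np.random.default_rng(3)
dF_true = 50 * kT
W = rng.normal(loc=dF_true + 8 * kT, scale=4 * kT, size=2000)

# Jarzynski estimator; the exponential average is dominated by rare low-work trajectories,
# so W.min() is subtracted before exponentiating to avoid numerical underflow.
dF_jarzynski = -kT * np.log(np.mean(np.exp(-(W - W.min()) / kT))) + W.min()
print(f"true dF = {dF_true / kT:.1f} kT, Jarzynski estimate = {dF_jarzynski / kT:.1f} kT")
```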
2310.15456
Leonardino Digma
Leonardino A. Digma MD, Joseph R. Winer PhD, Michael D. Greicius MD
Substantial Doubt Remains about the Efficacy of Anti-Amyloid Antibodies
11 pages, 2 figures; Update 11/18/2023: Added subheadings to manuscript to improve readability, added a new data point to Figure 1A and Figure 2 for the recently published A4 clinical trial
null
null
null
q-bio.TO
http://creativecommons.org/licenses/by/4.0/
Alzheimer's disease (AD) is a prevalent, progressive, and ultimately fatal neurodegenerative disorder that is defined pathologically by the accumulation of amyloid plaques and tau neurofibrillary tangles in the brain. There remains an unmet need for therapies that can halt or slow the course of AD. To address this need, the FDA has provided a mechanism, under its Accelerated Approval pathway, for potential therapeutics to be approved based in part on their ability to reduce brain amyloid. Through this pathway, two monoclonal anti-amyloid antibodies, aducanumab and lecanemab, have been approved for clinical use. More recently, another amyloid-lowering antibody, donanemab, generated a statistically significant outcome in a phase 3 clinical trial and will shortly come under FDA review. While these monoclonal antibodies are not yet routinely used in clinical practice, the series of recent positive clinical trials has fostered enthusiasm amongst some AD experts. Here, we discuss three key limitations regarding recent anti-amyloid clinical trials: (1) there is little to no evidence that amyloid reduction correlates with clinical outcome, (2) the reported efficacy of anti-amyloid therapies may be partly, or wholly, explained by functional unblinding, and (3) donanemab in its phase 3 trial had no effect on tau burden, the pathological hallmark more closely related to cognition. Taken together, these observations call into question the efficacy of anti-amyloid therapies.
[ { "created": "Tue, 24 Oct 2023 02:06:56 GMT", "version": "v1" }, { "created": "Sun, 19 Nov 2023 03:29:20 GMT", "version": "v2" } ]
2023-11-21
[ [ "MD", "Leonardino A. Digma", "" ], [ "PhD", "Joseph R. Winer", "" ], [ "MD", "Michael D. Greicius", "" ] ]
Alzheimer's disease (AD) is a prevalent, progressive, and ultimately fatal neurodegenerative disorder that is defined pathologically by the accumulation of amyloid plaques and tau neurofibrillary tangles in the brain. There remains an unmet need for therapies that can halt or slow the course of AD. To address this need, the FDA has provided a mechanism, under its Accelerated Approval pathway, for potential therapeutics to be approved based in part on their ability to reduce brain amyloid. Through this pathway, two monoclonal anti-amyloid antibodies, aducanumab and lecanemab, have been approved for clinical use. More recently, another amyloid-lowering antibody, donanemab, generated a statistically significant outcome in a phase 3 clinical trial and will shortly come under FDA review. While these monoclonal antibodies are not yet routinely used in clinical practice, the series of recent positive clinical trials has fostered enthusiasm amongst some AD experts. Here, we discuss three key limitations regarding recent anti-amyloid clinical trials: (1) there is little to no evidence that amyloid reduction correlates with clinical outcome, (2) the reported efficacy of anti-amyloid therapies may be partly, or wholly, explained by functional unblinding, and (3) donanemab in its phase 3 trial had no effect on tau burden, the pathological hallmark more closely related to cognition. Taken together, these observations call into question the efficacy of anti-amyloid therapies.
2312.14288
Aniruddha Acharya
Aniruddha Acharya, Enrique Perez, Miller Maddox-Mandolini and Hania De La Fuente
The Status and Prospects of Phytoremediation of Heavy Metals
34 pages, 3 figures, 2 tables, review paper
null
null
null
q-bio.TO
http://creativecommons.org/licenses/by/4.0/
The release of heavy metals into agricultural soil and waterbodies has been accelerated by anthropogenic activities. Heavy metals are usually not required for biological functions; thus, their accumulation in biological systems poses a serious threat to health and the environment globally. Phytoremediation offers a safe, inexpensive, and ecologically sustainable technique for cleaning habitats contaminated with heavy metals. Though several plants have been identified and used as potential candidates for such phytoremediation, the technique is still at a formative stage and has been mostly confined to laboratories and greenhouses. However, several recent field studies have shown promising results that can propel large-scale implementation of this technology in industrial sites and urban agriculture. Realistically, the commercialization of this technique is possible if an interdisciplinary approach is employed to increase its efficiency. This review presents a comprehensive account of the status and future of the technique. It illustrates the concept of phytoremediation, its ecological and commercial benefits, and the types of phytoremediation. The candidate plants and the factors that influence phytoremediation are discussed. Finally, the physiological and molecular mechanisms, along with the future of the technique, are described.
[ { "created": "Thu, 21 Dec 2023 20:43:39 GMT", "version": "v1" } ]
2023-12-25
[ [ "Acharya", "Aniruddha", "" ], [ "Perez", "Enrique", "" ], [ "Maddox-Mandolini", "Miller", "" ], [ "De La Fuente", "Hania", "" ] ]
The release of heavy metals into agricultural soil and waterbodies has been accelerated by anthropogenic activities. Heavy metals are usually not required for biological functions; thus, their accumulation in biological systems poses a serious threat to health and the environment globally. Phytoremediation offers a safe, inexpensive, and ecologically sustainable technique for cleaning habitats contaminated with heavy metals. Though several plants have been identified and used as potential candidates for such phytoremediation, the technique is still at a formative stage and has been mostly confined to laboratories and greenhouses. However, several recent field studies have shown promising results that can propel large-scale implementation of this technology in industrial sites and urban agriculture. Realistically, the commercialization of this technique is possible if an interdisciplinary approach is employed to increase its efficiency. This review presents a comprehensive account of the status and future of the technique. It illustrates the concept of phytoremediation, its ecological and commercial benefits, and the types of phytoremediation. The candidate plants and the factors that influence phytoremediation are discussed. Finally, the physiological and molecular mechanisms, along with the future of the technique, are described.
1307.1918
Kristina Crona
Devin Greene and Kristina Crona
The Changing Geometry of a Fitness Landscape Along an Adaptive Walk
29 pages, 7 figures
null
10.1371/journal.pcbi.1003520
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It has recently been noted that the relative prevalence of the various kinds of epistasis varies along an adaptive walk. This has been explained as a result of mean regression in NK model fitness landscapes. Here we show that this phenomenon occurs quite generally in fitness landscapes. We propose a simple and general explanation for this phenomenon, confirming the role of mean regression. We provide support for this explanation with simulations, and discuss the empirical relevance of our findings.
[ { "created": "Sun, 7 Jul 2013 21:09:55 GMT", "version": "v1" }, { "created": "Tue, 9 Jul 2013 00:40:08 GMT", "version": "v2" }, { "created": "Sat, 14 Dec 2013 16:03:14 GMT", "version": "v3" } ]
2015-06-16
[ [ "Greene", "Devin", "" ], [ "Crona", "Kristina", "" ] ]
It has recently been noted that the relative prevalence of the various kinds of epistasis varies along an adaptive walk. This has been explained as a result of mean regression in NK model fitness landscapes. Here we show that this phenomenon occurs quite generally in fitness landscapes. We propose a simple and general explanation for this phenomenon, confirming the role of mean regression. We provide support for this explanation with simulations, and discuss the empirical relevance of our findings.
1706.01138
Alice Doucet Beaupr\'e
Alice Doucet-Beaupr\'e and James P. O'Dwyer
Widespread bursts of diversification in microbial phylogenies
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Until recently, much of the microbial world was hidden from view. A global research effort has changed this, unveiling and quantifying microbial diversity across an enormous range of critically important contexts, from the human microbiome, to plant-soil interactions, to marine life. Yet what has remained largely hidden is the interplay of ecological and evolutionary processes that led to the diversity we observe in the present day. We introduce a theoretical framework to quantify the effect of ecological innovations in microbial evolutionary history, using a new, coarse-grained approach that is robust to the incompleteness and ambiguities in microbial community data. Applying this methodology, we identify a balance of gradual, ongoing diversification and rapid bursts across a vast range of microbial habitats. Moreover, we find universal quantitative similarities in the tempo of diversification, independent of habitat type.
[ { "created": "Sun, 4 Jun 2017 20:01:36 GMT", "version": "v1" }, { "created": "Tue, 24 Oct 2017 19:09:40 GMT", "version": "v2" } ]
2017-10-26
[ [ "Doucet-Beaupré", "Alice", "" ], [ "O'Dwyer", "James P.", "" ] ]
Until recently, much of the microbial world was hidden from view. A global research effort has changed this, unveiling and quantifying microbial diversity across an enormous range of critically important contexts, from the human microbiome, to plant-soil interactions, to marine life. Yet what has remained largely hidden is the interplay of ecological and evolutionary processes that led to the diversity we observe in the present day. We introduce a theoretical framework to quantify the effect of ecological innovations in microbial evolutionary history, using a new, coarse-grained approach that is robust to the incompleteness and ambiguities in microbial community data. Applying this methodology, we identify a balance of gradual, ongoing diversification and rapid bursts across a vast range of microbial habitats. Moreover, we find universal quantitative similarities in the tempo of diversification, independent of habitat type.
0809.0637
Thomas R. Weikl
Thomas R. Weikl and Carola von Deuster
Selected-fit versus induced-fit protein binding: Kinetic differences and mutational analysis
6 pages, 3 figures; to appear in "Proteins: Structure, Function, and Bioinformatics"
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The binding of a ligand molecule to a protein is often accompanied by conformational changes of the protein. A central question is whether the ligand induces the conformational change (induced-fit), or rather selects and stabilizes a complementary conformation from a pre-existing equilibrium of ground and excited states of the protein (selected-fit). We consider here the binding kinetics in a simple four-state model of ligand-protein binding. In this model, the protein has two conformations, which can both bind the ligand. The first conformation is the ground state of the protein when the ligand is off, and the second conformation is the ground state when the ligand is bound. The induced-fit mechanism corresponds to ligand binding in the unbound ground state, and the selected-fit mechanism to ligand binding in the excited state. We find a simple, characteristic difference between the on- and off-rates in the two mechanisms if the conformational relaxation into the ground states is fast. In the case of selected-fit binding, the on-rate depends on the conformational equilibrium constant, while the off-rate is independent. In the case of induced-fit binding, in contrast, the off-rate depends on the conformational equilibrium, while the on-rate is independent. Whether a protein binds a ligand via selected-fit or induced-fit thus may be revealed by mutations far from the protein's binding pocket, or other "perturbations" that only affect the conformational equilibrium. In the case of selected-fit, such mutations will only change the on-rate, and in the case of induced-fit, only the off-rate.
[ { "created": "Wed, 3 Sep 2008 14:54:52 GMT", "version": "v1" } ]
2008-09-04
[ [ "Weikl", "Thomas R.", "" ], [ "von Deuster", "Carola", "" ] ]
The binding of a ligand molecule to a protein is often accompanied by conformational changes of the protein. A central question is whether the ligand induces the conformational change (induced-fit), or rather selects and stabilizes a complementary conformation from a pre-existing equilibrium of ground and excited states of the protein (selected-fit). We consider here the binding kinetics in a simple four-state model of ligand-protein binding. In this model, the protein has two conformations, which can both bind the ligand. The first conformation is the ground state of the protein when the ligand is off, and the second conformation is the ground state when the ligand is bound. The induced-fit mechanism corresponds to ligand binding in the unbound ground state, and the selected-fit mechanism to ligand binding in the excited state. We find a simple, characteristic difference between the on- and off-rates in the two mechanisms if the conformational relaxation into the ground states is fast. In the case of selected-fit binding, the on-rate depends on the conformational equilibrium constant, while the off-rate is independent. In the case of induced-fit binding, in contrast, the off-rate depends on the conformational equilibrium, while the on-rate is independent. Whether a protein binds a ligand via selected-fit or induced-fit thus may be revealed by mutations far from the protein's binding pocket, or other "perturbations" that only affect the conformational equilibrium. In the case of selected-fit, such mutations will only change the on-rate, and in the case of induced-fit, only the off-rate.
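The four-state scheme described in the abstract (two conformations, each with and without ligand) can be written as linear rate equations and integrated numerically. All rate constants below are hypothetical, the ligand concentration is held constant (pseudo-first-order binding), and the sketch does not re-derive the paper's conclusions about which rates depend on the conformational equilibrium.

```python
import numpy as np
from scipy.integrate import solve_ivp

# States: P1 (conformation 1, unbound), P2 (conformation 2, unbound),
#         P1L (conformation 1, bound),  P2L (conformation 2, bound).
k12, k21 = 0.1, 1.0          # conformational exchange, unbound (1 is the ground state)
k12_L, k21_L = 1.0, 0.1      # conformational exchange, bound (2 is the ground state)
kon1, koff1 = 0.5, 1.0       # ligand binding/unbinding in conformation 1
kon2, koff2 = 5.0, 0.05      # ligand binding/unbinding in conformation 2
L = 1.0                      # constant ligand concentration (arbitrary units)

def rhs(t, y):
    P1, P2, P1L, P2L = y
    dP1  = -k12 * P1 + k21 * P2 - kon1 * L * P1 + koff1 * P1L
    dP2  =  k12 * P1 - k21 * P2 - kon2 * L * P2 + koff2 * P2L
    dP1L = -k12_L * P1L + k21_L * P2L + kon1 * L * P1 - koff1 * P1L
    dP2L =  k12_L * P1L - k21_L * P2L + kon2 * L * P2 - koff2 * P2L
    return [dP1, dP2, dP1L, dP2L]

sol = solve_ivp(rhs, (0, 100), [1.0, 0.0, 0.0, 0.0], max_step=0.1)
bound = sol.y[2] + sol.y[3]
print(f"bound fraction at t=100: {bound[-1]:.3f}")
```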
1112.4193
C. Titus Brown
Jason Pell, Arend Hintze, Rosangela Canino-Koning, Adina Howe, James M. Tiedje, C. Titus Brown
Scaling metagenome sequence assembly with probabilistic de Bruijn graphs
null
null
10.1073/pnas.1121464109
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep sequencing has enabled the investigation of a wide range of environmental microbial ecosystems, but the high memory requirements for {\em de novo} assembly of short-read shotgun sequencing data from these complex populations are an increasingly large practical barrier. Here we introduce a memory-efficient graph representation with which we can analyze the k-mer connectivity of metagenomic samples. The graph representation is based on a probabilistic data structure, a Bloom filter, that allows us to efficiently store assembly graphs in as little as 4 bits per k-mer, albeit inexactly. We show that this data structure accurately represents DNA assembly graphs in low memory. We apply this data structure to the problem of partitioning assembly graphs into components as a prelude to assembly, and show that this reduces the overall memory requirements for {\em de novo} assembly of metagenomes. On one soil metagenome assembly, this approach achieves a nearly 40-fold decrease in the maximum memory requirements for assembly. This probabilistic graph representation is a significant theoretical advance in storing assembly graphs and also yields immediate leverage on metagenomic assembly.
[ { "created": "Sun, 18 Dec 2011 21:49:46 GMT", "version": "v1" }, { "created": "Wed, 28 Dec 2011 00:59:57 GMT", "version": "v2" }, { "created": "Sat, 30 Jun 2012 00:50:37 GMT", "version": "v3" } ]
2015-06-03
[ [ "Pell", "Jason", "" ], [ "Hintze", "Arend", "" ], [ "Canino-Koning", "Rosangela", "" ], [ "Howe", "Adina", "" ], [ "Tiedje", "James M.", "" ], [ "Brown", "C. Titus", "" ] ]
Deep sequencing has enabled the investigation of a wide range of environmental microbial ecosystems, but the high memory requirements for {\em de novo} assembly of short-read shotgun sequencing data from these complex populations are an increasingly large practical barrier. Here we introduce a memory-efficient graph representation with which we can analyze the k-mer connectivity of metagenomic samples. The graph representation is based on a probabilistic data structure, a Bloom filter, that allows us to efficiently store assembly graphs in as little as 4 bits per k-mer, albeit inexactly. We show that this data structure accurately represents DNA assembly graphs in low memory. We apply this data structure to the problem of partitioning assembly graphs into components as a prelude to assembly, and show that this reduces the overall memory requirements for {\em de novo} assembly of metagenomes. On one soil metagenome assembly, this approach achieves a nearly 40-fold decrease in the maximum memory requirements for assembly. This probabilistic graph representation is a significant theoretical advance in storing assembly graphs and also yields immediate leverage on metagenomic assembly.
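A Bloom filter stores set membership of k-mers in a fixed bit array with a tunable false-positive rate and no false negatives, which is the core idea of the graph representation described above. The pure-Python sketch below is illustrative only; the bit-array size, hash count, and toy read are arbitrary, and it is not the authors' engineered implementation.

```python
import hashlib

class KmerBloomFilter:
    """Minimal Bloom filter for DNA k-mers (may return false positives, never false negatives)."""

    def __init__(self, n_bits=1 << 20, n_hashes=4):
        self.n_bits = n_bits
        self.n_hashes = n_hashes
        self.bits = bytearray(n_bits // 8)

    def _positions(self, kmer):
        # Derive n_hashes independent bit positions from salted SHA-256 digests.
        for i in range(self.n_hashes):
            digest = hashlib.sha256(f"{i}:{kmer}".encode()).digest()
            yield int.from_bytes(digest[:8], "little") % self.n_bits

    def add(self, kmer):
        for pos in self._positions(kmer):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, kmer):
        return all(self.bits[pos // 8] >> (pos % 8) & 1 for pos in self._positions(kmer))

k = 5
sequence = "ACGTACGTGGAACCTT"     # toy read
bf = KmerBloomFilter()
for i in range(len(sequence) - k + 1):
    bf.add(sequence[i:i + k])

print("ACGTA" in bf, "TTTTT" in bf)   # the second k-mer is almost certainly reported absent
```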
2207.14296
Romain David
Jan-Willem Boiten, Christian Ohmann (ECRIN), Ayodeji Adeniran (BBMRI-ERIC), Steve Canham (ECRIN), Monica Cano Abadia (BBMRI-ERIC), Gauthier Chassang (INSERM), Maria Luisa Chiusano (EMBRC), Romain David (ERINHA-AISBL), Maddalena Fratelli (ECRIN), Phil Gribbon, Petr Holub, (BBMRI-ERIC), Rebecca Ludwig, Michaela Th. Mayrhofer (BBMRI-ERIC), Mihaela Matei (ECRIN), Arshiya Merchant, Maria Panagiotopoulou (ECRIN), Luca Pireddu (BBMRI-ERIC), Alex Sanchez Pla (VHIR), Irene Schl\"under (BBMRI-ERIC), George Tsamis, Harald Wagener
EOSC-LIFE WP4 TOOLBOX: Toolbox for sharing of sensitive data -- a concept description
null
null
null
null
q-bio.OT cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Horizon 2020 project EOSC-Life brings together the 13 Life Science 'ESFRI' research infrastructures to create an open, digital and collaborative space for biological and medical research. Sharing sensitive data is a specific challenge within EOSC-Life. For that reason, a toolbox is being developed, providing information to researchers who wish to share and/or use sensitive data in a cloud environment in general, and the European Open Science Cloud in particular. The sensitivity of the data may arise from its personal nature but can also be caused by intellectual property considerations, biohazard concerns, or the Nagoya protocol. The toolbox will not create new content, instead, it will allow researchers to find existing resources that are relevant for sharing sensitive data across all participating research infrastructures (F in FAIR). The toolbox will provide links to recommendations, procedures, and best practices, as well as to software (tools) to support data sharing and reuse. It will be based upon a tagging (categorisation) system, allowing consistent labelling and categorisation of resources. The current design document provides an outline for the anticipated toolbox, as well as its basic principles regarding content and sustainability.
[ { "created": "Thu, 28 Jul 2022 07:09:27 GMT", "version": "v1" } ]
2022-08-01
[ [ "Boiten", "Jan-Willem", "", "ECRIN" ], [ "Ohmann", "Christian", "", "ECRIN" ], [ "Adeniran", "Ayodeji", "", "BBMRI-ERIC" ], [ "Canham", "Steve", "", "ECRIN" ], [ "Abadia", "Monica Cano", "", "BBMRI-ERIC" ], [ "Chassang", "Gauthier", "", "INSERM" ], [ "Chiusano", "Maria Luisa", "", "EMBRC" ], [ "David", "Romain", "", "ERINHA-AISBL" ], [ "Fratelli", "Maddalena", "", "ECRIN" ], [ "Gribbon", "Phil", "", "BBMRI-ERIC" ], [ "Holub", "Petr", "", "BBMRI-ERIC" ], [ "Ludwig", "Rebecca", "", "BBMRI-ERIC" ], [ "Mayrhofer", "Michaela Th.", "", "BBMRI-ERIC" ], [ "Matei", "Mihaela", "", "ECRIN" ], [ "Merchant", "Arshiya", "", "ECRIN" ], [ "Panagiotopoulou", "Maria", "", "ECRIN" ], [ "Pireddu", "Luca", "", "BBMRI-ERIC" ], [ "Pla", "Alex Sanchez", "", "VHIR" ], [ "Schlünder", "Irene", "", "BBMRI-ERIC" ], [ "Tsamis", "George", "" ], [ "Wagener", "Harald", "" ] ]
The Horizon 2020 project EOSC-Life brings together the 13 Life Science 'ESFRI' research infrastructures to create an open, digital and collaborative space for biological and medical research. Sharing sensitive data is a specific challenge within EOSC-Life. For that reason, a toolbox is being developed, providing information to researchers who wish to share and/or use sensitive data in a cloud environment in general, and the European Open Science Cloud in particular. The sensitivity of the data may arise from its personal nature but can also be caused by intellectual property considerations, biohazard concerns, or the Nagoya protocol. The toolbox will not create new content, instead, it will allow researchers to find existing resources that are relevant for sharing sensitive data across all participating research infrastructures (F in FAIR). The toolbox will provide links to recommendations, procedures, and best practices, as well as to software (tools) to support data sharing and reuse. It will be based upon a tagging (categorisation) system, allowing consistent labelling and categorisation of resources. The current design document provides an outline for the anticipated toolbox, as well as its basic principles regarding content and sustainability.
q-bio/0611084
Ruriko Yoshida
Chris L. Schardl, Kelly D. Craven, Adam Lindstrom, Skyler Speakman, Arnold Stromberg, Ruriko Yoshida
A Novel Test for Host-Symbiont Codivergence Indicates Ancient Origin of Fungal Endophytes in Grasses
6 figures and 6 tables
Systematic Biology. Volume 57, Issue 3, (2008), p483 - 498
null
null
q-bio.PE math.ST q-bio.QM stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Significant phylogenetic codivergence between plant or animal hosts ($H$) and their symbionts or parasites ($P$) indicates the importance of their interactions on evolutionary time scales. However, valid and realistic methods to test for codivergence are not fully developed. One of the systems where possible codivergence has been of interest involves the large subfamily of temperate grasses (Pooideae) and their endophytic fungi (epichloae). These widespread symbioses often help protect host plants from herbivory and stresses, and affect species diversity and food web structures. Here we introduce the MRCALink (most-recent-common-ancestor link) method and use it to investigate the possibility of grass-epichlo\"e codivergence. MRCALink applied to ultrametric $H$ and $P$ trees identifies all corresponding nodes for pairwise comparisons of MRCA ages. The result is compared to the space of random $H$ and $P$ tree pairs estimated by a Monte Carlo method. Compared to tree reconciliation, the method is less dependent on tree topologies (which often can be misleading), and it crucially improves on phylogeny-independent methods such as {\tt ParaFit} or the Mantel test by eliminating an extreme (but previously unrecognized) distortion of node-pair sampling. Analysis of 26 grass species-epichlo\"e species symbioses did not reject random association of $H$ and $P$ MRCA ages. However, when five obvious host jumps were removed the analysis significantly rejected random association and supported grass-endophyte codivergence. Interestingly, early cladogenesis events in the Pooideae corresponded to early cladogenesis events in epichloae, suggesting concomitant origins of this grass subfamily and its remarkable group of symbionts. We also applied our method to the well-known gopher-louse data set.
[ { "created": "Sat, 25 Nov 2006 10:07:40 GMT", "version": "v1" }, { "created": "Sat, 6 Oct 2007 19:18:54 GMT", "version": "v2" }, { "created": "Thu, 28 Aug 2008 17:01:43 GMT", "version": "v3" } ]
2008-08-28
[ [ "Schardl", "Chris L.", "" ], [ "Craven", "Kelly D.", "" ], [ "Lindstrom", "Adam", "" ], [ "Speakman", "Skyler", "" ], [ "Stromberg", "Arnold", "" ], [ "Yoshida", "Ruriko", "" ] ]
Significant phylogenetic codivergence between plant or animal hosts ($H$) and their symbionts or parasites ($P$) indicates the importance of their interactions on evolutionary time scales. However, valid and realistic methods to test for codivergence are not fully developed. One of the systems where possible codivergence has been of interest involves the large subfamily of temperate grasses (Pooideae) and their endophytic fungi (epichloae). These widespread symbioses often help protect host plants from herbivory and stresses, and affect species diversity and food web structures. Here we introduce the MRCALink (most-recent-common-ancestor link) method and use it to investigate the possibility of grass-epichlo\"e codivergence. MRCALink applied to ultrametric $H$ and $P$ trees identifies all corresponding nodes for pairwise comparisons of MRCA ages. The result is compared to the space of random $H$ and $P$ tree pairs estimated by a Monte Carlo method. Compared to tree reconciliation, the method is less dependent on tree topologies (which often can be misleading), and it crucially improves on phylogeny-independent methods such as {\tt ParaFit} or the Mantel test by eliminating an extreme (but previously unrecognized) distortion of node-pair sampling. Analysis of 26 grass species-epichlo\"e species symbioses did not reject random association of $H$ and $P$ MRCA ages. However, when five obvious host jumps were removed the analysis significantly rejected random association and supported grass-endophyte codivergence. Interestingly, early cladogenesis events in the Pooideae corresponded to early cladogenesis events in epichloae, suggesting concomitant origins of this grass subfamily and its remarkable group of symbionts. We also applied our method to the well-known gopher-louse data set.
2006.00659
Jagadish Kumar Dr.
G. Ananthakrishna and Jagadish Kumar
A reductive analysis of a compartmental model for COVID-19: data assimilation and forecasting for the United Kingdom
16 pages, 3 figures
null
null
null
q-bio.PE physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a deterministic model that partitions the total population into the susceptible, infected, quarantined, and those traced after exposure, the recovered and the deceased. We hypothesize 'accessible population for transmission of the disease' to be a small fraction of the total population, for instance when interventions are in force. This hypothesis, together with the structure of the set of coupled nonlinear ordinary differential equations for the populations, allows us to decouple the equations into just two equations. This further reduces to a logistic type of equation for the total infected population. The equation can be solved analytically and therefore allows for a clear interpretation of the growth and inhibiting factors in terms of the parameters in the full model. The validity of the 'accessible population' hypothesis and the efficacy of the reduced logistic model are demonstrated by the ease of fitting the United Kingdom data for the cumulative infected and daily new infected cases. The model can also be used to forecast further progression of the disease. In an effort to find optimized parameter values compatible with the United Kingdom coronavirus data, we first determine the relative importance of the various transition rates participating in the original model. Using this we show that the original model equations provide a very good fit with the United Kingdom data for the cumulative number of infections and the daily new cases. The fact that the model-calculated daily new cases exhibit a turning point suggests the beginning of a slow-down in the spread of infections. However, since the rate of slowing down beyond the turning point is small, the cumulative number of infections is likely to saturate to about $3.52 \times 10^5$ around late July, provided the lock-down conditions continue to prevail.
[ { "created": "Mon, 1 Jun 2020 01:27:35 GMT", "version": "v1" }, { "created": "Sat, 6 Jun 2020 14:17:49 GMT", "version": "v2" } ]
2020-06-09
[ [ "Ananthakrishna", "G.", "" ], [ "Kumar", "Jagadish", "" ] ]
We introduce a deterministic model that partitions the total population into the susceptible, infected, quarantined, and those traced after exposure, the recovered and the deceased. We hypothesize 'accessible population for transmission of the disease' to be a small fraction of the total population, for instance when interventions are in force. This hypothesis, together with the structure of the set of coupled nonlinear ordinary differential equations for the populations, allows us to decouple the equations into just two equations. This further reduces to a logistic type of equation for the total infected population. The equation can be solved analytically and therefore allows for a clear interpretation of the growth and inhibiting factors in terms of the parameters in the full model. The validity of the 'accessible population' hypothesis and the efficacy of the reduced logistic model are demonstrated by the ease of fitting the United Kingdom data for the cumulative infected and daily new infected cases. The model can also be used to forecast further progression of the disease. In an effort to find optimized parameter values compatible with the United Kingdom coronavirus data, we first determine the relative importance of the various transition rates participating in the original model. Using this we show that the original model equations provide a very good fit with the United Kingdom data for the cumulative number of infections and the daily new cases. The fact that the model-calculated daily new cases exhibit a turning point suggests the beginning of a slow-down in the spread of infections. However, since the rate of slowing down beyond the turning point is small, the cumulative number of infections is likely to saturate to about $3.52 \times 10^5$ around late July, provided the lock-down conditions continue to prevail.
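Since the abstract reduces the model to a logistic-type equation with an analytical solution, the following Python sketch evaluates the standard logistic closed form I(t) = K / (1 + ((K - I0)/I0) exp(-r t)). Only the saturation level (about 3.52e5) is taken from the abstract; the growth rate r and initial value I0 are arbitrary illustrative numbers, not the fitted United Kingdom parameters.

import math

def logistic(t, K, I0, r):
    # Closed-form solution of dI/dt = r * I * (1 - I/K) with I(0) = I0.
    return K / (1.0 + ((K - I0) / I0) * math.exp(-r * t))

K, I0, r = 3.52e5, 100.0, 0.1   # K quoted in the abstract; I0 and r are illustrative
for day in (0, 30, 60, 120, 240):
    print(f"day {day:3d}: cumulative infections ~ {logistic(day, K, I0, r):,.0f}")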
0811.3950
Yufang Wang
Yufang Wang, Ling Guo, Ido Golding, Edward C. Cox, N. P. Ong
Quantitative transcription factor binding kinetics at the single-molecule level
34 pages, 10 figures, accepted by Biophysical Journal
null
10.1016/j.bpj.2008.09.040
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We have investigated the binding interaction between the bacteriophage lambda repressor CI and its target DNA using total internal reflection fluorescence microscopy. Large, step-wise changes in the intensity of the red fluorescent protein fused to CI were observed as it associated and dissociated from individually labeled single molecule DNA targets. The stochastic association and dissociation were characterized by Poisson statistics. Dark and bright intervals were measured for thousands of individual events. The exponential distribution of the intervals allowed direct determination of the association and dissociation rate constants, ka and kd respectively. We resolved in detail how ka and kd varied as a function of 3 control parameters, the DNA length L, the CI dimer concentration, and the binding affinity. Our results show that although interaction with non-operator DNA sequences are observable, CI binding to the operator site is not dependent on the length of flanking non-operator DNA.
[ { "created": "Mon, 24 Nov 2008 19:32:00 GMT", "version": "v1" } ]
2015-05-13
[ [ "Wang", "Yufang", "" ], [ "Guo", "Ling", "" ], [ "Golding", "Ido", "" ], [ "Cox", "Edward C.", "" ], [ "Ong", "N. P.", "" ] ]
We have investigated the binding interaction between the bacteriophage lambda repressor CI and its target DNA using total internal reflection fluorescence microscopy. Large, step-wise changes in the intensity of the red fluorescent protein fused to CI were observed as it associated and dissociated from individually labeled single molecule DNA targets. The stochastic association and dissociation were characterized by Poisson statistics. Dark and bright intervals were measured for thousands of individual events. The exponential distribution of the intervals allowed direct determination of the association and dissociation rate constants, ka and kd respectively. We resolved in detail how ka and kd varied as a function of 3 control parameters, the DNA length L, the CI dimer concentration, and the binding affinity. Our results show that although interaction with non-operator DNA sequences are observable, CI binding to the operator site is not dependent on the length of flanking non-operator DNA.
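As a toy illustration of the interval-based estimation described above (not the authors' analysis code, and with made-up dwell times), the rates follow directly from the exponential interval distributions: the maximum-likelihood rate is the reciprocal of the mean dwell time, and the dark-interval rate is the pseudo-first-order on-rate ka*[CI].

dark_intervals_s = [12.0, 8.5, 20.1, 15.3, 9.8, 11.2]     # unbound durations (s), hypothetical
bright_intervals_s = [3.2, 4.1, 2.8, 5.0, 3.6]             # bound durations (s), hypothetical
ci_dimer_nM = 10.0                                          # assumed CI dimer concentration

k_on_apparent = 1.0 / (sum(dark_intervals_s) / len(dark_intervals_s))   # s^-1
ka = k_on_apparent / ci_dimer_nM                                        # nM^-1 s^-1
kd = 1.0 / (sum(bright_intervals_s) / len(bright_intervals_s))          # s^-1
print(f"ka ~ {ka:.4f} nM^-1 s^-1, kd ~ {kd:.3f} s^-1")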
1612.09301
Annette Rohrs
Ronald E. Mickens, Maxine Harlemon, and Kale Oyedeji
Consequences of Culling in Deterministic ODE Predator-Prey Models
8 pages, 4 figures. Corrected typo in second author's last name
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show, within the context of the standard class of deterministic ODE predator-prey mathematical models, that predator culling does not produce a long term decrease in the predator population.
[ { "created": "Thu, 29 Dec 2016 15:21:45 GMT", "version": "v1" }, { "created": "Tue, 10 Jan 2017 20:19:26 GMT", "version": "v2" } ]
2017-01-12
[ [ "Mickens", "Ronald E.", "" ], [ "Harlemon", "Maxine", "" ], [ "Oyedeji", "Kale", "" ] ]
We show, within the context of the standard class of deterministic ODE predator-prey mathematical models, that predator culling does not produce a long term decrease in the predator population.
2003.10442
Shannon Gallagher
Shannon Gallagher, Andersen Chang, William F. Eddy
Exploring the nuances of R0: Eight estimates and application to 2009 pandemic influenza
null
null
null
null
q-bio.PE stat.AP stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For nearly a century, the initial reproduction number (R0) has been used as a one-number summary to compare outbreaks of infectious disease, yet there is no `standard' estimator for R0. Difficulties in estimating R0 arise both from how a disease transmits through a population and from differences in statistical estimation method. We describe eight methods used to estimate R0 and provide a thorough simulation study of how these estimates change in the presence of different disease parameters. As motivation, we analyze the 2009 outbreak of the H1N1 pandemic influenza in the USA and compare the results from our eight methods to a previous study. We discuss the most important aspects from our results which affect the estimation of R0, which include the population size, time period used, and the initial percent of infectious individuals. Additionally, we discuss how pre-processing incidence counts may affect estimates of R0. Finally, we provide guidelines for estimating point estimates and confidence intervals to create reliable, comparable estimates of R0.
[ { "created": "Mon, 23 Mar 2020 16:09:08 GMT", "version": "v1" } ]
2020-03-25
[ [ "Gallagher", "Shannon", "" ], [ "Chang", "Andersen", "" ], [ "Eddy", "William F.", "" ] ]
For nearly a century, the initial reproduction number (R0) has been used as a one-number summary to compare outbreaks of infectious disease, yet there is no `standard' estimator for R0. Difficulties in estimating R0 arise both from how a disease transmits through a population and from differences in statistical estimation method. We describe eight methods used to estimate R0 and provide a thorough simulation study of how these estimates change in the presence of different disease parameters. As motivation, we analyze the 2009 outbreak of the H1N1 pandemic influenza in the USA and compare the results from our eight methods to a previous study. We discuss the most important aspects from our results which affect the estimation of R0, which include the population size, time period used, and the initial percent of infectious individuals. Additionally, we discuss how pre-processing incidence counts may affect estimates of R0. Finally, we provide guidelines for estimating point estimates and confidence intervals to create reliable, comparable estimates of R0.
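For context, here is a minimal Python sketch of one widely used R0 estimator, the exponential-growth method. It is not claimed to coincide with any of the eight estimators compared in the paper; the incidence counts and the mean infectious period D are hypothetical, and R0 is approximated by the SIR-type relation R0 ~ 1 + r*D.

import math

def growth_rate(incidence):
    # Least-squares slope of log(incidence) against time (days 0, 1, 2, ...).
    logs = [math.log(c) for c in incidence]
    n = len(logs)
    t_mean = (n - 1) / 2.0
    y_mean = sum(logs) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(logs))
    den = sum((t - t_mean) ** 2 for t in range(n))
    return num / den

early_cases = [12, 15, 20, 27, 35, 47, 61, 80]   # hypothetical early epidemic counts
D = 3.0                                           # assumed mean infectious period (days)
r = growth_rate(early_cases)
print(f"growth rate r = {r:.3f}/day, R0 ~ {1 + r * D:.2f}")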
1905.13065
Julien Lagarde
Julien Lagarde and Nicolas Bouisset
When is now in a distributed system? Animated motion (could) set the present in brain networks
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Our brains are viewed as interconnected distributed systems. The connections between distant areas in the brain are significantly delayed. How can now be obtained in such networks with delayed interconnections? We aim to show that delayed communication and interconnectedness of the brain impose an interaction with the environment, assuming that such an access to now, which we label t-present, is of use for this system. It is conjectured that for any sensory, motor or cognitive functions to work efficiently an updated sort of time origin is required, and we claim that it is uniquely given by a direct contact with the physical environment. To get such contact autonomously, some movement is required, be it originating in the motion of sensory systems or in goal directed movements. Some limit cases are identified and discussed. Next, several testable situations are envisioned and available studies in favor of the main theoretical hypothesis are briefly reviewed. Finally, as a proof of concept, an experimental study employing galvanic vestibular stimulation is presented and discussed.
[ { "created": "Thu, 30 May 2019 14:12:15 GMT", "version": "v1" } ]
2019-05-31
[ [ "Lagarde", "Julien", "" ], [ "Bouisset", "Nicolas", "" ] ]
Our brains are viewed as interconnected distributed systems. The connections between distant areas in the brain are significantly delayed. How can now be obtained in such networks with delayed interconnections? We aim to show that delayed communication and interconnectedness of the brain impose an interaction with the environment, assuming that such an access to now, which we label t-present, is of use for this system. It is conjectured that for any sensory, motor or cognitive functions to work efficiently an updated sort of time origin is required, and we claim that it is uniquely given by a direct contact with the physical environment. To get such contact autonomously, some movement is required, be it originating in the motion of sensory systems or in goal directed movements. Some limit cases are identified and discussed. Next, several testable situations are envisioned and available studies in favor of the main theoretical hypothesis are briefly reviewed. Finally, as a proof of concept, an experimental study employing galvanic vestibular stimulation is presented and discussed.
q-bio/0506013
Michael Stumpf
M.P.H. Stumpf, P.J. Ingram, I. Nouvel, C. Wiuf
Statistical model selection methods applied to biological networks
Transactions in Computational Systems Biology (in press)
null
null
null
q-bio.MN q-bio.OT
null
Many biological networks have been labelled scale-free as their degree distribution can be approximately described by a power-law distribution. While the degree distribution does not summarize all aspects of a network, it has often been suggested that its functional form contains important clues as to underlying evolutionary processes that have shaped the network. Generally, the appropriate functional form for the degree distribution has been fitted in an ad-hoc fashion. Here we apply formal statistical model selection methods to determine which functional form best describes degree distributions of protein interaction and metabolic networks. We interpret the degree distribution as belonging to a class of probability models and determine which of these models provides the best description for the empirical data using maximum likelihood inference, composite likelihood methods, the Akaike information criterion and goodness-of-fit tests. The whole data set is used in order to determine the parameter that best explains the data under a given model (e.g. scale-free or random graph). As we will show, present protein interaction and metabolic network data from different organisms suggests that simple scale-free models do not provide an adequate description of real network data.
[ { "created": "Fri, 10 Jun 2005 12:24:38 GMT", "version": "v1" } ]
2007-05-23
[ [ "Stumpf", "M. P. H.", "" ], [ "Ingram", "P. J.", "" ], [ "Nouvel", "I.", "" ], [ "Wiuf", "C.", "" ] ]
Many biological networks have been labelled scale-free as their degree distribution can be approximately described by a power-law distribution. While the degree distribution does not summarize all aspects of a network, it has often been suggested that its functional form contains important clues as to underlying evolutionary processes that have shaped the network. Generally, the appropriate functional form for the degree distribution has been fitted in an ad-hoc fashion. Here we apply formal statistical model selection methods to determine which functional form best describes degree distributions of protein interaction and metabolic networks. We interpret the degree distribution as belonging to a class of probability models and determine which of these models provides the best description for the empirical data using maximum likelihood inference, composite likelihood methods, the Akaike information criterion and goodness-of-fit tests. The whole data set is used in order to determine the parameter that best explains the data under a given model (e.g. scale-free or random graph). As we will show, present protein interaction and metabolic network data from different organisms suggests that simple scale-free models do not provide an adequate description of real network data.
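As a rough illustration of likelihood-based model selection for degree distributions (a sketch under simplifying assumptions, not the authors' pipeline), the Python snippet below fits a Poisson model and a discrete power-law model to a toy degree sequence by maximum likelihood and compares their AIC values. The power-law exponent uses the standard approximate MLE of Clauset et al., and the zeta normaliser is truncated numerically; degrees are assumed to be at least k_min = 1.

import math

def poisson_aic(degrees):
    lam = sum(degrees) / len(degrees)                      # MLE of the Poisson mean
    ll = sum(k * math.log(lam) - lam - math.lgamma(k + 1) for k in degrees)
    return 2 * 1 - 2 * ll                                  # one free parameter

def powerlaw_aic(degrees, k_min=1, cutoff=10**5):
    alpha = 1.0 + len(degrees) / sum(math.log(k / (k_min - 0.5)) for k in degrees)
    zeta = sum(k ** (-alpha) for k in range(k_min, cutoff))  # truncated normaliser
    ll = sum(-alpha * math.log(k) - math.log(zeta) for k in degrees)
    return 2 * 1 - 2 * ll, alpha

degrees = [1, 1, 1, 2, 1, 3, 1, 1, 2, 5, 1, 1, 2, 1, 8, 1, 2, 1, 1, 3]  # toy degree sequence
aic_p = poisson_aic(degrees)
aic_pl, alpha = powerlaw_aic(degrees)
print(f"Poisson AIC = {aic_p:.1f}, power-law AIC = {aic_pl:.1f} (alpha = {alpha:.2f})")

The lower AIC indicates the better-supported model for this toy sequence; real analyses would also include the other model families and goodness-of-fit checks mentioned above.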
2403.07023
Chunhui Gu
Chunhui Gu, Ruosha Li, Guoqiang Zhang
Propensity-score matching analysis in COVID-19-related studies: a method and quality systematic review
null
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Objectives: To provide an overall quality assessment of the methods used for COVID-19-related studies using propensity score matching (PSM). Study Design and Setting: A systematic search was conducted in June 2021 on PubMed to identify COVID-19-related studies that use the PSM analysis between 2020 and 2021. Key information about study design and PSM analysis were extracted, such as covariates, matching algorithm, and reporting of estimated treatment effect type. Results: One-hundred-and-fifty (87.72%) cohort studies and thirteen (7.60%) case-control studies were found among 171 identified articles. Forty-five studies (26.32%) provided a reasonable justification for covariates selection. One-hundred-and-three (60.23%) and Sixty-nine (40.35%) studies did not provide the model that was used for calculating the propensity score or did not report the matching algorithm, respectively. Seventy-three (42.69%) studies reported the method(s) for checking covariates balance. Forty studies (23.39%) had a statistician co-author. All the case-control studies (n=13) did not have a statistician co-author (p=0.006) and all studies that clarified the treatment effect estimation (n=6) had a statistician co-author (p<0.001). Conclusions: The reporting quality of the PSM analysis is suboptimal in some COVID-19 epidemiological studies. Some pitfalls may undermine study findings that involve PSM analysis, such as a mismatch between PSM analysis and study design.
[ { "created": "Sun, 10 Mar 2024 06:24:56 GMT", "version": "v1" } ]
2024-03-13
[ [ "Gu", "Chunhui", "" ], [ "Li", "Ruosha", "" ], [ "Zhang", "Guoqiang", "" ] ]
Objectives: To provide an overall quality assessment of the methods used for COVID-19-related studies using propensity score matching (PSM). Study Design and Setting: A systematic search was conducted in June 2021 on PubMed to identify COVID-19-related studies that use the PSM analysis between 2020 and 2021. Key information about study design and PSM analysis were extracted, such as covariates, matching algorithm, and reporting of estimated treatment effect type. Results: One-hundred-and-fifty (87.72%) cohort studies and thirteen (7.60%) case-control studies were found among 171 identified articles. Forty-five studies (26.32%) provided a reasonable justification for covariates selection. One-hundred-and-three (60.23%) and Sixty-nine (40.35%) studies did not provide the model that was used for calculating the propensity score or did not report the matching algorithm, respectively. Seventy-three (42.69%) studies reported the method(s) for checking covariates balance. Forty studies (23.39%) had a statistician co-author. All the case-control studies (n=13) did not have a statistician co-author (p=0.006) and all studies that clarified the treatment effect estimation (n=6) had a statistician co-author (p<0.001). Conclusions: The reporting quality of the PSM analysis is suboptimal in some COVID-19 epidemiological studies. Some pitfalls may undermine study findings that involve PSM analysis, such as a mismatch between PSM analysis and study design.
1503.01081
Antti Honkela
Antti Honkela, Jaakko Peltonen, Hande Topa, Iryna Charapitsa, Filomena Matarese, Korbinian Grote, Hendrik G. Stunnenberg, George Reid, Neil D. Lawrence, Magnus Rattray
Genome-wide modelling of transcription kinetics reveals patterns of RNA production delays
42 pages, 17 figures
PNAS 112(42):13115-13120, 2015
10.1073/pnas.1420404112
null
q-bio.GN q-bio.QM stat.AP
http://creativecommons.org/licenses/by/4.0/
Genes with similar transcriptional activation kinetics can display very different temporal mRNA profiles due to differences in transcription time, degradation rate and RNA processing kinetics. Recent studies have shown that a splicing-associated RNA production delay can be significant. We introduce a joint model of transcriptional activation and mRNA accumulation which can be used for inference of transcription rate, RNA production delay and degradation rate given genome-wide data from high-throughput sequencing time course experiments. We combine a mechanistic differential equation model with a non-parametric statistical modelling approach allowing us to capture a broad range of activation kinetics, and use Bayesian parameter estimation to quantify the uncertainty in the estimates of the kinetic parameters. We apply the model to data from estrogen receptor (ER-{\alpha}) activation in the MCF-7 breast cancer cell line. We use RNA polymerase II (pol-II) ChIP-Seq time course data to characterise transcriptional activation and mRNA-Seq time course data to quantify mature transcripts. We find that 11% of genes with a good signal in the data display a delay of more than 20 minutes between completing transcription and mature mRNA production. The genes displaying these long delays are significantly more likely to be short. We also find a statistical association between high delay and late intron retention in pre-mRNA data, indicating significant splicing-associated production delays in many genes.
[ { "created": "Tue, 3 Mar 2015 20:00:31 GMT", "version": "v1" }, { "created": "Thu, 16 Jul 2015 19:07:43 GMT", "version": "v2" } ]
2015-10-22
[ [ "Honkela", "Antti", "" ], [ "Peltonen", "Jaakko", "" ], [ "Topa", "Hande", "" ], [ "Charapitsa", "Iryna", "" ], [ "Matarese", "Filomena", "" ], [ "Grote", "Korbinian", "" ], [ "Stunnenberg", "Hendrik G.", "" ], [ "Reid", "George", "" ], [ "Lawrence", "Neil D.", "" ], [ "Rattray", "Magnus", "" ] ]
Genes with similar transcriptional activation kinetics can display very different temporal mRNA profiles due to differences in transcription time, degradation rate and RNA processing kinetics. Recent studies have shown that a splicing-associated RNA production delay can be significant. We introduce a joint model of transcriptional activation and mRNA accumulation which can be used for inference of transcription rate, RNA production delay and degradation rate given genome-wide data from high-throughput sequencing time course experiments. We combine a mechanistic differential equation model with a non-parametric statistical modelling approach allowing us to capture a broad range of activation kinetics, and use Bayesian parameter estimation to quantify the uncertainty in the estimates of the kinetic parameters. We apply the model to data from estrogen receptor (ER-{\alpha}) activation in the MCF-7 breast cancer cell line. We use RNA polymerase II (pol-II) ChIP-Seq time course data to characterise transcriptional activation and mRNA-Seq time course data to quantify mature transcripts. We find that 11% of genes with a good signal in the data display a delay of more than 20 minutes between completing transcription and mature mRNA production. The genes displaying these long delays are significantly more likely to be short. We also find a statistical association between high delay and late intron retention in pre-mRNA data, indicating significant splicing-associated production delays in many genes.
1601.01580
Ran Levi
Paweł Dłotko, Kathryn Hess, Ran Levi, Max Nolte, Michael Reimann, Martina Scolamiero, Katharine Turner, Eilif Muller, Henry Markram
Topological analysis of the connectome of digital reconstructions of neural microcircuits
null
null
10.3389/fncom.2017.00048
null
q-bio.NC math.AT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A recent publication provides the network graph for a neocortical microcircuit comprising 8 million connections between 31,000 neurons (H. Markram, et al., Reconstruction and simulation of neocortical microcircuitry, Cell, 163 (2015) no. 2, 456-492). Since traditional graph-theoretical methods may not be sufficient to understand the immense complexity of such a biological network, we explored whether methods from algebraic topology could provide a new perspective on its structural and functional organization. Structural topological analysis revealed that directed graphs representing connectivity among neurons in the microcircuit deviated significantly from different varieties of randomized graph. In particular, the directed graphs contained in the order of $10^7$ simplices -- groups of neurons with all-to-all directed connectivity. Some of these simplices contained up to 8 neurons, making them the most extreme neuronal clustering motif ever reported. Functional topological analysis of simulated neuronal activity in the microcircuit revealed novel spatio-temporal metrics that provide an effective classification of functional responses to qualitatively different stimuli. This study represents the first algebraic topological analysis of structural connectomics and connectomics-based spatio-temporal activity in a biologically realistic neural microcircuit. The methods used in the study show promise for more general applications in network science.
[ { "created": "Thu, 7 Jan 2016 16:02:05 GMT", "version": "v1" } ]
2017-06-13
[ [ "Dłotko", "Paweł", "" ], [ "Hess", "Kathryn", "" ], [ "Levi", "Ran", "" ], [ "Nolte", "Max", "" ], [ "Reimann", "Michael", "" ], [ "Scolamiero", "Martina", "" ], [ "Turner", "Katharine", "" ], [ "Muller", "Eilif", "" ], [ "Markram", "Henry", "" ] ]
A recent publication provides the network graph for a neocortical microcircuit comprising 8 million connections between 31,000 neurons (H. Markram, et al., Reconstruction and simulation of neocortical microcircuitry, Cell, 163 (2015) no. 2, 456-492). Since traditional graph-theoretical methods may not be sufficient to understand the immense complexity of such a biological network, we explored whether methods from algebraic topology could provide a new perspective on its structural and functional organization. Structural topological analysis revealed that directed graphs representing connectivity among neurons in the microcircuit deviated significantly from different varieties of randomized graph. In particular, the directed graphs contained in the order of $10^7$ simplices -- groups of neurons with all-to-all directed connectivity. Some of these simplices contained up to 8 neurons, making them the most extreme neuronal clustering motif ever reported. Functional topological analysis of simulated neuronal activity in the microcircuit revealed novel spatio-temporal metrics that provide an effective classification of functional responses to qualitatively different stimuli. This study represents the first algebraic topological analysis of structural connectomics and connectomics-based spatio-temporal activity in a biologically realistic neural microcircuit. The methods used in the study show promise for more general applications in network science.
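To make the simplex-counting idea concrete, here is a small Python sketch (not the authors' code) that counts the directed simplices of a directed graph, where a directed k-simplex is an ordered set of k+1 vertices with an edge from every earlier to every later vertex. It recursively extends a simplex by any vertex lying in the common out-neighbourhood of the vertices chosen so far.

def count_directed_simplices(num_vertices, edges):
    out = [set() for _ in range(num_vertices)]
    for u, v in edges:
        out[u].add(v)
    counts = {0: num_vertices}              # every vertex is a 0-simplex

    def extend(candidates, dim):
        # 'candidates' is the common out-neighbourhood of the current simplex.
        for w in candidates:
            counts[dim] = counts.get(dim, 0) + 1
            extend(candidates & out[w], dim + 1)

    for v in range(num_vertices):
        extend(out[v], 1)
    return counts

# Toy graph: the four vertices form one directed 3-simplex (tetrahedron).
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (0, 3), (1, 3)]
print(count_directed_simplices(4, edges))   # {0: 4, 1: 6, 2: 4, 3: 1}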
2202.05179
Rosalia Ferraro
Rosalia Ferraro, Flora Ascione, Prashant Dogra, Vittorio Cristini, Stefano Guido, Sergio Caserta
Diffusion-induced anisotropic cancer invasion: a novel experimental method based on tumour spheroids
14 pages, 9 figures
null
10.1002/aic.17678
null
q-bio.CB cond-mat.soft
http://creativecommons.org/licenses/by-nc-sa/4.0/
Tumour invasion is strongly influenced by the microenvironment and, among other parameters, chemical stimuli play an important role. An innovative methodology for the quantitative investigation of chemotaxis in vitro by live imaging of morphology of cell spheroids, in 3D collagen gel, is presented here. The assay was performed by using a chemotactic chamber to impose controlled gradients of nutrients (glucose) on spheroids, mimicking the chemotactic stimuli naturally occurring in the proximity of blood vessels. Different tumoral cell lines (PANC-1 and HT-1080) are compared to non-tumoral ones (NIH/3T3). The morphology response is observed by means of a time-lapse workstation equipped with an incubating system and quantified by image analysis techniques. The description of invasion phenomena was based on an engineering approach grounded in transport phenomena concepts. As expected, NIH/3T3 spheroids are characterized by a limited tendency of cells to invade the surrounding tissue, unlike PANC-1 and HT-1080, which show a relatively stronger response to gradients.
[ { "created": "Thu, 10 Feb 2022 17:23:51 GMT", "version": "v1" }, { "created": "Mon, 28 Mar 2022 08:11:33 GMT", "version": "v2" } ]
2022-03-29
[ [ "Ferraro", "Rosalia", "" ], [ "Ascione", "Flora", "" ], [ "Dogra", "Prashant", "" ], [ "Cristini", "Vittorio", "" ], [ "Guido", "Stefano", "" ], [ "Caserta", "Sergio", "" ] ]
Tumour invasion is strongly influenced by the microenvironment and, among other parameters, chemical stimuli play an important role. An innovative methodology for the quantitative investigation of chemotaxis in vitro by live imaging of morphology of cell spheroids, in 3D collagen gel, is presented here. The assay was performed by using a chemotactic chamber to impose controlled gradients of nutrients (glucose) on spheroids, mimicking the chemotactic stimuli naturally occurring in the proximity of blood vessels. Different tumoral cell lines (PANC-1 and HT-1080) are compared to non-tumoral ones (NIH/3T3). The morphology response is observed by means of a time-lapse workstation equipped with an incubating system and quantified by image analysis techniques. The description of invasion phenomena was based on an engineering approach grounded in transport phenomena concepts. As expected, NIH/3T3 spheroids are characterized by a limited tendency of cells to invade the surrounding tissue, unlike PANC-1 and HT-1080, which show a relatively stronger response to gradients.
0806.1733
SuPing Lyu
Suping Lyu
A dynamic logic method for determining behaviors of biological networks
29 pages, 5 figures, 2008 APS poster
null
null
null
q-bio.QM q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A dynamic logic method was developed to analyze molecular networks of cells by combining Kauffman and Thomas's logic operations with molecular interaction parameters. The logic operations characterize the discrete interactions between biological components. The interaction parameters (e.g. response times) describe the quantitative kinetics. The combination of the two quantitatively characterizes the discrete biological interactions. A number of simple networks were analyzed. The main results include: we proved theorems to determine bistable states and oscillation behaviors of networks, we showed that time delays are essential for oscillation structures, we proved that single variable networks do not have chaotic behaviors, and we explained why one signal can have multiple responses. In addition, we applied the present method to the analysis of the MAPK cascade, feed-forward loops, and the mitosis cycle of budding yeast cells.
[ { "created": "Tue, 10 Jun 2008 19:55:08 GMT", "version": "v1" } ]
2008-06-11
[ [ "Lyu", "Suping", "" ] ]
A dynamic logic method was developed to analyze molecular networks of cells by combining Kauffman and Thomas's logic operations with molecular interaction parameters. The logic operations characterize the discrete interactions between biological components. The interaction parameters (e.g. response times) describe the quantitative kinetics. The combination of the two quantitatively characterizes the discrete biological interactions. A number of simple networks were analyzed. The main results include: we proved theorems to determine bistable states and oscillation behaviors of networks, we showed that time delays are essential for oscillation structures, we proved that single variable networks do not have chaotic behaviors, and we explained why one signal can have multiple responses. In addition, we applied the present method to the analysis of the MAPK cascade, feed-forward loops, and the mitosis cycle of budding yeast cells.
1904.04931
Wendy K. Caldwell
Wendy K. Caldwell, Geoffrey Fairchild, Sara Y. Del Valle
Nowcasting Influenza Incidence with CDC Web Traffic Data: A Demonstration Using a Novel Data Set
21 pages (including references and appendices; 12 pages prior to references), 5 figures (some include subfigures), ILI data available in csv form
null
null
LA-UR-18-31259
q-bio.PE cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Influenza epidemics result in a public health and economic burden around the globe. Traditional surveillance techniques, which rely on doctor visits, provide data with a delay of 1-2 weeks. A means of obtaining real-time data and forecasting future outbreaks is desirable to provide more timely responses to influenza epidemics. In this work, we present the first implementation of a novel data set by demonstrating its ability to supplement traditional disease surveillance at multiple spatial resolutions. We use Internet traffic data from the Centers for Disease Control and Prevention (CDC) website to determine the potential usability of this data source. We test the traffic generated by ten influenza-related pages in eight states and nine census divisions within the United States and compare it against clinical surveillance data. Our results yield $r^2$ = 0.955 in the most successful case, promising results for some cases, and unsuccessful results for other cases. These results demonstrate that Internet data may be able to complement traditional influenza surveillance in some cases but not in others. Specifically, our results show that the CDC website traffic may inform national and division-level models but not models for each individual state. In addition, our results show better agreement when the data were broken up by seasons instead of aggregated over several years. In the interest of scientific transparency to further the understanding of when Internet data streams are an appropriate supplemental data source, we also include negative results (i.e., unsuccessful models). We anticipate that this work will lead to more complex nowcasting and forecasting models using this data stream.
[ { "created": "Tue, 9 Apr 2019 22:01:37 GMT", "version": "v1" } ]
2019-04-11
[ [ "Caldwell", "Wendy K.", "" ], [ "Fairchild", "Geoffrey", "" ], [ "Del Valle", "Sara Y.", "" ] ]
Influenza epidemics result in a public health and economic burden around the globe. Traditional surveillance techniques, which rely on doctor visits, provide data with a delay of 1-2 weeks. A means of obtaining real-time data and forecasting future outbreaks is desirable to provide more timely responses to influenza epidemics. In this work, we present the first implementation of a novel data set by demonstrating its ability to supplement traditional disease surveillance at multiple spatial resolutions. We use Internet traffic data from the Centers for Disease Control and Prevention (CDC) website to determine the potential usability of this data source. We test the traffic generated by ten influenza-related pages in eight states and nine census divisions within the United States and compare it against clinical surveillance data. Our results yield $r^2$ = 0.955 in the most successful case, promising results for some cases, and unsuccessful results for other cases. These results demonstrate that Internet data may be able to complement traditional influenza surveillance in some cases but not in others. Specifically, our results show that the CDC website traffic may inform national and division-level models but not models for each individual state. In addition, our results show better agreement when the data were broken up by seasons instead of aggregated over several years. In the interest of scientific transparency to further the understanding of when Internet data streams are an appropriate supplemental data source, we also include negative results (i.e., unsuccessful models). We anticipate that this work will lead to more complex nowcasting and forecasting models using this data stream.
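As a toy illustration of the kind of comparison reported above (all numbers invented; they are not the CDC web traffic or clinical ILI data), a simple least-squares fit of weekly clinical ILI values against weekly page-visit counts yields the r^2 used as the agreement metric.

def linear_fit_r2(x, y):
    # Ordinary least squares for y = intercept + slope * x, plus r^2.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    slope = sxy / sxx
    intercept = my - slope * mx
    r2 = sxy ** 2 / (sxx * syy)
    return slope, intercept, r2

page_visits = [1200, 1500, 2100, 3400, 5200, 4800, 3900, 2500]   # weekly visits (toy)
ili_percent = [1.1, 1.3, 1.9, 2.8, 4.1, 3.8, 3.2, 2.0]           # weekly %ILI (toy)
slope, intercept, r2 = linear_fit_r2(page_visits, ili_percent)
print(f"%ILI ~ {intercept:.2f} + {slope:.5f} * visits, r^2 = {r2:.3f}")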
1507.04863
Jeong Chan Park
Jeong Chan Park, Myung-Hwan Jung
Study of the Effects of High-Energy Proton Beams on Escherichia Coli
null
null
null
null
q-bio.BM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Antibiotic-resistant bacterial infection has become one of the most serious risks to public health care today. However, discouragingly, the development of new antibiotics has made little progress over the last decade. There is an urgent need for alternative approaches to treat antibiotic-resistant bacteria. Novel methods, which include photothermal therapy based on gold nano-materials and ionizing radiation such as X-rays and gamma rays, have been reported. Studies of the effects of high-energy proton radiation on bacteria have mainly focused on Bacillus species and their spores. The effect of proton beams on Escherichia coli (E. coli) has been reported only to a limited extent. Escherichia coli is an important biological tool for obtaining metabolic and genetic information and also a common model microorganism for studying toxicity and antimicrobial activity. In addition, E. coli is a common bacterium in the intestinal tract of mammals. Herein, the morphological and physiological changes of E. coli after proton irradiation were investigated. Diluted solutions of the cells were used for proton beam irradiation. LB agar plates were used to count the number of colonies formed. The growth profile of the cells was monitored by optical density at 600 nm. The morphology of the irradiated cells was analyzed with an optical microscope. Microarray analysis was performed to examine the gene expression changes between irradiated samples and control samples without irradiation.
[ { "created": "Fri, 17 Jul 2015 07:51:49 GMT", "version": "v1" } ]
2015-07-20
[ [ "Park", "Jeong Chan", "" ], [ "Jung", "Myung-Hwan", "" ] ]
Antibiotic-resistant bacterial infection has become one of the most serious risks to public health care today. However, discouragingly, the development of new antibiotics has made little progress over the last decade. There is an urgent need for alternative approaches to treat antibiotic-resistant bacteria. Novel methods, which include photothermal therapy based on gold nano-materials and ionizing radiation such as X-rays and gamma rays, have been reported. Studies of the effects of high-energy proton radiation on bacteria have mainly focused on Bacillus species and their spores. The effect of proton beams on Escherichia coli (E. coli) has been reported only to a limited extent. Escherichia coli is an important biological tool for obtaining metabolic and genetic information and also a common model microorganism for studying toxicity and antimicrobial activity. In addition, E. coli is a common bacterium in the intestinal tract of mammals. Herein, the morphological and physiological changes of E. coli after proton irradiation were investigated. Diluted solutions of the cells were used for proton beam irradiation. LB agar plates were used to count the number of colonies formed. The growth profile of the cells was monitored by optical density at 600 nm. The morphology of the irradiated cells was analyzed with an optical microscope. Microarray analysis was performed to examine the gene expression changes between irradiated samples and control samples without irradiation.
0712.3240
Ryan Gutenkunst
Ryan N. Gutenkunst and James P. Sethna
Adaptive mutation of biochemical reaction constants: Fisher's geometrical model without pleiotropy
8 pages, 4 figures, submitted
null
null
LA-UR: 10-04348
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The distribution of fitness effects of adaptive mutations remains poorly understood, both empirically and theoretically. We study this distribution using a version of Fisher's geometrical model without pleiotropy, such that each mutation affects only a single trait. We are motivated by the notion of an organism's chemotype, the set of biochemical reaction constants that govern its molecular constituents. From physical considerations, we expect the chemotype to be of high dimension and to exhibit very little pleiotropy. Our model generically predicts striking cusps in the distribution of the fitness effects of arising and fixed mutations. It further predicts that a single element of the chemotype should comprise all mutations at the high-fitness ends of these distributions. Using extreme value theory, we show that the two cusps with the highest fitnesses are typically well-separated, even when the chemotype possesses thousands of elements; this suggests a means to observe these cusps experimentally. More broadly, our work demonstrates that new insights into evolution can arise from the chemotype perspective, a perspective between the genotype and the phenotype.
[ { "created": "Wed, 19 Dec 2007 20:20:28 GMT", "version": "v1" }, { "created": "Mon, 28 Jun 2010 20:12:19 GMT", "version": "v2" } ]
2015-03-13
[ [ "Gutenkunst", "Ryan N.", "" ], [ "Sethna", "James P.", "" ] ]
The distribution of fitness effects of adaptive mutations remains poorly understood, both empirically and theoretically. We study this distribution using a version of Fisher's geometrical model without pleiotropy, such that each mutation affects only a single trait. We are motivated by the notion of an organism's chemotype, the set of biochemical reaction constants that govern its molecular constituents. From physical considerations, we expect the chemotype to be of high dimension and to exhibit very little pleiotropy. Our model generically predicts striking cusps in the distribution of the fitness effects of arising and fixed mutations. It further predicts that a single element of the chemotype should comprise all mutations at the high-fitness ends of these distributions. Using extreme value theory, we show that the two cusps with the highest fitnesses are typically well-separated, even when the chemotype possesses thousands of elements; this suggests a means to observe these cusps experimentally. More broadly, our work demonstrates that new insights into evolution can arise from the chemotype perspective, a perspective between the genotype and the phenotype.
2103.13951
Sarah Kadelka
Sarah Kadelka, Judith A Bouman, Peter Ashcroft, Roland R Regoes
Comment on Buss et al., Science 2021: An alternative, empirically-supported adjustment for sero-reversion yields a 10 percentage point lower estimate of the cumulative incidence of SARS-CoV-2 in Manaus by October 2020
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
The estimate of the cumulative incidence of SARS-CoV-2 of 76% in Manaus by October 2020 by Buss et al. relies on the assumption of an exponentially-declining probability of sero-reversion over time. We present an alternative, empirically-supported approach that is based on the observed dynamics of antibody titers in sero-positive cases. Through this approach we revise the cumulative incidence estimate to 66% (63.3% - 68.5%) by October 2020. This estimate has implications for the proximity to herd immunity and future estimates of fitness advantages of virus variants, such as the P.1 variant of concern. This methodology is also relevant for any other sero-survey analysis that has to adjust for sero-reversion.
[ { "created": "Thu, 25 Mar 2021 16:16:00 GMT", "version": "v1" } ]
2021-03-26
[ [ "Kadelka", "Sarah", "" ], [ "Bouman", "Judith A", "" ], [ "Ashcroft", "Peter", "" ], [ "Regoes", "Roland R", "" ] ]
The estimate of the cumulative incidence of SARS-CoV-2 of 76% in Manaus by October 2020 by Buss et al. relies on the assumption of an exponentially-declining probability of sero-reversion over time. We present an alternative, empirically-supported approach that is based on the observed dynamics of antibody titers in sero-positive cases. Through this approach we revise the cumulative incidence estimate to 66% (63.3% - 68.5%) by October 2020. This estimate has implications for the proximity to herd immunity and future estimates of fitness advantages of virus variants, such as the P.1 variant of concern. This methodology is also relevant for any other sero-survey analysis that has to adjust for sero-reversion.
2002.12776
Radu Grosu
Radu Grosu
ResNets, NeuralODEs and CT-RNNs are Particular Neural Regulatory Networks
9 pages, 4 figures
null
null
null
q-bio.NC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper shows that ResNets, NeuralODEs, and CT-RNNs are particular neural regulatory networks (NRNs), a biophysical model for the nonspiking neurons encountered in small species, such as the C. elegans nematode, and in the retina of large species. Compared to ResNets, NeuralODEs and CT-RNNs, NRNs have an additional multiplicative term in their synaptic computation, allowing them to adapt to each particular input. This additional flexibility makes NRNs $M$ times more succinct than NeuralODEs and CT-RNNs, where $M$ is proportional to the size of the training set. Moreover, as NeuralODEs and CT-RNNs are $N$ times more succinct than ResNets, where $N$ is the number of integration steps required to compute the output $F(x)$ for a given input $x$, NRNs are in total $M\,{\cdot}\,N$ times more succinct than ResNets. For a given approximation task, this considerable succinctness allows one to learn a very small and therefore understandable NRN, whose behavior can be explained in terms of well-established architectural motifs that NRNs share with gene regulatory networks, such as activation, inhibition, sequentialization, mutual exclusion, and synchronization. To the best of our knowledge, this paper unifies for the first time the mainstream work on deep neural networks with that in biology and neuroscience in a quantitative fashion.
[ { "created": "Wed, 26 Feb 2020 18:33:20 GMT", "version": "v1" }, { "created": "Tue, 3 Mar 2020 13:05:48 GMT", "version": "v2" }, { "created": "Thu, 19 Mar 2020 07:20:12 GMT", "version": "v3" } ]
2020-03-20
[ [ "Grosu", "Radu", "" ] ]
This paper shows that ResNets, NeuralODEs, and CT-RNNs are particular neural regulatory networks (NRNs), a biophysical model for the nonspiking neurons encountered in small species, such as the C. elegans nematode, and in the retina of large species. Compared to ResNets, NeuralODEs and CT-RNNs, NRNs have an additional multiplicative term in their synaptic computation, allowing them to adapt to each particular input. This additional flexibility makes NRNs $M$ times more succinct than NeuralODEs and CT-RNNs, where $M$ is proportional to the size of the training set. Moreover, as NeuralODEs and CT-RNNs are $N$ times more succinct than ResNets, where $N$ is the number of integration steps required to compute the output $F(x)$ for a given input $x$, NRNs are in total $M\,{\cdot}\,N$ times more succinct than ResNets. For a given approximation task, this considerable succinctness allows one to learn a very small and therefore understandable NRN, whose behavior can be explained in terms of well-established architectural motifs that NRNs share with gene regulatory networks, such as activation, inhibition, sequentialization, mutual exclusion, and synchronization. To the best of our knowledge, this paper unifies for the first time the mainstream work on deep neural networks with that in biology and neuroscience in a quantitative fashion.
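A schematic contrast of the update rules discussed above, written as one explicit Euler step in Python. This is a sketch, not the paper's exact equations: the NRN-style update differs from the CT-RNN update only by a multiplicative driving-force factor (E - x) on each synaptic term, so the effective synaptic gain depends on the current state. All weights, reversal potentials E, inputs and time constants below are arbitrary illustrative values.

import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def ctrnn_step(x, w, I, tau=1.0, dt=0.1):
    # CT-RNN: dx_i/dt = -x_i/tau + sum_j w_ij * sigma(x_j) + I_i
    dx = [-x[i] / tau + sum(w[i][j] * sigmoid(x[j]) for j in range(len(x))) + I[i]
          for i in range(len(x))]
    return [x[i] + dt * dx[i] for i in range(len(x))]

def nrn_step(x, w, E, I, tau=1.0, dt=0.1):
    # NRN-style: each synaptic term is additionally multiplied by (E_ij - x_i)
    dx = [-x[i] / tau
          + sum(w[i][j] * sigmoid(x[j]) * (E[i][j] - x[i]) for j in range(len(x)))
          + I[i]
          for i in range(len(x))]
    return [x[i] + dt * dx[i] for i in range(len(x))]

x0 = [0.1, -0.2]
w = [[0.0, 0.8], [-0.5, 0.0]]
E = [[0.0, 1.0], [-1.0, 0.0]]       # per-synapse reversal potentials (illustrative)
I = [0.05, 0.0]
print("CT-RNN step:", ctrnn_step(x0, w, I))
print("NRN step:   ", nrn_step(x0, w, E, I))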
2104.08072
Thomas Booth
Thomas Booth, Bernice Akpinar, Andrei Roman, Haris Shuaib, Aysha Luis, Alysha Chelliah, Ayisha Al Busaidi, Ayesha Mirchandani, Burcu Alparslan, Nina Mansoor, Keyoumars Ashkan, Sebastien Ourselin, Marc Modat
Machine Learning and Glioblastoma: Treatment Response Monitoring Biomarkers in 2021
null
Kia S.M. et al. (eds) Machine Learning in Clinical Neuroimaging and Radiogenomics in Neuro-oncology. MLCN 2020, RNO-AI 2020. Lecture Notes in Computer Science, vol 12449
10.1007/978-3-030-66843-3_21
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by/4.0/
The aim of the systematic review was to assess recently published studies on diagnostic test accuracy of glioblastoma treatment response monitoring biomarkers in adults, developed through machine learning (ML). Articles were searched for using MEDLINE, EMBASE, and the Cochrane Register. Included study participants were adult patients with high grade glioma who had undergone standard treatment (maximal resection, radiotherapy with concomitant and adjuvant temozolomide) and subsequently underwent follow-up imaging to determine treatment response status. Risk of bias and applicability were assessed with QUADAS 2 methodology. Contingency tables were created for hold-out test sets, and recall, specificity, precision, F1-score, and balanced accuracy were calculated. Fifteen studies were included with 1038 patients in training sets and 233 in test sets. To determine whether there was progression or a mimic, the reference standard combination of follow-up imaging and histopathology at re-operation was applied in 67% of studies. The small numbers of patients included in the studies, the high risk of bias and concerns of applicability in the study designs (particularly in relation to the reference standard and patient selection due to confounding), and the low level of evidence, suggest that limited conclusions can be drawn from the data. There is likely good diagnostic performance of machine learning models that use MRI features to distinguish between progression and mimics. The diagnostic performance of ML using implicit features did not appear to be superior to ML using explicit features. There is a range of ML-based solutions poised to become treatment response monitoring biomarkers for glioblastoma. To achieve this, the development and validation of ML models require large, well-annotated datasets where the potential for confounding in the study design has been carefully considered.
[ { "created": "Thu, 15 Apr 2021 10:49:34 GMT", "version": "v1" } ]
2021-04-19
[ [ "Booth", "Thomas", "" ], [ "Akpinar", "Bernice", "" ], [ "Roman", "Andrei", "" ], [ "Shuaib", "Haris", "" ], [ "Luis", "Aysha", "" ], [ "Chelliah", "Alysha", "" ], [ "Busaidi", "Ayisha Al", "" ], [ "Mirchandani", "Ayesha", "" ], [ "Alparslan", "Burcu", "" ], [ "Mansoor", "Nina", "" ], [ "Ashkan", "Keyoumars", "" ], [ "Ourselin", "Sebastien", "" ], [ "Modat", "Marc", "" ] ]
The aim of the systematic review was to assess recently published studies on diagnostic test accuracy of glioblastoma treatment response monitoring biomarkers in adults, developed through machine learning (ML). Articles were searched for using MEDLINE, EMBASE, and the Cochrane Register. Included study participants were adult patients with high grade glioma who had undergone standard treatment (maximal resection, radiotherapy with concomitant and adjuvant temozolomide) and subsequently underwent follow-up imaging to determine treatment response status. Risk of bias and applicability were assessed with QUADAS 2 methodology. Contingency tables were created for hold-out test sets, and recall, specificity, precision, F1-score, and balanced accuracy were calculated. Fifteen studies were included with 1038 patients in training sets and 233 in test sets. To determine whether there was progression or a mimic, the reference standard combination of follow-up imaging and histopathology at re-operation was applied in 67% of studies. The small numbers of patients included in the studies, the high risk of bias and concerns of applicability in the study designs (particularly in relation to the reference standard and patient selection due to confounding), and the low level of evidence, suggest that limited conclusions can be drawn from the data. There is likely good diagnostic performance of machine learning models that use MRI features to distinguish between progression and mimics. The diagnostic performance of ML using implicit features did not appear to be superior to ML using explicit features. There is a range of ML-based solutions poised to become treatment response monitoring biomarkers for glioblastoma. To achieve this, the development and validation of ML models require large, well-annotated datasets where the potential for confounding in the study design has been carefully considered.
1810.12742
Felix Huber
Felix Huber
Efficient Tree Solver for Hines Matrices on the GPU
null
null
null
null
q-bio.QM cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The human brain consists of a large number of interconnected neurons communicating via the exchange of electrical spikes. Simulations play an important role in better understanding electrical activity in the brain and offer a way to compare measured data to simulated data, so that experimental data can be interpreted better. A key component in such simulations is an efficient solver for the Hines matrices used in computing inter-neuron signal propagation. In order to achieve high-performance simulations, it is crucial to have an efficient solver algorithm. In this report we present a new parallel GPU solver for these matrices that offers fine-grained parallelization and allows for work balancing during the simulation setup.
[ { "created": "Tue, 30 Oct 2018 14:00:02 GMT", "version": "v1" }, { "created": "Tue, 6 Nov 2018 09:32:29 GMT", "version": "v2" } ]
2018-11-07
[ [ "Huber", "Felix", "" ] ]
The human brain consists of a large number of interconnected neurons communicating via the exchange of electrical spikes. Simulations play an important role in better understanding electrical activity in the brain and offer a way to compare measured data to simulated data, so that experimental data can be interpreted better. A key component in such simulations is an efficient solver for the Hines matrices used in computing inter-neuron signal propagation. In order to achieve high-performance simulations, it is crucial to have an efficient solver algorithm. In this report we present a new parallel GPU solver for these matrices that offers fine-grained parallelization and allows for work balancing during the simulation setup.
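For background, a Hines matrix couples each neuron compartment only to its parent in the morphology tree, so the system can be factorized in linear time by eliminating from the leaves towards the root and back-substituting from the root towards the leaves. The sketch below is the plain sequential version of that procedure; the array layout and the parent-before-child numbering are assumptions of this illustration, and the report's actual contribution, a fine-grained, work-balanced GPU parallelization, is not reproduced here.

    import numpy as np

    def hines_solve(parent, d, a, b, rhs):
        """Sequential tree solve of a Hines system.
        parent[i] < i is the parent compartment of node i (node 0 = root),
        d[i]  is the diagonal entry of row i,
        a[i]  is the off-diagonal entry A[parent[i], i],
        b[i]  is the off-diagonal entry A[i, parent[i]].
        d and rhs are modified in place; the solution x is returned."""
        n = len(d)
        # backward sweep: eliminate each node into its parent (leaves -> root)
        for i in range(n - 1, 0, -1):
            factor = a[i] / d[i]
            d[parent[i]] -= factor * b[i]
            rhs[parent[i]] -= factor * rhs[i]
        # forward sweep: substitute parents into children (root -> leaves)
        x = np.empty(n)
        x[0] = rhs[0] / d[0]
        for i in range(1, n):
            x[i] = (rhs[i] - b[i] * x[parent[i]]) / d[i]
        return x

With a valid ordering (every node indexed after its parent), each compartment is visited exactly twice, so the solve is linear in the number of compartments; independent branches of the tree can in principle be processed concurrently, which is the kind of opportunity a GPU implementation exploits.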
2105.11221
Chiara Villa
Chiara Villa, Alf Gerisch, Mark A. J. Chaplain
A novel nonlocal partial differential equation model of endothelial progenitor cell cluster formation during the early stages of vasculogenesis
37 pages, 13 figures, 1 supplementary document
null
null
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neovascularisation is essential for tissue development and regeneration, in addition to playing a key role in pathological settings such as ischemia and tumour development. Experimental findings in the past two decades have led to the identification of a new mechanism of neovascularisation, cluster-based vasculogenesis, during which endothelial progenitor cells (EPCs) mobilised from the bone marrow are capable of bridging distant vascular beds in a variety of hypoxic settings in vivo. This process is characterised by the formation of EPC clusters during its early stages and, while much progress has been made in identifying various mechanisms underlying cluster formation, we are still far from a comprehensive description of such spatio-temporal dynamics. In order to achieve this, we propose a novel mathematical model of the early stages of cluster-based vasculogenesis, comprising a system of nonlocal partial differential equations that includes key mechanisms such as endogenous chemotaxis, matrix degradation, cell proliferation and cell-to-cell adhesion. We conduct a linear stability analysis of the system, solve the equations numerically, carry out a parametric analysis of the numerical solutions of the 1D problem to investigate the role of the underlying dynamics in the speed of cluster formation and the size of clusters, and verify the key results of the parametric analysis with simulations of the 2D problem. Our results, which compare qualitatively with data from in vitro experiments, elucidate the complementary roles played by endogenous chemotaxis and matrix degradation in the formation of clusters, and they indicate that previous approaches to the nonlocal modelling of cell-to-cell adhesion, while capturing the aggregating effect of cell-to-cell adhesion, are not sufficient to capture its stabilising effect on clusters, so new continuum cell-adhesion modelling strategies are required.
[ { "created": "Mon, 24 May 2021 11:58:04 GMT", "version": "v1" }, { "created": "Mon, 4 Oct 2021 22:05:59 GMT", "version": "v2" }, { "created": "Mon, 21 Mar 2022 18:27:26 GMT", "version": "v3" } ]
2022-03-23
[ [ "Villa", "Chiara", "" ], [ "Gerisch", "Alf", "" ], [ "Chaplain", "Mark A. J.", "" ] ]
Neovascularisation is essential for tissue development and regeneration, in addition to playing a key role in pathological settings such as ischemia and tumour development. Experimental findings in the past two decades have led to the identification of a new mechanism of neovascularisation, cluster-based vasculogenesis, during which endothelial progenitor cells (EPCs) mobilised from the bone marrow are capable of bridging distant vascular beds in a variety of hypoxic settings in vivo. This process is characterised by the formation of EPC clusters during its early stages and, while much progress has been made in identifying various mechanisms underlying cluster formation, we are still far from a comprehensive description of such spatio-temporal dynamics. In order to achieve this, we propose a novel mathematical model of the early stages of cluster-based vasculogenesis, comprising a system of nonlocal partial differential equations that includes key mechanisms such as endogenous chemotaxis, matrix degradation, cell proliferation and cell-to-cell adhesion. We conduct a linear stability analysis of the system, solve the equations numerically, carry out a parametric analysis of the numerical solutions of the 1D problem to investigate the role of the underlying dynamics in the speed of cluster formation and the size of clusters, and verify the key results of the parametric analysis with simulations of the 2D problem. Our results, which compare qualitatively with data from in vitro experiments, elucidate the complementary roles played by endogenous chemotaxis and matrix degradation in the formation of clusters, and they indicate that previous approaches to the nonlocal modelling of cell-to-cell adhesion, while capturing the aggregating effect of cell-to-cell adhesion, are not sufficient to capture its stabilising effect on clusters, so new continuum cell-adhesion modelling strategies are required.
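For illustration, nonlocal cell-to-cell adhesion terms of the kind referred to here are typically of Armstrong-Painter-Sherratt type, in which cells sense the density within a finite radius and move up the resulting adhesive gradient. The snippet below discretises such an adhesion velocity in 1D with a uniform kernel, a linear adhesion function and periodic boundaries; these choices are assumptions of the illustration, not the specific model formulated in the paper.

    import numpy as np

    def adhesion_velocity(n, dx, R, S_max):
        """Nonlocal adhesion velocity A(x) ~ (S_max/R) * int_{|r|<=R}
        sign(r) * Omega(|r|) * g(n(x+r)) dr, with Omega uniform and g(n) = n.
        Periodic boundaries are assumed via np.roll."""
        m = int(round(R / dx))                  # sensing radius in grid cells
        A = np.zeros_like(n)
        for j in range(1, m + 1):
            # density to the right pulls cells right, density to the left pulls left
            A += (dx / R) * (np.roll(n, -j) - np.roll(n, j))
        return S_max * A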
0807.1367
Kunihiko Kaneko
Kunihiko Kaneko
Shaping Robust System through Evolution
null
Chaos (2008)18, 026112
10.1063/1.2912458
null
q-bio.PE cond-mat.stat-mech nlin.AO physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Biological functions are generated as a result of developmental dynamics that form phenotypes governed by genotypes. The dynamical system for development is shaped through genetic evolution following natural selection based on the fitness of the phenotype. Here we study how this dynamical system is robust to noise during development and to genetic change by mutation. We adopt a simplified transcription regulation network model to govern gene expression, which gives a fitness function. Through simulations of the network that undergoes mutation and selection, we show that a certain level of noise in gene expression is required for the network to acquire both types of robustness. The results reveal how the noise that cells encounter during development shapes any network's robustness, not only to noise but also to mutations. We also establish a relationship between developmental and mutational robustness through phenotypic variances caused by genetic variation and epigenetic noise. A universal relationship between the two variances is derived, akin to the fluctuation-dissipation relationship known in physics.
[ { "created": "Wed, 9 Jul 2008 02:29:08 GMT", "version": "v1" } ]
2009-11-13
[ [ "Kaneko", "Kunihiko", "" ] ]
Biological functions are generated as a result of developmental dynamics that form phenotypes governed by genotypes. The dynamical system for development is shaped through genetic evolution following natural selection based on the fitness of the phenotype. Here we study how this dynamical system is robust to noise during development and to genetic change by mutation. We adopt a simplified transcription regulation network model to govern gene expression, which gives a fitness function. Through simulations of the network that undergoes mutation and selection, we show that a certain level of noise in gene expression is required for the network to acquire both types of robustness. The results reveal how the noise that cells encounter during development shapes any network's robustness, not only to noise but also to mutations. We also establish a relationship between developmental and mutational robustness through phenotypic variances caused by genetic variation and epigenetic noise. A universal relationship between the two variances is derived, akin to the fluctuation-dissipation relationship known in physics.
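A minimal sketch of the kind of in silico experiment described here: a population of transcription regulation networks, each developed under gene-expression noise, evolving by mutation and selection on a fitness determined by target gene expression. The network size, noise strength, fitness definition and selection scheme below are illustrative assumptions, not the parameters or exact dynamics used in the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    N_GENES, N_TARGET, POP, GENS = 8, 3, 50, 200
    SIGMA = 0.1                                   # developmental (expression) noise level

    def develop(J, steps=50):
        """Noisy expression dynamics x <- tanh(beta * (J x + noise))."""
        x = -np.ones(N_GENES)
        for _ in range(steps):
            x = np.tanh(2.0 * (J @ x + SIGMA * rng.normal(size=N_GENES)))
        return x

    def fitness(J):
        return develop(J)[:N_TARGET].sum()        # expression of the target genes

    pop = [rng.normal(scale=0.5, size=(N_GENES, N_GENES)) for _ in range(POP)]
    for _ in range(GENS):
        parents = sorted(pop, key=fitness, reverse=True)[:POP // 4]   # truncation selection
        pop = []
        for _ in range(POP):
            child = parents[rng.integers(len(parents))].copy()
            i, j = rng.integers(N_GENES, size=2)
            child[i, j] += rng.normal(scale=0.3)  # mutate one regulatory link
            pop.append(child)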
2103.15123
Magdalena Djordjevic
Marko Djordjevic, Igor Salom, Sofija Markovic, Andjela Rodic, Ognjen Milicevic, Magdalena Djordjevic
Inferring the main drivers of SARS-CoV-2 global transmissibility by feature selection methods
14 pages, 4 figures, 4 tables, GeoHealth (in press)
null
null
null
q-bio.PE physics.soc-ph
http://creativecommons.org/licenses/by/4.0/
Identifying the main environmental drivers of SARS-CoV-2 transmissibility in the population is crucial for understanding current and potential future outbreaks of COVID-19 and other infectious diseases. To address this problem, we concentrate on the basic reproduction number $R_0$, which is not sensitive to testing coverage and represents transmissibility in the absence of social distancing and in a completely susceptible population. While many variables may potentially influence $R_0$, a high correlation between these variables may obscure the interpretation of results. Consequently, we combine Principal Component Analysis with feature selection methods from several regression-based approaches to identify the main demographic and meteorological drivers behind $R_0$. We robustly find that a country's wealth/development (GDP per capita or Human Development Index) is the most important $R_0$ predictor at the global level, probably because it is a good proxy for the overall contact frequency in a population. This main effect is modulated by built-up area per capita (crowdedness in indoor space), onset of infection (likely related to increased awareness of infection risks), net migration, unhealthy living lifestyle/conditions including pollution, seasonality, and possibly BCG vaccination prevalence. We also argue that several variables that correlate significantly with transmissibility do not directly influence $R_0$, or affect it differently than suggested by a naive analysis.
[ { "created": "Sun, 28 Mar 2021 13:03:09 GMT", "version": "v1" }, { "created": "Tue, 31 Aug 2021 17:33:51 GMT", "version": "v2" } ]
2021-09-01
[ [ "Djordjevic", "Marko", "" ], [ "Salom", "Igor", "" ], [ "Markovic", "Sofija", "" ], [ "Rodic", "Andjela", "" ], [ "Milicevic", "Ognjen", "" ], [ "Djordjevic", "Magdalena", "" ] ]
Identifying the main environmental drivers of SARS-CoV-2 transmissibility in the population is crucial for understanding current and potential future outbreaks of COVID-19 and other infectious diseases. To address this problem, we concentrate on the basic reproduction number $R_0$, which is not sensitive to testing coverage and represents transmissibility in the absence of social distancing and in a completely susceptible population. While many variables may potentially influence $R_0$, a high correlation between these variables may obscure the interpretation of results. Consequently, we combine Principal Component Analysis with feature selection methods from several regression-based approaches to identify the main demographic and meteorological drivers behind $R_0$. We robustly find that a country's wealth/development (GDP per capita or Human Development Index) is the most important $R_0$ predictor at the global level, probably because it is a good proxy for the overall contact frequency in a population. This main effect is modulated by built-up area per capita (crowdedness in indoor space), onset of infection (likely related to increased awareness of infection risks), net migration, unhealthy living lifestyle/conditions including pollution, seasonality, and possibly BCG vaccination prevalence. We also argue that several variables that correlate significantly with transmissibility do not directly influence $R_0$, or affect it differently than suggested by a naive analysis.
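A minimal sketch of an assumed workflow (not the authors' code) combining Principal Component Analysis with a regression-based feature selector, here a cross-validated Lasso, to rank predictors of $R_0$; the data below are synthetic placeholders standing in for per-country demographic and meteorological variables.

    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(1)
    X = rng.normal(size=(120, 10))                          # placeholder: countries x predictors
    y = 0.8 * X[:, 0] + rng.normal(scale=0.3, size=120)     # synthetic R0 estimates

    Xs = StandardScaler().fit_transform(X)                  # standardise predictors
    pca = PCA(n_components=0.9).fit(Xs)                     # components explaining 90% of variance
    print("principal component loadings:\n", pca.components_)

    lasso = LassoCV(cv=5).fit(Xs, y)                        # cross-validated sparse regression
    ranking = np.argsort(-np.abs(lasso.coef_))              # predictors ranked by |coefficient|
    print("feature ranking:", ranking)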
1207.0020
Claudio Borile
Claudio Borile, Miguel A. Mu\~noz, Sandro Azaele, Jayanth R. Banavar and Amos Maritan
Spontaneously Broken Neutral Symmetry in an Ecological System
5 pages, 3 figures, to appear in Physical Review Letters
Physical Review Letters 109 (3), 038102 (2012)
10.1103/PhysRevLett.109.038102
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spontaneous symmetry breaking plays a fundamental role in many areas of condensed matter and particle physics. A fundamental problem in ecology is the elucidation of the mechanisms responsible for biodiversity and stability. Neutral theory, which makes the simplifying assumption that all individuals (such as trees in a tropical forest) --regardless of the species they belong to-- have the same prospect of reproduction, death, etc., yields gross patterns that are in accord with empirical data. We explore the possibility of birth and death rates that depend on the population density of species while treating the dynamics in a species-symmetric manner. We demonstrate that the dynamical evolution can lead to a stationary state characterized simultaneously by both biodiversity and spontaneously broken neutral symmetry.
[ { "created": "Fri, 29 Jun 2012 21:15:11 GMT", "version": "v1" } ]
2013-06-28
[ [ "Borile", "Claudio", "" ], [ "Muñoz", "Miguel A.", "" ], [ "Azaele", "Sandro", "" ], [ "Banavar", "Jayanth R.", "" ], [ "Maritan", "Amos", "" ] ]
Spontaneous symmetry breaking plays a fundamental role in many areas of condensed matter and particle physics. A fundamental problem in ecology is the elucidation of the mechanisms responsible for biodiversity and stability. Neutral theory, which makes the simplifying assumption that all individuals (such as trees in a tropical forest) --regardless of the species they belong to-- have the same prospect of reproduction, death, etc., yields gross patterns that are in accord with empirical data. We explore the possibility of birth and death rates that depend on the population density of species while treating the dynamics in a species-symmetric manner. We demonstrate that the dynamical evolution can lead to a stationary state characterized simultaneously by both biodiversity and spontaneously broken neutral symmetry.
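A minimal sketch of a two-species, species-symmetric individual-based model in which the probability of switching to the other species depends on that species' density, illustrating the class of density-dependent neutral dynamics discussed here; the functional form and parameters are assumptions of the illustration and are not the rates analysed in the paper, nor is the sketch claimed to reproduce its broken-symmetry stationary state.

    import numpy as np

    rng = np.random.default_rng(2)
    N, STEPS = 1000, 200_000                # community size, update attempts

    def switch_prob(frac_other):
        """Probability that the chosen individual adopts the other species.
        The rule depends only on densities, not on species identity, so it is
        neutral (species-symmetric); the form used here is purely illustrative."""
        return frac_other * (1.0 - 0.5 * frac_other)

    n1 = N // 2                              # abundance of species 1
    for _ in range(STEPS):
        x1 = n1 / N
        if rng.random() < x1:                # a species-1 individual is picked
            if rng.random() < switch_prob(1.0 - x1):
                n1 -= 1
        else:                                # a species-2 individual is picked
            if rng.random() < switch_prob(x1):
                n1 += 1
    print("final fraction of species 1:", n1 / N)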