Dataset schema (column name, feature type, and observed length range or number of distinct values):

column           type            observed values
id               stringlengths   9 to 13
submitter        stringlengths   4 to 48
authors          stringlengths   4 to 9.62k
title            stringlengths   4 to 343
comments         stringlengths   2 to 480
journal-ref      stringlengths   9 to 309
doi              stringlengths   12 to 138
report-no        stringclasses   277 values
categories       stringlengths   8 to 87
license          stringclasses   9 values
orig_abstract    stringlengths   27 to 3.76k
versions         listlengths     1 to 15
update_date      stringlengths   10 to 10
authors_parsed   listlengths     1 to 147
abstract         stringlengths   24 to 3.75k
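The records that follow list these fields in schema order, one field per line (null where a field is empty). As a minimal sketch of how a single record of this shape can be handled programmatically, the Python snippet below rebuilds the first record as a plain dict and extracts a few fields. The literal values are transcribed from that record (abstract fields omitted for brevity); the helper name format_authors and the choice of printed fields are my own illustration, not part of the dataset.

```python
# Minimal sketch: one record of the dataset, rebuilt as a plain Python dict.
# Values are copied from the first row below; format_authors is a hypothetical helper.
record = {
    "id": "0803.1966",
    "submitter": "Thibault Lagache",
    "authors": "T. Lagache and D. Holcman",
    "title": "Quantifying intermittent transport in cell cytoplasm",
    "doi": "10.1103/PhysRevE.77.030901",
    "categories": "q-bio.QM q-bio.SC",
    "versions": [{"created": "Thu, 13 Mar 2008 13:43:48 GMT", "version": "v1"}],
    "update_date": "2009-11-13",
    "authors_parsed": [["Lagache", "T.", ""], ["Holcman", "D.", ""]],
}


def format_authors(parsed):
    # "authors_parsed" entries are [last name, first name(s), affiliation] triples.
    return ", ".join(f"{first} {last}".strip() for last, first, _ in parsed)


print(record["id"], "-", record["title"])
print("authors:", format_authors(record["authors_parsed"]))
print("first upload:", record["versions"][0]["created"])
print("category tags:", record["categories"].split())
```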
0803.1966
Thibault Lagache
T. Lagache and D. Holcman
Quantifying intermittent transport in cell cytoplasm
4 pages, 5 figures; accepted as a rapid communication in Phys. Rev. E
null
10.1103/PhysRevE.77.030901
null
q-bio.QM q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Active cellular transport is a fundamental mechanism for protein and vesicle delivery, cell cycle and molecular degradation. Viruses can hijack the transport system and use it to reach the nucleus. Most transport processes consist of intermittent dynamics, where the motion of a particle, such as a virus, alternates between pure Brownian and directed movement along microtubules. In this communication, we estimate the mean time for a particle to attach to a microtubule network. This computation leads to a coarse-grained equation of the intermittent motion in radial and cylindrical geometries. Finally, by using the degradation activity inside the cytoplasm, we obtain refined asymptotic estimates for the probability and the mean time that a virus reaches a small nuclear pore.
[ { "created": "Thu, 13 Mar 2008 13:43:48 GMT", "version": "v1" } ]
2009-11-13
[ [ "Lagache", "T.", "" ], [ "Holcman", "D.", "" ] ]
Active cellular transport is a fundamental mechanism for protein and vesicle delivery, cell cycle and molecular degradation. Viruses can hijack the transport system and use it to reach the nucleus. Most transport processes consist of intermittent dynamics, where the motion of a particle, such as a virus, alternates between pure Brownian and directed movement along microtubules. In this communication, we estimate the mean time for a particle to attach to a microtubule network. This computation leads to a coarse-grained equation of the intermittent motion in radial and cylindrical geometries. Finally, by using the degradation activity inside the cytoplasm, we obtain refined asymptotic estimates for the probability and the mean time that a virus reaches a small nuclear pore.
1802.07304
Haiming Tang
Haiming Tang, Robert D Finn, Paul D Thomas
TreeGrafter: phylogenetic tree-based annotation of proteins with Gene Ontology terms and other annotations
null
null
null
null
q-bio.QM q-bio.GN
http://creativecommons.org/licenses/by/4.0/
Summary: TreeGrafter is a new software tool for annotating protein sequences using annotated phylogenetic trees. Currently, the tool provides annotations to Gene Ontology terms, and PANTHER protein class, family and subfamily. The approach is generalizable to any annotations that have been made to internal nodes of a reference phylogenetic tree. TreeGrafter takes each input query protein sequence, finds the best matching homologous family in a library of pre-calculated, pre-annotated gene trees, and then grafts it to the best location in the tree. It then annotates the sequence by propagating annotations from its ancestral nodes in the reference tree. We show that TreeGrafter outperforms subfamily HMM scoring for correctly assigning subfamily membership, and that it produces highly specific annotations of GO terms based on annotated reference phylogenetic trees. This method will be further integrated into InterProScan, enabling an even broader user community. Availability: TreeGrafter is freely available on the web at https://github.com/haimingt/TreeGrafting.
[ { "created": "Tue, 20 Feb 2018 19:56:59 GMT", "version": "v1" } ]
2018-02-22
[ [ "Tang", "Haiming", "" ], [ "Finn", "Robert D", "" ], [ "Thomas", "Paul D", "" ] ]
Summary: TreeGrafter is a new software tool for annotating protein sequences using annotated phylogenetic trees. Currently, the tool provides annotations to Gene Ontology terms, and PANTHER protein class, family and subfamily. The approach is generalizable to any annotations that have been made to internal nodes of a reference phylogenetic tree. TreeGrafter takes each input query protein sequence, finds the best matching homologous family in a library of pre-calculated, pre-annotated gene trees, and then grafts it to the best location in the tree. It then annotates the sequence by propagating annotations from its ancestral nodes in the reference tree. We show that TreeGrafter outperforms subfamily HMM scoring for correctly assigning subfamily membership, and that it produces highly specific annotations of GO terms based on annotated reference phylogenetic trees. This method will be further integrated into InterProScan, enabling an even broader user community. Availability: TreeGrafter is freely available on the web at https://github.com/haimingt/TreeGrafting.
1608.03477
Shigekazu Oda
Shigekazu Oda, Yu Toyoshima and Mario de Bono
Modulation of sensory information processing by a neuroglobin in C. elegans
29 page, 5 figures, 11 supplementary figures
null
10.1073/pnas.1614596114
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sensory receptor neurons match their dynamic range to ecologically relevant stimulus intensities. How this tuning is achieved is poorly understood in most receptors. We show that in the C. elegans URX O2-sensing neurons, two putative molecular O2 sensors, a neuroglobin and O2-binding soluble guanylate cyclases, work antagonistically to sculpt a slowly sigmoidal O2 response curve tuned to approach saturation when O2 reaches 21%. glb-5 imposes this sigmoidal function by inhibiting O2-evoked Ca2+ responses in URX when O2 levels fall. Without GLB-5, the URX response curve approaches saturation at 15% O2. Behaviorally, GLB-5 signaling broadens the O2 preference of C. elegans while maintaining strong avoidance of 21% O2. Our computational aerotaxis model suggests that the relationship between GLB-5-modulated URX responses and reversal behavior is sufficient to broaden O2 preference. Thus, a neuroglobin can shift neural information coding, leading to a change in perception and altered behavior.
[ { "created": "Thu, 11 Aug 2016 14:24:40 GMT", "version": "v1" }, { "created": "Wed, 14 Sep 2016 15:25:16 GMT", "version": "v2" } ]
2022-10-12
[ [ "Oda", "Shigekazu", "" ], [ "Toyoshima", "Yu", "" ], [ "de Bono", "Mario", "" ] ]
Sensory receptor neurons match their dynamic range to ecologically relevant stimulus intensities. How this tuning is achieved is poorly understood in most receptors. We show that in the C. elegans URX O2-sensing neurons, two putative molecular O2 sensors, a neuroglobin and O2-binding soluble guanylate cyclases, work antagonistically to sculpt a slowly sigmoidal O2 response curve tuned to approach saturation when O2 reaches 21%. glb-5 imposes this sigmoidal function by inhibiting O2-evoked Ca2+ responses in URX when O2 levels fall. Without GLB-5, the URX response curve approaches saturation at 15% O2. Behaviorally, GLB-5 signaling broadens the O2 preference of C. elegans while maintaining strong avoidance of 21% O2. Our computational aerotaxis model suggests that the relationship between GLB-5-modulated URX responses and reversal behavior is sufficient to broaden O2 preference. Thus, a neuroglobin can shift neural information coding, leading to a change in perception and altered behavior.
q-bio/0607048
David Lusseau
David Lusseau
Evidence for social role in a dolphin social network
11 pages, 3 figures, accepted for publication in Evolutionary Ecology
published in 2007, Evolutionary Ecology 21(3): 357-366
null
null
q-bio.PE
null
Social animals have to take into consideration the behaviour of conspecifics when making decisions as they go about their daily lives. These decisions affect their fitness and there is therefore an evolutionary pressure to try to make the right choices. In many instances individuals will make their own choices and the behaviour of the group will be a democratic integration of all decisions. However, in some instances it can be advantageous to follow the choice of a few individuals in the group if they have more information regarding the situation that has arisen. Here I provide early evidence that decisions about shifts in activity states in a population of bottlenose dolphins follow such a decision-making process. This unshared consensus is mediated by a non-vocal signal which can be communicated globally within the dolphin school. These signals are emitted by individuals that tend to have more information about the behaviour of potential competitors because of their position in the social network. I hypothesise that this decision-making process emerged from the social structure of the population and the need to maintain mixed-sex schools.
[ { "created": "Wed, 26 Jul 2006 18:29:17 GMT", "version": "v1" } ]
2009-03-09
[ [ "Lusseau", "David", "" ] ]
Social animals have to take into consideration the behaviour of conspecifics when making decisions as they go about their daily lives. These decisions affect their fitness and there is therefore an evolutionary pressure to try to make the right choices. In many instances individuals will make their own choices and the behaviour of the group will be a democratic integration of all decisions. However, in some instances it can be advantageous to follow the choice of a few individuals in the group if they have more information regarding the situation that has arisen. Here I provide early evidence that decisions about shifts in activity states in a population of bottlenose dolphins follow such a decision-making process. This unshared consensus is mediated by a non-vocal signal which can be communicated globally within the dolphin school. These signals are emitted by individuals that tend to have more information about the behaviour of potential competitors because of their position in the social network. I hypothesise that this decision-making process emerged from the social structure of the population and the need to maintain mixed-sex schools.
1004.2271
Colleen Kelly
Colleen K. Kelly, Stephen J. Blundell, Michael G. Bowler, Gordon A. Fox, Paul H. Harvey, Mark R. Lomas and F. Ian Woodward
The statistical mechanics of community assembly and species distribution
30 pages, including 4 figures, 1 table and 4 appendices
null
null
null
q-bio.PE cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Theoretically, communities at or near their equilibrium species number resist entry of new species. Such 'biotic resistance' has recently been questioned because of the successful entry of alien species into diverse natural communities. Data on 10,409 naturalizations of 5350 plant species over 16 sites dispersed globally show exponential distributions for both species over sites and sites over number of species shared. These exponentials signal a statistical mechanics of species distribution, assuming two conditions. First, species and sites are equivalent, either identical ('neutral'), or so complex that the chance a species is in the right place at the right time is vanishingly small ('idiosyncratic'); the range of species and sites in our data disallows a neutral explanation. Secondly, the total number of naturalizations is fixed in any era by a 'regulator'. Previous correlation of species naturalization rates with net primary productivity over time suggests that this regulator is related to productivity. We conclude that biotic resistance is a moving ceiling, with resistance controlled by productivity. The general observation that the majority of species occur naturally at only a few sites but only a few at many now has a quantitative [exponential] character, offering the study of species' distributions a previously unavailable rigor.
[ { "created": "Tue, 13 Apr 2010 21:11:40 GMT", "version": "v1" } ]
2010-04-15
[ [ "Kellya", "Colleen K.", "" ], [ "Blundell", "Stephen J.", "" ], [ "Bowler", "Michael G.", "" ], [ "Fox", "Gordon A.", "" ], [ "Harvey", "Paul H.", "" ], [ "Lomas", "Mark R.", "" ], [ "Woodward", "F. Ian", "" ] ]
Theoretically, communities at or near their equilibrium species number resist entry of new species. Such 'biotic resistance' has recently been questioned because of the successful entry of alien species into diverse natural communities. Data on 10,409 naturalizations of 5350 plant species over 16 sites dispersed globally show exponential distributions for both species over sites and sites over number of species shared. These exponentials signal a statistical mechanics of species distribution, assuming two conditions. First, species and sites are equivalent, either identical ('neutral'), or so complex that the chance a species is in the right place at the right time is vanishingly small ('idiosyncratic'); the range of species and sites in our data disallows a neutral explanation. Secondly, the total number of naturalizations is fixed in any era by a 'regulator'. Previous correlation of species naturalization rates with net primary productivity over time suggests that this regulator is related to productivity. We conclude that biotic resistance is a moving ceiling, with resistance controlled by productivity. The general observation that the majority of species occur naturally at only a few sites but only a few at many now has a quantitative [exponential] character, offering the study of species' distributions a previously unavailable rigor.
0710.5190
Ioannis Kontoyiannis
H.M. Aktulga, I. Kontoyiannis, L.A. Lyznik, L. Szpankowski, A.Y. Grama and W. Szpankowski
Identifying statistical dependence in genomic sequences via mutual information estimates
Preliminary version. Final version in EURASIP Journal on Bioinformatics and Systems Biology. See http://www.hindawi.com/journals/bsb/
null
null
null
q-bio.GN cs.IT math.IT
null
Questions of understanding and quantifying the representation and amount of information in organisms have become a central part of biological research, as they potentially hold the key to fundamental advances. In this paper, we demonstrate the use of information-theoretic tools for the task of identifying segments of biomolecules (DNA or RNA) that are statistically correlated. We develop a precise and reliable methodology, based on the notion of mutual information, for finding and extracting statistical as well as structural dependencies. A simple threshold function is defined, and its use in quantifying the level of significance of dependencies between biological segments is explored. These tools are used in two specific applications. First, for the identification of correlations between different parts of the maize zmSRp32 gene. There, we find significant dependencies between the 5' untranslated region in zmSRp32 and its alternatively spliced exons. This observation may indicate the presence of as-yet unknown alternative splicing mechanisms or structural scaffolds. Second, using data from the FBI's Combined DNA Index System (CODIS), we demonstrate that our approach is particularly well suited for the problem of discovering short tandem repeats, an application of importance in genetic profiling.
[ { "created": "Fri, 26 Oct 2007 22:26:36 GMT", "version": "v1" } ]
2007-10-30
[ [ "Aktulga", "H. M.", "" ], [ "Kontoyiannis", "I.", "" ], [ "Lyznik", "L. A.", "" ], [ "Szpankowski", "L.", "" ], [ "Grama", "A. Y.", "" ], [ "Szpankowski", "W.", "" ] ]
Questions of understanding and quantifying the representation and amount of information in organisms have become a central part of biological research, as they potentially hold the key to fundamental advances. In this paper, we demonstrate the use of information-theoretic tools for the task of identifying segments of biomolecules (DNA or RNA) that are statistically correlated. We develop a precise and reliable methodology, based on the notion of mutual information, for finding and extracting statistical as well as structural dependencies. A simple threshold function is defined, and its use in quantifying the level of significance of dependencies between biological segments is explored. These tools are used in two specific applications. First, for the identification of correlations between different parts of the maize zmSRp32 gene. There, we find significant dependencies between the 5' untranslated region in zmSRp32 and its alternatively spliced exons. This observation may indicate the presence of as-yet unknown alternative splicing mechanisms or structural scaffolds. Second, using data from the FBI's Combined DNA Index System (CODIS), we demonstrate that our approach is particularly well suited for the problem of discovering short tandem repeats, an application of importance in genetic profiling.
q-bio/0502027
Veit Schw\"ammle
Veit Schw\"ammle and Suzana M. de Oliveira
Simulations of a mortality plateau in the sexual Penna model for biological ageing
submitted to Phys. Rev. E
null
10.1103/PhysRevE.72.031911
null
q-bio.PE
null
The Penna model is a strategy to simulate the genetic dynamics of age-structured populations, in which the individuals' genomes are represented by bit-strings. It provides a simple metaphor for the evolutionary process in terms of the mutation accumulation theory. In its original version, an individual dies due to inherited diseases when its current number of accumulated mutations, n, reaches a threshold value, T. Since the number of accumulated diseases increases with age, the probability to die is zero for very young ages (n < T) and equals 1 for the old ones (n >= T). Here, instead of using a step function to determine the genetic death age, we test several other functions that may or may not slightly increase the death probability at young ages (n < T), but that decrease this probability at old ones. Our purpose is to study the oldest-old effect, that is, a plateau in the mortality curves at advanced ages. Imposing certain conditions, it has been possible to obtain a clear plateau using the Penna model. However, a more realistic one appears when a modified version, which keeps the population size fixed without fluctuations, is used. We also find a relation between the birth rate, the age structure of the population and the death probability.
[ { "created": "Tue, 22 Feb 2005 17:04:33 GMT", "version": "v1" } ]
2009-11-11
[ [ "Schwämmle", "Veit", "" ], [ "de Oliveira", "Suzana M.", "" ] ]
The Penna model is a strategy to simulate the genetic dynamics of age-structured populations, in which the individuals' genomes are represented by bit-strings. It provides a simple metaphor for the evolutionary process in terms of the mutation accumulation theory. In its original version, an individual dies due to inherited diseases when its current number of accumulated mutations, n, reaches a threshold value, T. Since the number of accumulated diseases increases with age, the probability to die is zero for very young ages (n < T) and equals 1 for the old ones (n >= T). Here, instead of using a step function to determine the genetic death age, we test several other functions that may or may not slightly increase the death probability at young ages (n < T), but that decrease this probability at old ones. Our purpose is to study the oldest-old effect, that is, a plateau in the mortality curves at advanced ages. Imposing certain conditions, it has been possible to obtain a clear plateau using the Penna model. However, a more realistic one appears when a modified version, which keeps the population size fixed without fluctuations, is used. We also find a relation between the birth rate, the age structure of the population and the death probability.
1806.08627
Kevin Painter
K. J. Painter
Mathematical models for chemotaxis and their applications in self-organisation phenomena
35 pages, 8 figures, Submitted to Journal of Theoretical Biology
null
null
null
q-bio.QM q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Chemotaxis is a fundamental guidance mechanism of cells and organisms, responsible for attracting microbes to food, embryonic cells into developing tissues, immune cells to infection sites, animals towards potential mates, and mathematicians into biology. The Patlak-Keller-Segel (PKS) system forms part of the bedrock of mathematical biology, a go-to choice for modellers and analysts alike. For the former it is simple yet recapitulates numerous phenomena; the latter are attracted to these rich dynamics. Here I review the adoption of PKS systems when explaining self-organisation processes. I consider their foundation, returning to the initial efforts of Patlak and Keller and Segel, and briefly describe their patterning properties. Applications of PKS systems are considered across diverse areas, including microbiology, development, immunology, cancer, ecology and crime. In each case a historical perspective is provided on the evidence for chemotactic behaviour, followed by a review of modelling efforts; a compendium of the models is included as an Appendix. Finally, a half-serious/half-tongue-in-cheek model is developed to explain how cliques form in academia. Assuming that scholars alter their research lines according to the available problems leads to clustering of academics and the formation of "hot" research topics.
[ { "created": "Fri, 22 Jun 2018 12:27:28 GMT", "version": "v1" }, { "created": "Mon, 25 Jun 2018 14:23:51 GMT", "version": "v2" } ]
2018-06-26
[ [ "Painter", "K. J.", "" ] ]
Chemotaxis is a fundamental guidance mechanism of cells and organisms, responsible for attracting microbes to food, embryonic cells into developing tissues, immune cells to infection sites, animals towards potential mates, and mathematicians into biology. The Patlak-Keller-Segel (PKS) system forms part of the bedrock of mathematical biology, a go-to choice for modellers and analysts alike. For the former it is simple yet recapitulates numerous phenomena; the latter are attracted to these rich dynamics. Here I review the adoption of PKS systems when explaining self-organisation processes. I consider their foundation, returning to the initial efforts of Patlak and Keller and Segel, and briefly describe their patterning properties. Applications of PKS systems are considered across diverse areas, including microbiology, development, immunology, cancer, ecology and crime. In each case a historical perspective is provided on the evidence for chemotactic behaviour, followed by a review of modelling efforts; a compendium of the models is included as an Appendix. Finally, a half-serious/half-tongue-in-cheek model is developed to explain how cliques form in academia. Assuming that scholars alter their research lines according to the available problems leads to clustering of academics and the formation of "hot" research topics.
1608.00603
Niklas L.P. Lundstr\"om
Niklas L.P. Lundstr\"om, Hong Zhang, {\AA}ke Br\"annstr\"om
Pareto-efficient biological pest control enables high efficacy at small costs
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Biological pest control is increasingly used in agriculture as an alternative to traditional chemical pest control. In many cases, this involves a one-off or periodic release of naturally occurring and/or genetically modified enemies such as predators, parasitoids, or pathogens. As the interaction between these enemies and the pest is complex and the production of natural enemies potentially expensive, it is not surprising that both the efficacy and economic viability of biological pest control are debated. Here, we investigate the performance of very simple control strategies. In particular, we show how Pareto-efficient one-off or periodic release strategies, which optimally trade off between efficacy and economic viability, can be devised and used to enable high efficacy at small economic costs. We demonstrate our method on a pest-pathogen-crop model with a tunable immigration rate of pests. By analyzing this model, we demonstrate that simple Pareto-efficient one-off and periodic release strategies are efficacious and simultaneously have profits that are close to the theoretical maximum obtained by strategies optimizing only the profit. When the immigration rate of pests is low to intermediate, one-off control strategies are sufficient, and when the immigration of pests is high, periodic release strategies are preferable. The methods presented here can be extended to more complex scenarios and be used to identify promising biological pest control strategies in many circumstances.
[ { "created": "Fri, 29 Jul 2016 11:42:11 GMT", "version": "v1" }, { "created": "Tue, 31 Jan 2017 15:43:56 GMT", "version": "v2" }, { "created": "Thu, 10 Aug 2017 19:30:50 GMT", "version": "v3" } ]
2017-08-14
[ [ "Lundström", "Niklas L. P.", "" ], [ "Zhang", "Hong", "" ], [ "Brännström", "Åke", "" ] ]
Biological pest control is increasingly used in agriculture as an alternative to traditional chemical pest control. In many cases, this involves a one-off or periodic release of naturally occurring and/or genetically modified enemies such as predators, parasitoids, or pathogens. As the interaction between these enemies and the pest is complex and the production of natural enemies potentially expensive, it is not surprising that both the efficacy and economic viability of biological pest control are debated. Here, we investigate the performance of very simple control strategies. In particular, we show how Pareto-efficient one-off or periodic release strategies, which optimally trade off between efficacy and economic viability, can be devised and used to enable high efficacy at small economic costs. We demonstrate our method on a pest-pathogen-crop model with a tunable immigration rate of pests. By analyzing this model, we demonstrate that simple Pareto-efficient one-off and periodic release strategies are efficacious and simultaneously have profits that are close to the theoretical maximum obtained by strategies optimizing only the profit. When the immigration rate of pests is low to intermediate, one-off control strategies are sufficient, and when the immigration of pests is high, periodic release strategies are preferable. The methods presented here can be extended to more complex scenarios and be used to identify promising biological pest control strategies in many circumstances.
1911.02095
Gowri Nayar
Edward E. Seabolt, Gowri Nayar, Harsha Krishnareddy, Akshay Agarwal, Kristen L. Beck, Ignacio Terrizzano, Eser Kandogan, Mary Roth, Vandana Mukherjee, and James H. Kaufman
IBM Functional Genomics Platform, A Cloud-Based Platform for Studying Microbial Life at Scale
null
null
null
null
q-bio.QM cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The rapid growth in biological sequence data is revolutionizing our understanding of genotypic diversity and challenging conventional approaches to informatics. With the increasing availability of genomic data, traditional bioinformatic tools require substantial computational time and the creation of ever-larger indices each time a researcher seeks to gain insight from the data. To address these challenges, we pre-computed important relationships between biological entities spanning the Central Dogma of Molecular Biology and captured this information in a relational database. The database can be queried across hundreds of millions of entities and returns results in a fraction of the time required by traditional methods. In this paper, we describe \textit{IBM Functional Genomics Platform} (formerly known as OMXWare), a comprehensive database relating genotype to phenotype for bacterial life. Continually updated, IBM Functional Genomics Platform today contains data derived from 200,000 curated, self-consistently assembled genomes. The database stores functional data for over 68 million genes, 52 million proteins, and 239 million domains with associated biological activity annotations from Gene Ontology, KEGG, MetaCyc, and Reactome. IBM Functional Genomics Platform maps all of the many-to-many connections between each biological entity including the originating genome, gene, protein, and protein domain. Various microbial studies, from infectious disease to environmental health, can benefit from the rich data and connections. We describe the data selection, the pipeline to create and update the IBM Functional Genomics Platform, and the developer tools (Python SDK and REST APIs) which allow researchers to efficiently study microbial life at scale.
[ { "created": "Tue, 5 Nov 2019 21:32:25 GMT", "version": "v1" }, { "created": "Sun, 15 Mar 2020 19:29:18 GMT", "version": "v2" }, { "created": "Mon, 30 Mar 2020 23:14:33 GMT", "version": "v3" } ]
2020-04-01
[ [ "Seabolt", "Edward E.", "" ], [ "Nayar", "Gowri", "" ], [ "Krishnareddy", "Harsha", "" ], [ "Agarwal", "Akshay", "" ], [ "Beck", "Kristen L.", "" ], [ "Terrizzano", "Ignacio", "" ], [ "Kandogan", "Eser", "" ], [ "Roth", "Mary", "" ], [ "Mukherjee", "Vandana", "" ], [ "Kaufman", "James H.", "" ] ]
The rapid growth in biological sequence data is revolutionizing our understanding of genotypic diversity and challenging conventional approaches to informatics. With the increasing availability of genomic data, traditional bioinformatic tools require substantial computational time and the creation of ever-larger indices each time a researcher seeks to gain insight from the data. To address these challenges, we pre-computed important relationships between biological entities spanning the Central Dogma of Molecular Biology and captured this information in a relational database. The database can be queried across hundreds of millions of entities and returns results in a fraction of the time required by traditional methods. In this paper, we describe \textit{IBM Functional Genomics Platform} (formerly known as OMXWare), a comprehensive database relating genotype to phenotype for bacterial life. Continually updated, IBM Functional Genomics Platform today contains data derived from 200,000 curated, self-consistently assembled genomes. The database stores functional data for over 68 million genes, 52 million proteins, and 239 million domains with associated biological activity annotations from Gene Ontology, KEGG, MetaCyc, and Reactome. IBM Functional Genomics Platform maps all of the many-to-many connections between each biological entity including the originating genome, gene, protein, and protein domain. Various microbial studies, from infectious disease to environmental health, can benefit from the rich data and connections. We describe the data selection, the pipeline to create and update the IBM Functional Genomics Platform, and the developer tools (Python SDK and REST APIs) which allow researchers to efficiently study microbial life at scale.
2407.11734
Alessandro Palma
Alessandro Palma, Till Richter, Hanyi Zhang, Manuel Lubetzki, Alexander Tong, Andrea Dittadi, Fabian Theis
Generating Multi-Modal and Multi-Attribute Single-Cell Counts with CFGen
28 pages, 12 figures
null
null
null
q-bio.QM cs.LG q-bio.GN
http://creativecommons.org/licenses/by/4.0/
Generative modeling of single-cell RNA-seq data has shown invaluable potential in community-driven tasks such as trajectory inference, batch effect removal and gene expression generation. However, most recent deep models generating synthetic single cells from noise operate on pre-processed continuous gene expression approximations, ignoring the inherently discrete and over-dispersed nature of single-cell data, which limits downstream applications and hinders the incorporation of robust noise models. Moreover, crucial aspects of deep-learning-based synthetic single-cell generation remain underexplored, such as controllable multi-modal and multi-label generation and its role in the performance enhancement of downstream tasks. This work presents Cell Flow for Generation (CFGen), a flow-based conditional generative model for multi-modal single-cell counts, which explicitly accounts for the discrete nature of the data. Our results suggest improved recovery of crucial biological data characteristics while accounting for novel generative tasks such as conditioning on multiple attributes and boosting rare cell type classification via data augmentation. By showcasing CFGen on a diverse set of biological datasets and settings, we provide evidence of its value to the fields of computational biology and deep generative models.
[ { "created": "Tue, 16 Jul 2024 14:05:03 GMT", "version": "v1" } ]
2024-07-17
[ [ "Palma", "Alessandro", "" ], [ "Richter", "Till", "" ], [ "Zhang", "Hanyi", "" ], [ "Lubetzki", "Manuel", "" ], [ "Tong", "Alexander", "" ], [ "Dittadi", "Andrea", "" ], [ "Theis", "Fabian", "" ] ]
Generative modeling of single-cell RNA-seq data has shown invaluable potential in community-driven tasks such as trajectory inference, batch effect removal and gene expression generation. However, most recent deep models generating synthetic single cells from noise operate on pre-processed continuous gene expression approximations, ignoring the inherently discrete and over-dispersed nature of single-cell data, which limits downstream applications and hinders the incorporation of robust noise models. Moreover, crucial aspects of deep-learning-based synthetic single-cell generation remain underexplored, such as controllable multi-modal and multi-label generation and its role in the performance enhancement of downstream tasks. This work presents Cell Flow for Generation (CFGen), a flow-based conditional generative model for multi-modal single-cell counts, which explicitly accounts for the discrete nature of the data. Our results suggest improved recovery of crucial biological data characteristics while accounting for novel generative tasks such as conditioning on multiple attributes and boosting rare cell type classification via data augmentation. By showcasing CFGen on a diverse set of biological datasets and settings, we provide evidence of its value to the fields of computational biology and deep generative models.
2308.15914
Vince Grolmusz
Kinga K. Nagy, Krist\'of Tak\'acs, Imre N\'emeth, B\'alint Varga, Vince Grolmusz, M\'onika Moln\'ar, Be\'ata G. V\'ertessy
Novel enzymes for biodegradation of polycyclic aromatic hydrocarbons: metagenomics-linked identification followed by functional analysis
null
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Polycyclic aromatic hydrocarbons (PAHs) are highly toxic, carcinogenic substances. On soils contaminated with PAHs, crop cultivation, animal husbandry and even the survival of microflora in the soil are greatly perturbed, depending on the degree of contamination. Most microorganisms cannot tolerate PAH-contaminated soils; however, some microbial strains can adapt to these harsh conditions and survive on contaminated soils. Analysis of the metagenomes of contaminated environmental samples may lead to the discovery of PAH-degrading enzymes suitable for green biotechnology methodologies ranging from biocatalysis to pollution control. In the present study, our goal was to apply a metagenomic data search to identify efficient novel enzymes for the remediation of PAH-contaminated soils. The metagenomic hits were further analyzed using a set of bioinformatics tools to select protein sequences predicted to encode well-folded soluble enzymes. Three novel enzymes (two dioxygenases and one peroxidase) were cloned and used in soil remediation microcosm experiments. The novel enzymes were found to be efficient for the degradation of naphthalene and phenanthrene. Adding the inorganic oxidant CaO2 further increased the degrading potential of the novel enzymes for anthracene and pyrene. We conclude that metagenome mining paired with bioinformatic predictions, structural modelling and functional assays constitutes a powerful approach towards novel enzymes for soil remediation.
[ { "created": "Wed, 30 Aug 2023 09:42:03 GMT", "version": "v1" } ]
2023-08-31
[ [ "Nagy", "Kinga K.", "" ], [ "Takács", "Kristóf", "" ], [ "Németh", "Imre", "" ], [ "Varga", "Bálint", "" ], [ "Grolmusz", "Vince", "" ], [ "Molnár", "Mónika", "" ], [ "Vértessy", "Beáta G.", "" ] ]
Polycyclic aromatic hydrocarbons (PAHs) are highly toxic, carcinogenic substances. On soils contaminated with PAHs, crop cultivation, animal husbandry and even the survival of microflora in the soil are greatly perturbed, depending on the degree of contamination. Most microorganisms cannot tolerate PAH-contaminated soils; however, some microbial strains can adapt to these harsh conditions and survive on contaminated soils. Analysis of the metagenomes of contaminated environmental samples may lead to the discovery of PAH-degrading enzymes suitable for green biotechnology methodologies ranging from biocatalysis to pollution control. In the present study, our goal was to apply a metagenomic data search to identify efficient novel enzymes for the remediation of PAH-contaminated soils. The metagenomic hits were further analyzed using a set of bioinformatics tools to select protein sequences predicted to encode well-folded soluble enzymes. Three novel enzymes (two dioxygenases and one peroxidase) were cloned and used in soil remediation microcosm experiments. The novel enzymes were found to be efficient for the degradation of naphthalene and phenanthrene. Adding the inorganic oxidant CaO2 further increased the degrading potential of the novel enzymes for anthracene and pyrene. We conclude that metagenome mining paired with bioinformatic predictions, structural modelling and functional assays constitutes a powerful approach towards novel enzymes for soil remediation.
0902.2020
Michael Famulare
Michael Famulare (University of Washington) and Adrienne L. Fairhall (University of Washington)
Feature selection in simple neurons: how coding depends on spiking dynamics
23 Pages, LaTeX + 4 Figures. v2 is substantially expanded and revised. v3 corrects minor errors in Sec. 3.6
Neural Computation March 2010, Vol. 22, No. 3: 581-598
10.1162/neco.2009.02-09-956
null
q-bio.NC q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The relationship between a neuron's complex inputs and its spiking output defines the neuron's coding strategy. This is frequently and effectively modeled phenomenologically by one or more linear filters that extract the components of the stimulus that are relevant for triggering spikes, and a nonlinear function that relates stimulus to firing probability. In many sensory systems, these two components of the coding strategy are found to adapt to changes in the statistics of the inputs, in such a way as to improve information transmission. Here, we show for two simple neuron models how feature selectivity as captured by the spike-triggered average depends both on the parameters of the model and on the statistical characteristics of the input.
[ { "created": "Thu, 12 Feb 2009 02:37:16 GMT", "version": "v1" }, { "created": "Fri, 26 Jun 2009 22:43:17 GMT", "version": "v2" }, { "created": "Tue, 1 Nov 2011 01:49:42 GMT", "version": "v3" } ]
2011-11-02
[ [ "Famulare", "Michael", "", "University of Washington" ], [ "Fairhall", "Adrienne L.", "", "University of Washington" ] ]
The relationship between a neuron's complex inputs and its spiking output defines the neuron's coding strategy. This is frequently and effectively modeled phenomenologically by one or more linear filters that extract the components of the stimulus that are relevant for triggering spikes, and a nonlinear function that relates stimulus to firing probability. In many sensory systems, these two components of the coding strategy are found to adapt to changes in the statistics of the inputs, in such a way as to improve information transmission. Here, we show for two simple neuron models how feature selectivity as captured by the spike-triggered average depends both on the parameters of the model and on the statistical characteristics of the input.
1306.3371
Peter Csermely
David M. Gyurko, Daniel V. Veres, Dezso Modos, Katalin Lenti, Tamas Korcsmaros and Peter Csermely
Adaptation and learning of molecular networks as a description of cancer development at the systems-level: Potential use in anti-cancer therapies
8 pages, 2 figures, 1 table, 98 references
Seminars in Cancer Biology 23 (2013) 262-269
10.1016/j.semcancer.2013.06.005
null
q-bio.MN nlin.AO physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There is a widening recognition that cancer cells are products of complex developmental processes. Carcinogenesis and metastasis formation are increasingly described as systems-level, network phenomena. Here we propose that malignant transformation is a two-phase process, where an initial increase of system plasticity is followed by a decrease of plasticity at late stages of carcinogenesis, as a model of cellular learning. We describe the hallmarks of increased system plasticity of early, tumor-initiating cells, such as increased noise, entropy, conformational and phenotypic plasticity, physical deformability, cell heterogeneity and network rearrangements. Finally, we argue that the large structural changes of molecular networks during cancer development necessitate rather different targeting strategies in the early and late phases of carcinogenesis. Plastic networks of early-phase cancer development need a central hit, while rigid networks of late-stage primary tumors or established metastases should be attacked by the network influence strategy, such as by edgetic, multi-target, or allo-network drugs. Cancer stem cells need special diagnosis and targeting, since their dormant and rapidly proliferating forms may have more rigid, or more plastic, networks, respectively. The extremely high ability to change their rigidity/plasticity may be a key differentiating hallmark of cancer stem cells. The application of early-stage-optimized anti-cancer drugs to late-stage patients may be one reason for many failures in anti-cancer therapies. The hypotheses presented here underline the need for patient-specific multi-target therapies applying the correct ratio of central hits and network influences -- in an optimized sequence.
[ { "created": "Fri, 14 Jun 2013 11:57:44 GMT", "version": "v1" }, { "created": "Tue, 17 Sep 2013 05:37:04 GMT", "version": "v2" } ]
2013-09-18
[ [ "Gyurko", "David M.", "" ], [ "Veres", "Daniel V.", "" ], [ "Modos", "Dezso", "" ], [ "Lenti", "Katalin", "" ], [ "Korcsmaros", "Tamas", "" ], [ "Csermely", "Peter", "" ] ]
There is a widening recognition that cancer cells are products of complex developmental processes. Carcinogenesis and metastasis formation are increasingly described as systems-level, network phenomena. Here we propose that malignant transformation is a two-phase process, where an initial increase of system plasticity is followed by a decrease of plasticity at late stages of carcinogenesis, as a model of cellular learning. We describe the hallmarks of increased system plasticity of early, tumor-initiating cells, such as increased noise, entropy, conformational and phenotypic plasticity, physical deformability, cell heterogeneity and network rearrangements. Finally, we argue that the large structural changes of molecular networks during cancer development necessitate rather different targeting strategies in the early and late phases of carcinogenesis. Plastic networks of early-phase cancer development need a central hit, while rigid networks of late-stage primary tumors or established metastases should be attacked by the network influence strategy, such as by edgetic, multi-target, or allo-network drugs. Cancer stem cells need special diagnosis and targeting, since their dormant and rapidly proliferating forms may have more rigid, or more plastic, networks, respectively. The extremely high ability to change their rigidity/plasticity may be a key differentiating hallmark of cancer stem cells. The application of early-stage-optimized anti-cancer drugs to late-stage patients may be one reason for many failures in anti-cancer therapies. The hypotheses presented here underline the need for patient-specific multi-target therapies applying the correct ratio of central hits and network influences -- in an optimized sequence.
q-bio/0510030
Ilya M. Nemenman
Kai Wang, Ilya Nemenman, Nilanjana Banerjee, Adam Margolin, Andrea Califano
Genome-wide discovery of modulators of transcriptional interactions in human B lymphocytes
15 pages, 3 figures, 2 tables; minor changes following referees' comments; accepted to RECOMB06
RECOMB'06 Proceedings, LNCS 3909, Springer, 2006
10.1007/11732990
null
q-bio.MN q-bio.GN q-bio.QM
null
Transcriptional interactions in a cell are modulated by a variety of mechanisms that prevent their representation as pure pairwise interactions between a transcription factor and its target(s). These include, among others, transcription factor activation by phosphorylation and acetylation, formation of active complexes with one or more co-factors, and mRNA/protein degradation and stabilization processes. This paper presents a first step towards the systematic, genome-wide computational inference of genes that modulate the interactions of specific transcription factors at the post-transcriptional level. The method uses a statistical test based on changes in the mutual information between a transcription factor and each of its candidate targets, conditional on the expression of a third gene. The approach was first validated on a synthetic network model, and then tested in the context of a mammalian cellular system. By analyzing 254 microarray expression profiles of normal and tumor-related human B lymphocytes, we investigated the post-transcriptional modulators of the MYC proto-oncogene, an important transcription factor involved in tumorigenesis. Our method discovered a set of 100 putative modulator genes, responsible for modulating 205 regulatory relationships between MYC and its targets. The set is significantly enriched in molecules with function consistent with their activities as modulators of cellular interactions, recapitulates established MYC regulation pathways, and provides a notable repertoire of novel regulators of MYC function. The approach has broad applicability and can be used to discover modulators of any other transcription factor, provided that adequate expression profile data are available.
[ { "created": "Sat, 15 Oct 2005 04:26:03 GMT", "version": "v1" }, { "created": "Thu, 2 Mar 2006 06:02:49 GMT", "version": "v2" } ]
2007-05-23
[ [ "Wang", "Kai", "" ], [ "Nemenman", "Ilya", "" ], [ "Banerjee", "Nilanjana", "" ], [ "Margolin", "Adam", "" ], [ "Califano", "Andrea", "" ] ]
Transcriptional interactions in a cell are modulated by a variety of mechanisms that prevent their representation as pure pairwise interactions between a transcription factor and its target(s). These include, among others, transcription factor activation by phosphorylation and acetylation, formation of active complexes with one or more co-factors, and mRNA/protein degradation and stabilization processes. This paper presents a first step towards the systematic, genome-wide computational inference of genes that modulate the interactions of specific transcription factors at the post-transcriptional level. The method uses a statistical test based on changes in the mutual information between a transcription factor and each of its candidate targets, conditional on the expression of a third gene. The approach was first validated on a synthetic network model, and then tested in the context of a mammalian cellular system. By analyzing 254 microarray expression profiles of normal and tumor-related human B lymphocytes, we investigated the post-transcriptional modulators of the MYC proto-oncogene, an important transcription factor involved in tumorigenesis. Our method discovered a set of 100 putative modulator genes, responsible for modulating 205 regulatory relationships between MYC and its targets. The set is significantly enriched in molecules with function consistent with their activities as modulators of cellular interactions, recapitulates established MYC regulation pathways, and provides a notable repertoire of novel regulators of MYC function. The approach has broad applicability and can be used to discover modulators of any other transcription factor, provided that adequate expression profile data are available.
1904.12353
Christian Wiwie
Christian Wiwie, Richard R\"ottger, Jan Baumbach
TiCoNE 2: A Composite Clustering Model for Robust Cluster Analyses on Noisy Data
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Identifying groups of similar objects using clustering approaches is one of the most frequently employed first steps in exploratory biomedical data analysis. Many clustering methods have been developed that pursue different strategies to identify the optimal clustering for a data set. We previously published TiCoNE, an interactive clustering approach coupled with de-novo network enrichment of identified clusters. However, in this first version time-series and network analysis remained two separate steps, in that only time-series data was clustered, and identified clusters were mapped to and enriched within a network in a second separate step. In this work, we present TiCoNE 2: an extension that can now seamlessly incorporate multiple data types within its composite clustering model. Systematic evaluation on 50 random data sets, as well as on 2,400 data sets containing enriched cluster structure and varying levels of noise, shows that our approach is able to successfully recover cluster patterns embedded in random data and that it is more robust towards noise than non-composite models using only one data type, when applied to two data types simultaneously. Herein, each data set was clustered using five different similarity functions into k=10/30 clusters, resulting in ~5,000 clusterings in total. We evaluated the quality of each derived clustering with the Jaccard index and an internal validity score. We used TiCoNE to calculate empirical p-values for all generated clusters with different permutation functions, resulting in ~80,000 cluster p-values. We show that derived p-values can be used to reliably distinguish between foreground and background clusters. TiCoNE 2 allows researchers to seamlessly analyze time-series data together with biological interaction networks in an intuitive way and thereby provides more robust results than single data type cluster analyses.
[ { "created": "Sun, 28 Apr 2019 17:47:13 GMT", "version": "v1" } ]
2019-04-30
[ [ "Wiwie", "Christian", "" ], [ "Röttger", "Richard", "" ], [ "Baumbach", "Jan", "" ] ]
Identifying groups of similar objects using clustering approaches is one of the most frequently employed first steps in exploratory biomedical data analysis. Many clustering methods have been developed that pursue different strategies to identify the optimal clustering for a data set. We previously published TiCoNE, an interactive clustering approach coupled with de-novo network enrichment of identified clusters. However, in this first version time-series and network analysis remained two separate steps, in that only time-series data was clustered, and identified clusters were mapped to and enriched within a network in a second separate step. In this work, we present TiCoNE 2: an extension that can now seamlessly incorporate multiple data types within its composite clustering model. Systematic evaluation on 50 random data sets, as well as on 2,400 data sets containing enriched cluster structure and varying levels of noise, shows that our approach is able to successfully recover cluster patterns embedded in random data and that it is more robust towards noise than non-composite models using only one data type, when applied to two data types simultaneously. Herein, each data set was clustered using five different similarity functions into k=10/30 clusters, resulting in ~5,000 clusterings in total. We evaluated the quality of each derived clustering with the Jaccard index and an internal validity score. We used TiCoNE to calculate empirical p-values for all generated clusters with different permutation functions, resulting in ~80,000 cluster p-values. We show that derived p-values can be used to reliably distinguish between foreground and background clusters. TiCoNE 2 allows researchers to seamlessly analyze time-series data together with biological interaction networks in an intuitive way and thereby provides more robust results than single data type cluster analyses.
1905.12601
Nithin Nagaraj
Harikrishnan N B and Nithin Nagaraj
A Novel Chaos Theory Inspired Neuronal Architecture
6 pages, 5 figures. This is a pre-print version of the manuscript which we will be submitting soon to an international conference
null
null
null
q-bio.NC cs.LG cs.NE physics.data-an stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The practical success of widely used machine learning (ML) and deep learning (DL) algorithms in the Artificial Intelligence (AI) community owes to the availability of large datasets for training and huge computational resources. Despite the enormous practical success of AI, these algorithms are only loosely inspired by the biological brain and do not mimic any of the fundamental properties of neurons in the brain, one such property being the chaotic firing of biological neurons. This motivates us to develop a novel neuronal architecture where the individual neurons are intrinsically chaotic in nature. By making use of the topological transitivity property of chaos, our neuronal network is able to perform classification tasks with a very small number of training samples. For the MNIST dataset, with as low as $0.1 \%$ of the total training data, our method outperforms ML and matches DL in classification accuracy for up to $7$ training samples/class. For the Iris dataset, our accuracy is comparable with ML algorithms, and even with just two training samples/class, we report an accuracy as high as $95.8 \%$. This work highlights the effectiveness of chaos and its properties for learning and paves the way for chaos-inspired neuronal architectures by closely mimicking the chaotic nature of neurons in the brain.
[ { "created": "Sun, 19 May 2019 07:45:57 GMT", "version": "v1" } ]
2019-05-30
[ [ "B", "Harikrishnan N", "" ], [ "Nagaraj", "Nithin", "" ] ]
The practical success of widely used machine learning (ML) and deep learning (DL) algorithms in the Artificial Intelligence (AI) community owes to the availability of large datasets for training and huge computational resources. Despite the enormous practical success of AI, these algorithms are only loosely inspired by the biological brain and do not mimic any of the fundamental properties of neurons in the brain, one such property being the chaotic firing of biological neurons. This motivates us to develop a novel neuronal architecture where the individual neurons are intrinsically chaotic in nature. By making use of the topological transitivity property of chaos, our neuronal network is able to perform classification tasks with a very small number of training samples. For the MNIST dataset, with as low as $0.1 \%$ of the total training data, our method outperforms ML and matches DL in classification accuracy for up to $7$ training samples/class. For the Iris dataset, our accuracy is comparable with ML algorithms, and even with just two training samples/class, we report an accuracy as high as $95.8 \%$. This work highlights the effectiveness of chaos and its properties for learning and paves the way for chaos-inspired neuronal architectures by closely mimicking the chaotic nature of neurons in the brain.
1711.01330
David Holcman
Kanishka Basnayake, Claire Guerrier, Zeev Schuss, David Holcman
Asymptotics of extreme statistics of escape time in 1, 2 and 3-dimensional diffusions
6 figs
null
10.1007/s00332-018-9493-7
null
q-bio.SC math.PR physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The first of $N$ independent and identically distributed (i.i.d.) Brownian trajectories that arrives at a small target sets the time scale of activation, which in general is much faster than the arrival at the target of only a single trajectory. Analytical asymptotic expressions for the minimal time are notoriously difficult to compute in general geometries. We derive here asymptotic laws for the probability density function of the first and second arrival times of a large number of i.i.d. Brownian trajectories at a small target in 1, 2, and 3 dimensions and study their range of validity by stochastic simulations. The results are applied to the activation of biochemical pathways in cellular transduction.
[ { "created": "Sun, 29 Oct 2017 13:52:00 GMT", "version": "v1" } ]
2018-10-17
[ [ "Basnayake", "Kanishka", "" ], [ "Guerrier", "Claire", "" ], [ "Schuss", "Zeev", "" ], [ "Holcman", "David", "" ] ]
The first of $N$ independent and identically distributed (i.i.d.) Brownian trajectories to arrive at a small target sets the time scale of activation, which in general is much faster than the arrival of a single trajectory. Analytical asymptotic expressions for this minimal time are notoriously difficult to compute in general geometries. We derive here asymptotic laws for the probability density function of the first and second arrival times of a large number of i.i.d. Brownian trajectories to a small target in 1, 2, and 3 dimensions and study their range of validity by stochastic simulations. The results are applied to the activation of biochemical pathways in cellular transduction.
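The effect summarised above is easy to reproduce numerically: simulate N independent Brownian particles between a reflecting boundary at 0 and an absorbing target at L, and record the arrival time of the fastest one. The sketch below is an illustrative Monte Carlo only (the time step, trial counts and parameters are arbitrary choices), not the asymptotic analysis of the record.

```python
# Illustrative Monte Carlo for the fastest of N i.i.d. Brownian searchers (1D).
import numpy as np

def first_passage_time(L=1.0, D=1.0, dt=1e-3, rng=None):
    """Time for one particle started at 0 (reflecting) to reach L (absorbing)."""
    rng = rng or np.random.default_rng()
    x, t, sigma = 0.0, 0.0, np.sqrt(2.0 * D * dt)
    while x < L:
        x = abs(x + sigma * rng.standard_normal())  # abs() implements reflection at 0
        t += dt
    return t

def mean_extreme_time(N, trials=50, seed=1):
    """Mean over trials of the minimum arrival time among N independent particles."""
    rng = np.random.default_rng(seed)
    return np.mean([min(first_passage_time(rng=rng) for _ in range(N))
                    for _ in range(trials)])

for N in (1, 10, 50):
    print(N, mean_extreme_time(N))
# The single-particle mean here is L^2/(2D) = 0.5; the mean of the minimum over N
# particles decays roughly like L^2/(4 D ln N) for large N.
```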
0809.1063
Gene-Wei Li
Gene-Wei Li, Otto G. Berg, and Johan Elf
Target Location by DNA-Binding Proteins: Effects of Roadblocks and DNA Looping
8 pages, 6 figures
null
null
null
q-bio.QM q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The model of facilitated diffusion describes how DNA-binding proteins, such as transcription factors (TFs), find their chromosomal targets by combining 3D diffusion through the cytoplasm and 1D sliding along nonspecific DNA sequences. The redundant 1D diffusion near the specific binding site extends the target size and facilitates target location. While this model successfully predicts the kinetics measured in test tubes, it has not been extended to account for the highly crowded environment in living cells. Here, we investigate the effect of other DNA-binding proteins that partially occupy the bacterial chromosome. We show how they would slow down the search process, mainly through restricted sliding near the target. This implies that increasing the overall DNA-binding protein concentration would have a marginal effect in reducing the search time, because any additional proteins would restrict the search process for each other. While the presence of other proteins prevents sliding from the flanking DNA, DNA looping provides an alternative path to transfer from neighboring sites to the target efficiently. We propose that, when looping is faster than the initial search process, the auxiliary binding sites further extend the effective target region and therefore facilitate the target location.
[ { "created": "Fri, 5 Sep 2008 15:48:57 GMT", "version": "v1" } ]
2008-09-08
[ [ "Li", "Gene-Wei", "" ], [ "Berg", "Otto G.", "" ], [ "Elf", "Johan", "" ] ]
The model of facilitated diffusion describes how DNA-binding proteins, such as transcription factors (TFs), find their chromosomal targets by combining 3D diffusion through the cytoplasm and 1D sliding along nonspecific DNA sequences. The redundant 1D diffusion near the specific binding site extends the target size and facilitates target location. While this model successfully predicts the kinetics measured in test tubes, it has not been extended to account for the highly crowded environment in living cells. Here, we investigate the effect of other DNA-binding proteins that partially occupy the bacterial chromosome. We show how they would slow down the search process, mainly through restricted sliding near the target. This implies that increasing the overall DNA-binding protein concentration would have a marginal effect in reducing the search time, because any additional proteins would restrict the search process for each other. While the presence of other proteins prevents sliding from the flanking DNA, DNA looping provides an alternative path to transfer from neighboring sites to the target efficiently. We propose that, when looping is faster than the initial search process, the auxiliary binding sites further extend the effective target region and therefore facilitate the target location.
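The qualitative effect of roadblocks on facilitated diffusion can be illustrated with a toy lattice simulation, sketched below under strong simplifying assumptions (a ring of M sites, a single searcher, reflecting roadblocks, and relocation to a uniformly random site standing in for the 3D excursions); it is not the model analysed in the record, and all parameters are invented.

```python
# Toy lattice model of facilitated diffusion with roadblocks (illustration only).
import random

def search_time(M=2000, target=0, p_unbind=0.01, blocked_frac=0.1, seed=0, max_steps=10**7):
    rng = random.Random(seed)
    blocked = set(rng.sample([i for i in range(M) if i != target],
                             int(blocked_frac * M)))
    pos = rng.randrange(M)
    for step in range(1, max_steps + 1):
        if pos == target:
            return step
        if rng.random() < p_unbind:            # unbind and rebind anywhere ("3D" move)
            pos = rng.randrange(M)
        else:                                   # slide one site ("1D" move)
            nxt = (pos + rng.choice((-1, 1))) % M
            if nxt not in blocked:              # roadblocks reflect the slider
                pos = nxt
    return max_steps

for frac in (0.0, 0.1, 0.3):
    times = [search_time(blocked_frac=frac, seed=s) for s in range(20)]
    print(f"roadblock fraction {frac}: mean search time {sum(times) / len(times):.0f} steps")
```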
1605.06108
Wieland Marth
Wieland Marth, Axel Voigt
Collective migration under hydrodynamic interactions -- a computational approach
null
null
null
null
q-bio.CB cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Substrate-based cell motility is essential for fundamental biological processes, such as tissue growth, wound healing and immune response. Even though a comprehensive understanding of this motility mode remains elusive, progress has been achieved in its modeling using a whole-cell physical model. The model takes into account the main mechanisms of cell motility - actin polymerization, substrate-mediated adhesion and actin-myosin dynamics - and combines them with steric cell-cell and hydrodynamic interactions. The model predicts the onset of collective cell migration, which emerges spontaneously as a result of inelastic collisions of neighboring cells. Each cell, modeled here as an active polar gel, is accompanied by two vortices when it moves. Upon collision of two cells, the two vortices that come close to each other annihilate. This leads to a rotation of the cells and, together with the deformation and the reorientation of the actin filaments in each cell, induces alignment of these cells and leads to persistent translational collective migration. The effect for low Reynolds numbers is as strong as in the non-hydrodynamic model, but it decreases with increasing Reynolds number.
[ { "created": "Thu, 19 May 2016 11:40:46 GMT", "version": "v1" } ]
2016-05-23
[ [ "Marth", "Wieland", "" ], [ "Voigt", "Axel", "" ] ]
Substrate-based cell motility is essential for fundamental biological processes, such as tissue growth, wound healing and immune response. Even though a comprehensive understanding of this motility mode remains elusive, progress has been achieved in its modeling using a whole-cell physical model. The model takes into account the main mechanisms of cell motility - actin polymerization, substrate-mediated adhesion and actin-myosin dynamics - and combines them with steric cell-cell and hydrodynamic interactions. The model predicts the onset of collective cell migration, which emerges spontaneously as a result of inelastic collisions of neighboring cells. Each cell, modeled here as an active polar gel, is accompanied by two vortices when it moves. Upon collision of two cells, the two vortices that come close to each other annihilate. This leads to a rotation of the cells and, together with the deformation and the reorientation of the actin filaments in each cell, induces alignment of these cells and leads to persistent translational collective migration. The effect for low Reynolds numbers is as strong as in the non-hydrodynamic model, but it decreases with increasing Reynolds number.
0811.3507
Noa Sela
Maayan Amit, Noa Sela, Hadas Keren, Zeev Melamed, Inna Muler, Noam Shomron, Shai Izraeli, Gil Ast
Biased exonization of transposed elements in duplicated genes: A lesson from the TIF-IA gene
null
BMC Molecular Biology 2007, 8:109
10.1186/1471-2199-8-109
null
q-bio.GN q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: Gene duplication and exonization of intronic transposed elements are two mechanisms that enhance genomic diversity. We examined whether there is less selection against exonization of transposed elements in duplicated genes than in single-copy genes. Results: Genome-wide analysis of exonization of transposed elements revealed a higher rate of exonization within duplicated genes relative to single-copy genes. The gene for TIF-IA, an RNA polymerase I transcription initiation factor, underwent a humanoid-specific triplication; all three copies of the gene are transcriptionally active, although only one copy retains the ability to generate the TIF-IA protein. Prior to TIF-IA triplication, an Alu element was inserted into the first intron. In one of the non-protein coding copies, this Alu is exonized. We identified a single point mutation leading to exonization in one of the gene duplicates. When this mutation was introduced into the TIF-IA coding copy, exonization was activated and the level of the protein-coding mRNA was reduced substantially. A very low level of exonization was detected in normal human cells. However, this exonization was abundant in most leukemia cell lines evaluated, although the genomic sequence is unchanged in these cancerous cells compared to normal cells. Conclusion: The definition of the Alu element within the TIF-IA gene as an exon is restricted to certain types of cancers; the element is not exonized in normal human cells. These results further our understanding of the delicate interplay between gene duplication and alternative splicing and of the molecular evolutionary mechanisms leading to genetic innovations. This implies the existence of purifying selection against exonization in single-copy genes, with duplicate genes free from such constraints.
[ { "created": "Fri, 21 Nov 2008 10:45:05 GMT", "version": "v1" } ]
2008-11-24
[ [ "Amit", "Maayan", "" ], [ "Sela", "Noa", "" ], [ "Keren", "Hadas", "" ], [ "Melamed", "Zeev", "" ], [ "Muler", "Inna", "" ], [ "Shomron", "Noam", "" ], [ "Izraeli", "Shai", "" ], [ "Ast", "Gil", "" ] ]
Background: Gene duplication and exonization of intronic transposed elements are two mechanisms that enhance genomic diversity. We examined whether there is less selection against exonization of transposed elements in duplicated genes than in single-copy genes. Results: Genome-wide analysis of exonization of transposed elements revealed a higher rate of exonization within duplicated genes relative to single-copy genes. The gene for TIF-IA, an RNA polymerase I transcription initiation factor, underwent a humanoid-specific triplication; all three copies of the gene are transcriptionally active, although only one copy retains the ability to generate the TIF-IA protein. Prior to TIF-IA triplication, an Alu element was inserted into the first intron. In one of the non-protein coding copies, this Alu is exonized. We identified a single point mutation leading to exonization in one of the gene duplicates. When this mutation was introduced into the TIF-IA coding copy, exonization was activated and the level of the protein-coding mRNA was reduced substantially. A very low level of exonization was detected in normal human cells. However, this exonization was abundant in most leukemia cell lines evaluated, although the genomic sequence is unchanged in these cancerous cells compared to normal cells. Conclusion: The definition of the Alu element within the TIF-IA gene as an exon is restricted to certain types of cancers; the element is not exonized in normal human cells. These results further our understanding of the delicate interplay between gene duplication and alternative splicing and of the molecular evolutionary mechanisms leading to genetic innovations. This implies the existence of purifying selection against exonization in single-copy genes, with duplicate genes free from such constraints.
1609.04926
Alan D. Rendall
Alan D. Rendall
A Calvin bestiary
20 pages
null
null
null
q-bio.MN math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper compares a number of mathematical models for the Calvin cycle of photosynthesis and presents theorems on the existence and stability of steady states of these models. Results on five-variable models in the literature are surveyed. Next a number of larger models related to one introduced by Pettersson and Ryde-Pettersson are discussed. The mathematical nature of this model is clarified, showing that it is naturally defined as a system of differential-algebraic equations. It is proved that there are choices of parameters for which this model admits more than one positive steady state. This is done by analysing the limit where the storage of sugars from the cycle as starch is shut down. There is also a discussion of the minimal models for the cycle due to Hahn.
[ { "created": "Fri, 16 Sep 2016 07:30:26 GMT", "version": "v1" } ]
2016-09-19
[ [ "Rendall", "Alan D.", "" ] ]
This paper compares a number of mathematical models for the Calvin cycle of photosynthesis and presents theorems on the existence and stability of steady states of these models. Results on five-variable models in the literature are surveyed. Next a number of larger models related to one introduced by Pettersson and Ryde-Pettersson are discussed. The mathematical nature of this model is clarified, showing that it is naturally defined as a system of differential-algebraic equations. It is proved that there are choices of parameters for which this model admits more than one positive steady state. This is done by analysing the limit where the storage of sugars from the cycle as starch is shut down. There is also a discussion of the minimal models for the cycle due to Hahn.
1012.3801
Leonid Perlovsky
Leonid Perlovsky
Beauty and Art. Cognitive Function, Evolution, and Mathematical Models of the Mind
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper discusses relationships between aesthetics theory and mathematical models of mind. Mathematical theory describes abilities for concepts, emotions, instincts, imagination, adaptation, learning, cognition, language, approximate hierarchy of the mind and evolution of these abilities. The knowledge instinct is the foundation of higher mental abilities and aesthetic emotions. Aesthetic emotions are present in every act of perception and cognition, and at the top of the mind hierarchy they become emotions of the beautiful. The learning ability is essential to everyday perception and cognition as well as to the historical development of understanding of the meaning of life. I discuss a controversy surrounding this issue. Conclusions based on cognitive and mathematical models confirm that judgments of taste are at once subjective and objective, and I discuss what it means. The paper relates cognitive and mathematical concepts to those of philosophy and aesthetics, from Plato to our days, clarifies cognitive mechanisms and functions of the beautiful, and resolves many difficulties of contemporary aesthetics.
[ { "created": "Fri, 17 Dec 2010 03:18:55 GMT", "version": "v1" } ]
2010-12-20
[ [ "Perlovsky", "Leonid", "" ] ]
The paper discusses relationships between aesthetics theory and mathematical models of mind. Mathematical theory describes abilities for concepts, emotions, instincts, imagination, adaptation, learning, cognition, language, approximate hierarchy of the mind and evolution of these abilities. The knowledge instinct is the foundation of higher mental abilities and aesthetic emotions. Aesthetic emotions are present in every act of perception and cognition, and at the top of the mind hierarchy they become emotions of the beautiful. The learning ability is essential to everyday perception and cognition as well as to the historical development of understanding of the meaning of life. I discuss a controversy surrounding this issue. Conclusions based on cognitive and mathematical models confirm that judgments of taste are at once subjective and objective, and I discuss what it means. The paper relates cognitive and mathematical concepts to those of philosophy and aesthetics, from Plato to our days, clarifies cognitive mechanisms and functions of the beautiful, and resolves many difficulties of contemporary aesthetics.
2203.15635
Fabricio Martins Lopes
Murilo Montanini Breve, Matheus Henrique Pimenta-Zanon and Fabr\'icio Martins Lopes
BASiNETEntropy: an alignment-free method for classification of biological sequences through complex networks and entropy maximization
null
null
null
null
q-bio.MN cs.CE cs.IT cs.LG math.IT q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The discovery of nucleic acids and the structure of DNA have brought considerable advances in the understanding of life. The development of next-generation sequencing technologies has led to a large-scale generation of data, for which computational methods have become essential for analysis and knowledge discovery. In particular, RNAs have received much attention because of the diversity of their functionalities in the organism and the discoveries of different classes with different functions in many biological processes. Therefore, the correct identification of RNA sequences is increasingly important to provide relevant information to understand the functioning of organisms. This work addresses this context by presenting a new method for the classification of biological sequences through complex networks and entropy maximization. The maximum entropy principle is proposed to identify the most informative edges about the RNA class, generating a filtered complex network. The proposed method was evaluated in the classification of different RNA classes from 13 species. The proposed method was compared to the PLEK, CPC2 and BASiNET methods, outperforming all compared methods. BASiNETEntropy classified all RNA sequences with high accuracy and low standard deviation in the results, showing assertiveness and robustness. The proposed method is implemented as open source in the R language and is freely available at https://cran.r-project.org/web/packages/BASiNETEntropy.
[ { "created": "Thu, 24 Mar 2022 14:19:43 GMT", "version": "v1" } ]
2022-03-30
[ [ "Breve", "Murilo Montanini", "" ], [ "Pimenta-Zanon", "Matheus Henrique", "" ], [ "Lopes", "Fabrício Martins", "" ] ]
The discovery of nucleic acids and the structure of DNA have brought considerable advances in the understanding of life. The development of next-generation sequencing technologies has led to a large-scale generation of data, for which computational methods have become essential for analysis and knowledge discovery. In particular, RNAs have received much attention because of the diversity of their functionalities in the organism and the discoveries of different classes with different functions in many biological processes. Therefore, the correct identification of RNA sequences is increasingly important to provide relevant information to understand the functioning of organisms. This work addresses this context by presenting a new method for the classification of biological sequences through complex networks and entropy maximization. The maximum entropy principle is proposed to identify the most informative edges about the RNA class, generating a filtered complex network. The proposed method was evaluated in the classification of different RNA classes from 13 species. The proposed method was compared to the PLEK, CPC2 and BASiNET methods, outperforming all compared methods. BASiNETEntropy classified all RNA sequences with high accuracy and low standard deviation in the results, showing assertiveness and robustness. The proposed method is implemented as open source in the R language and is freely available at https://cran.r-project.org/web/packages/BASiNETEntropy.
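To make the network-plus-entropy idea above concrete, the sketch below maps each sequence to a directed network of overlapping k-mer transitions and keeps the edges whose class-conditional distribution has the lowest Shannon entropy, i.e. a simple class-purity criterion. This is a much cruder selection rule than the maximum entropy principle used by BASiNETEntropy, and the k-mer size, the number of edges kept and the toy sequences are invented assumptions.

```python
# Rough sketch of k-mer transition networks with entropy-based edge filtering.
from collections import Counter
from math import log2

def edges(seq, k=3):
    """Directed edges between consecutive overlapping k-mers of one RNA sequence."""
    kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    return list(zip(kmers, kmers[1:]))

def edge_class_entropy(class_counts):
    """Shannon entropy of the class distribution of a single edge (low = class-specific)."""
    total = sum(class_counts)
    probs = [c / total for c in class_counts if c > 0]
    return -sum(p * log2(p) for p in probs)

def informative_edges(seqs_by_class, k=3, keep=20):
    counts = {}                                   # edge -> per-class occurrence counts
    classes = sorted(seqs_by_class)
    for ci, c in enumerate(classes):
        for seq in seqs_by_class[c]:
            for e in edges(seq, k):
                counts.setdefault(e, [0] * len(classes))[ci] += 1
    ranked = sorted(counts, key=lambda e: edge_class_entropy(counts[e]))
    return ranked[:keep]                          # lowest-entropy (most class-specific) edges

# Toy usage with two made-up classes of short sequences:
data = {"lncRNA": ["AUGGCAUGGCAUGGC", "GGCAUGGCAUGG"],
        "mRNA":   ["AUGAAACCCGGGUAA", "AUGCCCAAAGGGUAA"]}
print(informative_edges(data, keep=5))
```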
1904.05024
Mohammad Nami
Fatemeh Torkamani, Farshad Nazaraghaie, Mohammad Nami
Geometric Meditation-Based Cognitive Behavioral Therapy in Obsessive-Compulsive Disorder: A Case Study
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Obsessive-Compulsive Disorder (OCD), characterized by unwanted and distressing intrusive thoughts, images, urges, doubts, ideas or sensations and by repetitive mental or behavioral acts, is regarded as an overwhelming mental disorder. Over the past few years, several studies have indicated how mindfulness-based interventions may be used effectively to remediate OCD symptoms, on the basis of which such methods are endorsed as effective complementary or alternative options for OCD. The present pilot investigation assessed the effectiveness of Geometric Meditation-based Cognitive Behavior Therapy (GM-CBT) as a novel integrated approach to alleviate OCD symptoms. Accordingly, an eight-week treatment program (90-minute sessions per week) in a single case of intractable OCD was found to result in a significant reduction in OCD symptoms, anxiety and depression, as well as increased mindfulness skills and improvements in secondary outcomes. A three-month post-treatment follow-up suggested long-lasting beneficial effects. Such a pilot model may receive further endorsement as a holistic CBT approach for OCD.
[ { "created": "Wed, 10 Apr 2019 06:58:59 GMT", "version": "v1" } ]
2019-04-11
[ [ "Torkamani", "Fatemeh", "" ], [ "Nazaraghaie", "Farshad", "" ], [ "Nami", "Mohammad", "" ] ]
Obsessive-Compulsive Disorder (OCD), characterized by unwanted and distressing intrusive thoughts, images, urges, doubts, ideas or sensations and by repetitive mental or behavioral acts, is regarded as an overwhelming mental disorder. Over the past few years, several studies have indicated how mindfulness-based interventions may be used effectively to remediate OCD symptoms, on the basis of which such methods are endorsed as effective complementary or alternative options for OCD. The present pilot investigation assessed the effectiveness of Geometric Meditation-based Cognitive Behavior Therapy (GM-CBT) as a novel integrated approach to alleviate OCD symptoms. Accordingly, an eight-week treatment program (90-minute sessions per week) in a single case of intractable OCD was found to result in a significant reduction in OCD symptoms, anxiety and depression, as well as increased mindfulness skills and improvements in secondary outcomes. A three-month post-treatment follow-up suggested long-lasting beneficial effects. Such a pilot model may receive further endorsement as a holistic CBT approach for OCD.
q-bio/0407041
Aleksandra Walczak
Aleksandra M. Walczak, Masaki Sasai and Peter G. Wolynes
Self consistent proteomic field theory of stochastic gene switches
29 pages, 40 figures
null
10.1529/biophysj.104.050666
null
q-bio.MN cond-mat.stat-mech q-bio.GN
null
We present a self-consistent field approximation to the problem of the genetic switch composed of two mutually repressing/activating genes. The protein and DNA state dynamics are treated stochastically and on equal footing. In this approach the mean influence of the proteomic cloud created by one gene on the action of another is self-consistently computed. Within this approximation a broad range of stochastic genetic switches may be solved exactly in terms of finding the probability distribution and its moments. A much larger class of problems, such as genetic networks and cascades also remain exactly solvable with this approximation. We discuss in depth certain specific types of basic switches, which are used by biological systems and compare their behavior to the expectation for a deterministic switch.
[ { "created": "Fri, 30 Jul 2004 16:38:44 GMT", "version": "v1" } ]
2009-11-10
[ [ "Walczak", "Aleksandra M.", "" ], [ "Sasai", "Masaki", "" ], [ "Wolynes", "Peter G.", "" ] ]
We present a self-consistent field approximation to the problem of the genetic switch composed of two mutually repressing/activating genes. The protein and DNA state dynamics are treated stochastically and on equal footing. In this approach the mean influence of the proteomic cloud created by one gene on the action of another is self-consistently computed. Within this approximation a broad range of stochastic genetic switches may be solved exactly in terms of finding the probability distribution and its moments. A much larger class of problems, such as genetic networks and cascades also remain exactly solvable with this approximation. We discuss in depth certain specific types of basic switches, which are used by biological systems and compare their behavior to the expectation for a deterministic switch.
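The record above treats the two-gene switch analytically with a self-consistent field approximation. A common numerical counterpart for the same kind of system is a Gillespie stochastic simulation of an exclusive toggle switch, sketched below with made-up rate constants; it is offered only as orientation on the underlying stochastic dynamics, not as the method of the record.

```python
# Gillespie simulation of a generic two-gene mutually repressing (toggle) switch.
# Rates are arbitrary illustrative values, not parameters from the record above.
import numpy as np

def gillespie_toggle(t_end=500.0, k_prod=5.0, k_deg=0.05, k_on=0.1, k_off=0.5, seed=0):
    rng = np.random.default_rng(seed)
    nA, nB = 0, 0          # protein copy numbers
    freeA, freeB = 1, 1    # 1 if the gene's operator is unoccupied by the other protein
    t, traj = 0.0, []
    while t < t_end:
        rates = np.array([
            k_prod * freeA,          # 0: produce A (only if gene A is not repressed)
            k_prod * freeB,          # 1: produce B
            k_deg * nA,              # 2: degrade A
            k_deg * nB,              # 3: degrade B
            k_on * nB * freeA,       # 4: a B protein binds the operator of gene A
            k_off * (1 - freeA),     # 5: B unbinds from gene A
            k_on * nA * freeB,       # 6: an A protein binds the operator of gene B
            k_off * (1 - freeB),     # 7: A unbinds from gene B
        ])
        total = rates.sum()
        if total == 0:
            break
        t += rng.exponential(1.0 / total)
        r = rng.choice(8, p=rates / total)
        if   r == 0: nA += 1
        elif r == 1: nB += 1
        elif r == 2: nA -= 1
        elif r == 3: nB -= 1
        elif r == 4: nB -= 1; freeA = 0   # bound repressor is sequestered
        elif r == 5: nB += 1; freeA = 1
        elif r == 6: nA -= 1; freeB = 0
        elif r == 7: nA += 1; freeB = 1
        traj.append((t, nA, nB))
    return traj

traj = gillespie_toggle()
print("final state (t, nA, nB):", traj[-1])
```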
2305.09513
Lewis Geer
Lewis Y. Geer, Joel Lapin, Douglas J. Slotta, Tytus D. Mak, Stephen E. Stein
AIomics: exploring more of the proteome using mass spectral libraries extended by AI
null
null
null
null
q-bio.QM
http://creativecommons.org/publicdomain/zero/1.0/
The unbounded permutations of biological molecules, including proteins and their constituent peptides, present a dilemma in identifying the components of complex biosamples. Sequence search algorithms used to identify peptide spectra can be expanded to cover larger classes of molecules, including more modifications, isoforms, and atypical cleavage, but at the cost of false positives or false negatives due to the simplified spectra they compute from sequence records. Spectral library searching can help solve this issue by precisely matching experimental spectra to library spectra with excellent sensitivity and specificity. However, compiling spectral libraries that span entire proteomes is pragmatically difficult. Neural networks that predict complete spectra containing a full range of annotated and unannotated ions can be used to replace these simplified spectra with libraries of fully predicted spectra, including modified peptides. Using such a network, we created predicted spectral libraries that were used to rescore matches from a sequence search done over a large search space, including a large number of modifications. Rescoring improved the separation of true and false hits by 82%, yielding an 8% increase in peptide identifications, including a 21% increase in nonspecifically cleaved peptides and a 17% increase in phosphopeptides.
[ { "created": "Tue, 16 May 2023 15:04:40 GMT", "version": "v1" } ]
2023-05-17
[ [ "Geer", "Lewis Y.", "" ], [ "Lapin", "Joel", "" ], [ "Slotta", "Douglas J.", "" ], [ "Mak", "Tytus D.", "" ], [ "Stein", "Stephen E.", "" ] ]
The unbounded permutations of biological molecules, including proteins and their constituent peptides, present a dilemma in identifying the components of complex biosamples. Sequence search algorithms used to identify peptide spectra can be expanded to cover larger classes of molecules, including more modifications, isoforms, and atypical cleavage, but at the cost of false positives or false negatives due to the simplified spectra they compute from sequence records. Spectral library searching can help solve this issue by precisely matching experimental spectra to library spectra with excellent sensitivity and specificity. However, compiling spectral libraries that span entire proteomes is pragmatically difficult. Neural networks that predict complete spectra containing a full range of annotated and unannotated ions can be used to replace these simplified spectra with libraries of fully predicted spectra, including modified peptides. Using such a network, we created predicted spectral libraries that were used to rescore matches from a sequence search done over a large search space, including a large number of modifications. Rescoring improved the separation of true and false hits by 82%, yielding an 8% increase in peptide identifications, including a 21% increase in nonspecifically cleaved peptides and a 17% increase in phosphopeptides.
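The core operation in spectral library searching, matching an experimental spectrum against library spectra, is commonly scored with a normalized dot product (cosine) over binned m/z values. The sketch below illustrates only that generic scoring step with invented spectra and peptide names; it is not the pipeline, network or scoring function of the record above.

```python
# Generic cosine (normalized dot product) scoring of binned mass spectra.
import numpy as np

def bin_spectrum(mz, intensity, bin_width=1.0, max_mz=2000.0):
    """Sum intensities into fixed-width m/z bins and L2-normalize the result."""
    vec = np.zeros(int(max_mz / bin_width))
    for m, i in zip(mz, intensity):
        idx = int(m / bin_width)
        if idx < len(vec):
            vec[idx] += i
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def cosine_score(spec_a, spec_b):
    return float(bin_spectrum(*spec_a) @ bin_spectrum(*spec_b))

# Invented example spectra: (m/z list, intensity list); peptide names are placeholders.
experimental = ([147.1, 262.1, 375.2, 490.3], [1.0, 0.6, 0.8, 0.3])
library = {
    "PEPTIDEA": ([147.1, 262.1, 375.2], [0.9, 0.7, 0.8]),
    "PEPTIDEB": ([120.0, 233.1, 346.2], [1.0, 1.0, 1.0]),
}
scores = {name: cosine_score(experimental, spec) for name, spec in library.items()}
print(max(scores, key=scores.get), scores)
```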
1304.1385
Yves Dehouck
Yves Dehouck and Alexander S. Mikhailov
Effective harmonic potentials: insights into the internal cooperativity and sequence-specificity of protein dynamics
10 pages, 5 figures, 1 table ; Supplementary Material (11 pages, 7 figures, 1 table) ; 4 Supplementary tables as plain text files
PLoS Comput. Biol. 9 (2013) e1003209
10.1371/journal.pcbi.1003209
null
q-bio.BM cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The proper biological functioning of proteins often relies on the occurrence of coordinated fluctuations around their native structure, or of wider and sometimes highly elaborated motions. Coarse-grained elastic-network descriptions are known to capture essential aspects of conformational dynamics in proteins, but have so far remained mostly phenomenological, and unable to account for the chemical specificities of amino acids. Here, we propose a method to derive residue- and distance-specific effective harmonic potentials from the statistical analysis of an extensive dataset of NMR conformational ensembles. These potentials constitute dynamical counterparts to the mean-force statistical potentials commonly used for static analyses of protein structures. In the context of the elastic network model, they yield a strongly improved description of the cooperative aspects of residue motions, and give the opportunity to systematically explore the influence of sequence details on protein dynamics.
[ { "created": "Thu, 4 Apr 2013 14:58:44 GMT", "version": "v1" } ]
2013-10-17
[ [ "Dehouck", "Yves", "" ], [ "Mikhailov", "Alexander S.", "" ] ]
The proper biological functioning of proteins often relies on the occurrence of coordinated fluctuations around their native structure, or of wider and sometimes highly elaborated motions. Coarse-grained elastic-network descriptions are known to capture essential aspects of conformational dynamics in proteins, but have so far remained mostly phenomenological, and unable to account for the chemical specificities of amino acids. Here, we propose a method to derive residue- and distance-specific effective harmonic potentials from the statistical analysis of an extensive dataset of NMR conformational ensembles. These potentials constitute dynamical counterparts to the mean-force statistical potentials commonly used for static analyses of protein structures. In the context of the elastic network model, they yield a strongly improved description of the cooperative aspects of residue motions, and give the opportunity to systematically explore the influence of sequence details on protein dynamics.
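For orientation on the elastic-network setting of the record above, the sketch below computes residue fluctuations in a plain Gaussian network model with a single uniform spring constant and invented coordinates. The record's contribution is precisely to replace that uniform constant with residue- and distance-specific effective harmonic potentials, which this sketch does not attempt.

```python
# Plain Gaussian network model (uniform springs) for residue fluctuations.
import numpy as np

def kirchhoff(coords, cutoff=7.0):
    """Kirchhoff matrix: -1 for residue pairs within the cutoff, contact degree on the diagonal."""
    K = -(np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1) < cutoff).astype(float)
    np.fill_diagonal(K, 0.0)
    np.fill_diagonal(K, -K.sum(axis=1))
    return K

def msf(coords, cutoff=7.0):
    """Mean-square fluctuations, proportional to the diagonal of the Kirchhoff pseudo-inverse."""
    return np.diag(np.linalg.pinv(kirchhoff(coords, cutoff)))

# Invented C-alpha coordinates of a short "protein" laid out on a rough helix:
t = np.linspace(0, 4 * np.pi, 20)
coords = np.column_stack([2.3 * np.cos(t), 2.3 * np.sin(t), 1.5 * t])
print(np.round(msf(coords), 3))   # chain ends typically fluctuate the most
```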
1503.08878
Wei Wei
Wei Wei, Fred Wolf and Xiao-Jing Wang
Impact of membrane bistability on dynamical response of neuronal populations
null
Phys. Rev. E 92, 032726 (2015)
10.1103/PhysRevE.92.032726
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neurons in many brain areas can develop a pronounced depolarized state of membrane potential (up state) in addition to the normal hyperpolarized down state near the resting potential. The influence of the up state on signal encoding, however, has not been well investigated. Here we construct a one-dimensional bistable neuron model and calculate the linear response to noisy oscillatory inputs analytically. We find that with the appearance of an up state, the transmission function is enhanced by the emergence of a local maximum at some optimal frequency and the phase lag relative to the input signal is reduced. We characterize the dependence of the enhancement of the frequency response on intrinsic dynamics and on the occupancy of the up state.
[ { "created": "Tue, 31 Mar 2015 00:16:01 GMT", "version": "v1" } ]
2015-09-30
[ [ "Wei", "Wei", "" ], [ "Wolf", "Fred", "" ], [ "Wang", "Xiao-Jing", "" ] ]
Neurons in many brain areas can develop a pronounced depolarized state of membrane potential (up state) in addition to the normal hyperpolarized down state near the resting potential. The influence of the up state on signal encoding, however, has not been well investigated. Here we construct a one-dimensional bistable neuron model and calculate the linear response to noisy oscillatory inputs analytically. We find that with the appearance of an up state, the transmission function is enhanced by the emergence of a local maximum at some optimal frequency and the phase lag relative to the input signal is reduced. We characterize the dependence of the enhancement of the frequency response on intrinsic dynamics and on the occupancy of the up state.
2312.01788
Farzan Vahedifard
Seth Adler, Farzan Vahedifard, Rachel Akers, Christopher Sica, Mehmet Kocak, Edwin Moore, Marc Minkus, Gianna Elias, Nikhil Aggarwal, Sharon Byrd, Mehmoodur Rasheed, Robert S. Katz
Functional Magnetic Resonance Imaging Changes and Increased Muscle Pressure in Fibromyalgia: Insights from Prominent Theories of Pain and Muscle Imaging
null
null
null
null
q-bio.NC q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fibromyalgia is a complicated and multifaceted disorder marked by widespread chronic pain, fatigue, and muscle tenderness. Current explanations for the pathophysiology of this condition include the Central Sensitization Theory, Cytokine Inflammation Theory, Muscle Hypoxia, Muscle Tender Point Theory, and Small Fiber Neuropathy Theory. The objective of this review article is to examine and explain each of these current theories and to provide a background on our current understanding of fibromyalgia. The medical literature on this disorder, as well as on the roles of functional magnetic resonance imaging (fMRI) and elastography as diagnostic tools, was reviewed from the 1970s to early 2023, primarily using the PubMed database. Five prominent theories of fibromyalgia etiology were examined: 1) Central Sensitization Theory; 2) Cytokine Inflammation Theory; 3) Muscle Hypoxia; 4) Muscle Tender Point Theory; and 5) Small Fiber Neuropathy Theory. Previous fMRI studies of FMS have revealed two key findings. First, patients with FMS show altered activation patterns in brain regions involved in pain processing. Second, the connectivity between brain structures in individuals diagnosed with FMS and healthy controls is different. Both of these findings will be expanded upon in this paper. The article also explores the potential for future research in fibromyalgia due to the advancements in fMRI and elastography techniques, such as shear wave ultrasound. Increased understanding of the underlying mechanisms contributing to fibromyalgia symptoms is necessary for improved diagnosis and treatment, and advanced imaging techniques can aid in this process.
[ { "created": "Mon, 4 Dec 2023 10:21:30 GMT", "version": "v1" } ]
2023-12-05
[ [ "Adler", "Seth", "" ], [ "Vahedifard", "Farzan", "" ], [ "Akers", "Rachel", "" ], [ "Sica", "Christopher", "" ], [ "Kocak", "Mehmet", "" ], [ "Moore", "Edwin", "" ], [ "Minkus", "Marc", "" ], [ "Elias", "Gianna", "" ], [ "Aggarwal", "Nikhil", "" ], [ "Byrd", "Sharon", "" ], [ "Rasheed", "Mehmoodur", "" ], [ "Katz", "Robert S.", "" ] ]
Fibromyalgia is a complicated and multifaceted disorder marked by widespread chronic pain, fatigue, and muscle tenderness. Current explanations for the pathophysiology of this condition include the Central Sensitization Theory, Cytokine Inflammation Theory, Muscle Hypoxia, Muscle Tender Point Theory, and Small Fiber Neuropathy Theory. The objective of this review article is to examine and explain each of these current theories and to provide a background on our current understanding of fibromyalgia. The medical literature on this disorder, as well as on the roles of functional magnetic resonance imaging (fMRI) and elastography as diagnostic tools, was reviewed from the 1970s to early 2023, primarily using the PubMed database. Five prominent theories of fibromyalgia etiology were examined: 1) Central Sensitization Theory; 2) Cytokine Inflammation Theory; 3) Muscle Hypoxia; 4) Muscle Tender Point Theory; and 5) Small Fiber Neuropathy Theory. Previous fMRI studies of FMS have revealed two key findings. First, patients with FMS show altered activation patterns in brain regions involved in pain processing. Second, the connectivity between brain structures in individuals diagnosed with FMS and healthy controls is different. Both of these findings will be expanded upon in this paper. The article also explores the potential for future research in fibromyalgia due to the advancements in fMRI and elastography techniques, such as shear wave ultrasound. Increased understanding of the underlying mechanisms contributing to fibromyalgia symptoms is necessary for improved diagnosis and treatment, and advanced imaging techniques can aid in this process.
1605.01017
Daniel Korytowski
Daniel A. Korytowski and Hal L. Smith
Permanence and Stability of a Kill the Winner Model in Marine Ecology
12 pages, 3 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We focus on the long term dynamics of "killing the winner" Lotka-Volterra models of marine communities consisting of bacteria, virus, and zooplankton. Under suitable conditions, it is shown that there is a unique equilibrium with all populations present which is stable, the system is permanent, and the limiting behavior of its solutions is strongly constrained.
[ { "created": "Tue, 3 May 2016 18:42:59 GMT", "version": "v1" }, { "created": "Thu, 21 Jul 2016 13:24:06 GMT", "version": "v2" } ]
2016-07-22
[ [ "Korytowski", "Daniel A.", "" ], [ "Smith", "Hal L.", "" ] ]
We focus on the long term dynamics of "killing the winner" Lotka-Volterra models of marine communities consisting of bacteria, virus, and zooplankton. Under suitable conditions, it is shown that there is a unique equilibrium with all populations present which is stable, the system is permanent, and the limiting behavior of its solutions is strongly constrained.
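As a numerical companion to the record above, the sketch below integrates a small "kill the winner"-flavoured Lotka-Volterra community: two bacterial hosts with logistic growth, two host-specific viruses, and a generalist zooplankton grazer. The interaction terms and parameter values are illustrative assumptions and are not the model whose permanence and stability are established in the record.

```python
# A small "kill the winner"-flavoured Lotka-Volterra community (illustrative only).
import numpy as np
from scipy.integrate import solve_ivp

def ktw(t, y, r=(1.0, 0.8), K=100.0, a=(0.02, 0.02), g=(0.01, 0.01),
        c=(0.5, 0.5), m=(0.3, 0.3), e=(0.1, 0.1), d=0.05):
    B1, B2, V1, V2, Z = y
    dB1 = B1 * (r[0] * (1 - B1 / K) - a[0] * V1 - g[0] * Z)   # host 1: growth, lysis, grazing
    dB2 = B2 * (r[1] * (1 - B2 / K) - a[1] * V2 - g[1] * Z)   # host 2
    dV1 = V1 * (c[0] * a[0] * B1 - m[0])                       # virus specific to host 1
    dV2 = V2 * (c[1] * a[1] * B2 - m[1])                       # virus specific to host 2
    dZ  = Z * (e[0] * g[0] * B1 + e[1] * g[1] * B2 - d)        # generalist zooplankton grazer
    return [dB1, dB2, dV1, dV2, dZ]

sol = solve_ivp(ktw, (0.0, 300.0), [50.0, 40.0, 5.0, 5.0, 1.0], rtol=1e-8)
print("state at t=300 (B1, B2, V1, V2, Z):", np.round(sol.y[:, -1], 2))
```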
2212.09446
Johannes M\"uller
Sona John (1), Johannes M\"uller (1 and 2) ((1) Department of Mathematics, Technical University of Munich, 85748 Garching, Germany, (2) Institute for Computational Biology, Helmholtz Center Munich, 85764 Neuherberg, Germany)
Age structure, replicator equation, and the prisoner's dilemma
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the evolutionary dynamics of an age-structured population under weak frequency-dependent selection. It turns out that the weak selection is affected in a non-trivial way by the life-history trait. We can disentangle the dynamics, based on the appearance of different time scales. These time scales, which seem to form a universal structure in the interplay of weak selection and life-history traits, allow us to reduce the infinite dimensional model to a one-dimensional modified replicator equation. The modified replicator equation is then used to investigate cooperation (the prisoner's dilemma) by means of adaptive dynamics. We identify conditions under which age structure is able to promote cooperation. At the end we discuss the relevance of our findings.
[ { "created": "Mon, 19 Dec 2022 13:40:06 GMT", "version": "v1" } ]
2022-12-20
[ [ "John", "Sona", "", "1 and 2" ], [ "Müller", "Johannes", "", "1 and 2" ] ]
We investigate the evolutionary dynamics of an age-structured population under weak frequency-dependent selection. It turns out that the weak selection is affected in a non-trivial way by the life-history trait. We can disentangle the dynamics, based on the appearance of different time scales. These time scales, which seem to form a universal structure in the interplay of weak selection and life-history traits, allow us to reduce the infinite dimensional model to a one-dimensional modified replicator equation. The modified replicator equation is then used to investigate cooperation (the prisoner's dilemma) by means of adaptive dynamics. We identify conditions under which age structure is able to promote cooperation. At the end we discuss the relevance of our findings.
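The unstructured baseline behind the record above is the standard replicator equation; for a two-strategy prisoner's dilemma it reduces to a single ODE for the cooperator fraction x, dx/dt = x(1-x)(f_C - f_D), under which cooperation dies out. The sketch below integrates only that baseline with an arbitrary payoff matrix; the age-structured modification derived in the record is not implemented here.

```python
# Standard (unstructured) replicator dynamics for a two-strategy prisoner's dilemma.
from scipy.integrate import solve_ivp

# Payoff entries with T > R > P > S (a prisoner's dilemma); the values are arbitrary.
R, S, T, P = 3.0, 0.0, 5.0, 1.0

def replicator(t, y):
    x = y[0]                      # fraction of cooperators
    f_c = R * x + S * (1 - x)     # expected payoff of a cooperator
    f_d = T * x + P * (1 - x)     # expected payoff of a defector
    return [x * (1 - x) * (f_c - f_d)]

sol = solve_ivp(replicator, (0.0, 50.0), [0.9])
print("cooperator fraction at t = 50:", float(sol.y[0, -1]))   # approaches 0: defection wins
```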
1604.03629
William Gray Roncal
Eva L. Dyer, William Gray Roncal, Hugo L. Fernandes, Doga G\"ursoy, Vincent De Andrade, Rafael Vescovi, Kamel Fezzaa, Xianghui Xiao, Joshua T. Vogelstein, Chris Jacobsen, Konrad P. K\"ording and Narayanan Kasthuri
Quantifying mesoscale neuroanatomy using X-ray microtomography
28 pages, 9 figures
null
null
null
q-bio.QM cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Methods for resolving the 3D microstructure of the brain typically start by thinly slicing and staining the brain, and then imaging each individual section with visible light photons or electrons. In contrast, X-rays can be used to image thick samples, providing a rapid approach for producing large 3D brain maps without sectioning. Here we demonstrate the use of synchrotron X-ray microtomography ($\mu$CT) for producing mesoscale $(1~\mu m^3)$ resolution brain maps from millimeter-scale volumes of mouse brain. We introduce a pipeline for $\mu$CT-based brain mapping that combines methods for sample preparation, imaging, automated segmentation of image volumes into cells and blood vessels, and statistical analysis of the resulting brain structures. Our results demonstrate that X-ray tomography promises rapid quantification of large brain volumes, complementing other brain mapping and connectomics efforts.
[ { "created": "Wed, 13 Apr 2016 01:46:54 GMT", "version": "v1" }, { "created": "Tue, 26 Jul 2016 19:56:59 GMT", "version": "v2" } ]
2016-07-27
[ [ "Dyer", "Eva L.", "" ], [ "Roncal", "William Gray", "" ], [ "Fernandes", "Hugo L.", "" ], [ "Gürsoy", "Doga", "" ], [ "De Andrade", "Vincent", "" ], [ "Vescovi", "Rafael", "" ], [ "Fezzaa", "Kamel", "" ], [ "Xiao", "Xianghui", "" ], [ "Vogelstein", "Joshua T.", "" ], [ "Jacobsen", "Chris", "" ], [ "Körding", "Konrad P.", "" ], [ "Kasthuri", "Narayanan", "" ] ]
Methods for resolving the 3D microstructure of the brain typically start by thinly slicing and staining the brain, and then imaging each individual section with visible light photons or electrons. In contrast, X-rays can be used to image thick samples, providing a rapid approach for producing large 3D brain maps without sectioning. Here we demonstrate the use of synchrotron X-ray microtomography ($\mu$CT) for producing mesoscale $(1~\mu m^3)$ resolution brain maps from millimeter-scale volumes of mouse brain. We introduce a pipeline for $\mu$CT-based brain mapping that combines methods for sample preparation, imaging, automated segmentation of image volumes into cells and blood vessels, and statistical analysis of the resulting brain structures. Our results demonstrate that X-ray tomography promises rapid quantification of large brain volumes, complementing other brain mapping and connectomics efforts.
1904.01235
Hamidreza Chitsaz
Ali Ebrahimpour-Boroojeny, Sanjay Rajopadhye, Hamidreza Chitsaz
BPPart and BPMax: RNA-RNA Interaction Partition Function and Structure Prediction for the Base Pair Counting Model
null
null
null
null
q-bio.BM q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
RNA-RNA interaction (RRI) is ubiquitous and has complex roles in the cellular functions. In human health studies, miRNA-target and lncRNAs are among an elite class of RRIs that have been extensively studied. Bacterial ncRNA-target and RNA interference are other classes of RRIs that have received significant attention. In recent studies, mRNA-mRNA interaction instances have been observed, where both partners appear in the same pathway without any direct link between them, or any prior knowledge about their relationship. Those recently discovered cases suggest that RRI scope is much wider than those aforementioned elite classes. We revisit our RNA-RNA interaction partition function algorithm, piRNA, which computes the partition function, base-pairing probabilities, and structure for the comprehensive Turner energy model using 96 different dynamic programming tables. In this study, we strategically retreat from sophisticated thermodynamic models to the much simpler base pair counting model. That might seem counter-intuitive at the first glance; our idea is to benefit from the advantages of such simple models in terms of running time and memory footprint and compensate for the associated information loss by adding machine learning components in the future. Here, simple weighted base pair counting is considered to obtain BPPart for Base-pair Partition function and BPMax for Base-pair Maximization, which use 9 and 2 tables respectively. They are empirically 225 and 1350 fold faster than piRNA. A correlation of 0.855 and 0.836 was achieved between piRNA and BPPart and between piRNA and BPMax, respectively, in 37 degrees, and 0.920 and 0.904 in -180 degrees. We also discover two partner RNAs, SNORD3D and TRAF3, and hypothesize their potential roles in genetic diseases. We envision fusion of machine learning methods with the proposed algorithms in the future.
[ { "created": "Tue, 2 Apr 2019 06:48:31 GMT", "version": "v1" }, { "created": "Fri, 7 Aug 2020 17:13:29 GMT", "version": "v2" } ]
2020-08-10
[ [ "Ebrahimpour-Boroojeny", "Ali", "" ], [ "Rajopadhye", "Sanjay", "" ], [ "Chitsaz", "Hamidreza", "" ] ]
RNA-RNA interaction (RRI) is ubiquitous and has complex roles in the cellular functions. In human health studies, miRNA-target and lncRNAs are among an elite class of RRIs that have been extensively studied. Bacterial ncRNA-target and RNA interference are other classes of RRIs that have received significant attention. In recent studies, mRNA-mRNA interaction instances have been observed, where both partners appear in the same pathway without any direct link between them, or any prior knowledge about their relationship. Those recently discovered cases suggest that RRI scope is much wider than those aforementioned elite classes. We revisit our RNA-RNA interaction partition function algorithm, piRNA, which computes the partition function, base-pairing probabilities, and structure for the comprehensive Turner energy model using 96 different dynamic programming tables. In this study, we strategically retreat from sophisticated thermodynamic models to the much simpler base pair counting model. That might seem counter-intuitive at the first glance; our idea is to benefit from the advantages of such simple models in terms of running time and memory footprint and compensate for the associated information loss by adding machine learning components in the future. Here, simple weighted base pair counting is considered to obtain BPPart for Base-pair Partition function and BPMax for Base-pair Maximization, which use 9 and 2 tables respectively. They are empirically 225 and 1350 fold faster than piRNA. A correlation of 0.855 and 0.836 was achieved between piRNA and BPPart and between piRNA and BPMax, respectively, in 37 degrees, and 0.920 and 0.904 in -180 degrees. We also discover two partner RNAs, SNORD3D and TRAF3, and hypothesize their potential roles in genetic diseases. We envision fusion of machine learning methods with the proposed algorithms in the future.
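BPMax is described above as base-pair maximization under a base pair counting model; the classic single-sequence dynamic program of that type is the Nussinov algorithm, sketched below for orientation. The record's algorithms work on RNA-RNA interactions with their own recursions, weights and table layouts, which this sketch does not reproduce.

```python
# Nussinov-style dynamic programming: maximize the number of base pairs in one
# RNA sequence. Shown only to illustrate the base-pair counting flavour of model.
def can_pair(a, b):
    return {a, b} in ({"A", "U"}, {"G", "C"}, {"G", "U"})

def nussinov(seq, min_loop=3):
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):               # span = j - i, increasing
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]                        # case 1: base i is unpaired
            for k in range(i + min_loop + 1, j + 1):   # case 2: i pairs with some k
                if can_pair(seq[i], seq[k]):
                    left = dp[i + 1][k - 1] if k - 1 > i else 0
                    right = dp[k + 1][j] if k + 1 <= j else 0
                    best = max(best, 1 + left + right)
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov("GGGAAAUCC"))   # maximum number of base pairs for a small hairpin
```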
1907.02351
Kristine Heiney
Kristine Heiney, Vibeke Devold Valderhaug, Ioanna Sandvig, Axel Sandvig, Gunnar Tufte, Hugo Lewi Hammer, Stefano Nichele
Evaluation of the criticality of in vitro neuronal networks: Toward an assessment of computational capacity
For presentation at the workshop "Novel Substrates and Models for the Emergence of Developmental, Learning and Cognitive Capabilities," 9th Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics (IEEE ICDL-EPIROB 2019), Oslo, Norway. (website: http://www.nichele.eu/ICDL-EPIROB_NSM/ICDL-EPIROB_SNM.html)
null
null
null
q-bio.NC cs.ET
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Novel computing hardware is necessary to keep up with today's increasing demand for data storage and processing power. In this research project, we turn to the brain for inspiration to develop novel computing substrates that are self-learning, scalable, energy-efficient, and fault-tolerant. The overarching aim of this work is to develop computational models that are able to reproduce target behaviors observed in in vitro neuronal networks. These models will ultimately be used to aid in the realization of these behaviors in a more engineerable substrate: an array of nanomagnets. The target behaviors will be identified by analyzing electrophysiological recordings of the neuronal networks. Preliminary analysis has been performed to identify when a network is in a critical state based on the size distribution of network-wide avalanches of activity, and the results of this analysis are reported here. This classification of critical versus non-critical networks is valuable in identifying networks that can be expected to perform well on computational tasks, as criticality is widely considered to be the state in which a system is best suited for computation. This type of analysis is expected to enable the identification of networks that are well-suited for computation and the classification of networks as perturbed or healthy.
[ { "created": "Thu, 4 Jul 2019 12:14:30 GMT", "version": "v1" } ]
2019-07-05
[ [ "Heiney", "Kristine", "" ], [ "Valderhaug", "Vibeke Devold", "" ], [ "Sandvig", "Ioanna", "" ], [ "Sandvig", "Axel", "" ], [ "Tufte", "Gunnar", "" ], [ "Hammer", "Hugo Lewi", "" ], [ "Nichele", "Stefano", "" ] ]
Novel computing hardware is necessary to keep up with today's increasing demand for data storage and processing power. In this research project, we turn to the brain for inspiration to develop novel computing substrates that are self-learning, scalable, energy-efficient, and fault-tolerant. The overarching aim of this work is to develop computational models that are able to reproduce target behaviors observed in in vitro neuronal networks. These models will ultimately be used to aid in the realization of these behaviors in a more engineerable substrate: an array of nanomagnets. The target behaviors will be identified by analyzing electrophysiological recordings of the neuronal networks. Preliminary analysis has been performed to identify when a network is in a critical state based on the size distribution of network-wide avalanches of activity, and the results of this analysis are reported here. This classification of critical versus non-critical networks is valuable in identifying networks that can be expected to perform well on computational tasks, as criticality is widely considered to be the state in which a system is best suited for computation. This type of analysis is expected to enable the identification of networks that are well-suited for computation and the classification of networks as perturbed or healthy.
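The criticality assessment described above rests on the size distribution of network-wide activity avalanches. A minimal version of that analysis is sketched below on a synthetic spike-count series rather than real recordings, with an invented bin width: avalanches are runs of consecutive non-empty time bins, and their size distribution is tabulated for comparison against the power law expected near criticality.

```python
# Minimal avalanche-size analysis of a (synthetic) binned spike series.
import numpy as np
from collections import Counter

def avalanche_sizes(spike_counts):
    """Sizes of avalanches: total spikes in each run of consecutive non-empty time bins."""
    sizes, current = [], 0
    for c in spike_counts:
        if c > 0:
            current += c
        elif current > 0:
            sizes.append(current)
            current = 0
    if current > 0:
        sizes.append(current)
    return sizes

# Synthetic spike counts per time bin (already summed over all electrodes).
rng = np.random.default_rng(0)
spike_counts = rng.poisson(lam=0.5, size=10000)
sizes = avalanche_sizes(spike_counts)
hist = Counter(sizes)
print("number of avalanches:", len(sizes))
print("size -> count (smallest sizes):", dict(sorted(hist.items())[:10]))
# A real analysis would compare this empirical distribution with the power law
# P(s) ~ s^(-1.5) expected for a critical branching process.
```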
1708.02667
Hemachander Subramanian
Hemachander Subramanian, Robert A. Gatenby
Evolutionary advantage of directional symmetry breaking in self-replicating polymers
null
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Due to the asymmetric nature of the nucleotides, the extant informational biomolecule, DNA, is constrained to replicate unidirectionally on a template. As a product of molecular evolution that sought to maximize replicative potential, DNA's unidirectional replication poses a mystery since symmetric bidirectional self-replicators obviously would replicate faster than unidirectional self-replicators and hence would have been evolutionarily more successful. Here we carefully examine the physico-chemical requirements for evolutionarily successful primordial self-replicators and theoretically show that at low monomer concentrations that possibly prevailed in the primordial oceans, asymmetric unidirectional self-replicators would have an evolutionary advantage over bidirectional self-replicators. The competing requirements of low and high kinetic barriers for formation and long lifetime of inter-strand bonds respectively are simultaneously satisfied through asymmetric kinetic influence of inter-strand bonds, resulting in evolutionarily successful unidirectional self-replicators.
[ { "created": "Tue, 8 Aug 2017 22:37:06 GMT", "version": "v1" } ]
2017-08-10
[ [ "Subramanian", "Hemachander", "" ], [ "Gatenby", "Robert A.", "" ] ]
Due to the asymmetric nature of the nucleotides, the extant informational biomolecule, DNA, is constrained to replicate unidirectionally on a template. As a product of molecular evolution that sought to maximize replicative potential, DNA's unidirectional replication poses a mystery since symmetric bidirectional self-replicators obviously would replicate faster than unidirectional self-replicators and hence would have been evolutionarily more successful. Here we carefully examine the physico-chemical requirements for evolutionarily successful primordial self-replicators and theoretically show that at low monomer concentrations that possibly prevailed in the primordial oceans, asymmetric unidirectional self-replicators would have an evolutionary advantage over bidirectional self-replicators. The competing requirements of low and high kinetic barriers for formation and long lifetime of inter-strand bonds respectively are simultaneously satisfied through asymmetric kinetic influence of inter-strand bonds, resulting in evolutionarily successful unidirectional self-replicators.
1012.0490
Alexander K. Vidybida
Alexander K. Vidybida
Testing of information condensation in a model reverberating spiking neural network
12 pages, 9 figures, 40 references. Content of this work was partially published in an abstract form in the abstract book of the 2nd International Biophysics Congress and Biotechnology at GAP & 21th National Biophysics Congress, (5-9 Oct. 2009) Diyarbakir, Turkey, http://www.ibc2009.org/. In v2 the ancillary file movie.pdf is added, which offers examples of neuronal network dynamics
International Journal of Neural Systems (IJNS), Volume: 21, Issue: 3 (June 2011), Page: 187-198
10.1142/S0129065711002742
null
q-bio.NC cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Information about the external world is delivered to the brain in the form of spike trains structured in time. During further processing in higher areas, information is subjected to a certain condensation process, which results in the formation of abstract conceptual images of the external world, apparently represented as certain uniform spiking activity partially independent of the details of the input spike trains. A possible physical mechanism of condensation at the level of an individual neuron was discussed recently. In a reverberating spiking neural network, due to this mechanism the dynamics should settle down to the same uniform/periodic activity in response to a set of various inputs. Since the same periodic activity may correspond to different input spike trains, we interpret this as a possible candidate for an information condensation mechanism in a network. Our purpose is to test this possibility in a network model consisting of five fully connected neurons, in particular the influence of the geometric size of the network on its ability to condense information. The dynamics of 20 spiking neural networks of different geometric sizes are modelled by means of computer simulation. Each network was propelled into reverberating dynamics by applying various initial input spike trains. We run the dynamics until they become periodic. Shannon's formula is used to calculate the amount of information in any input spike train and in any periodic state found. As a result, we obtain an explicit estimate of the degree of information condensation in the networks, and conclude that it depends strongly on the network's geometric size.
[ { "created": "Thu, 2 Dec 2010 16:52:04 GMT", "version": "v1" }, { "created": "Mon, 10 Jan 2011 16:42:46 GMT", "version": "v2" } ]
2015-03-17
[ [ "Vidybida", "Alexander K.", "" ] ]
Information about the external world is delivered to the brain in the form of spike trains structured in time. During further processing in higher areas, information is subjected to a certain condensation process, which results in the formation of abstract conceptual images of the external world, apparently represented as certain uniform spiking activity partially independent of the details of the input spike trains. A possible physical mechanism of condensation at the level of an individual neuron was discussed recently. In a reverberating spiking neural network, due to this mechanism the dynamics should settle down to the same uniform/periodic activity in response to a set of various inputs. Since the same periodic activity may correspond to different input spike trains, we interpret this as a possible candidate for an information condensation mechanism in a network. Our purpose is to test this possibility in a network model consisting of five fully connected neurons, in particular the influence of the geometric size of the network on its ability to condense information. The dynamics of 20 spiking neural networks of different geometric sizes are modelled by means of computer simulation. Each network was propelled into reverberating dynamics by applying various initial input spike trains. We run the dynamics until they become periodic. Shannon's formula is used to calculate the amount of information in any input spike train and in any periodic state found. As a result, we obtain an explicit estimate of the degree of information condensation in the networks, and conclude that it depends strongly on the network's geometric size.
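The information measure used in the record above is Shannon's formula applied to spike trains. The toy sketch below illustrates only that step, with an invented bin width and word length: a spike train is binned into a binary sequence, short binary words are treated as symbols, and H = -sum p log2 p is computed over their empirical distribution. It does not model the reverberating five-neuron network itself.

```python
# Shannon entropy of a binned spike train, estimated over short binary "words".
import numpy as np
from collections import Counter

def spike_train_entropy(spike_times, t_end, bin_width=0.005, word_len=5):
    """H = -sum p log2 p over the empirical distribution of non-overlapping binary words."""
    n_bins = int(np.ceil(t_end / bin_width))
    binary = np.zeros(n_bins, dtype=int)
    binary[(np.asarray(spike_times) / bin_width).astype(int).clip(0, n_bins - 1)] = 1
    words = ["".join(map(str, binary[i:i + word_len]))
             for i in range(0, n_bins - word_len + 1, word_len)]
    counts = Counter(words)
    total = sum(counts.values())
    p = np.array([c / total for c in counts.values()])
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
irregular_train = np.sort(rng.uniform(0, 1.0, size=60))   # irregular spike times
periodic_train = np.arange(0.0, 1.0, 1.0 / 60)            # strictly periodic spike times
print("irregular:", spike_train_entropy(irregular_train, 1.0))
print("periodic: ", spike_train_entropy(periodic_train, 1.0))
```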
1407.6340
Michael Margaliot
Gilad Poker and Yoram Zarai and Michael Margaliot and Tamir Tuller
Maximizing Protein Translation Rate in the Nonhomogeneous Ribosome Flow Model: A Convex Optimization Approach
null
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Translation is an important stage in gene expression. During this stage, macro-molecules called ribosomes travel along the mRNA strand linking amino-acids together in a specific order to create a functioning protein. An important question is how to maximize protein production. Indeed, translation is known to consume most of the cell's energy and it is natural to assume that evolution shaped this process so that it maximizes the protein production rate. If this is indeed so then one can estimate various parameters of the translation machinery by solving an appropriate mathematical optimization problem. The same problem also arises in the context of synthetic biology, namely, re-engineer heterologous genes in order to maximize their translation rate in a host organism. We consider the problem of maximizing the protein production rate using a computational model for translation-elongation called the ribosome flow model (RFM). This model describes the flow of the ribosomes along an mRNA chain of length n using a set of n first-order nonlinear ODEs. It also includes n+1 positive parameters: the ribosomal initiation rate into the mRNA chain, and n elongation rates along the chain sites. We show that the steady-state translation rate in the RFM is a strictly concave function of its parameters. This means that the problem of maximizing the translation rate under a suitable constraint always admits a unique solution, and that this solution can be determined using highly-efficient algorithms for solving convex optimization problems even for large values of n. Furthermore, our analysis shows that the optimal translation rate can be computed based only on the optimal initiation rate and the elongation rate of the codons near the beginning of the ORF. We discuss some applications of the theoretical results to synthetic biology, molecular evolution, and functional genomics.
[ { "created": "Wed, 23 Jul 2014 19:28:00 GMT", "version": "v1" } ]
2014-07-24
[ [ "Poker", "Gilad", "" ], [ "Zarai", "Yoram", "" ], [ "Margaliot", "Michael", "" ], [ "Tuller", "Tamir", "" ] ]
Translation is an important stage in gene expression. During this stage, macromolecules called ribosomes travel along the mRNA strand linking amino acids together in a specific order to create a functioning protein. An important question is how to maximize protein production. Indeed, translation is known to consume most of the cell's energy and it is natural to assume that evolution shaped this process so that it maximizes the protein production rate. If this is indeed so, then one can estimate various parameters of the translation machinery by solving an appropriate mathematical optimization problem. The same problem also arises in the context of synthetic biology, namely, re-engineering heterologous genes in order to maximize their translation rate in a host organism. We consider the problem of maximizing the protein production rate using a computational model for translation-elongation called the ribosome flow model (RFM). This model describes the flow of the ribosomes along an mRNA chain of length n using a set of n first-order nonlinear ODEs. It also includes n+1 positive parameters: the ribosomal initiation rate into the mRNA chain, and n elongation rates along the chain sites. We show that the steady-state translation rate in the RFM is a strictly concave function of its parameters. This means that the problem of maximizing the translation rate under a suitable constraint always admits a unique solution, and that this solution can be determined using highly-efficient algorithms for solving convex optimization problems even for large values of n. Furthermore, our analysis shows that the optimal translation rate can be computed based only on the optimal initiation rate and the elongation rate of the codons near the beginning of the ORF. We discuss some applications of the theoretical results to synthetic biology, molecular evolution, and functional genomics.
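A hedged numerical sketch of the RFM described above: integrate the n first-order ODEs to steady state and read off the translation rate R = lambda_n * x_n. The chain length and rate values are arbitrary illustrative choices, and the convex-optimization step itself is not reproduced here.

```python
# Ribosome Flow Model (RFM) integrated to steady state (illustrative sketch).
import numpy as np
from scipy.integrate import solve_ivp

def rfm_rhs(t, x, lam):
    """RFM right-hand side; lam has length n+1 (initiation + n elongation rates)."""
    n = len(x)
    dx = np.empty(n)
    inflow = lam[0] * (1.0 - x[0])                 # initiation into site 1
    for i in range(n):
        outflow = lam[i + 1] * x[i] * ((1.0 - x[i + 1]) if i + 1 < n else 1.0)
        dx[i] = inflow - outflow
        inflow = outflow
    return dx

n = 8
lam = np.array([1.0, 0.9, 1.2, 0.8, 1.1, 1.0, 0.7, 1.3, 1.0])  # n+1 positive rates
sol = solve_ivp(rfm_rhs, (0.0, 500.0), np.full(n, 0.5), args=(lam,), rtol=1e-8)
x_ss = sol.y[:, -1]
print("steady-state occupancies:", np.round(x_ss, 3))
print("translation rate R ~", round(lam[-1] * x_ss[-1], 4))
```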
2206.04047
A. Ali Heydari
A. Ali Heydari, Oscar A. Davalos, Katrina K. Hoyer, Suzanne S. Sindi
N-ACT: An Interpretable Deep Learning Model for Automatic Cell Type and Salient Gene Identification
null
null
null
null
q-bio.GN cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Single-cell RNA sequencing (scRNAseq) is rapidly advancing our understanding of cellular composition within complex tissues and organisms. A major limitation in most scRNAseq analysis pipelines is the reliance on manual annotations to determine cell identities, which are time-consuming, subjective, and require expertise. Given the surge in cell sequencing, supervised methods-especially deep learning models-have been developed for automatic cell type identification (ACTI), which achieve high accuracy and scalability. However, all existing deep learning frameworks for ACTI lack interpretability and are used as "black-box" models. We present N-ACT (Neural-Attention for Cell Type identification): the first-of-its-kind interpretable deep neural network for ACTI utilizing neural-attention to detect salient genes for use in cell-type identification. We compare N-ACT to conventional annotation methods on two previously manually annotated data sets, demonstrating that N-ACT accurately identifies marker genes and cell types in an unsupervised manner, while performing comparably to current state-of-the-art models for traditional supervised ACTI on multiple data sets.
[ { "created": "Sun, 8 May 2022 18:13:28 GMT", "version": "v1" } ]
2022-06-10
[ [ "Heydari", "A. Ali", "" ], [ "Davalos", "Oscar A.", "" ], [ "Hoyer", "Katrina K.", "" ], [ "Sindi", "Suzanne S.", "" ] ]
Single-cell RNA sequencing (scRNAseq) is rapidly advancing our understanding of cellular composition within complex tissues and organisms. A major limitation in most scRNAseq analysis pipelines is the reliance on manual annotations to determine cell identities, which are time-consuming, subjective, and require expertise. Given the surge in cell sequencing, supervised methods-especially deep learning models-have been developed for automatic cell type identification (ACTI), which achieve high accuracy and scalability. However, all existing deep learning frameworks for ACTI lack interpretability and are used as "black-box" models. We present N-ACT (Neural-Attention for Cell Type identification): the first-of-its-kind interpretable deep neural network for ACTI utilizing neural-attention to detect salient genes for use in cell-type identification. We compare N-ACT to conventional annotation methods on two previously manually annotated data sets, demonstrating that N-ACT accurately identifies marker genes and cell types in an unsupervised manner, while performing comparably to current state-of-the-art models for traditional supervised ACTI on multiple data sets.
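A toy sketch of the neural-attention idea described above: a softmax weight per gene, so that the largest weights flag candidate salient genes for a cell. The random data, embedding size, and scoring rule are assumptions made purely for illustration and do not reproduce N-ACT's actual architecture or learned parameters.

```python
# Attention pooling over genes with per-gene saliency weights (toy sketch).
import numpy as np

rng = np.random.default_rng(0)
n_genes, d_embed = 2000, 16
expr = rng.poisson(1.0, size=n_genes).astype(float)        # one cell's expression
gene_emb = rng.normal(size=(n_genes, d_embed))              # stand-in gene embeddings
query = rng.normal(size=d_embed)                            # stand-in attention query

scores = (gene_emb @ query) * np.log1p(expr)                 # expression-modulated scores
weights = np.exp(scores - scores.max())
weights /= weights.sum()                                     # softmax attention weights

cell_repr = weights @ (gene_emb * np.log1p(expr)[:, None])   # attended cell representation
top = np.argsort(weights)[::-1][:10]
print("indices of ten highest-attention (candidate salient) genes:", top)
```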
2405.19857
Carrie Andrew
Carrie Andrew, Sharif Islam, Claus Weiland, Dag Endresen
Biodiversity data standards for the organization and dissemination of complex research projects and digital twins: a guide
42 pages, 2 figures, 1 box, 1 table
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by-nc-nd/4.0/
Biodiversity data are substantially increasing, spurred by technological advances and community (citizen) science initiatives. Integrating data is, likewise, becoming more commonplace. Open science promotes open sharing and data usage. Data standardization is an instrument for the organization and integration of biodiversity data, which is required for complex research projects and digital twins. However, just like with an actual instrument, there is a learning curve to understanding the data standards field. Here we provide a guide, for data providers and data users, on the logistics of compiling and utilizing biodiversity data. We emphasize data standards, because they are integral to data integration. Three primary avenues for compiling biodiversity data are compared, explaining the importance of research infrastructures for coordinated long-term data aggregation. We use the Biodiversity Digital Twin (BioDT) as a case study. Four approaches to data standardization are presented in terms of the balance between practical constraints and the advancement of the data standards field. We aim for this paper to guide and raise awareness of the existing issues related to data standardization, and especially how data standards are key to data interoperability, i.e., machine accessibility. The future is promising for computational biodiversity advancements, such as with the BioDT project, but it rests upon the shoulders of machine actionability and readability, and that requires data standards for computational communication.
[ { "created": "Thu, 30 May 2024 09:04:40 GMT", "version": "v1" } ]
2024-05-31
[ [ "Andrew", "Carrie", "" ], [ "Islam", "Sharif", "" ], [ "Weiland", "Claus", "" ], [ "Endresen", "Dag", "" ] ]
Biodiversity data are substantially increasing, spurred by technological advances and community (citizen) science initiatives. Integrating data is, likewise, becoming more commonplace. Open science promotes open sharing and data usage. Data standardization is an instrument for the organization and integration of biodiversity data, which is required for complex research projects and digital twins. However, just like with an actual instrument, there is a learning curve to understanding the data standards field. Here we provide a guide, for data providers and data users, on the logistics of compiling and utilizing biodiversity data. We emphasize data standards, because they are integral to data integration. Three primary avenues for compiling biodiversity data are compared, explaining the importance of research infrastructures for coordinated long-term data aggregation. We use the Biodiversity Digital Twin (BioDT) as a case study. Four approaches to data standardization are presented in terms of the balance between practical constraints and the advancement of the data standards field. We aim for this paper to guide and raise awareness of the existing issues related to data standardization, and especially how data standards are key to data interoperability, i.e., machine accessibility. The future is promising for computational biodiversity advancements, such as with the BioDT project, but it rests upon the shoulders of machine actionability and readability, and that requires data standards for computational communication.
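A small illustration of why shared vocabularies matter for machine accessibility: mapping an ad hoc field-survey record onto Darwin Core term names so that independent systems parse it identically. The raw field names and values are invented for illustration; the Darwin Core terms themselves belong to the TDWG standard.

```python
# Mapping an ad hoc record onto Darwin Core terms (illustrative sketch).
raw_record = {
    "sp": "Vanessa atalanta",
    "lat": 59.94, "lon": 10.72,
    "seen_on": "2023-06-14",
    "recorded_by": "field team A",
}

FIELD_MAP = {                      # ad hoc name -> Darwin Core term
    "sp": "scientificName",
    "lat": "decimalLatitude",
    "lon": "decimalLongitude",
    "seen_on": "eventDate",
    "recorded_by": "recordedBy",
}

dwc_record = {FIELD_MAP[k]: v for k, v in raw_record.items()}
dwc_record["basisOfRecord"] = "HumanObservation"   # controlled vocabulary value
print(dwc_record)
```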
1905.02121
Finn Kuusisto
Finn Kuusisto, Vitor Santos Costa, Zhonggang Hou, James Thomson, David Page, Ron Stewart
Machine Learning to Predict Developmental Neurotoxicity with High-throughput Data from 2D Bio-engineered Tissues
null
null
10.1109/ICMLA.2019.00055
null
q-bio.QM cs.LG q-bio.TO stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There is a growing need for fast and accurate methods for testing developmental neurotoxicity across several chemical exposure sources. Current approaches, such as in vivo animal studies, and assays of animal and human primary cell cultures, suffer from challenges related to time, cost, and applicability to human physiology. We previously demonstrated success employing machine learning to predict developmental neurotoxicity using gene expression data collected from human 3D tissue models exposed to various compounds. The 3D model is biologically similar to developing neural structures, but its complexity necessitates extensive expertise and effort to employ. By instead focusing solely on constructing an assay of developmental neurotoxicity, we propose that a simpler 2D tissue model may prove sufficient. We thus compare the accuracy of predictive models trained on data from a 2D tissue model with those trained on data from a 3D tissue model, and find the 2D model to be substantially more accurate. Furthermore, we find the 2D model to be more robust under stringent gene set selection, whereas the 3D model suffers substantial accuracy degradation. While both approaches have advantages and disadvantages, we propose that our described 2D approach could be a valuable tool for decision makers when prioritizing neurotoxicity screening.
[ { "created": "Mon, 6 May 2019 16:13:59 GMT", "version": "v1" } ]
2020-02-26
[ [ "Kuusisto", "Finn", "" ], [ "Costa", "Vitor Santos", "" ], [ "Hou", "Zhonggang", "" ], [ "Thomson", "James", "" ], [ "Page", "David", "" ], [ "Stewart", "Ron", "" ] ]
There is a growing need for fast and accurate methods for testing developmental neurotoxicity across several chemical exposure sources. Current approaches, such as in vivo animal studies, and assays of animal and human primary cell cultures, suffer from challenges related to time, cost, and applicability to human physiology. We previously demonstrated success employing machine learning to predict developmental neurotoxicity using gene expression data collected from human 3D tissue models exposed to various compounds. The 3D model is biologically similar to developing neural structures, but its complexity necessitates extensive expertise and effort to employ. By instead focusing solely on constructing an assay of developmental neurotoxicity, we propose that a simpler 2D tissue model may prove sufficient. We thus compare the accuracy of predictive models trained on data from a 2D tissue model with those trained on data from a 3D tissue model, and find the 2D model to be substantially more accurate. Furthermore, we find the 2D model to be more robust under stringent gene set selection, whereas the 3D model suffers substantial accuracy degradation. While both approaches have advantages and disadvantages, we propose that our described 2D approach could be a valuable tool for decision makers when prioritizing neurotoxicity screening.
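A hedged sklearn sketch of the comparison described above: train the same classifier on gene-expression matrices from a 2D and a 3D tissue model and compare cross-validated accuracy. Synthetic data with an assumed difference in signal strength stand in for the real assays.

```python
# Comparing classifier accuracy across two tissue-model assays (illustrative sketch).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

def synthetic_assay(n_samples=120, n_genes=500, signal=1.0):
    """Toxic vs non-toxic exposures; `signal` controls how separable they are."""
    y = rng.integers(0, 2, size=n_samples)
    X = rng.normal(size=(n_samples, n_genes))
    X[:, :20] += signal * y[:, None]          # 20 informative genes
    return X, y

for label, signal in [("2D model (assumed stronger signal)", 1.0),
                      ("3D model (assumed weaker signal)", 0.4)]:
    X, y = synthetic_assay(signal=signal)
    acc = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0),
                          X, y, cv=5).mean()
    print(f"{label}: 5-fold CV accuracy ~ {acc:.2f}")
```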
1202.0063
Jamie Blundell R
J. R. Blundell, A. Gallagher and T. M. A Fink
Regular neutral networks outperform robust ones by reaching their top growth rate more quickly
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the relative importance of "top-speed" (long-term growth rate) and "acceleration" (how quickly the long-term growth rate can be reached) in the evolutionary race to increase population size. We observe that fitness alone does not capture growth rate: robustness, a property of neutral network shape, combines with fitness to include the effect of deleterious mutations, giving growth rate. Similarly, we show that growth rate alone does not capture population size: regularity, a different property of neutral network shape, combines with growth rate to include the effect of higher depletion rates early on, giving size. Whereas robustness is a function of the principal eigenvalue of the neutral network adjacency matrix, regularity is a function of the principal eigenvector. We show that robustness is not correlated with regularity, and observe in silico the selection for regularity by evolving RNA ribozymes. Despite having smaller growth rates, the more regular ribozymes have the biggest populations.
[ { "created": "Wed, 1 Feb 2012 00:33:53 GMT", "version": "v1" } ]
2012-02-02
[ [ "Blundell", "J. R.", "" ], [ "Gallagher", "A.", "" ], [ "Fink", "T. M. A", "" ] ]
We study the relative importance of "top-speed" (long-term growth rate) and "acceleration" (how quickly the long-term growth rate can be reached) in the evolutionary race to increase population size. We observe that fitness alone does not capture growth rate: robustness, a property of neutral network shape, combines with fitness to include the effect of deleterious mutations, giving growth rate. Similarly, we show that growth rate alone does not capture population size: regularity, a different property of neutral network shape, combines with growth rate to include the effect of higher depletion rates early on, giving size. Whereas robustness is a function of the principal eigenvalue of the neutral network adjacency matrix, regularity is a function of the principal eigenvector. We show that robustness is not correlated with regularity, and observe in silico the selection for regularity by evolving RNA ribozymes. Despite having smaller growth rates, the more regular ribozymes have the biggest populations.
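A numpy sketch of the two spectral quantities discussed above: the principal eigenvalue of a neutral-network adjacency matrix (related to robustness) and the spread of the principal eigenvector (related to regularity). The two toy networks are invented for illustration.

```python
# Principal eigenvalue and eigenvector of small adjacency matrices (toy sketch).
import numpy as np

def principal_eig(adj):
    vals, vecs = np.linalg.eigh(adj)           # symmetric adjacency matrix
    v = np.abs(vecs[:, -1])
    return vals[-1], v / v.sum()

star = np.zeros((5, 5))                        # irregular hub-and-spokes network
star[0, 1:] = 1
star[1:, 0] = 1

cycle = np.roll(np.eye(5), 1, axis=1)          # regular 5-node ring
cycle = cycle + cycle.T

for name, adj in [("star (irregular)", star), ("cycle (regular)", cycle)]:
    lam, v = principal_eig(adj)
    print(f"{name}: principal eigenvalue = {lam:.2f}, "
          f"eigenvector spread (std) = {v.std():.3f}")
```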
1910.01801
Pramod Shinde
Pramod Shinde, Loic Marrec, Aparna Rai, Alok Yadav, Rajesh Kumar, Mikhail Ivanchenko, Alexey Zaikin, Sarika Jalan
Symmetry in cancer networks identified: Proposal for multi-cancer biomarkers
20 Pages, 5 Figures
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the most challenging problems in biomedicine and genomics is the identification of disease biomarkers. In this study, proteomics data from seven major cancers were used to construct two weighted protein-protein interaction (PPI) networks, i.e., one for the normal and another for the cancer condition. We developed a rigorous, yet mathematically simple, methodology based on the degeneracy at the -1 eigenvalue to identify structural symmetry or motif structures in a network. Utilising eigenvectors corresponding to degenerate eigenvalues of the weighted adjacency matrix, we identified structural symmetries in the underlying weighted PPI networks constructed from the seven cancer data sets. Functional assessment of the proteins forming these structural symmetries exhibited the properties of cancer hallmarks. Survival analysis further refined this protein list, proposing BMI, MAPK11, DDIT4, CDKN2A, and FYN as putative multi-cancer biomarkers. The combined framework of networks and spectral graph theory developed here can be applied to identify symmetrical patterns in other disease networks and to predict proteins as potential disease biomarkers.
[ { "created": "Fri, 4 Oct 2019 05:04:58 GMT", "version": "v1" } ]
2019-10-07
[ [ "Shinde", "Pramod", "" ], [ "Marrec", "Loic", "" ], [ "Rai", "Aparna", "" ], [ "Yadav", "Alok", "" ], [ "Kumar", "Rajesh", "" ], [ "Ivanchenko", "Mikhail", "" ], [ "Zaikin", "Alexey", "" ], [ "Jalan", "Sarika", "" ] ]
One of the most challenging problems in biomedicine and genomics is the identification of disease biomarkers. In this study, proteomics data from seven major cancers were used to construct two weighted protein-protein interaction (PPI) networks, i.e., one for the normal and another for the cancer condition. We developed a rigorous, yet mathematically simple, methodology based on the degeneracy at the -1 eigenvalue to identify structural symmetry or motif structures in a network. Utilising eigenvectors corresponding to degenerate eigenvalues of the weighted adjacency matrix, we identified structural symmetries in the underlying weighted PPI networks constructed from the seven cancer data sets. Functional assessment of the proteins forming these structural symmetries exhibited the properties of cancer hallmarks. Survival analysis further refined this protein list, proposing BMI, MAPK11, DDIT4, CDKN2A, and FYN as putative multi-cancer biomarkers. The combined framework of networks and spectral graph theory developed here can be applied to identify symmetrical patterns in other disease networks and to predict proteins as potential disease biomarkers.
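A sketch of the spectral-symmetry computation described above: count the degeneracy of the eigenvalue -1 in a (weighted) adjacency matrix and inspect which nodes carry weight in the corresponding eigenvectors. The small example matrix, which contains one pair of adjacent nodes with identical remaining neighbours, is an illustrative assumption rather than a real PPI network.

```python
# Degeneracy at the -1 eigenvalue and the nodes it implicates (toy sketch).
import numpy as np

A = np.array([[0, 1, 1, 1, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 0, 1],
              [1, 1, 0, 0, 0],
              [0, 0, 1, 0, 0]], dtype=float)   # nodes 0 and 1 form a duplicate pair

vals, vecs = np.linalg.eigh(A)
is_minus_one = np.isclose(vals, -1.0, atol=1e-8)
print("degeneracy at -1:", int(is_minus_one.sum()))
for v in vecs[:, is_minus_one].T:
    nodes = np.where(np.abs(v) > 1e-6)[0]
    print("nodes involved in this symmetric motif:", nodes)
```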
1910.05621
Samuel Goldstein
Samuel Goldstein, Zhenhong Hu, Mingzhou Ding
Decoding Working Memory Load from EEG with LSTM Networks
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Working memory (WM) is a mechanism that temporarily stores and manipulates information in service of behavioral goals and is a highly dynamic process. Previous studies have considered decoding WM load using EEG but have not investigated the contribution of sequential information contained in the temporal patterns of the EEG data that can differentiate different WM loads. In our study, we develop a novel method of investigating the role of sequential information in the manipulation and storage of verbal information at various time scales and localize topographically the sources of the sequential-information-based decodability. High-density (128-channel) EEG was recorded from twenty subjects performing a Sternberg verbal WM task with varying memory loads. Long Short-Term Memory Recurrent Neural Networks (LSTM-RNN) were trained to decode memory load during encoding, retention, activity-silent, and retrieval periods. Decoding accuracy was compared between ordered data and a temporally shuffled version that retains pattern-based information of the data but not its temporal relations, to assess the contribution of sequential information to decoding memory load. The results show that (1) decoding accuracy increases with the length of the EEG time series given to the LSTM for both ordered and temporally shuffled cases, with the increase being faster for ordered than temporally shuffled time series, and (2) according to the decoding weight maps, the frontal, temporal and some parietal areas are an important source of sequential-information-based decodability. This study, to our knowledge, is the first to apply an LSTM-RNN approach to investigate temporal dynamics in human EEG data encoding WM load information.
[ { "created": "Sat, 12 Oct 2019 18:33:46 GMT", "version": "v1" } ]
2019-10-15
[ [ "Goldstein", "Samuel", "" ], [ "Hu", "Zhenhong", "" ], [ "Ding", "Mingzhou", "" ] ]
Working memory (WM) is a mechanism that temporarily stores and manipulates information in service of behavioral goals and is a highly dynamic process. Previous studies have considered decoding WM load using EEG but have not investigated the contribution of sequential information contained in the temporal patterns of the EEG data that can differentiate different WM loads. In our study, we develop a novel method of investigating the role of sequential information in the manipulation and storage of verbal information at various time scales and localize topographically the sources of the sequential-information-based decodability. High-density (128-channel) EEG was recorded from twenty subjects performing a Sternberg verbal WM task with varying memory loads. Long Short-Term Memory Recurrent Neural Networks (LSTM-RNN) were trained to decode memory load during encoding, retention, activity-silent, and retrieval periods. Decoding accuracy was compared between ordered data and a temporally shuffled version that retains pattern-based information of the data but not its temporal relations, to assess the contribution of sequential information to decoding memory load. The results show that (1) decoding accuracy increases with the length of the EEG time series given to the LSTM for both ordered and temporally shuffled cases, with the increase being faster for ordered than temporally shuffled time series, and (2) according to the decoding weight maps, the frontal, temporal and some parietal areas are an important source of sequential-information-based decodability. This study, to our knowledge, is the first to apply an LSTM-RNN approach to investigate temporal dynamics in human EEG data encoding WM load information.
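A minimal PyTorch sketch of the decoding setup described above: an LSTM reads multichannel EEG segments and predicts the memory-load class, and shuffling time points within each segment removes sequential information as a control. Data shapes, layer sizes, and the random tensors are illustrative assumptions, not the study's actual pipeline.

```python
# LSTM decoder for memory load with a temporal-shuffle control (illustrative sketch).
import torch
import torch.nn as nn

class LoadDecoder(nn.Module):
    def __init__(self, n_channels=128, hidden=64, n_loads=3):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_loads)

    def forward(self, x):                     # x: (batch, time, channels)
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])             # logits from the last hidden state

model = LoadDecoder()
eeg = torch.randn(8, 250, 128)                # 8 segments, 250 time points, 128 channels
labels = torch.randint(0, 3, (8,))

loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(3):                            # a few illustrative training steps
    opt.zero_grad()
    loss = loss_fn(model(eeg), labels)
    loss.backward()
    opt.step()

shuffled = eeg[:, torch.randperm(eeg.shape[1]), :]   # temporal-shuffle control
print("ordered logits shape:", model(eeg).shape, "| shuffled:", model(shuffled).shape)
```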
q-bio/0510021
Provata Astero
Th. Oikonomou and A. Provata
Non-extensive Trends in the Size Distribution of Coding and Non-coding DNA Sequences in the Human Genome
13 pages, 10 figures, 2 tables
null
10.1140/epjb/e2006-00121-2
null
q-bio.GN
null
We study the primary DNA structure of four of the most completely sequenced human chromosomes (including chromosome 19 which is the most dense in coding), using Non-extensive Statistics. We show that the exponents governing the decay of the coding size distributions vary between $5.2 \le r \le 5.7$ for the short scales and $1.45 \le q \le 1.50$ for the large scales. On the contrary, the exponents governing the decay of the non-coding size distributions in these four chromosomes, take the values $2.4 \le r \le 3.2$ for the short scales and $1.50 \le q \le 1.72$ for the large scales. This quantitative difference, in particular in the tail exponent $q$, indicates that the non-coding (coding) size distributions have long (short) range correlations. This non-trivial difference in the DNA statistics is attributed to the non-conservative (conservative) evolution dynamics acting on the non-coding (coding) DNA sequences.
[ { "created": "Tue, 11 Oct 2005 19:21:01 GMT", "version": "v1" } ]
2009-11-11
[ [ "Oikonomou", "Th.", "" ], [ "Provata", "A.", "" ] ]
We study the primary DNA structure of four of the most completely sequenced human chromosomes (including chromosome 19 which is the most dense in coding), using Non-extensive Statistics. We show that the exponents governing the decay of the coding size distributions vary between $5.2 \le r \le 5.7$ for the short scales and $1.45 \le q \le 1.50$ for the large scales. On the contrary, the exponents governing the decay of the non-coding size distributions in these four chromosomes, take the values $2.4 \le r \le 3.2$ for the short scales and $1.50 \le q \le 1.72$ for the large scales. This quantitative difference, in particular in the tail exponent $q$, indicates that the non-coding (coding) size distributions have long (short) range correlations. This non-trivial difference in the DNA statistics is attributed to the non-conservative (conservative) evolution dynamics acting on the non-coding (coding) DNA sequences.
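A sketch of the measurement behind the exponents reported above: collect the lengths of coding and non-coding segments from an annotation and estimate a tail exponent from the slope of the size distribution on log-log axes. The synthetic segment lengths and the crude binned fit stand in for the real chromosome data and the non-extensive fitting procedure.

```python
# Tail-exponent estimate for coding vs non-coding segment sizes (illustrative sketch).
import numpy as np

rng = np.random.default_rng(3)
noncoding_gaps = np.ceil(rng.pareto(1.5, size=3000) * 50).astype(int)   # heavy-tailed
coding_blocks = rng.integers(100, 2000, size=3000)                      # narrow range

def tail_exponent(sizes, n_bins=30):
    """Slope of log(count) vs log(size) over logarithmic bins (crude estimate)."""
    bins = np.logspace(np.log10(sizes.min()), np.log10(sizes.max()), n_bins)
    hist, edges = np.histogram(sizes, bins=bins)
    centers = np.sqrt(edges[:-1] * edges[1:])
    keep = hist > 0
    slope, _ = np.polyfit(np.log10(centers[keep]), np.log10(hist[keep]), 1)
    return -slope

print("non-coding tail exponent ~", round(tail_exponent(noncoding_gaps), 2))
print("coding tail exponent     ~", round(tail_exponent(coding_blocks), 2))
```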
1801.06752
David Nguyen
David H. Nguyen
Translating the Architectural Complexity of the Colon or Polyp into a Sinusoid Wave for Classification via the Fast Fourier Transform
null
null
null
null
q-bio.QM q-bio.TO
http://creativecommons.org/licenses/by-nc-sa/4.0/
There is no method to quantify the spatial complexity within colon polyps. This paper describes a spatial transformation that translates the tissue architecture within a polyp, or a normal colon lining, into a complex sinusoid wave composed of discrete points. This sinusoid wave can then undergo the Fast Fourier Transform to obtain a spectrum of frequencies that represents the sinusoid wave. This spectrum can then serve as a signature of the spatial complexity [an index] within the polyp. By overlaying vertical lines that radiate from the bottom middle [like a fold-out fan] of an image of a polyp stained by hematoxylin and eosin, the image is segmented into sectors. Each vertical line also forms an angle with the horizontal axis of the image, ranging from 0 degrees to 180 degrees rising counter clockwise. Each vertical line will intersect with various features of the polyp [border of lumens, border of epithelial lining]. Each of these intersections is a point that can be characterized by its distance from the origin [this distance is also a magnitude of that point]. Thus, each intersection between radial line and polyp feature can be mapped by polar coordinates [radius length, angle measure]. By summing the distance of all points along the same radial line, each radial line that divides the image becomes one value. Plotting these values [y variable] against the angle of each radial line from the horizontal axis [x variable] results in a sinusoid wave consisting of discrete points. This method is referred to as the Linearized Compressed Polar Coordinates [LCPC] Transform. The LCPC transform, in conjunction with the Fast Fourier Transform, can reduce the complexity of visually hidden histological grades in colon polyps into categories of similar wave frequencies [each histological grade has a signature consisting of a handful of frequencies].
[ { "created": "Sun, 21 Jan 2018 02:18:51 GMT", "version": "v1" } ]
2018-01-23
[ [ "Nguyen", "David H.", "" ] ]
There is no method to quantify the spatial complexity within colon polyps. This paper describes a spatial transformation that translates the tissue architecture within a polyp, or a normal colon lining, into a complex sinusoid wave composed of discrete points. This sinusoid wave can then undergo the Fast Fourier Transform to obtain a spectrum of frequencies that represents the sinusoid wave. This spectrum can then serve as a signature of the spatial complexity [an index] within the polyp. By overlaying vertical lines that radiate from the bottom middle [like a fold-out fan] of an image of a polyp stained by hematoxylin and eosin, the image is segmented into sectors. Each vertical line also forms an angle with the horizontal axis of the image, ranging from 0 degrees to 180 degrees rising counter clockwise. Each vertical line will intersect with various features of the polyp [border of lumens, border of epithelial lining]. Each of these intersections is a point that can be characterized by its distance from the origin [this distance is also a magnitude of that point]. Thus, each intersection between radial line and polyp feature can be mapped by polar coordinates [radius length, angle measure]. By summing the distance of all points along the same radial line, each radial line that divides the image becomes one value. Plotting these values [y variable] against the angle of each radial line from the horizontal axis [x variable] results in a sinusoid wave consisting of discrete points. This method is referred to as the Linearized Compressed Polar Coordinates [LCPC] Transform. The LCPC transform, in conjunction with the Fast Fourier Transform, can reduce the complexity of visually hidden histological grades in colon polyps into categories of similar wave frequencies [each histological grade has a signature consisting of a handful of frequencies].
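A numpy sketch of the LCPC idea described above: cast radial lines from a bottom-middle origin across a binary feature map, sum the radii of the feature points each line intersects, and take the FFT of the resulting angle signal. The synthetic concentric-border image is an illustrative stand-in for a stained polyp section.

```python
# Linearized Compressed Polar Coordinates (LCPC) transform (illustrative sketch).
import numpy as np

h, w = 200, 200
yy, xx = np.mgrid[0:h, 0:w]
origin = np.array([h - 1, w // 2])                            # bottom-middle origin
r = np.hypot(yy - origin[0], xx - origin[1])
features = (np.abs(r - 80) < 1.5) | (np.abs(r - 120) < 1.5)   # two tissue borders

ys, xs = np.nonzero(features)
radii = np.hypot(ys - origin[0], xs - origin[1])
angles = np.degrees(np.arctan2(origin[0] - ys, xs - origin[1]))   # 0 to 180 degrees

n_lines = 180
signal = np.zeros(n_lines)
for k in range(n_lines):                                      # one radial line per degree
    on_line = np.abs(angles - (k + 0.5)) < 0.5
    signal[k] = radii[on_line].sum()                          # sum of radii on this line

spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
print("dominant frequency indices:", np.argsort(spectrum)[::-1][:5])
```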
1410.1608
Raunaq Malhotra
Raunaq Malhotra, Daniel Elleder, Le Bao, David R Hunter, Raj Acharya, Mary Poss
Clustering pipeline for determining consensus sequences in targeted next-generation sequencing
6 pages, 4 figures, Paper accepted at conference BiCoB 2016 (http://www.cs.umb.edu/bicob/). Data and scripts available from https://github.com/raunaqm/ClusteringPipeline
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Analyses of targeted genomic sequencing data from next-generation sequencing (NGS) technologies typically involve mapping reads to a reference sequence or clustering reads. For a number of species a reference genome is not available, so analysis of targeted sequencing data, for example of polymorphic structural variation caused by mobile elements, is difficult; clustering methods are preferred for such data. Clustering of reads requires a clustering threshold parameter, which is used to compare and group reads. However, determining the optimal clustering threshold for a read dataset is challenging because of different sequence composition, the number of sequences present, and also the amount of sequencing errors in the dataset. High values of the clustering threshold parameter can falsely inflate the number of recovered genomic regions, while low values of the clustering threshold can merge reads from distinct regions into a single cluster. Thus, an algorithm that can empirically determine the clustering threshold is needed. We propose a pipeline for clustering genomic sequences wherein the clustering threshold is empirically determined from the NGS data. The optimal threshold is decided based on two internal clustering measures which assess clusters for small intra-cluster diameters and large inter-cluster distances. We evaluate the pipeline on two simulated datasets derived from the human genome sequence, simulating different genomic regions and sequencing depths. The total number of clusters obtained from our pipeline is closer to the actual number of reference sequences when compared to a single round of clustering. Also, the number of clusters whose consensus sequence matches a corresponding reference sequence is higher in our pipeline. We observe that the presence of repeat regions affects clustering accuracy.
[ { "created": "Tue, 7 Oct 2014 03:50:26 GMT", "version": "v1" }, { "created": "Tue, 10 Mar 2015 17:32:34 GMT", "version": "v2" }, { "created": "Sat, 13 Feb 2016 18:22:09 GMT", "version": "v3" } ]
2016-02-16
[ [ "Malhotra", "Raunaq", "" ], [ "Elleder", "Daniel", "" ], [ "Bao", "Le", "" ], [ "Hunter", "David R", "" ], [ "Acharya", "Raj", "" ], [ "Poss", "Mary", "" ] ]
Analyses of targeted genomic sequencing data from next-generation sequencing (NGS) technologies typically involve mapping reads to a reference sequence or clustering reads. For a number of species a reference genome is not available, so analysis of targeted sequencing data, for example of polymorphic structural variation caused by mobile elements, is difficult; clustering methods are preferred for such data. Clustering of reads requires a clustering threshold parameter, which is used to compare and group reads. However, determining the optimal clustering threshold for a read dataset is challenging because of different sequence composition, the number of sequences present, and also the amount of sequencing errors in the dataset. High values of the clustering threshold parameter can falsely inflate the number of recovered genomic regions, while low values of the clustering threshold can merge reads from distinct regions into a single cluster. Thus, an algorithm that can empirically determine the clustering threshold is needed. We propose a pipeline for clustering genomic sequences wherein the clustering threshold is empirically determined from the NGS data. The optimal threshold is decided based on two internal clustering measures which assess clusters for small intra-cluster diameters and large inter-cluster distances. We evaluate the pipeline on two simulated datasets derived from the human genome sequence, simulating different genomic regions and sequencing depths. The total number of clusters obtained from our pipeline is closer to the actual number of reference sequences when compared to a single round of clustering. Also, the number of clusters whose consensus sequence matches a corresponding reference sequence is higher in our pipeline. We observe that the presence of repeat regions affects clustering accuracy.
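A sketch of the empirical threshold selection described above: cluster reads hierarchically, cut the tree at a range of distance thresholds, and score each cut by its largest intra-cluster diameter (small is good) and smallest inter-cluster distance (large is good). Random feature vectors stand in for sequence reads, and the threshold grid is arbitrary.

```python
# Sweeping a clustering threshold and scoring cuts with internal measures (sketch).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(4)
reads = np.vstack([rng.normal(c, 0.3, size=(30, 10)) for c in (0.0, 2.0, 4.0)])
D = squareform(pdist(reads))                 # pairwise distance matrix
Z = linkage(pdist(reads), method="average")

def score(labels, D):
    clusters = [np.where(labels == c)[0] for c in np.unique(labels)]
    intra = max(D[np.ix_(c, c)].max() for c in clusters)   # largest cluster diameter
    if len(clusters) < 2:
        return intra, 0.0
    inter = min(D[np.ix_(a, b)].min()
                for i, a in enumerate(clusters) for b in clusters[i + 1:])
    return intra, inter

for t in (0.5, 1.0, 2.0, 4.0):
    labels = fcluster(Z, t=t, criterion="distance")
    intra, inter = score(labels, D)
    print(f"threshold {t}: {len(np.unique(labels))} clusters, "
          f"max intra-diameter {intra:.2f}, min inter-distance {inter:.2f}")
```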
0912.3208
Ralph Stern
Ralph Stern
The Risk Distribution Curve and its Derivatives
11 pages, 3 figures
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Risk stratification is most directly and informatively summarized as a risk distribution curve. From this curve the ROC curve, predictiveness curve, and other curves depicting risk stratification can be derived, demonstrating that they present similar information. A mathematical expression for the ROC curve AUC is derived which clarifies how this measure of discrimination quantifies the overlap between patients who have and don't have events. This expression is used to define the positive correlation between the dispersion of the risk distribution curve and the ROC curve AUC. As more disperse risk distributions and greater separation between patients with and without events characterize superior risk stratification, the ROC curve AUC provides useful information.
[ { "created": "Wed, 16 Dec 2009 17:28:08 GMT", "version": "v1" } ]
2009-12-17
[ [ "Stern", "Ralph", "" ] ]
Risk stratification is most directly and informatively summarized as a risk distribution curve. From this curve the ROC curve, predictiveness curve, and other curves depicting risk stratification can be derived, demonstrating that they present similar information. A mathematical expression for the ROC curve AUC is derived which clarifies how this measure of discrimination quantifies the overlap between patients who have and don't have events. This expression is used to define the positive correlation between the dispersion of the risk distribution curve and the ROC curve AUC. As more disperse risk distributions and greater separation between patients with and without events characterize superior risk stratification, the ROC curve AUC provides useful information.
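A short numeric check of the interpretation above: the ROC AUC equals the probability that a randomly chosen patient with an event has a higher risk score than a randomly chosen patient without one, so it directly measures how little the two risk distributions overlap. The simulated risk scores are illustrative.

```python
# Two equivalent ways of computing the ROC AUC from risk scores (illustrative sketch).
import numpy as np

rng = np.random.default_rng(5)
risk_events    = rng.beta(4, 2, size=2000)     # risks among patients with events
risk_nonevents = rng.beta(2, 4, size=2000)     # risks among patients without events

# Pairwise-comparison form of the AUC (ties counted as 1/2):
diff = risk_events[:, None] - risk_nonevents[None, :]
auc_pairwise = (diff > 0).mean() + 0.5 * (diff == 0).mean()

# Rank-based (Mann-Whitney) form of the same quantity:
scores = np.concatenate([risk_events, risk_nonevents])
ranks = scores.argsort().argsort() + 1
n1, n2 = len(risk_events), len(risk_nonevents)
auc_rank = (ranks[:n1].sum() - n1 * (n1 + 1) / 2) / (n1 * n2)

print(f"AUC (pairwise) = {auc_pairwise:.4f},  AUC (rank-based) = {auc_rank:.4f}")
```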
1401.6524
Bernhard Mehlig
A. Eriksson, F. Elias Wolff, B. Mehlig, A. Manica
The emergence of the rescue effect from the interaction of local stochastic dynamics within a metapopulation
12 pages, 5 figures, electronic supplementary material
Proc. Roy.Soc. B 2014 281 1780 20133127
10.1098/rspb.2013.3127
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Immigration can rescue local populations from extinction, helping to stabilise a metapopulation. Local population dynamics is important for determining the strength of this rescue effect, but the mechanistic link between local demographic parameters and the rescue effect at the metapopulation level has received very little attention by modellers. We develop an analytical framework that allows us to describe the emergence of the rescue effect from interacting local stochastic dynamics. We show this framework to be applicable to a wide range of spatial scales, providing a powerful and convenient alternative to individual-based models for making predictions concerning the fate of metapopulations. We show that the rescue effect plays an important role in minimising the increase in local extinction probability associated with high demographic stochasticity, but its role is more limited in the case of high local environmental stochasticity of recruitment or survival. While most models postulate the rescue effect, our framework provides an explicit mechanistic link between local dynamics and the emergence of the rescue effect, and more generally the stability of the whole metapopulation.
[ { "created": "Sat, 25 Jan 2014 12:35:52 GMT", "version": "v1" } ]
2014-02-24
[ [ "Eriksson", "A.", "" ], [ "Wolff", "F. Elias", "" ], [ "Mehlig", "B.", "" ], [ "Manica", "A.", "" ] ]
Immigration can rescue local populations from extinction, helping to stabilise a metapopulation. Local population dynamics is important for determining the strength of this rescue effect, but the mechanistic link between local demographic parameters and the rescue effect at the metapopulation level has received very little attention by modellers. We develop an analytical framework that allows us to describe the emergence of the rescue effect from interacting local stochastic dynamics. We show this framework to be applicable to a wide range of spatial scales, providing a powerful and convenient alternative to individual-based models for making predictions concerning the fate of metapopulations. We show that the rescue effect plays an important role in minimising the increase in local extinction probability associated with high demographic stochasticity, but its role is more limited in the case of high local environmental stochasticity of recruitment or survival. While most models postulate the rescue effect, our framework provides an explicit mechanistic link between local dynamics and the emergence of the rescue effect, and more generally the stability of the whole metapopulation.
1909.07786
Giuseppe Jurman
Andrea Gobbi, Marco Cristoforetti, Giuseppe Jurman, Cesare Furlanello
High Resolution Forecasting of Heat Waves impacts on Leaf Area Index by Multiscale Multitemporal Deep Learning
null
null
null
null
q-bio.PE cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Climate change impacts could cause a progressive decrease in crop quality and yield, up to harvest failures. In particular, heat waves and other climate extremes can lead to localized food shortages and even threaten food security of communities worldwide. In this study, we apply a deep learning architecture for high resolution forecasting (300 m, 10 days) of the Leaf Area Index (LAI), whose dynamics have been widely used to model the growth phase of crops and the impact of heat waves. LAI models can be computed at 0.1 degree spatial resolution with an autoregressive component adjusted for weather conditions and validated with remote sensing measurements. However, model actionability is poor in regions of varying terrain morphology at this scale (about 8 km at the latitude of the Alps). Our deep learning model aims instead at forecasting LAI by training on multiscale multitemporal (MSMT) data from the Copernicus Global Land Service (CGLS) project covering all of Europe at 300 m resolution, together with medium-resolution historical weather data. Further, the deep learning model inputs integrate high-resolution land surface features, known to improve forecasts of agricultural productivity. The historical weather data are then replaced with forecast values to predict LAI values at a 10-day horizon over Europe. We propose the MSMT model to develop a high resolution crop-specific warning system for mitigating damage due to heat waves and other extreme events.
[ { "created": "Fri, 13 Sep 2019 15:06:33 GMT", "version": "v1" } ]
2019-09-18
[ [ "Gobbi", "Andrea", "" ], [ "Cristoforetti", "Marco", "" ], [ "Jurman", "Giuseppe", "" ], [ "Furlanello", "Cesare", "" ] ]
Climate change impacts could cause a progressive decrease in crop quality and yield, up to harvest failures. In particular, heat waves and other climate extremes can lead to localized food shortages and even threaten food security of communities worldwide. In this study, we apply a deep learning architecture for high resolution forecasting (300 m, 10 days) of the Leaf Area Index (LAI), whose dynamics have been widely used to model the growth phase of crops and the impact of heat waves. LAI models can be computed at 0.1 degree spatial resolution with an autoregressive component adjusted for weather conditions and validated with remote sensing measurements. However, model actionability is poor in regions of varying terrain morphology at this scale (about 8 km at the latitude of the Alps). Our deep learning model aims instead at forecasting LAI by training on multiscale multitemporal (MSMT) data from the Copernicus Global Land Service (CGLS) project covering all of Europe at 300 m resolution, together with medium-resolution historical weather data. Further, the deep learning model inputs integrate high-resolution land surface features, known to improve forecasts of agricultural productivity. The historical weather data are then replaced with forecast values to predict LAI values at a 10-day horizon over Europe. We propose the MSMT model to develop a high resolution crop-specific warning system for mitigating damage due to heat waves and other extreme events.
1611.09950
Mike Steel Prof.
Mike Steel and Christoph Leuenberger
The optimal rate for resolving a near-polytomy in a phylogeny
12 pages, 5 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The reconstruction of phylogenetic trees from discrete character data typically relies on models that assume the characters evolve under a continuous-time Markov process operating at some overall rate $\lambda$. When $\lambda$ is too high or too low, it becomes difficult to distinguish a short interior edge from a polytomy (the tree that results from collapsing the edge). In this note, we investigate the rate that maximizes the expected log-likelihood ratio (i.e. the Kullback--Leibler separation) between the four-leaf unresolved (star) tree and a four-leaf binary tree with interior edge length $\epsilon$. For a simple two-state model, we show that as $\epsilon$ converges to $0$ the optimal rate also converges to zero when the four pendant edges have equal length. However, when the four pendant branches have unequal length, two local optima can arise, and it is possible for the globally optimal rate to converge to a non-zero constant as $\epsilon \rightarrow 0$. Moreover, in the setting where the four pendant branches have equal lengths and either (i) we replace the two-state model by an infinite-state model or (ii) we retain the two-state model and replace the Kullback--Leibler separation by Euclidean distance as the maximization goal, then the optimal rate also converges to a non-zero constant.
[ { "created": "Wed, 30 Nov 2016 00:10:00 GMT", "version": "v1" }, { "created": "Sat, 21 Jan 2017 18:21:21 GMT", "version": "v2" }, { "created": "Fri, 10 Mar 2017 17:24:30 GMT", "version": "v3" } ]
2017-03-13
[ [ "Steel", "Mike", "" ], [ "Leuenberger", "Christoph", "" ] ]
The reconstruction of phylogenetic trees from discrete character data typically relies on models that assume the characters evolve under a continuous-time Markov process operating at some overall rate $\lambda$. When $\lambda$ is too high or too low, it becomes difficult to distinguish a short interior edge from a polytomy (the tree that results from collapsing the edge). In this note, we investigate the rate that maximizes the expected log-likelihood ratio (i.e. the Kullback--Leibler separation) between the four-leaf unresolved (star) tree and a four-leaf binary tree with interior edge length $\epsilon$. For a simple two-state model, we show that as $\epsilon$ converges to $0$ the optimal rate also converges to zero when the four pendant edges have equal length. However, when the four pendant branches have unequal length, two local optima can arise, and it is possible for the globally optimal rate to converge to a non-zero constant as $\epsilon \rightarrow 0$. Moreover, in the setting where the four pendant branches have equal lengths and either (i) we replace the two-state model by an infinite-state model or (ii) we retain the two-state model and replace the Kullback--Leibler separation by Euclidean distance as the maximization goal, then the optimal rate also converges to a non-zero constant.
2401.01371
Cameron Zachreson
Cameron Zachreson, Ruarai Tobin, Camelia Walker, Eamon Conway, Freya M Shearer, Jodie McVernon, Nicholas Geard
A model-based assessment of social isolation practices for COVID-19 outbreak response in residential care facilities
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We apply a computational modelling approach to investigate the relative effectiveness of general isolation practices for mitigation of COVID-19 outbreaks in residential care facilities. Our study focuses on policies intended to reduce contact between residents, without regard to confirmed infection status. Despite the ubiquity of such policies, and their controversial association with adverse physical and mental health outcomes, little evidence exists evaluating their effectiveness at mitigating outbreaks. Through detailed simulations of COVID-19 outbreaks in residential care facilities, our results demonstrate that general isolation of residents provides little additional impact beyond what is achievable through isolation of confirmed cases and deployment of personal protective equipment.
[ { "created": "Fri, 29 Dec 2023 01:17:58 GMT", "version": "v1" } ]
2024-01-04
[ [ "Zachreson", "Cameron", "" ], [ "Tobin", "Ruarai", "" ], [ "Walker", "Camelia", "" ], [ "Conway", "Eamon", "" ], [ "Shearer", "Freya M", "" ], [ "McVernon", "Jodie", "" ], [ "Geard", "Nicholas", "" ] ]
We apply a computational modelling approach to investigate the relative effectiveness of general isolation practices for mitigation of COVID-19 outbreaks in residential care facilities. Our study focuses on policies intended to reduce contact between residents, without regard to confirmed infection status. Despite the ubiquity of such policies, and their controversial association with adverse physical and mental health outcomes, little evidence exists evaluating their effectiveness at mitigating outbreaks. Through detailed simulations of COVID-19 outbreaks in residential care facilities, our results demonstrate that general isolation of residents provides little additional impact beyond what is achievable through isolation of confirmed cases and deployment of personal protective equipment.
1403.5477
Manish Gupta
Nilay Chheda and Manish K Gupta
RNA as a Permutation
9 pages, 5 figures, 3 tables
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
RNA secondary structure prediction and classification are two important problems in the field of RNA biology. Here, we propose a new permutation-based approach to create logical non-disjoint clusters of different secondary structures of a single class or type. Many different types of techniques exist to classify RNA secondary structure data, but none of them has ever used a permutation-based approach, which is very simple and yet powerful. We have written a small Java program to generate permutations, apply our algorithm to those permutations, analyze the data, and create different logical clusters. We believe that these clusters can be utilized to untangle the mystery of RNA secondary structure and analyze the development patterns of unknown RNAs.
[ { "created": "Thu, 20 Mar 2014 13:16:49 GMT", "version": "v1" } ]
2014-03-24
[ [ "Chheda", "Nilay", "" ], [ "Gupta", "Manish K", "" ] ]
RNA secondary structure prediction and classification are two important problems in the field of RNA biology. Here, we propose a new permutation-based approach to create logical non-disjoint clusters of different secondary structures of a single class or type. Many different types of techniques exist to classify RNA secondary structure data, but none of them has ever used a permutation-based approach, which is very simple and yet powerful. We have written a small Java program to generate permutations, apply our algorithm to those permutations, analyze the data, and create different logical clusters. We believe that these clusters can be utilized to untangle the mystery of RNA secondary structure and analyze the development patterns of unknown RNAs.
0911.3710
Nivedita Deo
I. Garg and N. Deo
Scaling, phase transition and genus distribution functions in matrix models of RNA with linear external interactions
20 pages, 22 figures, 1 table
null
null
null
q-bio.BM cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A linear external perturbation is introduced in the action of the partition function of the random matrix model of RNA [G. Vernizzi, H. Orland and A. Zee, Phys. Rev. Lett. 94, 168103 (2005)]. It is seen that (i). the perturbation distinguishes between paired and unpaired bases in that there are structural changes, from unpaired and paired base structures ($0 \leq \alpha < 1$) to completely paired base structures ($\alpha=1$), as the perturbation parameter $\alpha$ approaches 1 ($\alpha$ is the ratio of interaction strengths of original and perturbed terms in the action of the partition function), (ii). the genus distributions exhibit small differences for small even and odd lengths $L$, (iii). the partition function of the linear interacting matrix model is related via a scaling formula to the re-scaled partition function of the random matrix model of RNA, (iv). the free energy and specific heat are plotted as functions of $L$, $\alpha$ and temperature $T$ and their first derivative with respect to $\alpha$ is plotted as a function of $\alpha$. The free energy shows a phase transition at $\alpha=1$ for odd (both small and large) lengths and for even lengths the transition at $\alpha=1$ gets sharper and sharper as more pseudoknots are included (that is for large lengths).
[ { "created": "Thu, 19 Nov 2009 06:58:09 GMT", "version": "v1" } ]
2009-11-20
[ [ "Garg", "I.", "" ], [ "Deo", "N.", "" ] ]
A linear external perturbation is introduced in the action of the partition function of the random matrix model of RNA [G. Vernizzi, H. Orland and A. Zee, Phys. Rev. Lett. 94, 168103 (2005)]. It is seen that (i). the perturbation distinguishes between paired and unpaired bases in that there are structural changes, from unpaired and paired base structures ($0 \leq \alpha < 1$) to completely paired base structures ($\alpha=1$), as the perturbation parameter $\alpha$ approaches 1 ($\alpha$ is the ratio of interaction strengths of original and perturbed terms in the action of the partition function), (ii). the genus distributions exhibit small differences for small even and odd lengths $L$, (iii). the partition function of the linear interacting matrix model is related via a scaling formula to the re-scaled partition function of the random matrix model of RNA, (iv). the free energy and specific heat are plotted as functions of $L$, $\alpha$ and temperature $T$ and their first derivative with respect to $\alpha$ is plotted as a function of $\alpha$. The free energy shows a phase transition at $\alpha=1$ for odd (both small and large) lengths and for even lengths the transition at $\alpha=1$ gets sharper and sharper as more pseudoknots are included (that is for large lengths).
1906.01703
Jiawei Zhang
Jiawei Zhang
Basic Neural Units of the Brain: Neurons, Synapses and Action Potential
38 pages, 23 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As a follow-up tutorial article of [29], in this paper, we will introduce the basic compositional units of the human brain, which will further illustrate the cell-level bio-structure of the brain. On average, the human brain contains about 100 billion neurons and many more neuroglia which serve to support and protect the neurons. Each neuron may be connected to up to 10,000 other neurons, passing signals to each other via as many as 1,000 trillion synapses. In the nervous system, a synapse is a structure that permits a neuron to pass an electrical or chemical signal to another neuron or to the target effector cell. Such signals will be accumulated as the membrane potential of the neurons, and it will trigger and pass the signal pulse (i.e., action potential) to other neurons when the membrane potential is greater than a precisely defined threshold voltage. To be more specific, in this paper, we will talk about the neurons, synapses and the action potential concepts in detail. Many of the materials used in this paper are from wikipedia and several other neuroscience introductory articles, which will be properly cited in this paper. This is the second of the three tutorial articles about the brain (the other two are [29] and [28]). The readers are suggested to read the previous tutorial article [29] to get more background information about the brain structure and functions prior to reading this paper.
[ { "created": "Thu, 30 May 2019 23:11:57 GMT", "version": "v1" } ]
2019-06-06
[ [ "Zhang", "Jiawei", "" ] ]
As a follow-up tutorial article of [29], in this paper, we will introduce the basic compositional units of the human brain, which will further illustrate the cell-level bio-structure of the brain. On average, the human brain contains about 100 billion neurons and many more neuroglia which serve to support and protect the neurons. Each neuron may be connected to up to 10,000 other neurons, passing signals to each other via as many as 1,000 trillion synapses. In the nervous system, a synapse is a structure that permits a neuron to pass an electrical or chemical signal to another neuron or to the target effector cell. Such signals will be accumulated as the membrane potential of the neurons, and it will trigger and pass the signal pulse (i.e., action potential) to other neurons when the membrane potential is greater than a precisely defined threshold voltage. To be more specific, in this paper, we will talk about the neurons, synapses and the action potential concepts in detail. Many of the materials used in this paper are from wikipedia and several other neuroscience introductory articles, which will be properly cited in this paper. This is the second of the three tutorial articles about the brain (the other two are [29] and [28]). The readers are suggested to read the previous tutorial article [29] to get more background information about the brain structure and functions prior to reading this paper.
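A classic leaky integrate-and-fire sketch of the threshold behaviour described above: synaptic input charges the membrane potential, and an action potential is emitted only when the potential crosses a fixed threshold, after which it resets. Parameter values are illustrative, not physiological fits taken from the article.

```python
# Leaky integrate-and-fire neuron with threshold-triggered spikes (illustrative sketch).
import numpy as np

dt, T = 0.1, 200.0                                 # time step and duration (ms)
v_rest, v_thresh, v_reset = -70.0, -54.0, -70.0    # membrane potentials (mV)
tau_m, r_m = 10.0, 10.0                            # time constant (ms), resistance (MOhm)

steps = int(T / dt)
rng = np.random.default_rng(6)
i_syn = 1.8 + 0.6 * rng.standard_normal(steps)     # noisy synaptic current (nA)

v = v_rest
spike_times = []
for k in range(steps):
    v += (-(v - v_rest) + r_m * i_syn[k]) / tau_m * dt
    if v >= v_thresh:                              # threshold crossed: fire and reset
        spike_times.append(k * dt)
        v = v_reset

print(f"{len(spike_times)} action potentials in {T:.0f} ms; "
      f"first few at (ms): {np.round(spike_times[:5], 1)}")
```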
2002.09035
Abdelrahman Zayed
Abdelrahman Zayed, Yasser Iturria-Medina, Arno Villringer, Bernhard Sehm and Christopher J. Steele
Rapid Quantification of White Matter Disconnection in the Human Brain
2020 42nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With an estimated five million new stroke survivors every year and a rapidly aging population suffering from hyperintensities and diseases of presumed vascular origin that affect white matter and contribute to cognitive decline, it is critical that we understand the impact of white matter damage on brain structure and behavior. Current techniques for assessing the impact of lesions consider only location, type, and extent, while ignoring how the affected region was connected to the rest of the brain. Regional brain function is a product of both local structure and its connectivity. Therefore, obtaining a map of white matter disconnection is a crucial step that could help us predict the behavioral deficits that patients exhibit. In the present work, we introduce a new practical method for computing lesion-based white matter disconnection maps that require only moderate computational resources. We achieve this by creating diffusion tractography models of the brains of healthy adults and assessing the connectivity between small regions. We then interrupt these connectivity models by projecting patients' lesions into them to compute predicted white matter disconnection. A quantified disconnection map can be computed for an individual patient in approximately 35 seconds using a single core CPU-based computation. In comparison, a similar quantification performed with other tools provided by MRtrix3 takes 5.47 minutes.
[ { "created": "Mon, 17 Feb 2020 01:57:21 GMT", "version": "v1" }, { "created": "Wed, 20 May 2020 02:19:47 GMT", "version": "v2" } ]
2020-05-21
[ [ "Zayed", "Abdelrahman", "" ], [ "Iturria-Medina", "Yasser", "" ], [ "Villringer", "Arno", "" ], [ "Sehm", "Bernhard", "" ], [ "Steele", "Christopher J.", "" ] ]
With an estimated five million new stroke survivors every year and a rapidly aging population suffering from hyperintensities and diseases of presumed vascular origin that affect white matter and contribute to cognitive decline, it is critical that we understand the impact of white matter damage on brain structure and behavior. Current techniques for assessing the impact of lesions consider only location, type, and extent, while ignoring how the affected region was connected to the rest of the brain. Regional brain function is a product of both local structure and its connectivity. Therefore, obtaining a map of white matter disconnection is a crucial step that could help us predict the behavioral deficits that patients exhibit. In the present work, we introduce a new practical method for computing lesion-based white matter disconnection maps that require only moderate computational resources. We achieve this by creating diffusion tractography models of the brains of healthy adults and assessing the connectivity between small regions. We then interrupt these connectivity models by projecting patients' lesions into them to compute predicted white matter disconnection. A quantified disconnection map can be computed for an individual patient in approximately 35 seconds using a single core CPU-based computation. In comparison, a similar quantification performed with other tools provided by MRtrix3 takes 5.47 minutes.
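A simplified numpy sketch of the disconnection idea described above: given streamlines labelled by the region pair they connect, count what fraction of each connection's streamlines pass through a lesion mask. The random straight-line streamlines, region labels, and cubic lesion are all invented for illustration and are not the paper's tractography template.

```python
# Fraction of streamlines per connection disrupted by a lesion mask (illustrative sketch).
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(7)
shape = (40, 40, 40)
lesion = np.zeros(shape, dtype=bool)
lesion[15:25, 15:25, 15:25] = True                       # a cubic "lesion"

def random_streamline():
    start = rng.integers(0, 40, size=3)
    end = rng.integers(0, 40, size=3)
    pts = np.linspace(start, end, num=50)                 # straight-line toy streamline
    return np.clip(np.round(pts).astype(int), 0, 39)

disconnection = defaultdict(lambda: [0, 0])               # (hit, total) per region pair
for _ in range(2000):
    pair = tuple(sorted(rng.integers(0, 5, size=2)))      # labels of connected regions
    sl = random_streamline()
    hit = lesion[sl[:, 0], sl[:, 1], sl[:, 2]].any()
    disconnection[pair][0] += int(hit)
    disconnection[pair][1] += 1

for pair, (hit, total) in sorted(disconnection.items()):
    print(f"regions {pair}: {100 * hit / total:.1f}% of streamlines disrupted")
```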
1810.04256
Jingjing Lyu
Jingjing Lyu, Linda A. Auker, Anupam Priyadarshi and Rana D. Parshad
The effects of invasive epibionts on crab-mussel communities: a theoretical approach to understand mussel population decline
33 pages, 24 figures
null
10.1142/S0218339020500060
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Blue mussels (Mytilus edulis) are an important keystone species that have been declining in the Gulf of Maine. This could be attributed to a variety of complex factors, such as indirect effects due to invasion by epibionts, which remain unexplored mathematically. Based on classical optimal foraging theory and the anti-fouling defense mechanisms of mussels, we derive an ODE model for crab-mussel interactions in the presence of an invasive epibiont, Didemnum vexillum. The dynamical analysis leads to results on stability, global boundedness and bifurcations of the model. Next, via optimal control methods, we predict various ecological outcomes. Our results have key implications for preserving mussel populations in the event of invasion by non-native epibionts. In particular, they help us understand the changing dynamics of local predator-prey communities due to indirect effects that epibionts confer.
[ { "created": "Tue, 9 Oct 2018 21:46:50 GMT", "version": "v1" } ]
2021-08-06
[ [ "Lyu", "Jingjing", "" ], [ "Auker", "Linda A.", "" ], [ "Priyadarshi", "Anupam", "" ], [ "Parshad", "Rana D.", "" ] ]
Blue mussels (Mytilus edulis) are an important keystone species that have been declining in the Gulf of Maine. This could be attributed to a variety of complex factors, such as indirect effects due to invasion by epibionts, which remain unexplored mathematically. Based on classical optimal foraging theory and the anti-fouling defense mechanisms of mussels, we derive an ODE model for crab-mussel interactions in the presence of an invasive epibiont, Didemnum vexillum. The dynamical analysis leads to results on stability, global boundedness and bifurcations of the model. Next, via optimal control methods, we predict various ecological outcomes. Our results have key implications for preserving mussel populations in the event of invasion by non-native epibionts. In particular, they help us understand the changing dynamics of local predator-prey communities due to indirect effects that epibionts confer.
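The paper's specific ODE model is not reproduced here; as a hedged illustration of how such a crab-mussel system with an epibiont-modified predation rate might be set up and integrated, consider the hypothetical Lotka-Volterra-type sketch below, where the fouled fraction of mussels lowers the crab's effective attack rate (all parameter values are illustrative).

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, r=1.0, K=10.0, a=0.6, e=0.4, m=0.3, phi=0.5, gamma=0.7):
    """Hypothetical mussel (M) - crab (C) system with a fouled fraction phi.

    Fouled mussels are attacked at a reduced rate gamma*a, mimicking an
    epibiont that makes prey less profitable to the predator.
    """
    M, C = y
    attack = a * ((1 - phi) + gamma * phi)       # effective attack rate
    dM = r * M * (1 - M / K) - attack * M * C    # logistic growth minus predation
    dC = e * attack * M * C - m * C              # conversion minus mortality
    return [dM, dC]

sol = solve_ivp(rhs, (0, 200), [5.0, 1.0], max_step=0.5)
print(sol.y[:, -1])  # long-run densities under this illustrative parameter set
```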
1807.10139
Oleg Gradov V.
Oleg V. Gradov, Margaret A. Gradova
Can graphene bilayers be the membrane mimetic materials? "Ion channels" in graphene-based nanostructures
null
Radioelectronics. Nanosystems. Information Technologies, 8(2):154-170, 2016
10.17725/rensit.2016.08.154
null
q-bio.SC cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The prospects of applying graphene and related structures as membrane mimetic materials, capable of reproducing several biomembrane functions up to a certain limit, are analyzed in this series of our papers. This paper considers the possibility of modeling ion channel function using graphene and its derivatives. The physical mechanisms providing selective permeability for different membrane mimetic materials, as well as the limits of adequately simulating the transport, catalytic, sensing and electrogenic properties of cell membrane ion channels using bilayered graphene-based structures, are discussed.
[ { "created": "Mon, 23 Jul 2018 13:06:45 GMT", "version": "v1" } ]
2018-07-27
[ [ "Gradov", "Oleg V.", "" ], [ "Gradova", "Margaret A.", "" ] ]
The prospects of applying graphene and related structures as membrane mimetic materials, capable of reproducing several biomembrane functions up to a certain limit, are analyzed in this series of our papers. This paper considers the possibility of modeling ion channel function using graphene and its derivatives. The physical mechanisms providing selective permeability for different membrane mimetic materials, as well as the limits of adequately simulating the transport, catalytic, sensing and electrogenic properties of cell membrane ion channels using bilayered graphene-based structures, are discussed.
1305.5922
Denis Menshykau
Dagmar Iber, Simon Tanaka, Patrick Fried, Philipp Germann and Denis Menshykau
Simulating Tissue Morphogenesis and Signaling
null
null
null
null
q-bio.TO q-bio.MN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
During embryonic development tissue morphogenesis and signaling are tightly coupled. It is therefore important to simulate both tissue morphogenesis and signaling simultaneously in in silico models of developmental processes. The resolution of the processes depends on the questions of interest. As part of this chapter we will introduce different descriptions of tissue morphogenesis. In the most simple approximation tissue is a continuous domain and tissue expansion is described according to a pre-defined function of time (and possibly space). In a slightly more advanced version the expansion speed and direction of the tissue may depend on a signaling variable that evolves on the domain. Both versions will be referred to as 'prescribed growth'. Alternatively tissue can be regarded as incompressible fluid and can be described with Navier-Stokes equations. Local cell expansion, proliferation, and death are then incorporated by a source term. In other applications the cell boundaries may be important and cell-based models must be introduced. Finally, cells may move within the tissue, a process best described by agent-based models.
[ { "created": "Sat, 25 May 2013 13:56:37 GMT", "version": "v1" } ]
2013-05-28
[ [ "Iber", "Dagmar", "" ], [ "Tanaka", "Simon", "" ], [ "Fried", "Patrick", "" ], [ "Germann", "Philipp", "" ], [ "Menshykau", "Denis", "" ] ]
During embryonic development tissue morphogenesis and signaling are tightly coupled. It is therefore important to simulate both tissue morphogenesis and signaling simultaneously in in silico models of developmental processes. The resolution of the processes depends on the questions of interest. As part of this chapter we will introduce different descriptions of tissue morphogenesis. In the most simple approximation tissue is a continuous domain and tissue expansion is described according to a pre-defined function of time (and possibly space). In a slightly more advanced version the expansion speed and direction of the tissue may depend on a signaling variable that evolves on the domain. Both versions will be referred to as 'prescribed growth'. Alternatively tissue can be regarded as incompressible fluid and can be described with Navier-Stokes equations. Local cell expansion, proliferation, and death are then incorporated by a source term. In other applications the cell boundaries may be important and cell-based models must be introduced. Finally, cells may move within the tissue, a process best described by agent-based models.
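A minimal sketch of the "prescribed growth" description: a signaling variable diffusing on a one-dimensional domain whose length L(t) follows a pre-defined function of time, with growth both rescaling the diffusion term and diluting the concentration. The growth law and parameters are placeholders, not values from the chapter.

```python
import numpy as np

n, D, dt, steps = 100, 1.0, 1e-3, 5000
c = np.zeros(n); c[:5] = 1.0            # initial morphogen concentration
L = lambda t: 10.0 * (1.0 + 0.05 * t)   # prescribed, uniform domain growth

t = 0.0
for _ in range(steps):
    dx = L(t) / n
    lap = (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx**2
    lap[0] = (c[1] - c[0]) / dx**2             # no-flux boundary at the left end
    lap[-1] = (c[-2] - c[-1]) / dx**2          # no-flux boundary at the right end
    growth_rate = (L(t + dt) - L(t)) / (dt * L(t))
    c += dt * (D * lap - growth_rate * c)      # diffusion plus dilution by growth
    t += dt
print(c.max(), c.sum() * L(t) / n)             # profile peak and total amount left
```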
2105.05220
Yuhong Wang
Yuhong Wang, Sam Michael, Ruili Huang, Jinghua Zhao, Katlin Recabo, Danielle Bougie, Qiang Shu, Paul Shinn, Hongmao Sun
Retro Drug Design: From Target Properties to Molecular Structures
27 pages, 6 figures
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Generating drug molecules with desired properties by computational methods is the holy grail in pharmaceutical research. Here we describe an AI strategy, retro drug design, or RDD, to generate novel small molecule drugs from scratch to meet predefined requirements, including but not limited to biological activity against a drug target, and an optimal range of physicochemical and ADMET properties. Traditional predictive models were first trained over experimental data for the target properties, using an atom-typing-based molecular descriptor system, ATP. A Monte Carlo sampling algorithm was then used to find the solutions in the ATP space defined by the target properties, and a Seq2Seq deep learning model was employed to decode molecular structures from the solutions. To test the feasibility of the algorithm, we challenged RDD to generate novel drugs that can activate the {\mu} opioid receptor (MOR) and penetrate the blood brain barrier (BBB). Starting from vectors of random numbers, RDD generated 180,000 chemical structures, of which 78% were chemically valid. About 42,000 (31%) of the valid structures fell into the property space defined by MOR activity and BBB permeability. Out of the 42,000 structures, only 267 chemicals were commercially available, indicating a high degree of novelty of the AI-generated compounds. We purchased and assayed 96 compounds, of which 25 were found to be MOR agonists. These compounds also have excellent BBB scores. The results presented in this paper illustrate that RDD has the potential to revolutionize the current drug discovery process and create novel structures with multiple desired properties, including biological functions and ADMET properties. Availability of an AI-enabled fast track in drug discovery is essential to cope with emergent public health threats, such as the COVID-19 pandemic.
[ { "created": "Tue, 11 May 2021 17:37:22 GMT", "version": "v1" } ]
2021-05-12
[ [ "Wang", "Yuhong", "" ], [ "Michael", "Sam", "" ], [ "Huang", "Ruili", "" ], [ "Zhao", "Jinghua", "" ], [ "Recabo", "Katlin", "" ], [ "Bougie", "Danielle", "" ], [ "Shu", "Qiang", "" ], [ "Shinn", "Paul", "" ], [ "Sun", "Hongmao", "" ] ]
Generating drug molecules with desired properties by computational methods is the holy grail in pharmaceutical research. Here we describe an AI strategy, retro drug design, or RDD, to generate novel small molecule drugs from scratch to meet predefined requirements, including but not limited to biological activity against a drug target, and an optimal range of physicochemical and ADMET properties. Traditional predictive models were first trained over experimental data for the target properties, using an atom-typing-based molecular descriptor system, ATP. A Monte Carlo sampling algorithm was then used to find the solutions in the ATP space defined by the target properties, and a Seq2Seq deep learning model was employed to decode molecular structures from the solutions. To test the feasibility of the algorithm, we challenged RDD to generate novel drugs that can activate the {\mu} opioid receptor (MOR) and penetrate the blood brain barrier (BBB). Starting from vectors of random numbers, RDD generated 180,000 chemical structures, of which 78% were chemically valid. About 42,000 (31%) of the valid structures fell into the property space defined by MOR activity and BBB permeability. Out of the 42,000 structures, only 267 chemicals were commercially available, indicating a high degree of novelty of the AI-generated compounds. We purchased and assayed 96 compounds, of which 25 were found to be MOR agonists. These compounds also have excellent BBB scores. The results presented in this paper illustrate that RDD has the potential to revolutionize the current drug discovery process and create novel structures with multiple desired properties, including biological functions and ADMET properties. Availability of an AI-enabled fast track in drug discovery is essential to cope with emergent public health threats, such as the COVID-19 pandemic.
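The ATP descriptors and trained predictors are not reproduced here; a sketch of the general search loop might look like the following: a Metropolis-style random walk over a fixed-length descriptor vector scored by a placeholder property function, whose accepted solutions would then be decoded into structures by a separately trained Seq2Seq model.

```python
import numpy as np

rng = np.random.default_rng(0)

def property_score(v):
    # placeholder for a trained activity/ADMET predictor over descriptor vectors
    return -np.sum((v - 0.5) ** 2)

def mc_search(dim=64, n_steps=20000, step=0.05, temperature=0.1):
    v = rng.random(dim)
    score = property_score(v)
    best, best_score = v.copy(), score
    for _ in range(n_steps):
        cand = v + rng.normal(0, step, dim)
        s = property_score(cand)
        # Metropolis rule: always accept improvements, sometimes accept worse moves
        if s > score or rng.random() < np.exp((s - score) / temperature):
            v, score = cand, s
            if s > best_score:
                best, best_score = cand.copy(), s
    return best, best_score

solution, s = mc_search()
print(round(s, 4))  # the solution vector would next be decoded into a structure
```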
2203.08341
Sue Ann Campbell
Liang Chen and Sue Ann Campbell
Exact mean-field models for spiking neural networks with adaptation
30 pages, 7 figures
null
null
null
q-bio.NC math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Networks of spiking neurons with adaptation have been shown to be able to reproduce a wide range of neural activities, including the emergent population bursting and spike synchrony that underpin brain disorders and normal function. Exact mean-field models derived from spiking neural networks are extremely valuable, as such models can be used to determine how individual neuron and network parameters interact to produce macroscopic network behaviour. In this paper, we derive and analyze a set of exact mean-field equations for a neural network with spike frequency adaptation. Specifically, our model is a network of Izhikevich neurons, where each neuron is modeled by a two-dimensional system consisting of a quadratic integrate-and-fire equation plus an equation which implements spike frequency adaptation. Previous work deriving a mean-field model for this type of network relied on the assumption of sufficiently slow dynamics of the adaptation variable. However, this approximation did not succeed in establishing an exact correspondence between the macroscopic description and the realistic neural network, especially when the adaptation time constant was not large. The challenge lies in how to achieve a closed set of mean-field equations with the inclusion of the mean-field expression of the adaptation variable. We address this challenge by using a Lorentzian ansatz combined with the moment closure approach to arrive at the mean-field system in the thermodynamic limit. The resulting macroscopic description is capable of qualitatively and quantitatively describing the collective dynamics of the neural network, including the transition between tonic firing and bursting.
[ { "created": "Wed, 16 Mar 2022 01:20:43 GMT", "version": "v1" } ]
2022-03-17
[ [ "Chen", "Liang", "" ], [ "Campbell", "Sue Ann", "" ] ]
Networks of spiking neurons with adaptation have been shown to be able to reproduce a wide range of neural activities, including the emergent population bursting and spike synchrony that underpin brain disorders and normal function. Exact mean-field models derived from spiking neural networks are extremely valuable, as such models can be used to determine how individual neuron and network parameters interact to produce macroscopic network behaviour. In this paper, we derive and analyze a set of exact mean-field equations for a neural network with spike frequency adaptation. Specifically, our model is a network of Izhikevich neurons, where each neuron is modeled by a two-dimensional system consisting of a quadratic integrate-and-fire equation plus an equation which implements spike frequency adaptation. Previous work deriving a mean-field model for this type of network relied on the assumption of sufficiently slow dynamics of the adaptation variable. However, this approximation did not succeed in establishing an exact correspondence between the macroscopic description and the realistic neural network, especially when the adaptation time constant was not large. The challenge lies in how to achieve a closed set of mean-field equations with the inclusion of the mean-field expression of the adaptation variable. We address this challenge by using a Lorentzian ansatz combined with the moment closure approach to arrive at the mean-field system in the thermodynamic limit. The resulting macroscopic description is capable of qualitatively and quantitatively describing the collective dynamics of the neural network, including the transition between tonic firing and bursting.
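The exact derivation is in the paper; as a rough sketch of what such reduced equations look like, the block below integrates the widely used Lorentzian-ansatz firing-rate/mean-voltage equations for a QIF population with a phenomenological slow adaptation current added. This is an illustration of the model class, not the paper's exact system.

```python
import numpy as np
from scipy.integrate import solve_ivp

def mean_field(t, y, tau=1.0, delta=0.3, eta=1.0, J=15.0, beta=1.0, tau_a=10.0):
    """Population rate r, mean voltage v, and a phenomenological adaptation current a."""
    r, v, a = y
    drdt = delta / (np.pi * tau**2) + 2 * r * v / tau
    dvdt = (v**2 + eta + J * tau * r - (np.pi * tau * r)**2 - a) / tau
    dadt = (-a + beta * r) / tau_a    # slow adaptation driven by the population rate
    return [drdt, dvdt, dadt]

sol = solve_ivp(mean_field, (0, 100), [0.1, -1.0, 0.0], max_step=0.01)
print(sol.y[0].min(), sol.y[0].max())  # an oscillating r indicates population bursting
```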
2309.03099
Chakradhar Guntuboina
Chakradhar Guntuboina, Adrita Das, Parisa Mollaei, Seongwon Kim, and Amir Barati Farimani
PeptideBERT: A Language Model based on Transformers for Peptide Property Prediction
24 pages
null
null
null
q-bio.BM cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Recent advances in Language Models have provided the protein modeling community with a powerful tool, since protein sequences can be represented as text. Specifically, by taking advantage of Transformers, sequence-to-property prediction becomes amenable without the need for explicit structural data. In this work, inspired by recent progress in Large Language Models (LLMs), we introduce PeptideBERT, a protein language model for predicting three key properties of peptides (hemolysis, solubility, and non-fouling). PeptideBERT utilizes the pretrained ProtBERT transformer model with 12 attention heads and 12 hidden layers. We then finetuned the pretrained model for the three downstream tasks. Our model has achieved state of the art (SOTA) performance for predicting hemolysis, the task of determining a peptide's potential to induce red blood cell lysis. Our PeptideBERT non-fouling model also achieved remarkable accuracy in predicting a peptide's capacity to resist non-specific interactions. This model, trained predominantly on shorter sequences, benefits from a dataset in which negative examples are largely associated with insoluble peptides. Codes, models, and data used in this study are freely available at: https://github.com/ChakradharG/PeptideBERT
[ { "created": "Mon, 28 Aug 2023 01:09:21 GMT", "version": "v1" } ]
2023-09-07
[ [ "Guntuboina", "Chakradhar", "" ], [ "Das", "Adrita", "" ], [ "Mollaei", "Parisa", "" ], [ "Kim", "Seongwon", "" ], [ "Farimani", "Amir Barati", "" ] ]
Recent advances in Language Models have provided the protein modeling community with a powerful tool, since protein sequences can be represented as text. Specifically, by taking advantage of Transformers, sequence-to-property prediction becomes amenable without the need for explicit structural data. In this work, inspired by recent progress in Large Language Models (LLMs), we introduce PeptideBERT, a protein language model for predicting three key properties of peptides (hemolysis, solubility, and non-fouling). PeptideBERT utilizes the pretrained ProtBERT transformer model with 12 attention heads and 12 hidden layers. We then finetuned the pretrained model for the three downstream tasks. Our model has achieved state of the art (SOTA) performance for predicting hemolysis, the task of determining a peptide's potential to induce red blood cell lysis. Our PeptideBERT non-fouling model also achieved remarkable accuracy in predicting a peptide's capacity to resist non-specific interactions. This model, trained predominantly on shorter sequences, benefits from a dataset in which negative examples are largely associated with insoluble peptides. Codes, models, and data used in this study are freely available at: https://github.com/ChakradharG/PeptideBERT
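A sketch of the fine-tuning setup this describes: a pretrained ProtBERT checkpoint loaded with a sequence-classification head. The Rostlab/prot_bert checkpoint name is the public ProtBERT release, the two-label task and the peptide below are illustrative, ProtBERT expects residues separated by spaces, and the classification head here is untrained (fine-tuning on labeled peptides would follow).

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Rostlab/prot_bert", do_lower_case=False)
model = AutoModelForSequenceClassification.from_pretrained(
    "Rostlab/prot_bert", num_labels=2)  # e.g. hemolytic vs non-hemolytic

peptide = "G L F D I V K K V V G A L G S L"      # residues separated by spaces
inputs = tokenizer(peptide, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(torch.softmax(logits, dim=-1))  # class probabilities (head not yet fine-tuned)
```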
q-bio/0312027
Dietrich Stauffer
Stanislaw Cebrat, Jan P. Radomski and Dietrich Stauffer
Genetic Paralog Analysis and Simulations
10 pages including figs, for ICCS 2004 Cracow
null
null
null
q-bio.GN
null
Using Monte Carlo methods, we simulated the effects of bias in generation and elimination of paralogs on the size distribution of paralog groups. It was found that the function describing the decay of the number of paralog groups with their size depends on the ratio between the probability of duplications of genes and their deletions, which corresponds to different selection pressures on the genome size. Slightly different slopes of curves describing the decay of the number of paralog groups with their size were also observed when the threshold of homology between paralogous sequences was changed.
[ { "created": "Thu, 18 Dec 2003 21:15:08 GMT", "version": "v1" } ]
2007-05-23
[ [ "Cebrat", "Stanislaw", "" ], [ "Radomski", "Jan P.", "" ], [ "Stauffer", "Dietrich", "" ] ]
Using Monte Carlo methods, we simulated the effects of bias in generation and elimination of paralogs on the size distribution of paralog groups. It was found that the function describing the decay of the number of paralog groups with their size depends on the ratio between the probability of duplications of genes and their deletions, which corresponds to different selection pressures on the genome size. Slightly different slopes of curves describing the decay of the number of paralog groups with their size were also observed when the threshold of homology between paralogous sequences was changed.
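A minimal sketch of this kind of Monte Carlo experiment, with arbitrary probabilities and genome size: genes carry a paralog-group label, each sweep duplicates or deletes genes with fixed probabilities, and the final group-size distribution is tallied.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)

def simulate(n_genes=1000, p_dup=0.002, p_del=0.002, sweeps=1000):
    # each gene carries the label of the paralog group (family) it belongs to
    family = list(range(n_genes))
    for _ in range(sweeps):
        survivors = []
        for fam in family:
            if rng.random() < p_del:        # gene deletion
                continue
            survivors.append(fam)
            if rng.random() < p_dup:        # duplication keeps the new copy in its family
                survivors.append(fam)
        family = survivors
    sizes = Counter(Counter(family).values())   # group size -> number of groups
    return dict(sorted(sizes.items()))

print(simulate())  # decay of group counts with group size under equal dup/del rates
```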
1605.06544
Xaq Pitkow
Rajkumar Vasudeva Raju and Xaq Pitkow
Inference by Reparameterization in Neural Population Codes
9 pages, 6 figures, submitted to NIPS 2016
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Behavioral experiments on humans and animals suggest that the brain performs probabilistic inference to interpret its environment. Here we present a new general-purpose, biologically-plausible neural implementation of approximate inference. The neural network represents uncertainty using Probabilistic Population Codes (PPCs), which are distributed neural representations that naturally encode probability distributions, and support marginalization and evidence integration in a biologically-plausible manner. By connecting multiple PPCs together as a probabilistic graphical model, we represent multivariate probability distributions. Approximate inference in graphical models can be accomplished by message-passing algorithms that disseminate local information throughout the graph. An attractive and often accurate example of such an algorithm is Loopy Belief Propagation (LBP), which uses local marginalization and evidence integration operations to perform approximate inference efficiently even for complex models. Unfortunately, a subtle feature of LBP renders it neurally implausible. However, LBP can be elegantly reformulated as a sequence of Tree-based Reparameterizations (TRP) of the graphical model. We re-express the TRP updates as a nonlinear dynamical system with both fast and slow timescales, and show that this produces a neurally plausible solution. By combining all of these ideas, we show that a network of PPCs can represent multivariate probability distributions and implement the TRP updates to perform probabilistic inference. Simulations with Gaussian graphical models demonstrate that the neural network inference quality is comparable to the direct evaluation of LBP and robust to noise, and thus provides a promising mechanism for general probabilistic inference in the population codes of the brain.
[ { "created": "Fri, 20 May 2016 21:38:32 GMT", "version": "v1" } ]
2016-05-24
[ [ "Raju", "Rajkumar Vasudeva", "" ], [ "Pitkow", "Xaq", "" ] ]
Behavioral experiments on humans and animals suggest that the brain performs probabilistic inference to interpret its environment. Here we present a new general-purpose, biologically-plausible neural implementation of approximate inference. The neural network represents uncertainty using Probabilistic Population Codes (PPCs), which are distributed neural representations that naturally encode probability distributions, and support marginalization and evidence integration in a biologically-plausible manner. By connecting multiple PPCs together as a probabilistic graphical model, we represent multivariate probability distributions. Approximate inference in graphical models can be accomplished by message-passing algorithms that disseminate local information throughout the graph. An attractive and often accurate example of such an algorithm is Loopy Belief Propagation (LBP), which uses local marginalization and evidence integration operations to perform approximate inference efficiently even for complex models. Unfortunately, a subtle feature of LBP renders it neurally implausible. However, LBP can be elegantly reformulated as a sequence of Tree-based Reparameterizations (TRP) of the graphical model. We re-express the TRP updates as a nonlinear dynamical system with both fast and slow timescales, and show that this produces a neurally plausible solution. By combining all of these ideas, we show that a network of PPCs can represent multivariate probability distributions and implement the TRP updates to perform probabilistic inference. Simulations with Gaussian graphical models demonstrate that the neural network inference quality is comparable to the direct evaluation of LBP and robust to noise, and thus provides a promising mechanism for general probabilistic inference in the population codes of the brain.
2011.13506
Jesus M Cortes
Izaro Fernandez-Iriondo, Antonio Jimenez-Marin, Ibai Diez, Paolo Bonifazi, Stephan P. Swinnen, Miguel A. Mu\~noz and Jesus M. Cortes
Small variation in dynamic functional connectivity in cerebellar networks
19 pages, 6 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Brain networks can be defined and explored through their connectivity. Here, we analyzed the relationship between structural connectivity (SC) across 2,514 regions that cover the entire brain and brainstem, and their dynamic functional connectivity (DFC). To do so, we focused on a combination of two metrics: the first assesses the degree of SC-DFC similarity and the second is the intrinsic variability of the DFC networks over time. Overall, we found that cerebellar networks have a smaller DFC variability than other networks in the brain. Moreover, the internal structure of the cerebellum could be clearly divided in two distinct posterior and anterior parts, the latter also connected to the brainstem. The mechanism to maintain small variability of the DFC in the posterior part of the cerebellum is consistent with another of our findings, namely, that this structure exhibits the highest SC-DFC similarity relative to the other networks studied. By contrast, the anterior part of the cerebellum also exhibits small DFC variability but it has the lowest SC-DFC similarity, suggesting a different mechanism is at play. Because this structure connects to the brainstem, which regulates sleep cycles, cardiac and respiratory functioning, we suggest that such critical functionality drives the low variability in the DFC. Overall, the low variability detected in DFC expands our current knowledge of cerebellar networks, which are extremely rich and complex, participating in a wide range of cognitive functions, from movement control and coordination to executive function or emotional regulation. Moreover, the association between such low variability and structure suggests that differentiated computational principles can be applied in the cerebellum as opposed to other structures, such as the cerebral cortex.
[ { "created": "Fri, 27 Nov 2020 00:39:42 GMT", "version": "v1" } ]
2020-11-30
[ [ "Fernandez-Iriondo", "Izaro", "" ], [ "Jimenez-Marin", "Antonio", "" ], [ "Diez", "Ibai", "" ], [ "Bonifazi", "Paolo", "" ], [ "Swinnen", "Stephan P.", "" ], [ "Muñoz", "Miguel A.", "" ], [ "Cortes", "Jesus M.", "" ] ]
Brain networks can be defined and explored through their connectivity. Here, we analyzed the relationship between structural connectivity (SC) across 2,514 regions that cover the entire brain and brainstem, and their dynamic functional connectivity (DFC). To do so, we focused on a combination of two metrics: the first assesses the degree of SC-DFC similarity and the second is the intrinsic variability of the DFC networks over time. Overall, we found that cerebellar networks have a smaller DFC variability than other networks in the brain. Moreover, the internal structure of the cerebellum could be clearly divided in two distinct posterior and anterior parts, the latter also connected to the brainstem. The mechanism to maintain small variability of the DFC in the posterior part of the cerebellum is consistent with another of our findings, namely, that this structure exhibits the highest SC-DFC similarity relative to the other networks studied. By contrast, the anterior part of the cerebellum also exhibits small DFC variability but it has the lowest SC-DFC similarity, suggesting a different mechanism is at play. Because this structure connects to the brainstem, which regulates sleep cycles, cardiac and respiratory functioning, we suggest that such critical functionality drives the low variability in the DFC. Overall, the low variability detected in DFC expands our current knowledge of cerebellar networks, which are extremely rich and complex, participating in a wide range of cognitive functions, from movement control and coordination to executive function or emotional regulation. Moreover, the association between such low variability and structure suggests that differentiated computational principles can be applied in the cerebellum as opposed to other structures, such as the cerebral cortex.
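A sketch, under simple assumptions, of the two ingredients combined here: sliding-window functional connectivity, its edge-wise variability across windows, and the similarity between the average FC pattern and a structural connectivity matrix. The data below are synthetic.

```python
import numpy as np

def dfc_metrics(ts, sc, win=50, step=10):
    """ts: (time, regions) BOLD-like signals; sc: (regions, regions) structural matrix."""
    n_t, n_r = ts.shape
    iu = np.triu_indices(n_r, k=1)
    windows = np.array([np.corrcoef(ts[s:s + win].T)[iu]
                        for s in range(0, n_t - win + 1, step)])
    variability = windows.std(axis=0).mean()          # mean edge-wise std over windows
    similarity = np.corrcoef(windows.mean(axis=0), sc[iu])[0, 1]
    return variability, similarity

rng = np.random.default_rng(0)
ts = rng.standard_normal((400, 20))
sc = np.abs(rng.standard_normal((20, 20))); sc = (sc + sc.T) / 2
print(dfc_metrics(ts, sc))
```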
1809.08203
Michael Hasselmo PhD
Michael E. Hasselmo
A model of cortical cognitive function using hierarchical interactions of gating matrices in internal agents coding relational representations
6 figures, version 2 simplifies notation and changes notation from row vector to column vector for clarity in the equations, and fixes some typos
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Flexible cognition requires the ability to rapidly detect systematic functions of variables and guide future behavior based on predictions. The model described here proposes a potential framework for patterns of neural activity to detect systematic functions and relations between components of sensory input and apply them in a predictive manner. This model includes multiple internal gating agents that operate within the state space of neural activity, in analogy to external agents behaving in the external environment. The multiple internal gating agents represent patterns of neural activity that detect and gate patterns of matrix connectivity representing the relations between different neural populations. The patterns of gating matrix connectivity represent functions that can be used to predict future components of a series of sensory inputs or the relationship between different features of a static sensory stimulus. The model is applied to the prediction of dynamical trajectories, the internal relationship between features of different sensory stimuli and to the prediction of affine transformations that could be useful for solving cognitive tasks such as the Ravens progressive matrices task.
[ { "created": "Fri, 21 Sep 2018 16:46:33 GMT", "version": "v1" }, { "created": "Mon, 15 Oct 2018 18:21:46 GMT", "version": "v2" } ]
2018-10-17
[ [ "Hasselmo", "Michael E.", "" ] ]
Flexible cognition requires the ability to rapidly detect systematic functions of variables and guide future behavior based on predictions. The model described here proposes a potential framework for patterns of neural activity to detect systematic functions and relations between components of sensory input and apply them in a predictive manner. This model includes multiple internal gating agents that operate within the state space of neural activity, in analogy to external agents behaving in the external environment. The multiple internal gating agents represent patterns of neural activity that detect and gate patterns of matrix connectivity representing the relations between different neural populations. The patterns of gating matrix connectivity represent functions that can be used to predict future components of a series of sensory inputs or the relationship between different features of a static sensory stimulus. The model is applied to the prediction of dynamical trajectories, the internal relationship between features of different sensory stimuli and to the prediction of affine transformations that could be useful for solving cognitive tasks such as the Ravens progressive matrices task.
2104.00386
Lorenzo Madeddu
Giorgio Grani, Lorenzo Madeddu and Paola Velardi
A network-based analysis of disease modules from a taxonomic perspective
8 pages, 5 figures
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Objective: Human-curated disease ontologies are widely used for diagnostic evaluation, treatment and data comparisons over time, and clinical decision support. The classification principles underlying these ontologies are guided by the analysis of observable pathological similarities between disorders, often based on anatomical or histological principles. Although, thanks to recent advances in molecular biology, disease ontologies are slowly changing to integrate the etiological and genetic origins of diseases, nosology still reflects this "reductionist" perspective. Proximity relationships of disease modules (hereafter DMs) in the human interactome network are now increasingly used in diagnostics, to identify pathobiologically similar diseases and to support drug repurposing and discovery. On the other hand, similarity relations induced from structural proximity of DMs also have several limitations, such as incomplete knowledge of disease-gene relationships and reliability of clinical trials to assess their validity. The purpose of the study described in this paper is to shed more light on disease similarities by analyzing the relationship between categorical proximity of diseases in human-curated ontologies and structural proximity of the related DM in the interactome. Method: We propose a methodology (and related algorithms) to automatically induce a hierarchical structure from proximity relations between DMs, and to compare this structure with a human-curated disease taxonomy. Results: We demonstrate that the proposed methodology allows to systematically analyze commonalities and differences among structural and categorical similarity of human diseases, help refine and extend human disease classification systems, and identify promising network areas where new disease-gene interactions can be discovered.
[ { "created": "Thu, 1 Apr 2021 10:42:02 GMT", "version": "v1" } ]
2021-04-02
[ [ "Grani", "Giorgio", "" ], [ "Madeddu", "Lorenzo", "" ], [ "Velardi", "Paola", "" ] ]
Objective: Human-curated disease ontologies are widely used for diagnostic evaluation, treatment and data comparisons over time, and clinical decision support. The classification principles underlying these ontologies are guided by the analysis of observable pathological similarities between disorders, often based on anatomical or histological principles. Although, thanks to recent advances in molecular biology, disease ontologies are slowly changing to integrate the etiological and genetic origins of diseases, nosology still reflects this "reductionist" perspective. Proximity relationships of disease modules (hereafter DMs) in the human interactome network are now increasingly used in diagnostics, to identify pathobiologically similar diseases and to support drug repurposing and discovery. On the other hand, similarity relations induced from structural proximity of DMs also have several limitations, such as incomplete knowledge of disease-gene relationships and reliability of clinical trials to assess their validity. The purpose of the study described in this paper is to shed more light on disease similarities by analyzing the relationship between categorical proximity of diseases in human-curated ontologies and structural proximity of the related DM in the interactome. Method: We propose a methodology (and related algorithms) to automatically induce a hierarchical structure from proximity relations between DMs, and to compare this structure with a human-curated disease taxonomy. Results: We demonstrate that the proposed methodology allows to systematically analyze commonalities and differences among structural and categorical similarity of human diseases, help refine and extend human disease classification systems, and identify promising network areas where new disease-gene interactions can be discovered.
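A sketch of the hierarchy-induction step: given pairwise disease-module distances (standing in here for interactome proximities), standard agglomerative clustering yields a tree that can be cut and compared against a human-curated taxonomy. The distance matrix below is random and purely illustrative.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
n_modules = 12
d = rng.random((n_modules, n_modules))
d = (d + d.T) / 2
np.fill_diagonal(d, 0)                    # symmetric DM-DM distance matrix

condensed = squareform(d)                 # condensed distance vector
tree = linkage(condensed, method="average")
labels = fcluster(tree, t=4, criterion="maxclust")   # cut into 4 disease clusters
print(labels)     # cluster membership could then be compared with a taxonomy level
```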
2402.12796
Guillermo Barrios Morales
Guillermo B. Morales
The dynamics of neural codes in biological and artificial neural networks
A dissertation submitted to the University of Granada in partial fulfillment of the requirements for the degree of Doctor of Philosophy
null
null
null
q-bio.NC cond-mat.dis-nn cond-mat.stat-mech
http://creativecommons.org/licenses/by/4.0/
Advancing our knowledge of how the brain processes information remains a key challenge in neuroscience. This thesis combines three different approaches to the study of the dynamics of neural networks and their encoding representations: a computational approach, that builds upon basic biological features of neurons and their networks to construct effective models that can simulate their structure and dynamics; a machine-learning approach, which draws a parallel with the functional capabilities of brain networks, allowing us to infer the dynamical and encoding properties required to solve certain input-processing tasks; and a final, theoretical treatment, which will take us into the fascinating hypothesis of the "critical" brain as the mathematical foundation that can explain the emergent collective properties arising from the interactions of millions of neurons. Hand in hand with physics, we venture into the realm of neuroscience to explain the existence of quasi-universal scaling properties across brain regions, setting out to quantify the distance of their dynamics from a critical point. Next, we move into the grounds of artificial intelligence, where the very same theory of critical phenomena will prove very useful for explaining the effects of biologically-inspired plasticity rules in the forecasting ability of Reservoir Computers. Halfway into our journey, we explore the concept of neural representations of external stimuli, unveiling a surprising link between the dynamical regime of neural networks and the optimal topological properties of such representation manifolds. The thesis ends with the singular problem of representational drift in the process of odor encoding carried out by the olfactory cortex, uncovering the potential synaptic plasticity mechanisms that could explain this recently observed phenomenon.
[ { "created": "Tue, 20 Feb 2024 08:10:16 GMT", "version": "v1" } ]
2024-02-21
[ [ "Morales", "Guillermo B.", "" ] ]
Advancing our knowledge of how the brain processes information remains a key challenge in neuroscience. This thesis combines three different approaches to the study of the dynamics of neural networks and their encoding representations: a computational approach, that builds upon basic biological features of neurons and their networks to construct effective models that can simulate their structure and dynamics; a machine-learning approach, which draws a parallel with the functional capabilities of brain networks, allowing us to infer the dynamical and encoding properties required to solve certain input-processing tasks; and a final, theoretical treatment, which will take us into the fascinating hypothesis of the "critical" brain as the mathematical foundation that can explain the emergent collective properties arising from the interactions of millions of neurons. Hand in hand with physics, we venture into the realm of neuroscience to explain the existence of quasi-universal scaling properties across brain regions, setting out to quantify the distance of their dynamics from a critical point. Next, we move into the grounds of artificial intelligence, where the very same theory of critical phenomena will prove very useful for explaining the effects of biologically-inspired plasticity rules in the forecasting ability of Reservoir Computers. Halfway into our journey, we explore the concept of neural representations of external stimuli, unveiling a surprising link between the dynamical regime of neural networks and the optimal topological properties of such representation manifolds. The thesis ends with the singular problem of representational drift in the process of odor encoding carried out by the olfactory cortex, uncovering the potential synaptic plasticity mechanisms that could explain this recently observed phenomenon.
1812.00794
Federico Chella
Federico Chella, Vittorio Pizzella, Filippo Zappasodi, Laura Marzetti
Impact of the reference choice on scalp EEG connectivity estimation
Added funding acknowledgements to Metadata. Acknowledgements: "This project has received funding from the European Commission Horizon 2020 research and innovation program under Grant Agreement No. 686865 (BREAKBEN - H2020-FETOPEN-2014-2015/H2020-FETOPEN-2014-2015-RIA). The content reflects only the author's view and the European Commission is not responsible for the content."
Journal of Neural Engineering, 13, 036016 (2016)
10.1088/1741-2560/13/3/036016
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Several scalp EEG functional connectivity studies, mostly clinical, seem to overlook the impact of the reference electrode. The subsequent interpretation of brain connectivity is thus often biased by the choice of a non-neutral reference. This study aims at systematically investigating these effects. As the EEG reference, we examined: the vertex electrode (Cz), the digitally linked mastoids (DLM), the average reference (AVE), and the Reference Electrode Standardization Technique (REST). As a connectivity metric, we used the imaginary part of coherency. We tested simulated and real data (eyes-open resting state) by evaluating the influence of electrode density, the effect of head model accuracy in the REST transformation, and the impact on the characterization of the topology of functional networks from graph analysis. Simulations demonstrated that REST significantly reduced the distortion of connectivity patterns when compared to the AVE, Cz and DLM references. Moreover, the availability of high-density EEG systems and an accurate knowledge of the head model are crucial elements to improve REST performance. For real data, a systematic change of the spatial pattern of functional connectivity depending on the chosen reference was also observed. The distortion of connectivity patterns was larger for the Cz reference, and progressively decreased when using the DLM, the AVE, and the REST. Strikingly, we also showed that network attributes derived from graph analysis, i.e., node degree and local efficiency, are significantly influenced by the EEG reference choice. Overall, this study highlights that significant differences arise in scalp EEG functional connectivity and graph network properties depending on the chosen reference. We hope our study will convey the message that caution should be taken when interpreting and comparing results obtained in different laboratories using different reference schemes.
[ { "created": "Mon, 3 Dec 2018 14:48:17 GMT", "version": "v1" }, { "created": "Thu, 24 Jan 2019 14:17:09 GMT", "version": "v2" } ]
2019-01-25
[ [ "Chella", "Federico", "" ], [ "Pizzella", "Vittorio", "" ], [ "Zappasodi", "Filippo", "" ], [ "Marzetti", "Laura", "" ] ]
Several scalp EEG functional connectivity studies, mostly clinical, seem to overlook the impact of the reference electrode. The subsequent interpretation of brain connectivity is thus often biased by the choice of a non-neutral reference. This study aims at systematically investigating these effects. As the EEG reference, we examined: the vertex electrode (Cz), the digitally linked mastoids (DLM), the average reference (AVE), and the Reference Electrode Standardization Technique (REST). As a connectivity metric, we used the imaginary part of coherency. We tested simulated and real data (eyes-open resting state) by evaluating the influence of electrode density, the effect of head model accuracy in the REST transformation, and the impact on the characterization of the topology of functional networks from graph analysis. Simulations demonstrated that REST significantly reduced the distortion of connectivity patterns when compared to the AVE, Cz and DLM references. Moreover, the availability of high-density EEG systems and an accurate knowledge of the head model are crucial elements to improve REST performance. For real data, a systematic change of the spatial pattern of functional connectivity depending on the chosen reference was also observed. The distortion of connectivity patterns was larger for the Cz reference, and progressively decreased when using the DLM, the AVE, and the REST. Strikingly, we also showed that network attributes derived from graph analysis, i.e., node degree and local efficiency, are significantly influenced by the EEG reference choice. Overall, this study highlights that significant differences arise in scalp EEG functional connectivity and graph network properties depending on the chosen reference. We hope our study will convey the message that caution should be taken when interpreting and comparing results obtained in different laboratories using different reference schemes.
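A sketch of two of the steps mentioned: re-referencing multichannel EEG to the average reference and computing the imaginary part of coherency between two channels from Welch cross- and auto-spectra. The data here are synthetic noise.

```python
import numpy as np
from scipy.signal import csd, welch

fs = 250
rng = np.random.default_rng(0)
eeg = rng.standard_normal((32, 10 * fs))           # 32 channels, 10 s of data
eeg_avg = eeg - eeg.mean(axis=0, keepdims=True)    # average reference

x, y = eeg_avg[0], eeg_avg[1]
f, sxy = csd(x, y, fs=fs, nperseg=fs)              # cross-spectral density
_, sxx = welch(x, fs=fs, nperseg=fs)
_, syy = welch(y, fs=fs, nperseg=fs)
icoh = np.imag(sxy / np.sqrt(sxx * syy))           # imaginary part of coherency
print(f[np.argmax(np.abs(icoh))], np.abs(icoh).max())
```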
1401.2016
Giovanni Marco Dall'Olio PhD
Giovanni Marco Dall'Olio, Ali R. Vahdati, Bertranpetit Jaume, Wagner Andreas, Laayouni Hafid
VCF2Networks: applying Genotype Networks to Single Nucleotide Variants data
5 pages, 1 table, plus supplementary of 7 pages and 5 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Summary: Genotype networks are a method used in systems biology to study the innovability of a given phenotype, determining whether the phenotype is robust to mutations and how the genotypes associated with it are distributed in the genotype space. Here we developed VCF2Networks, a tool to apply this method to population genetics data, and in particular to single nucleotide variant data encoded in the Variant Call Format (VCF). A complete summary of the properties of the genotype network that can be calculated by VCF2Networks is given in the Supplementary Materials 1. Availability and Implementation: The home page of the project is https://bitbucket.org/dalloliogm/vcf2networks . VCF2Networks is also available directly from the Python Package Index (PyPI), under the name vcf2networks.
[ { "created": "Thu, 9 Jan 2014 14:31:09 GMT", "version": "v1" }, { "created": "Sun, 18 May 2014 19:52:26 GMT", "version": "v2" } ]
2014-05-20
[ [ "Dall'Olio", "Giovanni Marco", "" ], [ "Vahdati", "Ali R.", "" ], [ "Jaume", "Bertranpetit", "" ], [ "Andreas", "Wagner", "" ], [ "Hafid", "Laayouni", "" ] ]
Summary: Genotype networks are a method used in systems biology to study the innovability of a given phenotype, determining whether the phenotype is robust to mutations and how the genotypes associated with it are distributed in the genotype space. Here we developed VCF2Networks, a tool to apply this method to population genetics data, and in particular to single nucleotide variant data encoded in the Variant Call Format (VCF). A complete summary of the properties of the genotype network that can be calculated by VCF2Networks is given in the Supplementary Materials 1. Availability and Implementation: The home page of the project is https://bitbucket.org/dalloliogm/vcf2networks . VCF2Networks is also available directly from the Python Package Index (PyPI), under the name vcf2networks.
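A sketch of the underlying data structure rather than the tool itself: nodes are genotypes encoded as strings of per-site allele counts, and edges join genotypes differing at a single site; networkx then provides basic network statistics. The genotype strings below are made up.

```python
import networkx as nx

def genotype_network(genotypes):
    """genotypes: iterable of equal-length strings over {'0','1','2'} (allele counts)."""
    nodes = set(genotypes)
    g = nx.Graph()
    g.add_nodes_from(nodes)
    for a in nodes:
        for b in nodes:
            if a < b and sum(x != y for x, y in zip(a, b)) == 1:
                g.add_edge(a, b)    # connect genotypes that differ at a single site
    return g

g = genotype_network(["000", "001", "011", "111", "101", "200"])
print(g.number_of_nodes(), g.number_of_edges(),
      nx.number_connected_components(g))
```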
q-bio/0701041
Adam Schwarz
Adam J. Schwarz, Alessandro Gozzi, Angelo Bifone
Community structure and modularity in networks of correlated brain activity
10 pages, 2 figs [v2:] arXiv identifier added explicitly in header (not automatically tagged to original PDF upload)
null
null
null
q-bio.NC
null
We present an approach to study functional segregation and integration in the living brain based on community structure decomposition determined by maximum modularity. We demonstrate this method with a network derived from functional imaging data with nodes defined by individual image pixels, and edges in terms of correlated signal changes. We found communities whose anatomical distributions correspond to biologically meaningful structures and include compelling functional subdivisions between anatomically equivalent brain regions.
[ { "created": "Thu, 25 Jan 2007 16:45:42 GMT", "version": "v1" }, { "created": "Tue, 30 Jan 2007 09:34:36 GMT", "version": "v2" } ]
2007-05-23
[ [ "Schwarz", "Adam J.", "" ], [ "Gozzi", "Alessandro", "" ], [ "Bifone", "Angelo", "" ] ]
We present an approach to study functional segregation and integration in the living brain based on community structure decomposition determined by maximum modularity. We demonstrate this method with a network derived from functional imaging data with nodes defined by individual image pixels, and edges in terms of correlated signal changes. We found communities whose anatomical distributions correspond to biologically meaningful structures and include compelling functional subdivisions between anatomically equivalent brain regions.
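A sketch of this kind of pipeline under simple assumptions: threshold a correlation matrix of node signals into a weighted graph and extract communities by modularity maximization; networkx's greedy modularity routine stands in for whatever optimizer was actually used.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

rng = np.random.default_rng(0)
signals = rng.standard_normal((200, 30))        # time x nodes (e.g. image pixels)
corr = np.corrcoef(signals.T)

g = nx.Graph()
g.add_nodes_from(range(corr.shape[0]))
for i in range(corr.shape[0]):
    for j in range(i + 1, corr.shape[0]):
        if corr[i, j] > 0.1:                    # keep sufficiently correlated pairs
            g.add_edge(i, j, weight=corr[i, j])

communities = greedy_modularity_communities(g, weight="weight")
print(len(communities), modularity(g, communities, weight="weight"))
```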
1110.4302
Florence Yerly
Chrystel Feller, Jean-Pierre Gabriel, Christian Mazza and Florence Yerly
Pattern formation in auxin flux
27 pages
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The plant hormone auxin is fundamental for plant growth, and its spatial distribution in plant tissues is critical for plant morphogenesis. We consider a leading model of the polar auxin flux, and study in full detail the stability of the possible equilibrium configurations. We show that the critical states of the auxin transport process are composed of basic building blocks, which are isolated in a background of auxin depleted cells, and are not geometrically regular in general. The same model was considered recently through a continuous limit and a coupling to the von Karman equations, to model the interplay of biochemistry and mechanics during plant growth. Our conclusions might be of interest in this setting, since, for example, we establish the existence of Lyapunov functions for the auxin flux, proving in this way the convergence of pure transport processes toward the set of critical configurations.
[ { "created": "Wed, 19 Oct 2011 14:50:29 GMT", "version": "v1" }, { "created": "Mon, 24 Oct 2011 15:44:27 GMT", "version": "v2" } ]
2011-10-25
[ [ "Feller", "Chrystel", "" ], [ "Gabriel", "Jean-Pierre", "" ], [ "Mazza", "Christian", "" ], [ "Yerly", "Florence", "" ] ]
The plant hormone auxin is fundamental for plant growth, and its spatial distribution in plant tissues is critical for plant morphogenesis. We consider a leading model of the polar auxin flux, and study in full detail the stability of the possible equilibrium configurations. We show that the critical states of the auxin transport process are composed of basic building blocks, which are isolated in a background of auxin depleted cells, and are not geometrically regular in general. The same model was considered recently through a continuous limit and a coupling to the von Karman equations, to model the interplay of biochemistry and mechanics during plant growth. Our conclusions might be of interest in this setting, since, for example, we establish the existence of Lyapunov functions for the auxin flux, proving in this way the convergence of pure transport processes toward the set of critical configurations.
1310.7182
Bernard Auriol
Bernard M. Auriol, J\'er\^ome B\'eard, Jean-Marc Broto, Didier F. Descouens, Lise J.S. Durand, Frederick Garcia, Christian F. Gillieaux, Elizabeth G. Joiner, Bernard Libes, Robert Ruiz, Claire Thalamas
Overt and covert paths for sound in the auditory system of mammals
we submitted a new, more complete version in January 2022
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The consensus, according to which the transmission of sound from the tympanum to the Outer Hair Cells is solely mechanical, is problematic, especially with respect to high-pitched sounds. We demonstrate that the collagenous fibers of the tympanum produce electric potentials synchronous with acoustic vibrations and that, contrary to expectations, their amplitude increases as the frequency of the vibration increases. These electrical potentials cannot be reduced to the cochlear microphonic. Moreover, the alteration of collagen, as well as that of the gap junctions (electric synapses) necessary for the transmission of the electric potentials to the complex formed by the Deiters Cells and Outer Hair Cells, results in hypoacusis or deafness. The discovery of an electronic pathway, complementary to air and bone conduction, has the potential to elucidate certain important, as yet unexplained, aspects of hearing with respect to cochlear amplification, otoacoustic emissions, and hypoacusis related to the deterioration of collagen or of gap junctions. Thus, our findings have important implications for both theory and practice.
[ { "created": "Sun, 27 Oct 2013 10:54:26 GMT", "version": "v1" }, { "created": "Sat, 7 Dec 2013 21:58:40 GMT", "version": "v2" }, { "created": "Tue, 23 Dec 2014 11:27:59 GMT", "version": "v3" }, { "created": "Fri, 21 Jan 2022 15:00:38 GMT", "version": "v4" } ]
2022-01-24
[ [ "Auriol", "Bernard M.", "" ], [ "Béard", "Jérôme", "" ], [ "Broto", "Jean-Marc", "" ], [ "Descouens", "Didier F.", "" ], [ "Durand", "Lise J. S.", "" ], [ "Garcia", "Frederick", "" ], [ "Gillieaux", "Christian F.", "" ], [ "Joiner", "Elizabeth G.", "" ], [ "Libes", "Bernard", "" ], [ "Ruiz", "Robert", "" ], [ "Thalamas", "Claire", "" ] ]
The consensus, according to which the transmission of sound from the tympanum to the Outer Hair Cells is solely mechanical, is problematic, especially with respect to high-pitched sounds. We demonstrate that the collagenous fibers of the tympanum produce electric potentials synchronous with acoustic vibrations and that, contrary to expectations, their amplitude increases as the frequency of the vibration increases. These electrical potentials cannot be reduced to the cochlear microphonic. Moreover, the alteration of collagen, as well as that of the gap junctions (electric synapses) necessary for the transmission of the electric potentials to the complex formed by the Deiters Cells and Outer Hair Cells, results in hypoacusis or deafness. The discovery of an electronic pathway, complementary to air and bone conduction, has the potential to elucidate certain important, as yet unexplained, aspects of hearing with respect to cochlear amplification, otoacoustic emissions, and hypoacusis related to the deterioration of collagen or of gap junctions. Thus, our findings have important implications for both theory and practice.
1805.12198
Matt Amodio
Matthew Amodio, David van Dijk, Ruth Montgomery, Guy Wolf, Smita Krishnaswamy
Out-of-Sample Extrapolation with Neuron Editing
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
While neural networks can be trained to map from one specific dataset to another, they usually do not learn a generalized transformation that can extrapolate accurately outside the space of training. For instance, a generative adversarial network (GAN) exclusively trained to transform images of black-haired men to blond-haired men might not have the same effect on images of black-haired women. This is because neural networks are good at generation within the manifold of the data that they are trained on. However, generating new samples outside of the manifold or extrapolating "out-of-sample" is a much harder problem that has been less well studied. To address this, we introduce a technique called neuron editing that learns how neurons encode an edit for a particular transformation in a latent space. We use an autoencoder to decompose the variation within the dataset into activations of different neurons and generate transformed data by defining an editing transformation on those neurons. By performing the transformation in a latent trained space, we encode fairly complex and non-linear transformations to the data with much simpler distribution shifts to the neuron's activations. We motivate our technique on an image domain and then move to our two main biological applications: removal of batch artifacts representing unwanted noise and modeling the effect of drug treatments to predict synergy between drugs.
[ { "created": "Wed, 30 May 2018 19:50:40 GMT", "version": "v1" }, { "created": "Thu, 27 Sep 2018 21:16:33 GMT", "version": "v2" }, { "created": "Thu, 4 Oct 2018 18:57:53 GMT", "version": "v3" }, { "created": "Wed, 23 Jan 2019 22:34:24 GMT", "version": "v4" } ]
2019-01-25
[ [ "Amodio", "Matthew", "" ], [ "van Dijk", "David", "" ], [ "Montgomery", "Ruth", "" ], [ "Wolf", "Guy", "" ], [ "Krishnaswamy", "Smita", "" ] ]
While neural networks can be trained to map from one specific dataset to another, they usually do not learn a generalized transformation that can extrapolate accurately outside the space of training. For instance, a generative adversarial network (GAN) exclusively trained to transform images of black-haired men to blond-haired men might not have the same effect on images of black-haired women. This is because neural networks are good at generation within the manifold of the data that they are trained on. However, generating new samples outside of the manifold or extrapolating "out-of-sample" is a much harder problem that has been less well studied. To address this, we introduce a technique called neuron editing that learns how neurons encode an edit for a particular transformation in a latent space. We use an autoencoder to decompose the variation within the dataset into activations of different neurons and generate transformed data by defining an editing transformation on those neurons. By performing the transformation in a latent trained space, we encode fairly complex and non-linear transformations to the data with much simpler distribution shifts to the neuron's activations. We motivate our technique on an image domain and then move to our two main biological applications: removal of batch artifacts representing unwanted noise and modeling the effect of drug treatments to predict synergy between drugs.
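A sketch of the editing step itself, with an invertible linear map standing in for a trained autoencoder: the edit is the per-neuron shift between mean source and target activations in the latent space, applied to the codes of new samples before decoding.

```python
import numpy as np

rng = np.random.default_rng(0)

# linear stand-ins for a trained autoencoder; a real model would replace these
W = rng.standard_normal((20, 20)) * 0.3
W_inv = np.linalg.inv(W)
encode = lambda x: x @ W
decode = lambda z: z @ W_inv

source = rng.standard_normal((500, 20))              # e.g. untreated samples
target = source + rng.normal(1.0, 0.2, (500, 20))    # e.g. treated samples

# the "edit": per-neuron shift of mean activations between target and source
edit = encode(target).mean(axis=0) - encode(source).mean(axis=0)

new_samples = rng.standard_normal((5, 20))           # out-of-sample inputs
edited = decode(encode(new_samples) + edit)          # apply the edit, then decode
print(np.round((edited - new_samples).mean(), 2))    # recovers roughly the +1.0 shift
```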
2309.03589
Emanuele Casali Dr.
Emanuele Casali, Stefano A. Serapian, Eleonora Gianquinto, Matteo Castelli, Massimo Bertinaria, Francesca Spyrakis and Giorgio Colombo
NLRP3 monomer Functional Dynamics: from the Effects of Allosteric Binding to Implications for Drug Design
null
International Journal of Biological Macromolecules 246 (2023) 125609
10.1016/j.ijbiomac.2023.125609
null
q-bio.BM
http://creativecommons.org/licenses/by-nc-nd/4.0/
The protein NLRP3 and its complexes are associated with an array of inflammatory pathologies, among which are neurodegenerative, autoimmune, and metabolic diseases. Targeting the NLRP3 inflammasome represents a promising strategy for easing the symptoms of pathologic neuroinflammation. When the inflammasome is activated, NLRP3 undergoes a conformational change triggering the production of pro-inflammatory cytokines IL-1\beta and IL-18, as well as cell death by pyroptosis. The NLRP3 nucleotide-binding and oligomerization (NACHT) domain plays a crucial role in this function by binding and hydrolysing ATP and is primarily responsible, together with conformational transitions involving the PYD domain, for the complex-assembly process. Allosteric ligands have proved able to induce NLRP3 inhibition. Herein, we examine the origins of allosteric inhibition of NLRP3. Through the use of molecular dynamics (MD) simulations and advanced analysis methods, we provide molecular-level insights into how allosteric binding affects protein structure and dynamics, remodelling the conformational ensembles populated by the protein, with key reverberations on how NLRP3 is preorganized for assembly and ultimately function. The data are used to develop a Machine Learning model to classify the protein as Active or Inactive, based only on the analysis of its internal dynamics. We propose this model as a novel tool to select allosteric ligands.
[ { "created": "Thu, 7 Sep 2023 09:29:20 GMT", "version": "v1" } ]
2023-09-08
[ [ "Casali", "Emanuele", "" ], [ "Serapian", "Stefano A.", "" ], [ "Gianquinto", "Eleonora", "" ], [ "Castelli", "Matteo", "" ], [ "Bertinaria", "Massimo", "" ], [ "Spyrakis", "Francesca", "" ], [ "Colombo", "Giorgio", "" ] ]
The protein NLRP3 and its complexes are associated with an array of inflammatory pathologies, among which are neurodegenerative, autoimmune, and metabolic diseases. Targeting the NLRP3 inflammasome represents a promising strategy for easing the symptoms of pathologic neuroinflammation. When the inflammasome is activated, NLRP3 undergoes a conformational change triggering the production of pro-inflammatory cytokines IL-1\beta and IL-18, as well as cell death by pyroptosis. The NLRP3 nucleotide-binding and oligomerization (NACHT) domain plays a crucial role in this function by binding and hydrolysing ATP and is primarily responsible, together with conformational transitions involving the PYD domain, for the complex-assembly process. Allosteric ligands have proved able to induce NLRP3 inhibition. Herein, we examine the origins of allosteric inhibition of NLRP3. Through the use of molecular dynamics (MD) simulations and advanced analysis methods, we provide molecular-level insights into how allosteric binding affects protein structure and dynamics, remodelling the conformational ensembles populated by the protein, with key reverberations on how NLRP3 is preorganized for assembly and ultimately function. The data are used to develop a Machine Learning model to classify the protein as Active or Inactive, based only on the analysis of its internal dynamics. We propose this model as a novel tool to select allosteric ligands.
1409.5459
Benjamin Allen
Benjamin Allen, Christine Sample, Yulia A. Dementieva, Ruben C. Medeiros, Christopher Paoletti, Martin A. Nowak
The molecular clock of neutral evolution can be accelerated or slowed by asymmetric spatial structure
31 pages, 9 figures
PLoS Comput Biol (2015) 11(2): e1004108
10.1371/journal.pcbi.1004108
null
q-bio.PE
http://creativecommons.org/licenses/by/3.0/
Over time, a population acquires neutral genetic substitutions as a consequence of random drift. A famous result in population genetics asserts that the rate, $K$, at which these substitutions accumulate in the population coincides with the mutation rate, $u$, at which they arise in individuals: $K=u$. This identity enables genetic sequence data to be used as a "molecular clock" to estimate the timing of evolutionary events. While the molecular clock is known to be perturbed by selection, it is thought that $K=u$ holds very generally for neutral evolution. Here we show that asymmetric spatial population structure can alter the molecular clock rate for neutral mutations, leading to either $K<u$ or $K>u$. Deviations from $K=u$ occur because mutations arise unequally at different sites and have different probabilities of fixation depending on where they arise. If birth rates are uniform across sites, then $K \leq u$. In general, $K$ can take any value between 0 and $Nu$. Our model can be applied to a variety of population structures. In one example, we investigate the accumulation of genetic mutations in the small intestine. In another application, we analyze over 900 Twitter networks to study the effect of network topology on the fixation of neutral innovations in social evolution.
[ { "created": "Thu, 18 Sep 2014 20:39:03 GMT", "version": "v1" } ]
2015-05-05
[ [ "Allen", "Benjamin", "" ], [ "Sample", "Christine", "" ], [ "Dementieva", "Yulia A.", "" ], [ "Medeiros", "Ruben C.", "" ], [ "Paoletti", "Christopher", "" ], [ "Nowak", "Martin A.", "" ] ]
Over time, a population acquires neutral genetic substitutions as a consequence of random drift. A famous result in population genetics asserts that the rate, $K$, at which these substitutions accumulate in the population coincides with the mutation rate, $u$, at which they arise in individuals: $K=u$. This identity enables genetic sequence data to be used as a "molecular clock" to estimate the timing of evolutionary events. While the molecular clock is known to be perturbed by selection, it is thought that $K=u$ holds very generally for neutral evolution. Here we show that asymmetric spatial population structure can alter the molecular clock rate for neutral mutations, leading to either $K<u$ or $K>u$. Deviations from $K=u$ occur because mutations arise unequally at different sites and have different probabilities of fixation depending on where they arise. If birth rates are uniform across sites, then $K \leq u$. In general, $K$ can take any value between 0 and $Nu$. Our model can be applied to a variety of population structures. In one example, we investigate the accumulation of genetic mutations in the small intestine. In another application, we analyze over 900 Twitter networks to study the effect of network topology on the fixation of neutral innovations in social evolution.
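As a complement to the abstract above, the following toy Monte Carlo sketch shows one way the substitution rate K can be measured against the mutation rate u in a spatially structured Moran model. The graphs, population size, and mutation rate are illustrative assumptions, not the structures analyzed in the paper; only the abstract's qualitative claim (uniform birth rates imply K <= u) is echoed in the comments.

```python
# Toy neutral Moran model on a graph: estimate K (substitutions per
# generation) and compare it with u (mutation probability per birth).
import random

def substitution_rate(adj, birth_rates, u=0.01, generations=20000, seed=0):
    """Neutral Moran birth-death process with infinite-alleles mutation.

    adj[i] lists the nodes that node i's offspring may replace;
    birth_rates[i] is the relative reproduction rate of node i.
    Returns K, the number of substitutions per generation (n birth events).
    """
    rng = random.Random(seed)
    n = len(adj)
    alleles = [0] * n                 # allele currently carried at each node
    next_allele = 1                   # label for the next new mutation
    last_fixed = 0                    # allele of the last monomorphic state
    substitutions = 0
    for _ in range(generations * n):  # one generation = n birth events
        parent = rng.choices(range(n), weights=birth_rates)[0]
        child_site = rng.choice(adj[parent])
        offspring = alleles[parent]
        if rng.random() < u:          # neutral mutation to a brand-new allele
            offspring = next_allele
            next_allele += 1
        alleles[child_site] = offspring
        # a substitution is recorded whenever a new allele reaches fixation
        if alleles.count(alleles[0]) == n and alleles[0] != last_fixed:
            last_fixed = alleles[0]
            substitutions += 1
    return substitutions / generations

# Two toy structures with uniform birth rates: a 10-node cycle and a star
# (hub plus 9 leaves). Per the abstract, uniform birth rates imply K <= u.
n = 10
cycle = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
star = [list(range(1, n))] + [[0] for _ in range(1, n)]
u = 0.01
print("cycle K/u:", substitution_rate(cycle, [1.0] * n, u=u) / u)
print("star  K/u:", substitution_rate(star, [1.0] * n, u=u) / u)
```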
2408.01242
Matthias Dold
Matthias Dold, Joana Pereira, Bastian Sajonz, Volker A. Coenen, Marcus L.F. Janssen, Michael Tangermann
A modular open-source software platform for BCI research with application in closed-loop deep brain stimulation
null
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by-nc-sa/4.0/
This work introduces Dareplane, a modular and broad technology agnostic open source software platform for brain-computer interface research with an application focus on adaptive deep brain stimulation (aDBS). While the search for suitable biomarkers to inform aDBS has provided rich results over the last two decades, development of control strategies is not progressing at the same pace. One difficulty for investigating control approaches resides with the complex setups required for aDBS experiments. The Dareplane platform supports aDBS setups, and more generally brain computer interfaces, by providing a modular, technology-agnostic, and easy-to-implement software platform to make experimental setups more resilient and replicable. The key features of the platform are presented and the composition of modules into a full experimental setup is discussed in the context of a Python-based orchestration module. The performance of a typical experimental setup on Dareplane for aDBS is evaluated in three benchtop experiments, covering (a) an easy-to-replicate setup using an Arduino microcontroller, (b) a setup with hardware of an implantable pulse generator, and (c) a setup using an established and CE certified external neurostimulator. Benchmark results are presented for individual processing steps and full closed-loop processing. The results show that the microcontroller setup in (a) provides timing comparable to the realistic setups in (b) and (c). The Dareplane platform was successfully used in a total of 19 open-loop DBS sessions with externalized DBS and electrocorticography (ECoG) leads. In addition, the full technical feasibility of the platform in the aDBS context is demonstrated in a first closed-loop session with externalized leads on a patient with Parkinson's disease receiving DBS treatment.
[ { "created": "Fri, 2 Aug 2024 13:01:02 GMT", "version": "v1" } ]
2024-08-05
[ [ "Dold", "Matthias", "" ], [ "Pereira", "Joana", "" ], [ "Sajonz", "Bastian", "" ], [ "Coenen", "Volker A.", "" ], [ "Janssen", "Marcus L. F.", "" ], [ "Tangermann", "Michael", "" ] ]
This work introduces Dareplane, a modular and broad technology agnostic open source software platform for brain-computer interface research with an application focus on adaptive deep brain stimulation (aDBS). While the search for suitable biomarkers to inform aDBS has provided rich results over the last two decades, development of control strategies is not progressing at the same pace. One difficulty for investigating control approaches resides with the complex setups required for aDBS experiments. The Dareplane platform supports aDBS setups, and more generally brain computer interfaces, by providing a modular, technology-agnostic, and easy-to-implement software platform to make experimental setups more resilient and replicable. The key features of the platform are presented and the composition of modules into a full experimental setup is discussed in the context of a Python-based orchestration module. The performance of a typical experimental setup on Dareplane for aDBS is evaluated in three benchtop experiments, covering (a) an easy-to-replicate setup using an Arduino microcontroller, (b) a setup with hardware of an implantable pulse generator, and (c) a setup using an established and CE certified external neurostimulator. Benchmark results are presented for individual processing steps and full closed-loop processing. The results show that the microcontroller setup in (a) provides timing comparable to the realistic setups in (b) and (c). The Dareplane platform was successfully used in a total of 19 open-loop DBS sessions with externalized DBS and electrocorticography (ECoG) leads. In addition, the full technical feasibility of the platform in the aDBS context is demonstrated in a first closed-loop session with externalized leads on a patient with Parkinson's disease receiving DBS treatment.
1012.2640
Subhadip Raychaudhuri
Philippos K. Tsourkas, Wanli Liu, Somkanya C Das, Susan K. Pierce, and Subhadip Raychaudhuri
Discrimination of Membrane Antigen Affinity by B cells Requires Dominance of Kinetic Proofreading over Serial Triggering
29 pages, 8 figures
null
null
null
q-bio.CB physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
B cell receptor (BCR) signaling in response to membrane-bound antigen increases with antigen affinity, a process known as affinity discrimination. We use computational modeling to show that B cell affinity discrimination requires that kinetic proofreading predominate over serial engagement. We find that if BCR molecules become signaling-capable immediately upon binding antigen, the loss in serial engagement as affinity increases results in weaker signaling with increasing affinity. A threshold antigen-binding time of several seconds before the BCR becomes signaling-capable, similar to kinetic proofreading, is needed to overcome the loss in serial engagement due to increasing antigen affinity, and to replicate the monotonic increase in B cell signaling with affinity observed in B cell activation experiments. This finding matches well with the experimentally observed time (~ 20 seconds) required for the BCR signaling domains to undergo antigen and lipid raft-mediated conformational changes that lead to Src-family kinase recruitment. We hypothesize that the physical basis of the threshold time of antigen binding may lie in the formation timescale of BCR dimers. The latter decreases with increasing affinity, resulting in shorter threshold antigen binding times as affinity increases. Such an affinity-dependent kinetic proofreading requirement results in affinity discrimination very similar to that observed in biological experiments. B cell affinity discrimination is critical to the process of affinity maturation and the production of high-affinity antibodies, and thus our results here have important implications in applications such as vaccine design.
[ { "created": "Mon, 13 Dec 2010 06:29:56 GMT", "version": "v1" } ]
2010-12-14
[ [ "Tsourkas", "Philippos K.", "" ], [ "Liu", "Wanli", "" ], [ "Das", "Somkanya C", "" ], [ "Pierce", "Susan K.", "" ], [ "Raychaudhuri", "Subhadip", "" ] ]
B cell receptor (BCR) signaling in response to membrane-bound antigen increases with antigen affinity, a process known as affinity discrimination. We use computational modeling to show that B cell affinity discrimination requires that kinetic proofreading predominate over serial engagement. We find that if BCR molecules become signaling-capable immediately upon binding antigen, the loss in serial engagement as affinity increases results in weaker signaling with increasing affinity. A threshold antigen-binding time of several seconds before the BCR becomes signaling-capable, similar to kinetic proofreading, is needed to overcome the loss in serial engagement due to increasing antigen affinity, and to replicate the monotonic increase in B cell signaling with affinity observed in B cell activation experiments. This finding matches well with the experimentally observed time (~ 20 seconds) required for the BCR signaling domains to undergo antigen and lipid raft-mediated conformational changes that lead to Src-family kinase recruitment. We hypothesize that the physical basis of the threshold time of antigen binding may lie in the formation timescale of BCR dimers. The latter decreases with increasing affinity, resulting in shorter threshold antigen binding times as affinity increases. Such an affinity-dependent kinetic proofreading requirement results in affinity discrimination very similar to that observed in biological experiments. B cell affinity discrimination is critical to the process of affinity maturation and the production of high-affinity antibodies, and thus our results here have important implications in applications such as vaccine design.
q-bio/0601023
Jaewook Joo
Jaewook Joo, Eric Harvill and Reka Albert
Effects of Noise on Ecological Invasion Processes: Bacteriophage-mediated Competition in Bacteria
39 pages, 7 figures
null
10.1007/s10955-006-9182-z
null
q-bio.PE
null
Pathogen-mediated competition, through which an invasive species carrying and transmitting a pathogen can be a superior competitor to a more vulnerable resident species, is one of the principal driving forces influencing biodiversity in nature. Using an experimental system of bacteriophage-mediated competition in bacterial populations and a deterministic model, we have shown in [Joo et al 2005] that the competitive advantage conferred by the phage depends only on the relative phage pathology and is independent of the initial phage concentration and other phage and host parameters such as the infection-causing contact rate, the spontaneous and infection-induced lysis rates, and the phage burst size. Here we investigate the effects of stochastic fluctuations on bacterial invasion facilitated by bacteriophage, and examine the validity of the deterministic approach. We use both numerical and analytical methods of stochastic processes to identify the source of noise and assess its magnitude. We show that the conclusions obtained from the deterministic model are robust against stochastic fluctuations, yet deviations become pronounced when the phage are more pathological to the invading bacterial strain.
[ { "created": "Tue, 17 Jan 2006 01:42:25 GMT", "version": "v1" } ]
2009-11-13
[ [ "Joo", "Jaewook", "" ], [ "Harvill", "Eric", "" ], [ "Albert", "Reka", "" ] ]
Pathogen-mediated competition, through which an invasive species carrying and transmitting a pathogen can be a superior competitor to a more vulnerable resident species, is one of the principal driving forces influencing biodiversity in nature. Using an experimental system of bacteriophage-mediated competition in bacterial populations and a deterministic model, we have shown in [Joo et al 2005] that the competitive advantage conferred by the phage depends only on the relative phage pathology and is independent of the initial phage concentration and other phage and host parameters such as the infection-causing contact rate, the spontaneous and infection-induced lysis rates, and the phage burst size. Here we investigate the effects of stochastic fluctuations on bacterial invasion facilitated by bacteriophage, and examine the validity of the deterministic approach. We use both numerical and analytical methods of stochastic processes to identify the source of noise and assess its magnitude. We show that the conclusions obtained from the deterministic model are robust against stochastic fluctuations, yet deviations become pronounced when the phage are more pathological to the invading bacterial strain.
1112.3574
Virgil Griffith
Virgil Griffith, Larry S. Yaeger
Ideal Free Distribution in Agents with Evolved Neural Architectures
7 pages
Artificial Life X: Proceedings of the Tenth International Conference on the Simulation and Synthesis of Living Systems. 372-378. MIT Press. Cambridge, MA. 2006
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the matching of agents to resources in a computational ecology configured to present heterogeneous resource patches to evolving, neurally controlled agents. We repeatedly find a nearly optimal, ideal free distribution (IFD) of agents to resources. Deviations from IFD are shown to be consistent with models of human foraging behaviors, and possibly driven by spatial constraints and maximum foraging rates. The lack of any model parameters addressing agent foraging or clustering behaviors and the biological verisimilitude of our agent control systems differentiates these results from simpler models and suggests the possibility of exploring the underlying mechanisms by which optimal foraging emerges.
[ { "created": "Thu, 15 Dec 2011 17:21:49 GMT", "version": "v1" } ]
2011-12-16
[ [ "Griffith", "Virgil", "" ], [ "Yaeger", "Larry S.", "" ] ]
We investigate the matching of agents to resources in a computational ecology configured to present heterogeneous resource patches to evolving, neurally controlled agents. We repeatedly find a nearly optimal, ideal free distribution (IFD) of agents to resources. Deviations from IFD are shown to be consistent with models of human foraging behaviors, and possibly driven by spatial constraints and maximum foraging rates. The lack of any model parameters addressing agent foraging or clustering behaviors and the biological verisimilitude of our agent control systems differentiates these results from simpler models and suggests the possibility of exploring the underlying mechanisms by which optimal foraging emerges.
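For orientation, here is a minimal sketch of the ideal free distribution itself (not of the paper's evolved, neurally controlled agents): agents greedily relocate to the patch offering the highest per-capita intake, and occupancy approaches the IFD prediction that it is proportional to patch quality. Patch qualities and the number of agents are assumed values chosen for the toy.

```python
# Toy IFD: greedy patch choice equalizes per-capita intake across patches.
import numpy as np

rng = np.random.default_rng(1)

quality = np.array([6.0, 3.0, 1.0])      # resource input rate of each patch
n_agents = 100
patch_of = rng.integers(0, len(quality), size=n_agents)   # random initial placement

# Each round, one randomly chosen agent moves to whichever patch would give
# it the highest per-capita intake (quality divided by resulting occupancy).
for _ in range(5000):
    agent = rng.integers(n_agents)
    counts = np.bincount(patch_of, minlength=len(quality))
    counts[patch_of[agent]] -= 1          # remove the agent before comparing options
    gain = quality / (counts + 1)          # intake it would get after joining each patch
    patch_of[agent] = int(np.argmax(gain))

counts = np.bincount(patch_of, minlength=len(quality))
print("agents per patch:", counts)                               # approaches 60/30/10
print("IFD prediction:  ", n_agents * quality / quality.sum())    # exactly 60/30/10
print("per-capita intake:", np.round(quality / counts, 3))        # roughly equalized
```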
1510.04364
Kamuela Yong
Kamuela E. Yong, Edgar D\'iaz Herrera, Carlos Castillo-Chavez
From bee species aggregation to models of disease avoidance: The Ben-Hur effect
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The movie Ben-Hur highlights the dynamics of contagion associated with leprosy, a pattern of forced aggregation driven by the emergence of symptoms and the fear of contagion. The 2014 Ebola outbreaks reaffirmed the dynamics of redistribution among symptomatic and asymptomatic or non-infected individuals as a way to avoid contagion. In this manuscript, we explore the establishment of clusters of infection via density-dependent avoidance (diffusive instability). We illustrate this possibility in two ways: using a phenomenologically driven model where disease incidence is assumed to be a decreasing function of the size of the symptomatic population and with a model that accounts for the deliberate movement of individuals in response to a gradient of symptomatic infectious individuals. The results in this manuscript are preliminary but indicative of the role that behavior, here modeled in crude, simplistic ways, may have on disease dynamics, particularly on the spatial redistribution of epidemiological classes.
[ { "created": "Thu, 15 Oct 2015 00:47:08 GMT", "version": "v1" }, { "created": "Wed, 4 Nov 2015 17:08:19 GMT", "version": "v2" } ]
2015-11-05
[ [ "Yong", "Kamuela E.", "" ], [ "Herrera", "Edgar Díaz", "" ], [ "Castillo-Chavez", "Carlos", "" ] ]
The movie Ben-Hur highlights the dynamics of contagion associated with leprosy, a pattern of forced aggregation driven by the emergence of symptoms and the fear of contagion. The 2014 Ebola outbreaks reaffirmed the dynamics of redistribution among symptomatic and asymptomatic or non-infected individuals as a way to avoid contagion. In this manuscript, we explore the establishment of clusters of infection via density-dependent avoidance (diffusive instability). We illustrate this possibility in two ways: using a phenomenologically driven model where disease incidence is assumed to be a decreasing function of the size of the symptomatic population and with a model that accounts for the deliberate movement of individuals in response to a gradient of symptomatic infectious individuals. The results in this manuscript are preliminary but indicative of the role that behavior, here modeled in crude, simplistic ways, may have on disease dynamics, particularly on the spatial redistribution of epidemiological classes.
1506.05171
Travis Gibson
Travis E. Gibson, Amir Bashan, Hong-Tai Cao, Scott T. Weiss, and Yang-Yu Liu
On the Origins and Control of Community Types in the Human Microbiome
Main Text, Figures, Methods, Supplementary Figures, and Supplementary Text
null
10.1371/journal.pcbi.1004688
null
q-bio.QM cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Microbiome-based stratification of healthy individuals into compositional categories, referred to as "community types", holds promise for drastically improving personalized medicine. Despite this potential, the existence of community types and the degree of their distinctness have been highly debated. Here we adopted a dynamic systems approach and found that heterogeneity in the interspecific interactions or the presence of strongly interacting species is sufficient to explain community types, independent of the topology of the underlying ecological network. By controlling the presence or absence of these strongly interacting species we can steer the microbial ecosystem to any desired community type. This open-loop control strategy still holds even when the community types are not distinct but appear as dense regions within a continuous gradient. This finding can be used to develop viable therapeutic strategies for shifting the microbial composition to a healthy configuration.
[ { "created": "Wed, 17 Jun 2015 00:00:03 GMT", "version": "v1" }, { "created": "Thu, 12 Nov 2015 23:05:13 GMT", "version": "v2" }, { "created": "Thu, 21 Jan 2016 18:29:58 GMT", "version": "v3" } ]
2016-04-27
[ [ "Gibson", "Travis E.", "" ], [ "Bashan", "Amir", "" ], [ "Cao", "Hong-Tai", "" ], [ "Weiss", "Scott T.", "" ], [ "Liu", "Yang-Yu", "" ] ]
Microbiome-based stratification of healthy individuals into compositional categories, referred to as "community types", holds promise for drastically improving personalized medicine. Despite this potential, the existence of community types and the degree of their distinctness have been highly debated. Here we adopted a dynamic systems approach and found that heterogeneity in the interspecific interactions or the presence of strongly interacting species is sufficient to explain community types, independent of the topology of the underlying ecological network. By controlling the presence or absence of these strongly interacting species we can steer the microbial ecosystem to any desired community type. This open-loop control strategy still holds even when the community types are not distinct but appear as dense regions within a continuous gradient. This finding can be used to develop viable therapeutic strategies for shifting the microbial composition to a healthy configuration.
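A hand-built generalized Lotka-Volterra sketch can illustrate the mechanism invoked above: toggling the presence of a single strongly interacting species moves the community between qualitatively different compositions. The interaction matrix and growth rates below are illustrative assumptions chosen so the effect is visible; they are not parameters inferred from microbiome data.

```python
# Toy gLV: presence/absence of a strong interactor switches the community type.
import numpy as np
from scipy.integrate import solve_ivp

r = np.ones(4)                       # intrinsic growth rates
A = np.array([                       # A[i, j]: effect of species j on species i
    [-1.0, -0.1, -0.1, -0.1],        # species 0 is the "strong interactor"
    [-1.5, -1.0, -0.2, -0.2],
    [-1.5, -0.2, -1.0, -0.2],
    [-1.5, -0.2, -0.2, -1.0],
])

def glv(t, x):
    return x * (r + A @ x)

def final_state(x0, t_end=400.0):
    sol = solve_ivp(glv, (0.0, t_end), x0, rtol=1e-8, atol=1e-10)
    return np.round(sol.y[:, -1], 3)

with_strong = final_state([0.5, 0.5, 0.5, 0.5])     # strong interactor present
without_strong = final_state([0.0, 0.5, 0.5, 0.5])  # strong interactor absent

print("community type with species 0:   ", with_strong)     # dominated by species 0
print("community type without species 0:", without_strong)  # even coexistence of 1-3
```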
0803.1879
Naoki Masuda Dr.
Taro Ueno, Naoki Masuda
Controlling nosocomial infection based on structure of hospital social networks
12 figures, 2 tables
Journal of Theoretical Biology, 254, 655-666 (2008)
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Nosocomial infection poses a serious public health problem, as implied by the existence of pathogens characteristic of healthcare settings and hospital-mediated outbreaks of influenza and SARS. We simulate stochastic SIR dynamics on social networks, which are based on observations in a hospital in Tokyo, to explore effective containment strategies against nosocomial infection. The observed networks have hierarchical and modular structure. We show that healthcare workers, particularly medical doctors, are main vectors of diseases on these networks. Intervention methods that restrict interaction between medical doctors and their visits to different wards shrink the final epidemic size more than intervention methods that directly protect patients, such as isolating patients in single rooms. By the same token, vaccinating doctors with priority rather than patients or nurses is more effective. Finally, vaccinating individuals with large betweenness centrality is superior to vaccinating ones with large connectedness to others or randomly chosen individuals, as suggested by previous model studies. [The abstract of the manuscript has more information.]
[ { "created": "Thu, 13 Mar 2008 02:07:55 GMT", "version": "v1" }, { "created": "Sun, 19 Oct 2008 08:11:53 GMT", "version": "v2" } ]
2008-10-19
[ [ "Ueno", "Taro", "" ], [ "Masuda", "Naoki", "" ] ]
Nosocomial infection poses a serious public health problem, as implied by the existence of pathogens characteristic of healthcare settings and hospital-mediated outbreaks of influenza and SARS. We simulate stochastic SIR dynamics on social networks, which are based on observations in a hospital in Tokyo, to explore effective containment strategies against nosocomial infection. The observed networks have hierarchical and modular structure. We show that healthcare workers, particularly medical doctors, are main vectors of diseases on these networks. Intervention methods that restrict interaction between medical doctors and their visits to different wards shrink the final epidemic size more than intervention methods that directly protect patients, such as isolating patients in single rooms. By the same token, vaccinating doctors with priority rather than patients or nurses is more effective. Finally, vaccinating individuals with large betweenness centrality is superior to vaccinating ones with large connectedness to others or randomly chosen individuals, as suggested by previous model studies. [The abstract of the manuscript has more information.]
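In the spirit of the comparison described above, here is a minimal stochastic SIR-on-a-network sketch that contrasts betweenness-targeted, degree-targeted, and random vaccination. The synthetic Barabasi-Albert contact network, transmission and recovery probabilities, and vaccination budget are all stand-in assumptions; the paper uses observed hospital networks and its own parameterization.

```python
# Toy comparison of vaccination strategies under stochastic SIR on a network.
import random

import networkx as nx

def sir_final_size(G, beta=0.1, gamma=0.05, vaccinated=frozenset(), seed=0):
    """Discrete-time stochastic SIR; returns the number ever infected."""
    rng = random.Random(seed)
    susceptible = set(G) - set(vaccinated)
    seed_node = rng.choice(sorted(susceptible))
    infected, recovered = {seed_node}, set()
    susceptible.discard(seed_node)
    while infected:
        new_inf, new_rec = set(), set()
        for i in infected:
            for j in G.neighbors(i):
                if j in susceptible and rng.random() < beta:
                    new_inf.add(j)
            if rng.random() < gamma:
                new_rec.add(i)
        susceptible -= new_inf
        recovered |= new_rec
        infected = (infected | new_inf) - new_rec
    return len(recovered)

G = nx.barabasi_albert_graph(300, 3, seed=1)      # stand-in contact network
budget = 30

betweenness = nx.betweenness_centrality(G)
top_betweenness = sorted(G, key=betweenness.get, reverse=True)[:budget]
top_degree = sorted(G, key=G.degree, reverse=True)[:budget]
random.seed(2)
random_nodes = random.sample(list(G), budget)

for label, vacc in [("betweenness", top_betweenness),
                    ("degree", top_degree),
                    ("random", random_nodes)]:
    sizes = [sir_final_size(G, vaccinated=frozenset(vacc), seed=s) for s in range(50)]
    print(f"{label:11s} mean final size: {sum(sizes) / len(sizes):.1f}")
```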
2007.15117
Gabriele Mongiano
Gabriele Mongiano, Patrizia Titone, Simone Pagnoncelli, Davide Sacco, Luigi Tamborini, Roberto Pilu, Simone Bregaglio
Phenotypic variability in Italian rice germplasm
Published in European Journal of Agronomy
Eur J Agron 120, 126131 (2020)
10.1016/j.eja.2020.126131
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Plant breeding is one of the key strategies for further enhancement of crop yields; however, effective breeding strategies require phenotypic characterisation of the available germplasm. This study sought to characterise the phenotypic expression of fourteen crop traits related to phenology, plant architecture, and yield in a panel of 40 cultivars selected to represent the phenotypic variability present in Italian rice germplasm during a two-season field experiment. The observed range of phenotypic variation was high for many traits (coefficients of variation ranging from 5.9\% to 45.4\%) including yield (mean: 6.47 t ha\textsuperscript{-1}; CV: 15.4\%; min: 2.19 t ha\textsuperscript{-1}; max: 8.95 t ha\textsuperscript{-1}), and multiple strong associations emerged in all the analysed traits. Cluster analysis extracted three groups of genotypes characterised by alternative yield strategies, i.e. "high-tillering", "early maturing", and "increased source-sink". Our findings highlight that rice yield decreases when one strategy is overemphasised. In contrast, the highest-yielding genotypes have a balanced ratio between sink and source organs, along with a proportional duration of vegetative and reproductive phases. This study depicts the phenotypic variability in Italian rice cultivars and proposes a novel classification based on yield-related traits which could be of use in multiple rice breeding applications.
[ { "created": "Wed, 29 Jul 2020 21:24:31 GMT", "version": "v1" }, { "created": "Tue, 11 Aug 2020 08:17:04 GMT", "version": "v2" } ]
2020-08-12
[ [ "Mongiano", "Gabriele", "" ], [ "Titone", "Patrizia", "" ], [ "Pagnoncelli", "Simone", "" ], [ "Sacco", "Davide", "" ], [ "Tamborini", "Luigi", "" ], [ "Pilu", "Roberto", "" ], [ "Bregaglio", "Simone", "" ] ]
Plant breeding is one of the key strategies for further enhancement of crop yields; however, effective breeding strategies require phenotypic characterisation of the available germplasm. This study sought to characterise the phenotypic expression of fourteen crop traits related to phenology, plant architecture, and yield in a panel of 40 cultivars selected to represent the phenotypic variability present in Italian rice germplasm during a two-season field experiment. The observed range of phenotypic variation was high for many traits (coefficients of variation ranging from 5.9\% to 45.4\%) including yield (mean: 6.47 t ha\textsuperscript{-1}; CV: 15.4\%; min: 2.19 t ha\textsuperscript{-1}; max: 8.95 t ha\textsuperscript{-1}), and multiple strong associations emerged in all the analysed traits. Cluster analysis extracted three groups of genotypes characterised by alternative yield strategies, i.e. "high-tillering", "early maturing", and "increased source-sink". Our findings highlight that rice yield decreases when one strategy is overemphasised. In contrast, the highest-yielding genotypes have a balanced ratio between sink and source organs, along with a proportional duration of vegetative and reproductive phases. This study depicts the phenotypic variability in Italian rice cultivars and proposes a novel classification based on yield-related traits which could be of use in multiple rice breeding applications.
0904.3254
Yves Jouanneau
Yves Jouanneau (LCBM, BBSI), Christine Meyer (LCBM), Jean Jakoncic, V. Stojanoff, Jacques Gaillard (LRM/SCIB)
Characterization of a naphthalene dioxygenase endowed with an exceptionally broad substrate specificity toward polycyclic aromatic hydrocarbons
null
Biochemistry (American Chemical Society) 45 (2006) 12380-12391
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In Sphingomonas CHY-1, a single ring-hydroxylating dioxygenase is responsible for the initial attack of a range of polycyclic aromatic hydrocarbons (PAHs) composed of up to five rings. The components of this enzyme were separately purified and characterized. The oxygenase component (ht-PhnI) was shown to contain one Rieske-type [2Fe-2S] cluster and one mononuclear Fe center per alpha subunit, based on EPR measurements and iron assay. Steady-state kinetic measurements revealed that the enzyme had a relatively low apparent Michaelis constant for naphthalene (Km = 0.92 $\pm$ 0.15 $\mu$M), and an apparent specificity constant of 2.0 $\pm$ 0.3 $\mu$M$^{-1}$ s$^{-1}$. Naphthalene was converted to the corresponding 1,2-dihydrodiol with stoichiometric oxidation of NADH. On the other hand, the oxidation of eight other PAHs occurred at slower rates, and with coupling efficiencies that decreased with the enzyme reaction rate. Uncoupling was associated with hydrogen peroxide formation, which is potentially deleterious to cells and might inhibit PAH degradation. In single turnover reactions, ht-PhnI alone catalyzed PAH hydroxylation at a faster rate in the presence of organic solvent, suggesting that the transfer of substrate to the active site is a limiting factor. The four-ring PAHs chrysene and benz[a]anthracene were subjected to a double ring-dihydroxylation, giving rise to the formation of a significant proportion of bis-cis-dihydrodiols. In addition, the dihydroxylation of benz[a]anthracene yielded three dihydrodiols, the enzyme showing a preference for carbons in positions 1,2 and 10,11. This is the first characterization of a dioxygenase able to dihydroxylate PAHs made up of four and five rings.
[ { "created": "Tue, 21 Apr 2009 13:54:50 GMT", "version": "v1" } ]
2009-04-22
[ [ "Jouanneau", "Yves", "", "LCBM, BBSI" ], [ "Meyer", "Christine", "", "LCBM" ], [ "Jakoncic", "Jean", "", "LRM/SCIB" ], [ "Stojanoff", "V.", "", "LRM/SCIB" ], [ "Gaillard", "Jacques", "", "LRM/SCIB" ] ]
In Sphingomonas CHY-1, a single ring-hydroxylating dioxygenase is responsible for the initial attack of a range of polycyclic aromatic hydrocarbons (PAHs) composed of up to five rings. The components of this enzyme were separately purified and characterized. The oxygenase component (ht-PhnI) was shown to contain one Rieske-type [2Fe-2S] cluster and one mononuclear Fe center per alpha subunit, based on EPR measurements and iron assay. Steady-state kinetic measurements revealed that the enzyme had a relatively low apparent Michaelis constant for naphthalene (Km = 0.92 $\pm$ 0.15 $\mu$M), and an apparent specificity constant of 2.0 $\pm$ 0.3 $\mu$M$^{-1}$ s$^{-1}$. Naphthalene was converted to the corresponding 1,2-dihydrodiol with stoichiometric oxidation of NADH. On the other hand, the oxidation of eight other PAHs occurred at slower rates, and with coupling efficiencies that decreased with the enzyme reaction rate. Uncoupling was associated with hydrogen peroxide formation, which is potentially deleterious to cells and might inhibit PAH degradation. In single turnover reactions, ht-PhnI alone catalyzed PAH hydroxylation at a faster rate in the presence of organic solvent, suggesting that the transfer of substrate to the active site is a limiting factor. The four-ring PAHs chrysene and benz[a]anthracene were subjected to a double ring-dihydroxylation, giving rise to the formation of a significant proportion of bis-cis-dihydrodiols. In addition, the dihydroxylation of benz[a]anthracene yielded three dihydrodiols, the enzyme showing a preference for carbons in positions 1,2 and 10,11. This is the first characterization of a dioxygenase able to dihydroxylate PAHs made up of four and five rings.
1905.13173
Roberto N. Mu\~noz
Roberto N. Mu\~noz, Angus Leung, Aidan Zecevik, Felix A. Pollock, Dror Cohen, Bruno van Swinderen, Naotsugu Tsuchiya, Kavan Modi
General anesthesia reduces complexity and temporal asymmetry of the informational structures derived from neural recordings in Drosophila
14 pages, 6 figures. Comments welcome; Added time-reversal analysis, updated discussion, new figures (Fig. 5 & Fig. 6) and Tables (Tab. 1)
Phys. Rev. Research 2, 023219 (2020)
10.1103/PhysRevResearch.2.023219
null
q-bio.NC physics.data-an q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We apply techniques from the field of computational mechanics to evaluate the statistical complexity of neural recording data from fruit flies. First, we connect statistical complexity to the flies' level of conscious arousal, which is manipulated by general anesthesia (isoflurane). We show that the complexity of even single channel time series data decreases under anesthesia. The observed difference in complexity between the two states of conscious arousal increases as higher orders of temporal correlations are taken into account. We then go on to show that, in addition to reducing complexity, anesthesia also modulates the informational structure between the forward- and reverse-time neural signals. Specifically, using three distinct notions of temporal asymmetry we show that anesthesia reduces temporal asymmetry on information-theoretic and information-geometric grounds. In contrast to prior work, our results show that: (1) Complexity differences can emerge at very short timescales and across broad regions of the fly brain, thus heralding the macroscopic state of anesthesia in a previously unforeseen manner, and (2) that general anesthesia also modulates the temporal asymmetry of neural signals. Together, our results demonstrate that anesthetized brains become both less structured and more reversible.
[ { "created": "Thu, 30 May 2019 16:57:48 GMT", "version": "v1" }, { "created": "Fri, 2 Aug 2019 06:22:56 GMT", "version": "v2" }, { "created": "Wed, 3 Jun 2020 01:57:40 GMT", "version": "v3" } ]
2020-06-04
[ [ "Muñoz", "Roberto N.", "" ], [ "Leung", "Angus", "" ], [ "Zecevik", "Aidan", "" ], [ "Pollock", "Felix A.", "" ], [ "Cohen", "Dror", "" ], [ "van Swinderen", "Bruno", "" ], [ "Tsuchiya", "Naotsugu", "" ], [ "Modi", "Kavan", "" ] ]
We apply techniques from the field of computational mechanics to evaluate the statistical complexity of neural recording data from fruit flies. First, we connect statistical complexity to the flies' level of conscious arousal, which is manipulated by general anesthesia (isoflurane). We show that the complexity of even single channel time series data decreases under anesthesia. The observed difference in complexity between the two states of conscious arousal increases as higher orders of temporal correlations are taken into account. We then go on to show that, in addition to reducing complexity, anesthesia also modulates the informational structure between the forward- and reverse-time neural signals. Specifically, using three distinct notions of temporal asymmetry we show that anesthesia reduces temporal asymmetry on information-theoretic and information-geometric grounds. In contrast to prior work, our results show that: (1) Complexity differences can emerge at very short timescales and across broad regions of the fly brain, thus heralding the macroscopic state of anesthesia in a previously unforeseen manner, and (2) that general anesthesia also modulates the temporal asymmetry of neural signals. Together, our results demonstrate that anesthetized brains become both less structured and more reversible.
1004.2009
Suhita Nadkarni
Suhita Nadkarni, Thomas Bartol, Terrence Sejnowski and Herbert Levine
Spatial and Temporal Correlates of Vesicular Release at Hippocampal Synapses
null
null
null
null
q-bio.NC q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop a spatially explicit biophysical model of the hippocampal CA3-CA1 presynaptic bouton to study local calcium dynamics leading to vesicle fusion. A kinetic model with two calcium sensors is formulated specifically for the CA3-CA1 synapse. The model includes a sensor for fast synchronous release that lasts a few tens of milliseconds and a sensor for slow asynchronous release that lasts a few hundred milliseconds. We show that a variety of extant data on the CA3-CA1 synapse can be accounted for consistently only when a refractory period of the order of a few milliseconds between releases is introduced. Including a second sensor for asynchronous release that has a slow unbinding site, and therefore an embedded long memory, is shown to play a role in short-term plasticity by facilitating release. For synchronous release mediated by Synaptotagmin II, a third time scale is revealed in addition to the fast and slow release. This third time scale corresponds to "stimulus-correlated super-fast" neurotransmitter release. Our detailed spatial simulation indicates that all three time scales of neurotransmitter release are an emergent property of the calcium sensor and independent of synaptic ultrastructure. Furthermore, it allows us to identify features of synaptic transmission that are universal and those that are modulated by structure.
[ { "created": "Mon, 12 Apr 2010 17:23:43 GMT", "version": "v1" } ]
2010-05-02
[ [ "Nadkarni", "Suhita", "" ], [ "Bartol", "Thomas", "" ], [ "Sejnowski", "Terrence", "" ], [ "Levine", "Herbert", "" ] ]
We develop a spatially explicit biophysical model of the hippocampal CA3-CA1 presynaptic bouton to study local calcium dynamics leading to vesicle fusion. A kinetic model with two calcium sensors is formulated specifically for the CA3-CA1 synapse. The model includes a sensor for fast synchronous release that lasts a few tens of milliseconds and a sensor for slow asynchronous release that lasts a few hundred milliseconds. We show that a variety of extant data on the CA3-CA1 synapse can be accounted for consistently only when a refractory period of the order of a few milliseconds between releases is introduced. Including a second sensor for asynchronous release that has a slow unbinding site, and therefore an embedded long memory, is shown to play a role in short-term plasticity by facilitating release. For synchronous release mediated by Synaptotagmin II, a third time scale is revealed in addition to the fast and slow release. This third time scale corresponds to "stimulus-correlated super-fast" neurotransmitter release. Our detailed spatial simulation indicates that all three time scales of neurotransmitter release are an emergent property of the calcium sensor and independent of synaptic ultrastructure. Furthermore, it allows us to identify features of synaptic transmission that are universal and those that are modulated by structure.
1107.4777
Inbal Hecht
Inbal Hecht, Herbert Levine, Wouter-Jan Rappel and Eshel Ben-Jacob
Self-assisted Amoeboid Navigation in Complex Environments
null
null
10.1371/journal.pone.0021955
null
q-bio.CB physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: Living cells of many types need to move in response to external stimuli in order to accomplish their functional tasks; these tasks range from wound healing to immune response to fertilization. While the directional motion is typically dictated by an external signal, the actual motility is also restricted by physical constraints, such as the presence of other cells and the extracellular matrix. The ability to successfully navigate in the presence of obstacles is not only essential for organisms, but might prove relevant in the study of autonomous robotic motion. Methodology/principal findings: We study a computational model of amoeboid chemotactic navigation under differing conditions, from motion in an obstacle-free environment to navigation between obstacles and finally to moving in a maze. We use the maze as a simple stand-in for a motion task with severe constraints, as might be expected in dense extracellular matrix. Whereas agents using simple chemotaxis can successfully navigate around small obstacles, the presence of large barriers can often lead to agent trapping. We further show that employing a simple memory mechanism, namely secretion of a repulsive chemical by the agent, helps the agent escape from such trapping. Conclusions/significance: Our main conclusion is that cells employing simple chemotactic strategies will often be unable to navigate through maze-like geometries, but a simple chemical marker mechanism (which we refer to as "self-assistance") significantly improves success rates. This realization provides important insights into mechanisms that might be employed by real cells migrating in complex environments as well as clues for the design of robotic navigation strategies. The results can be extended to more complicated multi-cellular systems and can be used in the study of mammalian cell migration and cancer metastasis.
[ { "created": "Sun, 24 Jul 2011 18:00:02 GMT", "version": "v1" } ]
2015-05-28
[ [ "Hecht", "Inbal", "" ], [ "Levine", "Herbert", "" ], [ "Rappel", "Wouter-Jan", "" ], [ "Ben-Jacob", "Eshel", "" ] ]
Background: Living cells of many types need to move in response to external stimuli in order to accomplish their functional tasks; these tasks range from wound healing to immune response to fertilization. While the directional motion is typically dictated by an external signal, the actual motility is also restricted by physical constraints, such as the presence of other cells and the extracellular matrix. The ability to successfully navigate in the presence of obstacles is not only essential for organisms, but might prove relevant in the study of autonomous robotic motion. Methodology/principal findings: We study a computational model of amoeboid chemotactic navigation under differing conditions, from motion in an obstacle-free environment to navigation between obstacles and finally to moving in a maze. We use the maze as a simple stand-in for a motion task with severe constraints, as might be expected in dense extracellular matrix. Whereas agents using simple chemotaxis can successfully navigate around small obstacles, the presence of large barriers can often lead to agent trapping. We further show that employing a simple memory mechanism, namely secretion of a repulsive chemical by the agent, helps the agent escape from such trapping. Conclusions/significance: Our main conclusion is that cells employing simple chemotactic strategies will often be unable to navigate through maze-like geometries, but a simple chemical marker mechanism (which we refer to as "self-assistance") significantly improves success rates. This realization provides important insights into mechanisms that might be employed by real cells migrating in complex environments as well as clues for the design of robotic navigation strategies. The results can be extended to more complicated multi-cellular systems and can be used in the study of mammalian cell migration and cancer metastasis.
1208.4066
Mathukumalli Vidyasagar
Nitin Kumar Singh, M. Eren Ahsen, Shiva Mankala, Hyun-Seok Kim, Michael A. White, M. Vidyasagar
Reverse Engineering Gene Interaction Networks Using the Phi-Mixing Coefficient
19 pages, 6 figures
null
null
null
q-bio.GN stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Constructing gene interaction networks (GINs) from high-throughput gene expression data is an important and challenging problem in systems biology. Existing algorithms produce networks that either have undirected and unweighted edges, or else are constrained to contain no cycles, both of which are biologically unrealistic. In the present paper we propose a new algorithm, based on a concept from probability theory known as the phi-mixing coefficient, that produces networks whose edges are weighted and directed, and are permitted to contain cycles. Because there is no "ground truth" for genome-wide networks on a human scale, we analyzed the outcomes of several experiments on lung cancer, and matched the predictions from the inferred networks with experimental results. Specifically, we inferred three networks (NSCLC, Neuro-endocrine NSCLC plus SCLC, and normal) from the gene expression measurements of 157 lung cancer and 59 normal cell lines, compared with the outcomes of siRNA screening of 19,000+ genes on 11 NSCLC cell lines, and analyzed data from a ChIP-Seq experiment to determine putative downstream targets of the lineage specific oncogenic transcription factor ASCL1. The inferred networks displayed a scale-free or power law behavior between the degree of a node and the number of nodes with that degree. There was a strong correlation between the degree of a gene in the inferred NSCLC network and its essentiality for the survival of the cells. The inferred downstream neighborhood genes of ASCL1 in the SCLC network were significantly enriched by ChIP-Seq determined putative target genes, while no such enrichment was found in the inferred NSCLC network.
[ { "created": "Mon, 20 Aug 2012 17:39:28 GMT", "version": "v1" }, { "created": "Sat, 12 Mar 2016 10:03:26 GMT", "version": "v2" } ]
2016-03-15
[ [ "Singh", "Nitin Kumar", "" ], [ "Ahsen", "M. Eren", "" ], [ "Mankala", "Shiva", "" ], [ "Kim", "Hyun-Seok", "" ], [ "White", "Michael A.", "" ], [ "Vidyasagar", "M.", "" ] ]
Constructing gene interaction networks (GINs) from high-throughput gene expression data is an important and challenging problem in systems biology. Existing algorithms produce networks that either have undirected and unweighted edges, or else are constrained to contain no cycles, both of which are biologically unrealistic. In the present paper we propose a new algorithm, based on a concept from probability theory known as the phi-mixing coefficient, that produces networks whose edges are weighted and directed, and are permitted to contain cycles. Because there is no "ground truth" for genome-wide networks on a human scale, we analyzed the outcomes of several experiments on lung cancer, and matched the predictions from the inferred networks with experimental results. Specifically, we inferred three networks (NSCLC, Neuro-endocrine NSCLC plus SCLC, and normal) from the gene expression measurements of 157 lung cancer and 59 normal cell lines, compared with the outcomes of siRNA screening of 19,000+ genes on 11 NSCLC cell lines, and analyzed data from a ChIP-Seq experiment to determine putative downstream targets of the lineage specific oncogenic transcription factor ASCL1. The inferred networks displayed a scale-free or power law behavior between the degree of a node and the number of nodes with that degree. There was a strong correlation between the degree of a gene in the inferred NSCLC network and its essentiality for the survival of the cells. The inferred downstream neighborhood genes of ASCL1 in the SCLC network were significantly enriched by ChIP-Seq determined putative target genes, while no such enrichment was found in the inferred NSCLC network.
2006.01873
Marcos Capistran Dr
Marcos A. Capistran, Antonio Capella, J. Andres Christen
Forecasting hospital demand during COVID-19 pandemic outbreaks
null
null
null
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a compartmental SEIRD model aimed at forecasting hospital occupancy in metropolitan areas during the current COVID-19 outbreak. The model features asymptomatic and symptomatic infections with detailed hospital dynamics. We explicitly model branching probabilities and non-exponential residence times in the latent and infected compartments. Using both hospital-admittance confirmed cases and deaths, we infer the contact rate and the initial conditions of the dynamical system, considering break points to model lockdown interventions. Our Bayesian approach allows us to produce timely probabilistic forecasts of hospital demand. The model has been used by the federal government of Mexico to assist public policy, and has been applied for the analysis of more than 70 metropolitan areas and the 32 states in the country.
[ { "created": "Tue, 2 Jun 2020 18:44:54 GMT", "version": "v1" }, { "created": "Fri, 5 Jun 2020 04:43:56 GMT", "version": "v2" } ]
2020-06-08
[ [ "Capistran", "Marcos A.", "" ], [ "Capella", "Antonio", "" ], [ "Christen", "J. Andres", "" ] ]
We present a compartmental SEIRD model aimed at forecasting hospital occupancy in metropolitan areas during the current COVID-19 outbreak. The model features asymptomatic and symptomatic infections with detailed hospital dynamics. We explicitly model branching probabilities and non-exponential residence times in the latent and infected compartments. Using both hospital-admittance confirmed cases and deaths, we infer the contact rate and the initial conditions of the dynamical system, considering break points to model lockdown interventions. Our Bayesian approach allows us to produce timely probabilistic forecasts of hospital demand. The model has been used by the federal government of Mexico to assist public policy, and has been applied for the analysis of more than 70 metropolitan areas and the 32 states in the country.
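For orientation, a minimal deterministic SEIRD sketch shows the kind of compartmental dynamics such a forecast builds on. The paper's model is richer (non-exponential residence times, an asymptomatic branch, explicit hospital compartments, and Bayesian inference of parameters), so all rates and the hospitalized fraction below are illustrative assumptions only.

```python
# Minimal SEIRD ODE sketch with a crude bed-demand proxy.
import numpy as np
from scipy.integrate import solve_ivp

def seird(t, y, beta, sigma, gamma, mu):
    S, E, I, R, D = y
    N = S + E + I + R                     # deceased removed from mixing
    dS = -beta * S * I / N
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - (gamma + mu) * I
    dR = gamma * I
    dD = mu * I
    return [dS, dE, dI, dR, dD]

N0 = 1e6
y0 = [N0 - 10, 0.0, 10.0, 0.0, 0.0]
params = dict(beta=0.45, sigma=1 / 5.0, gamma=1 / 10.0, mu=0.002)   # assumed rates

sol = solve_ivp(seird, (0, 180), y0, args=tuple(params.values()),
                t_eval=np.arange(0, 181), rtol=1e-8)

hosp_fraction = 0.05                       # assumed share of infectious needing a bed
beds_needed = hosp_fraction * sol.y[2]
print("peak infectious:", int(sol.y[2].max()),
      "on day", int(sol.t[np.argmax(sol.y[2])]))
print("peak bed demand (assumed 5% of I):", int(beds_needed.max()))
```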
1207.3907
Erik Garrison
Erik Garrison and Gabor Marth
Haplotype-based variant detection from short-read sequencing
9 pages, partial draft
null
null
null
q-bio.GN q-bio.QM
http://creativecommons.org/licenses/by/3.0/
The direct detection of haplotypes from short-read DNA sequencing data requires changes to existing small-variant detection methods. Here, we develop a Bayesian statistical framework which is capable of modeling multiallelic loci in sets of individuals with non-uniform copy number. We then describe our implementation of this framework in a haplotype-based variant detector, FreeBayes.
[ { "created": "Tue, 17 Jul 2012 07:53:38 GMT", "version": "v1" }, { "created": "Fri, 20 Jul 2012 23:24:32 GMT", "version": "v2" } ]
2012-07-24
[ [ "Garrison", "Erik", "" ], [ "Marth", "Gabor", "" ] ]
The direct detection of haplotypes from short-read DNA sequencing data requires changes to existing small-variant detection methods. Here, we develop a Bayesian statistical framework which is capable of modeling multiallelic loci in sets of individuals with non-uniform copy number. We then describe our implementation of this framework in a haplotype-based variant detector, FreeBayes.
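To make the Bayesian flavour concrete, the sketch below computes genotype posteriors at a single diallelic site in one diploid sample from raw read counts. This is emphatically not FreeBayes' actual model, which is haplotype-based, multiallelic, and handles sets of samples with non-uniform copy number; the priors and error rate are assumed values.

```python
# Toy Bayesian genotype call for one diploid sample at a diallelic site.
from math import comb, exp, log

def genotype_posteriors(ref_reads, alt_reads, error=0.01,
                        priors={"RR": 0.998, "RA": 0.001, "AA": 0.001}):
    """Posterior over genotypes given counts of reference/alternate reads."""
    # Probability that a single read shows the ALT base, per genotype.
    p_alt = {"RR": error, "RA": 0.5, "AA": 1.0 - error}
    n = ref_reads + alt_reads
    log_post = {}
    for g, p in p_alt.items():
        loglik = (log(comb(n, alt_reads))
                  + alt_reads * log(p) + ref_reads * log(1.0 - p))
        log_post[g] = loglik + log(priors[g])
    norm = max(log_post.values())                 # stabilize before exponentiating
    weights = {g: exp(lp - norm) for g, lp in log_post.items()}
    total = sum(weights.values())
    return {g: w / total for g, w in weights.items()}

# Example: 12 reference reads and 9 alternate reads strongly favour a het call.
print(genotype_posteriors(ref_reads=12, alt_reads=9))
```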
1904.07231
Jean Faber
Jean Faber, Priscila C. Antoneli, Guillem Via, Noemi S. Ara\'ujo, Daniel J. L. L. Pinheiro, Esper Cavalheiro
Critical elements for connectivity analysis of brain networks
39 pages, 14 figures and 4 tables
null
null
null
q-bio.NC q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, new and important perspectives were introduced in the field of neuroimaging with the emergence of the connectionist approach. In this new context, it is important to know not only which brain areas are activated by a particular stimulus but, mainly, how these areas are structurally and functionally connected, distributed, and organized in relation to other areas. Additionally, the arrangement of the network elements, i.e., its topology, and the dynamics they give rise to are also important. This new approach is called connectomics. It brings together a series of techniques and methodologies capable of systematizing, from the different types of signals and images of the nervous system, how elements ranging from neuronal units to brain areas are connected. Through this approach, the different patterns of connectivity can be graphically and mathematically represented by the so-called connectomes. The connectome uses quantitative metrics to evaluate structural and functional information from images of neural tracts and pathways or signals from the metabolic and/or electrophysiologic activity of cell populations or brain areas. Moreover, with adequate treatment of this information, it is also possible to infer causal relationships. In this way, structural and functional evaluations are complementary descriptions which, together, represent the anatomic and physiologic neural properties, establishing a new paradigm for understanding how the brain functions by looking at brain connections. Here, we highlight five critical elements of a network that allow an integrative analysis, focusing mainly on a functional description. These elements include: (i) the properties of its nodes; (ii) the metrics for connectivity and coupling between nodes; (iii) the network topologies; (iv) the network dynamics and (v) the interconnections between different domains and scales of network representations.
[ { "created": "Mon, 15 Apr 2019 07:34:36 GMT", "version": "v1" } ]
2019-04-17
[ [ "Faber", "Jean", "" ], [ "Antoneli", "Priscila C.", "" ], [ "Via", "Guillem", "" ], [ "Araújo", "Noemi S.", "" ], [ "Pinheiro", "Daniel J. L. L.", "" ], [ "Cavalheiro", "Esper", "" ] ]
In recent years, new and important perspectives were introduced in the field of neuroimaging with the emergence of the connectionist approach. In this new context, it is important to know not only which brain areas are activated by a particular stimulus but, mainly, how these areas are structurally and functionally connected, distributed, and organized in relation to other areas. Additionally, the arrangement of the network elements, i.e., its topology, and the dynamics they give rise to are also important. This new approach is called connectomics. It brings together a series of techniques and methodologies capable of systematizing, from the different types of signals and images of the nervous system, how elements ranging from neuronal units to brain areas are connected. Through this approach, the different patterns of connectivity can be graphically and mathematically represented by the so-called connectomes. The connectome uses quantitative metrics to evaluate structural and functional information from images of neural tracts and pathways or signals from the metabolic and/or electrophysiologic activity of cell populations or brain areas. Moreover, with adequate treatment of this information, it is also possible to infer causal relationships. In this way, structural and functional evaluations are complementary descriptions which, together, represent the anatomic and physiologic neural properties, establishing a new paradigm for understanding how the brain functions by looking at brain connections. Here, we highlight five critical elements of a network that allow an integrative analysis, focusing mainly on a functional description. These elements include: (i) the properties of its nodes; (ii) the metrics for connectivity and coupling between nodes; (iii) the network topologies; (iv) the network dynamics and (v) the interconnections between different domains and scales of network representations.
1201.2933
Brian Kolterman
Brian E. Kolterman, Ivan Iossifov, and Alexei A. Koulakov
A race model for singular olfactory receptor expression
null
null
null
null
q-bio.MN q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In vertebrates, olfactory sensory neurons choose only one olfactory receptor to produce out of ~2000 possibilities. The mechanism for how this singular receptor expression occurs is unknown. Here we propose a mechanism that can stochastically select a single gene out of a large number of possibilities. In this model, receptor genes compete for a limited pool of transcription factors (TFs). The gene that recruits a target number of TFs is selected for expression. To support this mechanism, we have attempted to detect repeated motifs within known sequences of mouse olfactory receptor promoters. We find motifs that are significantly overrepresented in olfactory versus other gene promoters. We identify possible TFs that can target these motifs. Our model suggests that a small number of TFs can control the selection of a single gene out of ~2000 possibilities.
[ { "created": "Fri, 13 Jan 2012 20:54:03 GMT", "version": "v1" } ]
2012-01-16
[ [ "Kolterman", "Brian E.", "" ], [ "Iossifov", "Ivan", "" ], [ "Koulakov", "Alexei A.", "" ] ]
In vertebrates, olfactory sensory neurons choose only one olfactory receptor to produce out of ~2000 possibilities. The mechanism for how this singular receptor expression occurs is unknown. Here we propose a mechanism that can stochastically select a single gene out of a large number of possibilities. In this model, receptor genes compete for a limited pool of transcription factors (TFs). The gene that recruits a target number of TFs is selected for expression. To support this mechanism, we have attempted to detect repeated motifs within known sequences of mouse olfactory receptor promoters. We find motifs that are significantly overrepresented in olfactory versus other gene promoters. We identify possible TFs that can target these motifs. Our model suggests that a small number of TFs can control the selection of a single gene out of ~2000 possibilities.
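A minimal sketch of the race mechanism described above, with toy parameters chosen only for illustration (the gene count, pool size, target number, and affinities below are assumptions, not the authors' fitted values): receptor genes compete for a limited pool of transcription factors, and the first gene whose promoter recruits a target number of TFs is the one selected for expression.

```python
# Toy "race" simulation: TFs from a limited pool bind promoters one at a time,
# in proportion to promoter affinity; the first gene to recruit TARGET TFs wins.
import random

N_GENES = 20          # toy stand-in for the ~2000 receptor genes (assumption)
POOL = 60             # limited pool of free TF molecules (assumption)
TARGET = 5            # TFs a promoter must recruit to trigger expression (assumption)

def run_race(affinities, seed=None):
    rng = random.Random(seed)
    bound = [0] * len(affinities)
    free = POOL
    while free > 0:
        # each free TF binds one promoter, chosen with probability ~ affinity
        gene = rng.choices(range(len(affinities)), weights=affinities)[0]
        bound[gene] += 1
        free -= 1
        if bound[gene] >= TARGET:
            return gene       # singular choice: first gene to reach TARGET wins
    return None               # pool exhausted before any gene reached TARGET

# a gene with a slightly stronger promoter wins the race disproportionately often
affinities = [1.0] * N_GENES
affinities[0] = 2.0
wins = sum(run_race(affinities, seed=s) == 0 for s in range(2000))
print(f"gene 0 selected in {wins / 2000:.1%} of trials (uniform baseline: {1 / N_GENES:.1%})")
```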
1806.10428
Jos\'e Antonio Villacorta-Atienza
Jose Antonio Villacorta-Atienza (1,2), Carlos Calvo-Tapia (2), Sergio Diez-Hermano (1), Abel Sanchez-Jimenez (1,2), Sergey Lobov (3), Nadia Krilova (3), Antonio Murciano (1), Gabriela Lopez-Tolsa (4), Ricardo Pellon (4), Valeri Makarov (2) ((1) Biomathematics Unit (BEE Department), Faculty of Biology, Complutense University of Madrid, Spain, (2) Institute of Interdisciplinary Mathematics (IMI), Faculty of Mathematics, Complutense University of Madrid, Spain, (3) Lobachevsky State University of Nizhny Novgorod, Russia, (4) Department of Basic Psychology, Universidad de Educaci\'on a Distancia (UNED), Spain)
Static Internal Representation Of Dynamic Situations Reveals Time Compaction In Human Cognition
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The time-changing nature of our world demands processing of huge amounts of information in a fast and reliable way to generate successful behaviors. Therefore, significant brain resources are devoted to processing spatiotemporal information. The neural bases of spatial processing and their cognitive correlates are well established, mostly for static environments. Nonetheless, in time-changing situations the brain exploits specific processing mechanisms for temporal information based on prediction and anticipation, such as time compression during visual perception and mental navigation. The alternative hypothesis of time compaction integrates both views, postulating that dynamic situations are internally represented as static spatial maps where temporal information is extracted by predicting and structuring the relevant interactions. Nevertheless, empirical approaches tackling the biological soundness of time compaction are still lacking. Here we show that performance in a discrimination learning task involving dynamic situations can be either favored or hampered by previous exposure to interfering static scenes. Specifically, men were effectively conditioned, in contrast to a control group, in agreement with the hypothesis. Meanwhile, women performed on par with control men, regardless of the previous conditioning. This suggests that time compaction is a salient cognitive strategy in men when dealing with dynamic situations, while women seem to rely on a broader range of information processing strategies. Finally, we further corroborated the time compaction mechanism involved in these experimental findings through a mathematical model of the experimental process. Our results point to some form of static internal representation mechanism at the cognitive level involved in decision-making and strategy planning in dynamic situations [...]
[ { "created": "Wed, 27 Jun 2018 12:10:56 GMT", "version": "v1" } ]
2018-06-28
[ [ "Villacorta-Atienza", "Jose Antonio", "" ], [ "Calvo-Tapia", "Carlos", "" ], [ "Diez-Hermano", "Sergio", "" ], [ "Sanchez-Jimenez", "Abel", "" ], [ "Lobov", "Sergey", "" ], [ "Krilova", "Nadia", "" ], [ "Murciano", "Antonio", "" ], [ "Lopez-Tolsa", "Gabriela", "" ], [ "Pellon", "Ricardo", "" ], [ "Makarov", "Valeri", "" ] ]
The time-changing nature of our world demands processing of huge amounts of information in a fast and reliable way to generate successful behaviors. Therefore, significant brain resources are devoted to processing spatiotemporal information. The neural bases of spatial processing and their cognitive correlates are well established, mostly for static environments. Nonetheless, in time-changing situations the brain exploits specific processing mechanisms for temporal information based on prediction and anticipation, such as time compression during visual perception and mental navigation. The alternative hypothesis of time compaction integrates both views, postulating that dynamic situations are internally represented as static spatial maps where temporal information is extracted by predicting and structuring the relevant interactions. Nevertheless, empirical approaches tackling the biological soundness of time compaction are still lacking. Here we show that performance in a discrimination learning task involving dynamic situations can be either favored or hampered by previous exposure to interfering static scenes. Specifically, men were effectively conditioned, in contrast to a control group, in agreement with the hypothesis. Meanwhile, women performed on par with control men, regardless of the previous conditioning. This suggests that time compaction is a salient cognitive strategy in men when dealing with dynamic situations, while women seem to rely on a broader range of information processing strategies. Finally, we further corroborated the time compaction mechanism involved in these experimental findings through a mathematical model of the experimental process. Our results point to some form of static internal representation mechanism at the cognitive level involved in decision-making and strategy planning in dynamic situations [...]
2101.05458
Sayan Nag
Sayan Nag
On the stability of equilibria of the physiologically-informed dynamic causal model
null
null
null
null
q-bio.NC nlin.CD physics.med-ph
http://creativecommons.org/licenses/by/4.0/
Experimental manipulations perturb neuronal activity. This phenomenon is manifested in the fMRI response. The dynamic causal model (DCM) and its variants can model these neuronal responses along with the BOLD responses [1, 2, 3, 4, 5]. Physiologically-informed DCM (P-DCM) [5] gives state-of-the-art results in this respect. However, P-DCM has more parameters than the standard DCM, and the stability of this particular model is still unexplored. In this work, we explore the stability of the P-DCM model and find the ranges of the model parameters that make it stable.
[ { "created": "Thu, 14 Jan 2021 04:33:31 GMT", "version": "v1" } ]
2021-01-15
[ [ "Nag", "Sayan", "" ] ]
Experimental manipulations perturb neuronal activity. This phenomenon is manifested in the fMRI response. The dynamic causal model (DCM) and its variants can model these neuronal responses along with the BOLD responses [1, 2, 3, 4, 5]. Physiologically-informed DCM (P-DCM) [5] gives state-of-the-art results in this respect. However, P-DCM has more parameters than the standard DCM, and the stability of this particular model is still unexplored. In this work, we explore the stability of the P-DCM model and find the ranges of the model parameters that make it stable.
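Since the abstract does not reproduce the P-DCM state equations, the following is only a generic sketch of the kind of local stability analysis it describes: locate an equilibrium of a nonlinear ODE system, linearize it numerically, and declare the equilibrium locally stable when every Jacobian eigenvalue has a negative real part. The two-state system and parameter values used here are placeholders, not the P-DCM model.

```python
# Generic local stability check via the numerical Jacobian at an equilibrium.
import numpy as np
from scipy.optimize import approx_fprime, fsolve

def f(x, a=1.0, b=0.5):
    # stand-in two-state system; replace with the model's state equations
    x1, x2 = x
    return np.array([-a * x1 + x2, -b * x2 + x1 * x2])

x_eq = fsolve(f, x0=np.array([0.1, 0.1]))                  # equilibrium f(x*) = 0
J = np.array([approx_fprime(x_eq, lambda x, i=i: f(x)[i], 1e-6)
              for i in range(len(x_eq))])                   # numerical Jacobian
eigvals = np.linalg.eigvals(J)
print("equilibrium:", x_eq)
print("eigenvalues:", eigvals)
print("locally stable:", bool(np.all(eigvals.real < 0)))
```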
1609.00788
Olivia Prosper
Swati DebRoy, Olivia Prosper, Austin Mishoe, Anuj Mubayi
Challenges in Modeling Complexity of Neglected Tropical Diseases: Assessment of Visceral Leishmaniasis Dynamics in Resource Limited Settings
20 pages, 3 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neglected tropical diseases (NTD), particularly vector-borne diseases (VBD), account for a large proportion of the global disease burden, and their control faces several challenges, including diminishing human and financial resources for those distressed by such diseases. Visceral Leishmaniasis (VL), the second-largest parasitic killer in the world (after malaria), affects poor populations in endemic countries and causes considerable cost to the affected individuals and their society. Mathematical models can serve as a critical tool for understanding the driving mechanisms of a NTD such as VL. The WHO promotes integrated control programs for VL, but this policy is not well supported by systematic quantitative and dynamic evidence, so the potential benefits of the policy are limited. Moreover, mathematical models can be readily developed and used to understand the functioning of the VL system cheaply and systematically. The focus of this research is three-fold: (i) to identify non-traditional but critical mechanisms for ongoing VL transmission in resource-limited regions, (ii) to review mathematical models used for other infectious diseases that have the potential to capture identified factors of VL, and (iii) to suggest novel quantitative models for understanding VL dynamics and for evaluating control programs in such frameworks for achieving VL elimination goals.
[ { "created": "Sat, 3 Sep 2016 03:31:23 GMT", "version": "v1" } ]
2016-09-06
[ [ "DebRoy", "Swati", "" ], [ "Prosper", "Olivia", "" ], [ "Mishoe", "Austin", "" ], [ "Mubayi", "Anuj", "" ] ]
Neglected tropical diseases (NTD), particularly vector-borne diseases (VBD), account for a large proportion of the global disease burden, and their control faces several challenges, including diminishing human and financial resources for those distressed by such diseases. Visceral Leishmaniasis (VL), the second-largest parasitic killer in the world (after malaria), affects poor populations in endemic countries and causes considerable cost to the affected individuals and their society. Mathematical models can serve as a critical tool for understanding the driving mechanisms of a NTD such as VL. The WHO promotes integrated control programs for VL, but this policy is not well supported by systematic quantitative and dynamic evidence, so the potential benefits of the policy are limited. Moreover, mathematical models can be readily developed and used to understand the functioning of the VL system cheaply and systematically. The focus of this research is three-fold: (i) to identify non-traditional but critical mechanisms for ongoing VL transmission in resource-limited regions, (ii) to review mathematical models used for other infectious diseases that have the potential to capture identified factors of VL, and (iii) to suggest novel quantitative models for understanding VL dynamics and for evaluating control programs in such frameworks for achieving VL elimination goals.
2208.00637
Catherine Beauchemin
Christian Quirouette, Daniel Cresta, Jizhou Li, Kathleen P. Wilkie, Haozhao Liang and Catherine A.A. Beauchemin
Stochastic failure of cell infection post viral entry: Implications for infection outcomes and antiviral therapy
35 pages, 13 figures (supplementary: 6 pages, 5 figures)
Sci. Rep. 13, 17243 (2023)
10.1038/s41598-023-44180-w
RIKEN-iTHEMS-Report-22
q-bio.CB q-bio.QM
http://creativecommons.org/licenses/by-sa/4.0/
A virus infection can be initiated with very few or even a single infectious virion, and as such can become extinct, i.e. stochastically fail to take hold or spread significantly. There are many ways that a fully competent infectious virion, having successfully entered a cell, can fail to cause a productive infection, i.e. one that yields infectious virus progeny. Though many discrete, stochastic mathematical models (DSMs) have been developed and used to estimate a virus infection's extinction probability, these typically neglect infection failure post viral entry. The DSM presented herein introduces a parameter $\gamma\in(0,1]$ which corresponds to the probability that a virion's entry into a cell will result in a productive cell infection. We derive an expression for the likelihood of infection extinction in this new DSM, and find that prophylactic therapy with an antiviral acting to reduce $\gamma$ is best at increasing an infection's extinction probability, compared to antivirals acting on the rates of virus production or virus entry into cells. Using the DSM, we investigate the difference in the fraction of cells consumed by so-called extinct versus established virus infections, and find that this distinction becomes biologically meaningless as the probability of extinction approaches 100%. We show that infections wherein virus is released by an infected cell as a single burst, rather than at a constant rate over the cell's infectious lifespan, have the same probability of infection extinction, despite previous claims to this effect [Pearson 2011, doi:10.1371/journal.pcbi.1001058]. Instead, extending previous work by others [Yan 2016, doi:10.1007/s00285-015-0961-5], we show how the assumed distribution for the stochastic virus burst size affects the extinction probability and associated critical antiviral efficacy.
[ { "created": "Mon, 1 Aug 2022 06:51:27 GMT", "version": "v1" } ]
2024-03-20
[ [ "Quirouette", "Christian", "" ], [ "Cresta", "Daniel", "" ], [ "Li", "Jizhou", "" ], [ "Wilkie", "Kathleen P.", "" ], [ "Liang", "Haozhao", "" ], [ "Beauchemin", "Catherine A. A.", "" ] ]
A virus infection can be initiated with very few or even a single infectious virion, and as such can become extinct, i.e. stochastically fail to take hold or spread significantly. There are many ways that a fully competent infectious virion, having successfully entered a cell, can fail to cause a productive infection, i.e. one that yields infectious virus progeny. Though many discrete, stochastic mathematical models (DSMs) have been developed and used to estimate a virus infection's extinction probability, these typically neglect infection failure post viral entry. The DSM presented herein introduces a parameter $\gamma\in(0,1]$ which corresponds to the probability that a virion's entry into a cell will result in a productive cell infection. We derive an expression for the likelihood of infection extinction in this new DSM, and find that prophylactic therapy with an antiviral acting to reduce $\gamma$ is best at increasing an infection's extinction probability, compared to antivirals acting on the rates of virus production or virus entry into cells. Using the DSM, we investigate the difference in the fraction of cells consumed by so-called extinct versus established virus infections, and find that this distinction becomes biologically meaningless as the probability of extinction approaches 100%. We show that infections wherein virus is released by an infected cell as a single burst, rather than at a constant rate over the cell's infectious lifespan, have the same probability of infection extinction, despite previous claims to this effect [Pearson 2011, doi:10.1371/journal.pcbi.1001058]. Instead, extending previous work by others [Yan 2016, doi:10.1007/s00285-015-0961-5], we show how the assumed distribution for the stochastic virus burst size affects the extinction probability and associated critical antiviral efficacy.
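As a hedged illustration of the quantity discussed above (not the authors' full DSM), the sketch below computes the extinction probability of a simple branching process in which each virion yields a productive cell with probability gamma and each productive cell releases a random burst of new virions. The mean burst size, the value of gamma, and the two burst distributions compared are assumptions chosen only to show that the burst-size distribution changes the answer.

```python
# Minimal branching-process sketch: starting from one infectious virion, the
# extinction probability is the smallest fixed point of the offspring
# generating function, found here by monotone fixed-point iteration from 0.
import math

def extinction_probability(gamma, burst_pgf, iters=10_000):
    q = 0.0
    for _ in range(iters):
        q = (1.0 - gamma) + gamma * burst_pgf(q)
    return q

B = 10.0       # mean burst size (assumed value for illustration)
gamma = 0.3    # probability a cell entry leads to productive infection (assumed)

poisson_pgf   = lambda s: math.exp(B * (s - 1.0))
geometric_pgf = lambda s: (1.0 / (1.0 + B)) / (1.0 - (B / (1.0 + B)) * s)

for name, pgf in [("Poisson burst", poisson_pgf), ("geometric burst", geometric_pgf)]:
    print(f"{name}: extinction probability ~ {extinction_probability(gamma, pgf):.4f}")
```

With these placeholder values the two burst distributions give visibly different extinction probabilities despite sharing the same mean, which is the qualitative point the abstract makes about the assumed burst-size distribution.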
1804.07609
Gabriel Silva
Gabriel A. Silva
The Effect of Signaling Latencies and Node Refractory States on the Dynamics of Networks
Version 3: Accepted version in press in Neural Computation
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe the construction and theoretical analysis of a framework, derived from canonical neurophysiological principles, that models the competing dynamics of incident signals into nodes along directed edges in a network. The framework describes the dynamics between the offset in the latencies of propagating signals, which reflect the geometry of the edges and conduction velocities, and the internal refractory dynamics and processing times of the downstream node receiving the signals. This framework naturally extends to the construction of a perceptron model that takes into account such dynamic geometric considerations. We first describe the model in detail, culminating with the model of a geometric dynamic perceptron. We then derive upper and lower bounds for a notion of optimal efficient signaling between vertex pairs based on the structure of the framework. Efficient signaling in the context of the framework we develop here means that there needs to be a temporal match between the arrival times of the signals and how quickly nodes can internally process signals. These bounds reflect numerical constraints on the compensation of the timing of signaling events of upstream nodes attempting to activate downstream nodes they connect into that preserve this notion of efficiency. When a mismatch between signal arrival times and the internal states of activated nodes occurs, it can cause a breakdown in the signaling dynamics of the network. In contrast to essentially all the current state of the art in machine learning, this work provides a theoretical foundation for machine learning and intelligence architectures based on the timing of node activations and their abilities to respond, rather than on necessary changes in synaptic weights. At the same time, this work is guiding the discovery of experimentally testable new structure-function principles in the biological brain.
[ { "created": "Tue, 17 Apr 2018 22:31:29 GMT", "version": "v1" }, { "created": "Wed, 11 Jul 2018 05:34:07 GMT", "version": "v2" }, { "created": "Sun, 4 Aug 2019 17:23:34 GMT", "version": "v3" } ]
2019-08-06
[ [ "Silva", "Gabriel A.", "" ] ]
We describe the construction and theoretical analysis of a framework, derived from canonical neurophysiological principles, that models the competing dynamics of incident signals into nodes along directed edges in a network. The framework describes the dynamics between the offset in the latencies of propagating signals, which reflect the geometry of the edges and conduction velocities, and the internal refractory dynamics and processing times of the downstream node receiving the signals. This framework naturally extends to the construction of a perceptron model that takes into account such dynamic geometric considerations. We first describe the model in detail, culminating with the model of a geometric dynamic perceptron. We then derive upper and lower bounds for a notion of optimal efficient signaling between vertex pairs based on the structure of the framework. Efficient signaling in the context of the framework we develop here means that there needs to be a temporal match between the arrival times of the signals and how quickly nodes can internally process signals. These bounds reflect numerical constraints on the compensation of the timing of signaling events of upstream nodes attempting to activate downstream nodes they connect into that preserve this notion of efficiency. When a mismatch between signal arrival times and the internal states of activated nodes occurs, it can cause a breakdown in the signaling dynamics of the network. In contrast to essentially all the current state of the art in machine learning, this work provides a theoretical foundation for machine learning and intelligence architectures based on the timing of node activations and their abilities to respond, rather than on necessary changes in synaptic weights. At the same time, this work is guiding the discovery of experimentally testable new structure-function principles in the biological brain.
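The following toy sketch (an assumption-laden illustration, not the authors' framework or bounds) captures the basic latency/refractory interaction described above: a signal that arrives at a downstream node while the node is still refractory from a previous activation is simply lost, so whether several upstream signals all register depends on the offset between their arrival latencies relative to the refractory period.

```python
# Toy latency/refractory interaction: arrivals during the refractory window are dropped.
def registered_signals(latencies, refractory):
    """Return which arrival times actually activate the downstream node."""
    activated, busy_until = [], float("-inf")
    for t in sorted(latencies):
        if t >= busy_until:            # node is ready: signal is processed
            activated.append(t)
            busy_until = t + refractory
        # otherwise the signal arrives mid-refractory state and is lost
    return activated

refractory = 5.0
print(registered_signals([2.0, 9.0], refractory))   # offset > refractory: both register
print(registered_signals([2.0, 4.0], refractory))   # offset < refractory: second is lost
```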
2207.07716
Diogo L. Pires
Diogo L. Pires and Mark Broom
More can be better: An analysis of single-mutant fixation probability functions under $2\times2$ games
23 pages, 8 figures
Proceedings of Royal Society A. 478: 20220577
10.1098/rspa.2022.0577
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Evolutionary game theory has proved to be a powerful tool to probe the self-organisation of collective behaviour by considering frequency-dependent fitness in evolutionary processes. It has shown that the stability of a strategy depends not only on the payoffs received after each encounter but also on the population's size. Here, we study $2\times2$ games in well-mixed finite populations by analysing the fixation probabilities of single mutants as functions of population size. We proved that 9 out of the 24 possible games always lead to monotonically decreasing functions, similarly to fixed fitness scenarios. However, fixation functions showed increasing regions under 12 distinct anti-coordination, coordination, and dominance games. Perhaps counter-intuitively, this establishes that single-mutant strategies often benefit from being in larger populations. Fixation functions that increase from a global minimum to a positive asymptotic value are pervasive but may have been easily concealed by the weak selection limit. We obtained sufficient conditions to observe fixation increasing for small populations and three distinct ways this can occur. Finally, we describe fixation functions with increasing regions bounded by two extremes under intermediate population sizes. We associate their occurrence with transitions from having one global extreme to other shapes.
[ { "created": "Fri, 15 Jul 2022 19:15:51 GMT", "version": "v1" }, { "created": "Fri, 28 Oct 2022 11:01:40 GMT", "version": "v2" }, { "created": "Tue, 29 Nov 2022 14:37:10 GMT", "version": "v3" } ]
2023-01-11
[ [ "Pires", "Diogo L.", "" ], [ "Broom", "Mark", "" ] ]
Evolutionary game theory has proved to be a powerful tool to probe the self-organisation of collective behaviour by considering frequency-dependent fitness in evolutionary processes. It has shown that the stability of a strategy depends not only on the payoffs received after each encounter but also on the population's size. Here, we study $2\times2$ games in well-mixed finite populations by analysing the fixation probabilities of single mutants as functions of population size. We proved that 9 out of the 24 possible games always lead to monotonically decreasing functions, similarly to fixed fitness scenarios. However, fixation functions showed increasing regions under 12 distinct anti-coordination, coordination, and dominance games. Perhaps counter-intuitively, this establishes that single-mutant strategies often benefit from being in larger populations. Fixation functions that increase from a global minimum to a positive asymptotic value are pervasive but may have been easily concealed by the weak selection limit. We obtained sufficient conditions to observe fixation increasing for small populations and three distinct ways this can occur. Finally, we describe fixation functions with increasing regions bounded by two extremes under intermediate population sizes. We associate their occurrence with transitions from having one global extreme to other shapes.
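As a sketch of the kind of quantity analysed above, the code below evaluates the standard fixation probability of a single mutant in a frequency-dependent Moran process under a 2x2 game with payoff matrix [[a, b], [c, d]] and a linear payoff-to-fitness mapping. The particular payoffs, the selection intensity, and the choice of the Moran update rule are placeholders and assumptions, used only to show how the fixation function can be scanned across population sizes.

```python
# Single-mutant fixation probability in a frequency-dependent Moran process:
# rho = 1 / (1 + sum_{k=1}^{N-1} prod_{i=1}^{k} g_i / f_i), with exact rationals.
from fractions import Fraction

def fixation_probability(a, b, c, d, N, w):
    def ratio(i):                                  # g_i / f_i with i mutants present
        pi_A = Fraction(a * (i - 1) + b * (N - i), N - 1)   # mutant payoff (no self-play)
        pi_B = Fraction(c * i + d * (N - i - 1), N - 1)     # resident payoff
        f = 1 - w + w * pi_A
        g = 1 - w + w * pi_B
        return g / f
    total, prod = Fraction(1), Fraction(1)
    for k in range(1, N):
        prod *= ratio(k)
        total += prod
    return 1 / total

# anti-coordination-style payoffs (placeholders) and intensity of selection w
a, b, c, d, w = 1, 4, 3, 2, Fraction(1, 2)
for N in (4, 8, 16, 32, 64):
    print(N, float(fixation_probability(a, b, c, d, N, w)))
```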
1807.03911
Peter Clote
Defne Surujon, Yann Ponty, Peter Clote
Small-world networks and RNA secondary structures
paper 17 pages, 3 figures, 2 tables; supplementary information 12 pages, 6 figures
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Let Sn denote the network of all RNA secondary structures of length n, in which undirected edges exist between structures s, t such that t is obtained from s by the addition, removal or shift of a single base pair. Using context-free grammars, generating functions and complex analysis, we show that the asymptotic average degree is O(n) and that the asymptotic clustering coefficient is O(1/n), from which it follows that the family Sn, n = 1,2,3,..., of secondary structure networks is not small-world.
[ { "created": "Wed, 11 Jul 2018 00:42:35 GMT", "version": "v1" } ]
2018-07-12
[ [ "Surujon", "Defne", "" ], [ "Ponty", "Yann", "" ], [ "Clote", "Peter", "" ] ]
Let Sn denote the network of all RNA secondary structures of length n, in which undirected edges exist between structures s, t such that t is obtained from s by the addition, removal or shift of a single base pair. Using context-free grammars, generating functions and complex analysis, we show that the asymptotic average degree is O(n) and that the asymptotic clustering coefficient is O(1/n), from which it follows that the family Sn, n = 1,2,3,..., of secondary structure networks is not small-world.
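The network Sn can be explored directly for very small n. The sketch below (a toy, not the paper's grammar-based asymptotic analysis) enumerates all secondary structures of a short made-up sequence, connects structures that differ by the addition or removal of a single base pair (shift moves are omitted for brevity, so this is a subgraph of the move set defined above), and reports the average degree and clustering coefficient with networkx; the sequence and minimum loop size are assumptions.

```python
# Toy enumeration of the add/remove move graph on secondary structures.
from itertools import combinations
import networkx as nx

SEQ = "GCAUCUAUGC"           # made-up toy sequence (assumption)
PAIRS = {("G","C"),("C","G"),("A","U"),("U","A"),("G","U"),("U","G")}
MIN_LOOP = 3                  # minimum hairpin loop size (assumption)

def can_pair(i, j):
    return j - i > MIN_LOOP and (SEQ[i], SEQ[j]) in PAIRS

def compatible(bp, s):
    i, j = bp
    for k, l in s:
        if len({i, j, k, l}) < 4:                 # shared nucleotide
            return False
        if (k < i < l < j) or (i < k < j < l):    # crossing pair (pseudoknot)
            return False
    return True

# enumerate all pairwise-compatible sets of base pairs by depth-first extension
candidates = [(i, j) for i in range(len(SEQ))
              for j in range(i + 1, len(SEQ)) if can_pair(i, j)]
structures = set()
def extend(s, start):
    structures.add(frozenset(s))
    for idx in range(start, len(candidates)):
        bp = candidates[idx]
        if compatible(bp, s):
            extend(s | {bp}, idx + 1)
extend(frozenset(), 0)

# edges: structures differing by the addition/removal of exactly one base pair
G = nx.Graph()
G.add_nodes_from(structures)
for s, t in combinations(structures, 2):
    if len(s ^ t) == 1:
        G.add_edge(s, t)

print(len(G), "structures, average degree",
      2 * G.number_of_edges() / len(G),
      ", clustering", nx.average_clustering(G))
```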