Dataset schema (fields appear in this order in each record below; string lengths are min–max over the dataset):

- id: string, 9–13 chars
- submitter: string, 4–48 chars
- authors: string, 4–9.62k chars
- title: string, 4–343 chars
- comments: string, 2–480 chars
- journal-ref: string, 9–309 chars
- doi: string, 12–138 chars
- report-no: string class, 277 distinct values
- categories: string, 8–87 chars
- license: string class, 9 distinct values
- orig_abstract: string, 27–3.76k chars
- versions: list, 1–15 items
- update_date: string, 10–10 chars
- authors_parsed: list, 1–147 items
- abstract: string, 24–3.75k chars

In the records shown here, abstract is a verbatim duplicate of orig_abstract.
2003.07394
Marco D'Alessandro
Marco D'Alessandro, Stefan T. Radev, Andreas Voss, Luigi Lombardi
A Bayesian brain model of adaptive behavior: An application to the Wisconsin Card Sorting Task
null
null
null
null
q-bio.NC q-bio.QM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Adaptive behavior emerges through a dynamic interaction between cognitive agents and changing environmental demands. The investigation of information processing underlying adaptive behavior relies on controlled experimental settings in which individuals are asked to accomplish demanding tasks whereby a hidden state or an abstract rule has to be learned dynamically. Although performance in such tasks is regularly considered a proxy for measuring high-level cognitive processes, the standard approach consists of summarizing response patterns by simple heuristic scoring measures. With this work, we propose and validate a new computational Bayesian model accounting for individual performance in the established Wisconsin Card Sorting Test. We embed the new model within the mathematical framework of Bayesian Brain Theory, according to which beliefs about the hidden environmental states are dynamically updated following the logic of Bayesian inference. Our computational model maps distinct cognitive processes onto separable, neurobiologically plausible, information-theoretic constructs underlying observed response patterns. We assess model identification and expressiveness in accounting for meaningful human performance through extensive simulation studies. We further apply the model to real behavioral data in order to highlight the utility of the proposed model in recovering cognitive dynamics at an individual level. Practical and theoretical implications of our computational modeling approach for clinical and cognitive neuroscience research are finally discussed, as well as potential future improvements.
[ { "created": "Mon, 16 Mar 2020 18:25:51 GMT", "version": "v1" }, { "created": "Thu, 26 Nov 2020 21:29:39 GMT", "version": "v2" } ]
2020-11-30
[ [ "D'Alessandro", "Marco", "" ], [ "Radev", "Stefan T.", "" ], [ "Voss", "Andreas", "" ], [ "Lombardi", "Luigi", "" ] ]
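The belief-updating logic the abstract above attributes to Bayesian Brain Theory can be sketched with a generic discrete Bayesian filter. This is an illustrative toy, not the authors' model: the candidate rules, likelihood values, and number of trials are all invented here.

```python
# Generic discrete Bayesian belief update over hidden sorting rules.
# Illustrative sketch only: rules, likelihoods, and trial count are
# invented, not taken from the paper's model.

def update_beliefs(prior, likelihoods):
    """One Bayesian update: posterior is proportional to likelihood * prior."""
    unnorm = [l * p for l, p in zip(likelihoods, prior)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Three candidate rules (e.g. color, shape, number), uniform prior.
beliefs = [1 / 3, 1 / 3, 1 / 3]

# Observed feedback is far more likely if rule 0 is the active one.
feedback_likelihood = [0.9, 0.2, 0.2]

for _ in range(3):  # three consistent trials sharpen the belief
    beliefs = update_beliefs(beliefs, feedback_likelihood)

print([round(b, 3) for b in beliefs])  # belief concentrates on rule 0
```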
1610.05113
Joseph Burger
Joseph R. Burger, Vanessa P. Weinberger, Pablo A. Marquet
Humans: the hyper-dense species
null
Scientific Reports volume 7, Article number: 43869 (2017)
10.1038/srep43869
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Humans, like all organisms, are subject to fundamental biophysical laws. Van Valen predicted that, because of zero-sum dynamics, all populations of all species in a given environment flux the same amount of energy on average. Damuth's 'energetic equivalence rule' supported Van Valen's conjecture by showing a trade-off between few big animals per area with high individual metabolic rates and abundant small species with low energy requirements. We use established metabolic scaling theory to compare variation in densities and individual energy use in human societies to other land mammals. We show that hunter-gatherers occurred at lower densities than expected for a mammal of our size. Most modern humans, in contrast, concentrate in large cities at densities up to four orders of magnitude greater than those of hunter-gatherers, yet cities consume up to two orders of magnitude more energy per capita. Today, cities across the globe flux greater energy than net primary productivity on a per-area basis. This is possible through enormous fluxes of energy and materials across urban boundaries to sustain hyper-dense, modern humans. The metabolic rift with nature created by hyper-dense cities supported by fossil fuel energy poses formidable challenges for establishing a sustainable relationship on a rapidly urbanizing, yet finite planet.
[ { "created": "Mon, 17 Oct 2016 13:49:30 GMT", "version": "v1" } ]
2019-02-15
[ [ "Burger", "Joseph R.", "" ], [ "Weinberger", "Vanessa P.", "" ], [ "Marquet", "Pablo A.", "" ] ]
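Damuth's energetic equivalence rule invoked in the abstract above follows from two opposing power laws: population density scales as M^-3/4 while individual metabolism scales as M^3/4, so per-area energy flux is roughly mass-invariant. A minimal numeric sketch, with arbitrary placeholder coefficients rather than fitted values:

```python
# Damuth's energetic equivalence rule: density ~ M^-3/4, individual
# metabolic rate ~ M^3/4, so density * rate is mass-invariant.
# The coefficients c and b0 are invented placeholders.

def density(mass_kg, c=100.0):
    return c * mass_kg ** -0.75       # individuals per km^2

def metabolic_rate(mass_kg, b0=4.0):
    return b0 * mass_kg ** 0.75       # watts per individual

# Per-area flux for a 10 g, 1 kg, and 100 kg mammal.
fluxes = [density(m) * metabolic_rate(m) for m in (0.01, 1.0, 100.0)]
print(fluxes)  # the same per-area flux regardless of body mass
```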
2102.04502
Mareike Fischer
Mareike Fischer and Michael Hendriksen
A survey of the monotonicity and non-contradiction of consensus methods and supertree methods
null
null
null
null
q-bio.PE math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a recent study, Bryant, Francis and Steel investigated the concept of "future-proofing" consensus methods in phylogenetics. That is, they investigated whether such methods can be robust against the introduction of additional data like added trees or new species. In the present manuscript, we analyze consensus methods under a different aspect of introducing new data, namely the discovery of new clades. In evolutionary biology, formerly unresolved clades often get resolved by refined reconstruction methods or new genetic data analyses. We investigate which properties of consensus methods can guarantee that such new insights do not disagree with previously found consensus trees, but merely refine them, a property termed \emph{monotonicity}. Along the lines of analyzing monotonicity, we also study two established supertree methods, namely Matrix Representation with Parsimony (MRP) and Matrix Representation with Compatibility (MRC), which have also been suggested as consensus methods in the literature. While we (just like Bryant, Francis and Steel in their recent study) unfortunately have to conclude some negative answers concerning general consensus methods, we also state some relevant positive results concerning the majority rule ($\mathtt{MR}$) and strict consensus methods, which are amongst the most frequently used consensus methods. Moreover, we show that there exist infinitely many consensus methods which are monotonic and have some other desirable properties. \textbf{Keywords:} consensus tree, phylogenetics, majority rule, tree refinement, matrix representation with parsimony \textbf{MSC:} 92B05, 05C05
[ { "created": "Mon, 8 Feb 2021 19:52:46 GMT", "version": "v1" }, { "created": "Tue, 11 Jun 2024 15:37:23 GMT", "version": "v2" } ]
2024-06-12
[ [ "Fischer", "Mareike", "" ], [ "Hendriksen", "Michael", "" ] ]
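The majority-rule consensus method discussed in the abstract above can be sketched on a toy clade-set representation of trees. This is a generic textbook construction, not the paper's formal framework, and the example trees are invented:

```python
# Toy majority-rule consensus: a clade enters the consensus iff it
# appears in more than half of the input trees. Trees are represented
# as sets of clades (frozensets of leaf labels); input trees invented.
from collections import Counter

def majority_rule(trees):
    counts = Counter(clade for t in trees for clade in t)
    return {c for c, n in counts.items() if n > len(trees) / 2}

t1 = {frozenset("AB"), frozenset("ABC")}
t2 = {frozenset("AB"), frozenset("ABD")}  # disagrees on the deeper clade
t3 = {frozenset("AB"), frozenset("ABC")}

print(majority_rule([t1, t2, t3]))  # keeps AB (3/3) and ABC (2/3)
```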
1306.3421
Bradley Dickson
Bradley M. Dickson and Dmitri B. Kireev
The entropic lock and key of the histone code
2 figures
null
null
null
q-bio.BM cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The intricate pattern of chemical modifications on DNA and histones, the "histone code", is considered to be a key gene regulation factor. Multivalency is seen by many as an essential instrument to transmit the "encoded" information to the transcription machinery via multi-domain effector proteins and chromatin-associated complexes. However, as examples of multivalent histone engagement accumulate, an apparent contradiction is emerging. The isolated effector domains are notably weak binders, thus it is often asserted that the entropic cost of orienting multiple domains can be "prepaid" by a rigid tether. Meanwhile, evidence suggests that the tethers are largely disordered and offer little rigidity. Here we consider a mechanism to "prepay" the entropic costs of orienting the domains for binding, not through rigidity of the tether but through the careful spacing of the modifications on chromatin. An all-atom molecular dynamics study of the most fully characterized multivalent chromatin effector conforms to the conditions for an optimal free-energy payout, as predicted by the model discussed here.
[ { "created": "Fri, 14 Jun 2013 15:09:55 GMT", "version": "v1" }, { "created": "Thu, 8 Aug 2013 22:48:56 GMT", "version": "v2" }, { "created": "Fri, 20 Sep 2013 17:36:12 GMT", "version": "v3" }, { "created": "Tue, 3 Dec 2013 22:29:41 GMT", "version": "v4" } ]
2013-12-05
[ [ "Dickson", "Bradley M.", "" ], [ "Kireev", "Dmitri B.", "" ] ]
1704.04795
Adam Noel
Adam Noel, Dimitrios Makrakis, Andrew W. Eckford
Root Mean Square Error of Neural Spike Train Sequence Matching with Optogenetics
6 pages, 5 figures. Will be presented at IEEE Global Communications Conference (IEEE GLOBECOM 2017) in December 2017
null
10.1109/GLOCOM.2017.8255060
null
q-bio.NC cs.IT math.IT physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Optogenetics is an emerging field of neuroscience where neurons are genetically modified to express light-sensitive receptors that enable external control over when the neurons fire. Given the prominence of neuronal signaling within the brain and throughout the body, optogenetics has significant potential to improve the understanding of the nervous system and to develop treatments for neurological diseases. This paper uses a simple optogenetic model to compare the timing distortion between a randomly-generated target spike sequence and an externally-stimulated neuron spike sequence. The distortion is measured by filtering each sequence and finding the root mean square error between the two filter outputs. The expected distortion is derived in closed form when the target sequence generation rate is sufficiently low. Derivations are verified via simulations.
[ { "created": "Sun, 16 Apr 2017 16:34:09 GMT", "version": "v1" }, { "created": "Mon, 21 Aug 2017 02:01:02 GMT", "version": "v2" } ]
2018-02-14
[ [ "Noel", "Adam", "" ], [ "Makrakis", "Dimitrios", "" ], [ "Eckford", "Andrew W.", "" ] ]
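The distortion measure described in the abstract above (filter both spike sequences, then take the root mean square error between the filter outputs) can be sketched as follows. The exponential kernel, time constant, and spike times are assumptions for illustration, not values from the paper.

```python
# Sketch of the spike-sequence distortion measure: filter each spike
# train with the same causal kernel, then compute the RMSE between
# the two filter outputs. Kernel and spike times are invented.
import math

def filtered(spikes, t_grid, tau=0.01):
    """Sum of causal exponential kernels, one per spike, on a time grid."""
    return [sum(math.exp(-(t - s) / tau) for s in spikes if s <= t)
            for t in t_grid]

def rmse(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

t_grid = [i * 0.001 for i in range(200)]   # 200 ms window, 1 ms steps
target     = [0.020, 0.080, 0.150]         # target spike times (s)
stimulated = [0.022, 0.081, 0.154]         # slightly delayed responses

print(round(rmse(filtered(target, t_grid),
                 filtered(stimulated, t_grid)), 4))
```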
1904.03133
Michael Rumetshofer
Moritz P. K. Frewein, Michael Rumetshofer, Georg Pabst
Global Small-Angle Scattering Data Analysis of Inverted Hexagonal Phases
null
J. Appl. Cryst. 52, 403-414 (2019)
10.1107/S1600576719002760
null
q-bio.BM cond-mat.soft
http://creativecommons.org/licenses/by/4.0/
We have developed a global analysis model for randomly oriented, fully hydrated inverted hexagonal (H$_\text{II}$) phases formed by many amphiphiles in aqueous solution, including membrane lipids. The model is based on a structure factor for hexagonally packed rods and a compositional model for the scattering length density (SLD), enabling also the analysis of positionally weakly correlated H$_\text{II}$ phases. For optimization of the adjustable parameters we used Bayesian probability theory, which allows parameter correlations to be retrieved in much more detail than with standard analysis techniques and thereby enables a realistic error analysis. The model was applied to different phosphatidylethanolamines, including previously unreported H$_\text{II}$ data for diC14:0 and diC16:1 phosphatidylethanolamine. The extracted structural features include intrinsic lipid curvature, hydrocarbon chain length and area per lipid at the position of the neutral plane.
[ { "created": "Fri, 5 Apr 2019 15:57:36 GMT", "version": "v1" } ]
2019-04-08
[ [ "Frewein", "Moritz P. K.", "" ], [ "Rumetshofer", "Michael", "" ], [ "Pabst", "Georg", "" ] ]
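The hexagonal packing behind the structure factor mentioned in the abstract above produces Bragg peaks at the textbook spacing ratios 1 : sqrt(3) : 2 : sqrt(7). The sketch below reproduces only those peak positions, not the authors' full SLD and structure-factor model; the lattice constant is an invented placeholder.

```python
# Bragg peak positions for a 2-D hexagonal lattice:
# q_hk = 4*pi / (sqrt(3)*a) * sqrt(h^2 + h*k + k^2).
# Lattice constant a is invented; this is not the paper's full model.
import math

def q_peak(h, k, a=7.0):
    """Peak position (nm^-1) for lattice constant a (nm)."""
    return 4 * math.pi / (math.sqrt(3) * a) * math.sqrt(h * h + h * k + k * k)

peaks = [q_peak(*hk) for hk in [(1, 0), (1, 1), (2, 0), (2, 1)]]
print([round(q / peaks[0], 3) for q in peaks])  # ratios 1, 1.732, 2, 2.646
```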
1410.0322
Matthew Burgess
Matthew G. Burgess, Christopher Costello, Alexa Fredston-Hermann, Malin L. Pinsky, Steven D. Gaines, David Tilman, Stephen Polasky
Range contraction enables harvesting to extinction
25 pages total, 8 pages main text, 17 pages supporting information
null
10.1073/pnas.1607551114
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Economic incentives to harvest a species usually diminish as its abundance declines, because harvest costs increase. This prevents harvesting to extinction. A known exception can occur if consumer demand causes a declining species' harvest price to rise faster than costs. This threat may affect rare and valuable species, such as large land mammals, sturgeons, and bluefin tunas. We analyze a similar but underappreciated threat, which arises when the geographic area (range) occupied by a species contracts as its abundance declines. Range contractions maintain the local densities of declining populations, which facilitates harvesting to extinction by preventing abundance declines from causing harvest costs to rise. Factors causing such range contractions include schooling, herding, or flocking behaviors--which, ironically, can be predator-avoidance adaptations; patchy environments; habitat loss; and climate change. We use a simple model to identify combinations of range contractions and price increases capable of causing extinction from profitable overharvesting, and we compare these to an empirical review. We find that some aquatic species that school or forage in patchy environments experience sufficiently severe range contractions as they decline to allow profitable harvesting to extinction even with little or no price increase; and some high-value declining aquatic species experience severe price increases. For terrestrial species, the data needed to evaluate our theory are scarce, but available evidence suggests that extinction-enabling range contractions may be common among declining mammals and birds. Thus, factors causing range contraction as abundance declines may pose unexpectedly large extinction risks to harvested species.
[ { "created": "Wed, 1 Oct 2014 18:31:57 GMT", "version": "v1" }, { "created": "Wed, 18 Mar 2015 23:15:37 GMT", "version": "v2" }, { "created": "Wed, 29 Mar 2017 05:24:35 GMT", "version": "v3" }, { "created": "Thu, 30 Mar 2017 00:30:08 GMT", "version": "v4" } ]
2017-03-31
[ [ "Burgess", "Matthew G.", "" ], [ "Costello", "Christopher", "" ], [ "Fredston-Hermann", "Alexa", "" ], [ "Pinsky", "Malin L.", "" ], [ "Gaines", "Steven D.", "" ], [ "Tilman", "David", "" ], [ "Polasky", "Stephen", "" ] ]
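The core mechanism in the abstract above (unit harvest cost falls as local density rises, so a range that contracts in step with abundance keeps harvesting profitable) can be illustrated with a toy cost model. All numbers below are invented for the illustration.

```python
# Toy version of the range-contraction mechanism: unit harvest cost is
# inversely proportional to local density. If range contracts in
# proportion to abundance, density and cost stay constant as the stock
# collapses, so harvest can remain profitable. Numbers are invented.

def unit_cost(abundance, area, c=10.0):
    density = abundance / area
    return c / density                 # cheaper to catch when dense

price = 5.0                            # price per unit harvested
scenarios = [
    (1000, 100.0),   # healthy stock over the full range
    (100,  100.0),   # 10x decline, fixed range: density drops 10x
    (100,  10.0),    # 10x decline WITH range contraction: density unchanged
]
for n, a in scenarios:
    profit = price - unit_cost(n, a)
    print(f"N={n:4d} area={a:5.1f} profit/unit={profit:+.2f}")
```

With a fixed range, the decline makes harvest unprofitable and the stock is protected; with a contracting range, profit per unit is unchanged from the healthy state.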
2209.09953
Fidel Santamaria
Horacio G. Rotstein and Fidel Santamaria
Development of theoretical frameworks in neuroscience: a pressing need in a sea of data
null
null
null
null
q-bio.NC q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Neuroscience is undergoing dramatic progress because of the vast data streams derived from the new technologies produced by the BRAIN Initiative and other enterprises. Like any other scientific field, neuroscience benefits from having clear definitions of its theoretical components and their interactions. This allows the generation of theories that integrate knowledge, provide mechanistic insights, and predict results under new experimental conditions. However, theoretical neuroscience is a heterogeneous field that has not yet agreed on how to build theories, on whether it is desirable to have an overarching theory, or on whether theories are simply tools to understand the brain. Here we advocate the development of theoretical frameworks as a basis for generating common theoretical structures. We enumerate the elements of theoretical frameworks we deem necessary for any theory in neuroscience. In particular, we address the notions of paradigms, models, and scales of organization. We then identify areas with pressing needs for developing brain theories: integration of statistical and dynamic approaches; multi-scale integration; coding; and interpretability in the context of Artificial Intelligence. We also point out that future theoretical frameworks would benefit from incorporating the principles of Evolution as a fundamental structure, rather than purely mathematical or engineering principles. Rather than providing definite answers, the objective of this paper is to serve as an initial and succinct presentation of these topics to encourage discussion and further in-depth development of each topic.
[ { "created": "Tue, 20 Sep 2022 19:07:55 GMT", "version": "v1" } ]
2022-09-22
[ [ "Rotstein", "Horacio G.", "" ], [ "Santamaria", "Fidel", "" ] ]
1902.07231
Zachary Wu
Zachary Wu, S. B. Jennifer Kan, Russell D. Lewis, Bruce J. Wittmann, Frances H. Arnold
Machine learning-assisted directed protein evolution with combinatorial libraries
Corrected best S-selective variant sequence in Figure 4. Corrected less R-selective variant sequences from Round II Input library in Table 2 and Supp Table 4. Corrections may also be found on PNAS version https://www.pnas.org/content/early/2019/12/26/1921770117
PNAS April 30, 2019 116 (18) 8852-8858
10.1073/pnas.1901979116
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
To reduce experimental effort associated with directed protein evolution and to explore the sequence space encoded by mutating multiple positions simultaneously, we incorporate machine learning in the directed evolution workflow. Combinatorial sequence space can be quite expensive to sample experimentally, but machine learning models trained on tested variants provide a fast method for testing sequence space computationally. We validate this approach on a large published empirical fitness landscape for human GB1 binding protein, demonstrating that machine learning-guided directed evolution finds variants with higher fitness than those found by other directed evolution approaches. We then provide an example application in evolving an enzyme to produce each of the two possible product enantiomers (stereodivergence) of a new-to-nature carbene Si-H insertion reaction. The approach predicted libraries enriched in functional enzymes and fixed seven mutations in two rounds of evolution to identify variants for selective catalysis with 93% and 79% ee. By greatly increasing throughput with in silico modeling, machine learning enhances the quality and diversity of sequence solutions for a protein engineering problem.
[ { "created": "Tue, 19 Feb 2019 19:03:00 GMT", "version": "v1" }, { "created": "Thu, 21 Feb 2019 20:47:56 GMT", "version": "v2" }, { "created": "Wed, 22 May 2019 22:27:30 GMT", "version": "v3" }, { "created": "Sat, 4 Jan 2020 17:57:06 GMT", "version": "v4" } ]
2020-01-07
[ [ "Wu", "Zachary", "" ], [ "Kan", "S. B. Jennifer", "" ], [ "Lewis", "Russell D.", "" ], [ "Wittmann", "Bruce J.", "" ], [ "Arnold", "Frances H.", "" ] ]
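The machine-learning step described in the abstract above (train on measured variants, then rank the untested combinatorial library by predicted fitness) can be sketched with a ridge regressor on one-hot-encoded sequences. The toy alphabet, fitness values, and ridge model are invented for illustration and are not the authors' setup.

```python
# Toy ML-guided library ranking: fit a ridge model on measured variants,
# predict fitness for untested members of the combinatorial library.
# Alphabet, fitness values, and model choice are invented.
import itertools
import numpy as np

AAS = "AV"                     # toy alphabet: 2 amino acids, 3 positions
library = ["".join(p) for p in itertools.product(AAS, repeat=3)]

def one_hot(seq):
    return np.array([1.0 if aa == "V" else 0.0 for aa in seq])

# Pretend half the library was measured experimentally (invented values,
# roughly additive: each V mutation helps).
measured = {"AAA": 0.1, "AAV": 0.5, "AVA": 0.4, "VAA": 0.3}
X = np.array([one_hot(s) for s in measured])
y = np.array(list(measured.values()))

# Ridge regression in closed form: w = (X'X + lam*I)^-1 X'y, with bias.
lam = 0.1
Xb = np.hstack([X, np.ones((len(X), 1))])
w = np.linalg.solve(Xb.T @ Xb + lam * np.eye(4), Xb.T @ y)

untested = [s for s in library if s not in measured]
preds = {s: float(np.hstack([one_hot(s), 1.0]) @ w) for s in untested}
best = max(preds, key=preds.get)
print(best, round(preds[best], 3))  # triple mutant ranks highest
```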
1509.02205
Floyd Reed
Áki J. Láruson and Floyd A. Reed
Stability of Underdominant Genetic Polymorphisms in Population Networks
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Heterozygote disadvantage is potentially a potent driver of population genetic divergence. Also referred to as underdominance, this phenomenon describes a situation where a genetic heterozygote has a lower overall fitness than either homozygote. Attention so far has mostly been given to underdominance within a single population and the maintenance of genetic differences between two populations exchanging migrants. Here we explore the dynamics of an underdominant system in a network of multiple discrete, yet interconnected, populations. Stability of genetic differences in response to increases in migration in various topological networks is assessed. The network topology can have a dominant and occasionally non-intuitive influence on the genetic stability of the system.
[ { "created": "Mon, 7 Sep 2015 22:00:56 GMT", "version": "v1" }, { "created": "Sun, 27 Sep 2015 21:33:12 GMT", "version": "v2" } ]
2015-09-29
[ [ "Láruson", "Áki J.", "" ], [ "Reed", "Floyd A.", "" ] ]
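The single-population dynamics underlying the abstract above can be written as the textbook underdominance recursion with heterozygote fitness 1 - s and homozygote fitness 1. The network aspect studied in the paper is not modeled here; this is only the classic one-deme sketch with an invented selection coefficient.

```python
# Textbook single-locus underdominance recursion: genotype fitnesses
# w_AA = w_aa = 1, w_Aa = 1 - s. The interior equilibrium p = 0.5 is
# unstable: frequencies above it climb to fixation, below it to loss.

def next_freq(p, s=0.2):
    q = 1.0 - p
    w_bar = p * p + 2 * p * q * (1 - s) + q * q   # mean fitness
    return (p * p + p * q * (1 - s)) / w_bar

def iterate(p, n=100):
    for _ in range(n):
        p = next_freq(p)
    return p

print(round(iterate(0.55), 4))  # starts above 0.5: heads to fixation
print(round(iterate(0.45), 4))  # starts below 0.5: heads to loss
```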
2207.06001
Siddhartha Chakrabarty
Suryadeepto Nag and Ananda Shikhara Bhat and Siddhartha P. Chakrabarty
Studying the age of onset and detection of Chronic Myeloid Leukemia using a three-stage stochastic model
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Chronic Myeloid Leukemia (CML) is a biphasic malignant clonal disorder that progresses, first with a chronic phase, where the cells have enhanced proliferation only, and then to a blast phase, where the cells have the ability of self-renewal. It is well-recognized that the Philadelphia chromosome (which contains the BCR-ABL fusion gene) is the "hallmark of CML". However, empirical studies have shown that the mere presence of BCR-ABL may not be a sufficient condition for the development of CML, and further modifications related to tumor suppressors may be necessary. Accordingly, we develop a three-mutation stochastic model of CML progression, with the three stages corresponding to the non-malignant cells with BCR-ABL presence, the malignant cells in the chronic phase and the malignant cells in the blast phase. We demonstrate that the model predictions agree with age incidence data from the United States. Finally, we develop a framework for the retrospective estimation of the time of onset of malignancy, from the time of detection of the cancer.
[ { "created": "Wed, 13 Jul 2022 07:06:59 GMT", "version": "v1" } ]
2022-07-14
[ [ "Nag", "Suryadeepto", "" ], [ "Bhat", "Ananda Shikhara", "" ], [ "Chakrabarty", "Siddhartha P.", "" ] ]
Chronic Myeloid Leukemia (CML) is a biphasic malignant clonal disorder that progresses, first with a chronic phase, where the cells have enhanced proliferation only, and then to a blast phase, where the cells have the ability of self-renewal. It is well-recognized that the Philadelphia chromosome (which contains the BCR-ABL fusion gene) is the "hallmark of CML". However, empirical studies have shown that the mere presence of BCR-ABL may not be a sufficient condition for the development of CML, and further modifications related to tumor suppressors may be necessary. Accordingly, we develop a three-mutation stochastic model of CML progression, with the three stages corresponding to the non-malignant cells with BCR-ABL presence, the malignant cells in the chronic phase and the malignant cells in the blast phase. We demonstrate that the model predictions agree with age incidence data from the United States. Finally, we develop a framework for the retrospective estimation of the time of onset of malignancy, from the time of detection of the cancer.
1611.04023
Gerard Rinkus
Gerard J. Rinkus
Sparsey: Event Recognition via Deep Hierarchical Sparse Distributed Codes
This is a manuscript form of a paper published in Frontiers in Computational Neuroscience in 2014 (http://dx.doi.org/10.3389/fncom.2014.00160). 65 pages, 28 figures, 8 tables
Frontiers in Computational Neuroscience, Vol. 8, Article 160 (2014)
10.3389/fncom.2014.00160
null
q-bio.NC cs.CV cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visual cortex's hierarchical, multi-level organization is captured in many biologically inspired computational vision models, the general idea being that progressively larger scale, more complex spatiotemporal features are represented in progressively higher areas. However, most earlier models use localist representations (codes) in each representational field, which we equate with the cortical macrocolumn (mac), at each level. In localism, each represented feature/event (item) is coded by a single unit. Our model, Sparsey, is also hierarchical but crucially, uses sparse distributed coding (SDC) in every mac in all levels. In SDC, each represented item is coded by a small subset of the mac's units. SDCs of different items can overlap and the size of overlap between items can represent their similarity. The difference between localism and SDC is crucial because SDC allows the two essential operations of associative memory, storing a new item and retrieving the best-matching stored item, to be done in fixed time for the life of the model. Since the model's core algorithm, which does both storage and retrieval (inference), makes a single pass over all macs on each time step, the overall model's storage/retrieval operation is also fixed-time, a criterion we consider essential for scalability to huge datasets. A 2010 paper described a nonhierarchical version of this model in the context of purely spatial pattern processing. Here, we elaborate a fully hierarchical model (arbitrary numbers of levels and macs per level), describing novel model principles like progressive critical periods, dynamic modulation of principal cells' activation functions based on a mac-level familiarity measure, representation of multiple simultaneously active hypotheses, a novel method of time warp invariant recognition, and we report results showing learning/recognition of spatiotemporal patterns.
[ { "created": "Sat, 12 Nov 2016 17:35:23 GMT", "version": "v1" } ]
2016-11-21
[ [ "Rinkus", "Gerard J.", "" ] ]
Visual cortex's hierarchical, multi-level organization is captured in many biologically inspired computational vision models, the general idea being that progressively larger scale, more complex spatiotemporal features are represented in progressively higher areas. However, most earlier models use localist representations (codes) in each representational field, which we equate with the cortical macrocolumn (mac), at each level. In localism, each represented feature/event (item) is coded by a single unit. Our model, Sparsey, is also hierarchical but crucially, uses sparse distributed coding (SDC) in every mac in all levels. In SDC, each represented item is coded by a small subset of the mac's units. SDCs of different items can overlap and the size of overlap between items can represent their similarity. The difference between localism and SDC is crucial because SDC allows the two essential operations of associative memory, storing a new item and retrieving the best-matching stored item, to be done in fixed time for the life of the model. Since the model's core algorithm, which does both storage and retrieval (inference), makes a single pass over all macs on each time step, the overall model's storage/retrieval operation is also fixed-time, a criterion we consider essential for scalability to huge datasets. A 2010 paper described a nonhierarchical version of this model in the context of purely spatial pattern processing. Here, we elaborate a fully hierarchical model (arbitrary numbers of levels and macs per level), describing novel model principles like progressive critical periods, dynamic modulation of principal cells' activation functions based on a mac-level familiarity measure, representation of multiple simultaneously active hypotheses, a novel method of time warp invariant recognition, and we report results showing learning/recognition of spatiotemporal patterns.
q-bio/0607026
Alex Sanchez-Pla
Alex Sanchez-Pla, Miquel Salicru, Jordi Ocanya
Distance based Inference for Gene-Ontology Analysis of Microarray Experiments
Submitted to Journal of Statistical Planning and Inference
null
null
null
q-bio.GN q-bio.QM
null
The increasing availability of high throughput data arising from gene expression studies leads to the necessity of methods for summarizing the available information. As annotation quality improves it is becoming common to rely on the Gene Ontology (GO) to build functional profiles that characterize a set of genes using the frequency of use of each GO term or group of terms in the array. In this work we describe a statistical model for such profiles, provide methods to compare profiles and develop inferential procedures to assess this comparison. An R-package implementing the methods is available.
[ { "created": "Tue, 18 Jul 2006 22:07:47 GMT", "version": "v1" } ]
2007-05-23
[ [ "Sanchez-Pla", "Alex", "" ], [ "Salicru", "Miquel", "" ], [ "Ocanya", "Jordi", "" ] ]
The increasing availability of high throughput data arising from gene expression studies leads to the necessity of methods for summarizing the available information. As annotation quality improves it is becoming common to rely on the Gene Ontology (GO) to build functional profiles that characterize a set of genes using the frequency of use of each GO term or group of terms in the array. In this work we describe a statistical model for such profiles, provide methods to compare profiles and develop inferential procedures to assess this comparison. An R-package implementing the methods is available.
0711.1989
Su-Chan Park
Su-Chan Park and Joachim Krug
Evolution in random fitness landscapes: the infinite sites model
Dedicated to Thomas Nattermann on the occasion of his 60th birthday. Submitted to JSTAT. Error in Section 3.2 was corrected
J. Stat. Mech. (2008) P04014
10.1088/1742-5468/2008/04/P04014
null
q-bio.PE cond-mat.dis-nn
null
We consider the evolution of an asexually reproducing population in an uncorrelated random fitness landscape in the limit of infinite genome size, which implies that each mutation generates a new fitness value drawn from a probability distribution $g(w)$. This is the finite population version of Kingman's house of cards model [J.F.C. Kingman, \textit{J. Appl. Probab.} \textbf{15}, 1 (1978)]. In contrast to Kingman's work, the focus here is on unbounded distributions $g(w)$ which lead to an indefinite growth of the population fitness. The model is solved analytically in the limit of infinite population size $N \to \infty$ and simulated numerically for finite $N$. When the genome-wide mutation probability $U$ is small, the long time behavior of the model reduces to a point process of fixation events, which is referred to as a \textit{diluted record process} (DRP). The DRP is similar to the standard record process except that a new record candidate (a number that exceeds all previous entries in the sequence) is accepted only with a certain probability that depends on the values of the current record and the candidate. We develop a systematic analytic approximation scheme for the DRP. At finite $U$ the fitness frequency distribution of the population decomposes into a stationary part due to mutations and a traveling wave component due to selection, which is shown to imply a reduction of the mean fitness by a factor of $1-U$ compared to the $U \to 0$ limit.
[ { "created": "Tue, 13 Nov 2007 14:31:34 GMT", "version": "v1" }, { "created": "Tue, 18 Dec 2007 12:46:26 GMT", "version": "v2" } ]
2009-11-13
[ [ "Park", "Su-Chan", "" ], [ "Krug", "Joachim", "" ] ]
We consider the evolution of an asexually reproducing population in an uncorrelated random fitness landscape in the limit of infinite genome size, which implies that each mutation generates a new fitness value drawn from a probability distribution $g(w)$. This is the finite population version of Kingman's house of cards model [J.F.C. Kingman, \textit{J. Appl. Probab.} \textbf{15}, 1 (1978)]. In contrast to Kingman's work, the focus here is on unbounded distributions $g(w)$ which lead to an indefinite growth of the population fitness. The model is solved analytically in the limit of infinite population size $N \to \infty$ and simulated numerically for finite $N$. When the genome-wide mutation probability $U$ is small, the long time behavior of the model reduces to a point process of fixation events, which is referred to as a \textit{diluted record process} (DRP). The DRP is similar to the standard record process except that a new record candidate (a number that exceeds all previous entries in the sequence) is accepted only with a certain probability that depends on the values of the current record and the candidate. We develop a systematic analytic approximation scheme for the DRP. At finite $U$ the fitness frequency distribution of the population decomposes into a stationary part due to mutations and a traveling wave component due to selection, which is shown to imply a reduction of the mean fitness by a factor of $1-U$ compared to the $U \to 0$ limit.
1306.2852
Diego Ferreiro
R. Gonzalo Parra, Rocío Espada, Ignacio E. Sánchez, Manfred J. Sippl, Diego U. Ferreiro
Detecting Repetitions and Periodicities in Proteins by Tiling the Structural Space
null
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by-nc-sa/3.0/
The notion of energy landscapes provides conceptual tools for understanding the complexities of protein folding and function. Energy Landscape Theory indicates that it is much easier to find sequences that satisfy the "Principle of Minimal Frustration" when the folded structure is symmetric (Wolynes, P. G. Symmetry and the Energy Landscapes of Biomolecules. Proc. Natl. Acad. Sci. U.S.A. 1996, 93, 14249-14255). Similarly, repeats and structural mosaics may be fundamentally related to landscapes with multiple embedded funnels. Here we present analytical tools to detect and compare structural repetitions in protein molecules. By an exhaustive analysis of the distribution of structural repeats using a robust metric we define those portions of a protein molecule that best describe the overall structure as a tessellation of basic units. The patterns produced by such tessellations provide intuitive representations of the repeating regions and their association towards higher order arrangements. We find that some protein architectures can be described as nearly periodic, while in others clear separations between repetitions exist. Since the method is independent of amino acid sequence information we can identify structural units that can be encoded by a variety of distinct amino acid sequences.
[ { "created": "Wed, 12 Jun 2013 15:01:02 GMT", "version": "v1" } ]
2013-06-13
[ [ "Parra", "R. Gonzalo", "" ], [ "Espada", "Rocío", "" ], [ "Sánchez", "Ignacio E.", "" ], [ "Sippl", "Manfred J.", "" ], [ "Ferreiro", "Diego U.", "" ] ]
The notion of energy landscapes provides conceptual tools for understanding the complexities of protein folding and function. Energy Landscape Theory indicates that it is much easier to find sequences that satisfy the "Principle of Minimal Frustration" when the folded structure is symmetric (Wolynes, P. G. Symmetry and the Energy Landscapes of Biomolecules. Proc. Natl. Acad. Sci. U.S.A. 1996, 93, 14249-14255). Similarly, repeats and structural mosaics may be fundamentally related to landscapes with multiple embedded funnels. Here we present analytical tools to detect and compare structural repetitions in protein molecules. By an exhaustive analysis of the distribution of structural repeats using a robust metric we define those portions of a protein molecule that best describe the overall structure as a tessellation of basic units. The patterns produced by such tessellations provide intuitive representations of the repeating regions and their association towards higher order arrangements. We find that some protein architectures can be described as nearly periodic, while in others clear separations between repetitions exist. Since the method is independent of amino acid sequence information we can identify structural units that can be encoded by a variety of distinct amino acid sequences.
1312.3869
Vikash Bhardwaj PhD
Bhardwaj Vikash, Gupta Swapni, Meena Sitaram and Sharma Kulbhushan
FPCB: a simple and swift strategy for mirror repeat identification
14 pages,3 figures,1 table
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Following recent advances in sequencing strategies, mirror repeats have been found in the gene sequences of many organisms and species. The presence of mirror repeats in most of these sequences points towards an important functional role for these repeats. However, a simple and quick strategy to search for these repeats in a given sequence is not available. In this manuscript we propose a simple and swift strategy, named FPCB, to identify mirror repeats in a given sequence. The strategy consists of three simple steps: downloading the sequence in FASTA format (F), making its parallel complement (PC), and finally performing a homology search against the original sequence (B). At least twenty genes were analyzed using the proposed strategy, and several types of mirror repeats were observed. We also propose a nomenclature for these repeats. We hope that the FPCB strategy will be helpful for the identification of mirror repeats in DNA or mRNA sequences, and that it may help in unraveling the functional role of mirror repeats in various processes, including evolution.
[ { "created": "Fri, 13 Dec 2013 16:45:29 GMT", "version": "v1" } ]
2013-12-16
[ [ "Vikash", "Bhardwaj", "" ], [ "Swapni", "Gupta", "" ], [ "Sitaram", "Meena", "" ], [ "Kulbhushan", "Sharma", "" ] ]
Following recent advances in sequencing strategies, mirror repeats have been found in the gene sequences of many organisms and species. The presence of mirror repeats in most of these sequences points towards an important functional role for these repeats. However, a simple and quick strategy to search for these repeats in a given sequence is not available. In this manuscript we propose a simple and swift strategy, named FPCB, to identify mirror repeats in a given sequence. The strategy consists of three simple steps: downloading the sequence in FASTA format (F), making its parallel complement (PC), and finally performing a homology search against the original sequence (B). At least twenty genes were analyzed using the proposed strategy, and several types of mirror repeats were observed. We also propose a nomenclature for these repeats. We hope that the FPCB strategy will be helpful for the identification of mirror repeats in DNA or mRNA sequences, and that it may help in unraveling the functional role of mirror repeats in various processes, including evolution.
0706.3234
Edgar Delgado-Eckert MS
Edgar Delgado-Eckert
Reverse engineering time discrete finite dynamical systems: A feasible undertaking?
Submitted to journal, currently under review
PLoS ONE, 4(3), 2009
10.1371/journal.pone.0004939
null
q-bio.QM math.DS q-bio.MN
null
With the advent of high-throughput profiling methods, interest in reverse engineering the structure and dynamics of biochemical networks is high. Recently an algorithm for reverse engineering of biochemical networks was developed by Laubenbacher and Stigler. It is a top-down approach using time discrete dynamical systems. One of its key steps includes the choice of a term order. The aim of this paper is to identify minimal requirements on data sets to be used with this algorithm and to characterize optimal data sets. We found minimal requirements on a data set based on how many terms the functions to be reverse engineered display. Furthermore, we identified optimal data sets, which we characterized using a geometric property called "general position". Moreover, we developed a constructive method to generate optimal data sets, provided a codimensional condition is fulfilled. In addition, we present a generalization of their algorithm that does not depend on the choice of a term order. For this method we derived a formula for the probability of finding the correct model, provided the data set used is optimal. We analyzed the asymptotic behavior of the probability formula for a growing number of variables n (i.e. interacting chemicals). Unfortunately, this formula converges to zero as fast as r^(q^n), where q is a natural number and 0<r<1. Therefore, even if an optimal data set is used and the restrictions in using term orders are overcome, the reverse engineering problem remains unfeasible, unless prodigious amounts of data are available. Such large data sets are experimentally impossible to generate with today's technologies.
[ { "created": "Thu, 21 Jun 2007 22:52:50 GMT", "version": "v1" } ]
2010-01-18
[ [ "Delgado-Eckert", "Edgar", "" ] ]
With the advent of high-throughput profiling methods, interest in reverse engineering the structure and dynamics of biochemical networks is high. Recently an algorithm for reverse engineering of biochemical networks was developed by Laubenbacher and Stigler. It is a top-down approach using time discrete dynamical systems. One of its key steps includes the choice of a term order. The aim of this paper is to identify minimal requirements on data sets to be used with this algorithm and to characterize optimal data sets. We found minimal requirements on a data set based on how many terms the functions to be reverse engineered display. Furthermore, we identified optimal data sets, which we characterized using a geometric property called "general position". Moreover, we developed a constructive method to generate optimal data sets, provided a codimensional condition is fulfilled. In addition, we present a generalization of their algorithm that does not depend on the choice of a term order. For this method we derived a formula for the probability of finding the correct model, provided the data set used is optimal. We analyzed the asymptotic behavior of the probability formula for a growing number of variables n (i.e. interacting chemicals). Unfortunately, this formula converges to zero as fast as r^(q^n), where q is a natural number and 0<r<1. Therefore, even if an optimal data set is used and the restrictions in using term orders are overcome, the reverse engineering problem remains unfeasible, unless prodigious amounts of data are available. Such large data sets are experimentally impossible to generate with today's technologies.
2006.14565
Karol Szymula
Karol P. Szymula, Fabio Pasqualetti, Ann M. Graybiel, Theresa M. Desrochers, and Danielle S. Bassett
Habit learning supported by efficiently controlled network dynamics in naive macaque monkeys
Main Text: 17 pages and 6 figures; Supplement Text: 9 pages and 8 figures
null
null
null
q-bio.NC cs.IT math.IT math.OC q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Primates display a marked ability to learn habits in uncertain and dynamic environments. The associated perceptions and actions of such habits engage distributed neural circuits. Yet, precisely how such circuits support the computations necessary for habit learning remains far from understood. Here we construct a formal theory of network energetics to account for how changes in brain state produce changes in sequential behavior. We exercise the theory in the context of multi-unit recordings spanning the caudate nucleus, prefrontal cortex, and frontal eye fields of female macaque monkeys engaged in 60-180 sessions of a free scan task that induces motor habits. The theory relies on the determination of effective connectivity between recording channels, and on the stipulation that a brain state is taken to be the trial-specific firing rate across those channels. The theory then predicts how much energy will be required to transition from one state into another, given the constraint that activity can spread solely through effective connections. Consistent with the theory's predictions, we observed smaller energy requirements for transitions between more similar and more complex trial saccade patterns, and for sessions characterized by less entropic selection of saccade patterns. Using a virtual lesioning approach, we demonstrate the resilience of the observed relationships between minimum control energy and behavior to significant disruptions in the inferred effective connectivity. Our theoretically principled approach to the study of habit learning paves the way for future efforts examining how behavior arises from changing patterns of activity in distributed neural circuitry.
[ { "created": "Thu, 25 Jun 2020 17:09:07 GMT", "version": "v1" } ]
2020-06-26
[ [ "Szymula", "Karol P.", "" ], [ "Pasqualetti", "Fabio", "" ], [ "Graybiel", "Ann M.", "" ], [ "Desrochers", "Theresa M.", "" ], [ "Bassett", "Danielle S.", "" ] ]
Primates display a marked ability to learn habits in uncertain and dynamic environments. The associated perceptions and actions of such habits engage distributed neural circuits. Yet, precisely how such circuits support the computations necessary for habit learning remains far from understood. Here we construct a formal theory of network energetics to account for how changes in brain state produce changes in sequential behavior. We exercise the theory in the context of multi-unit recordings spanning the caudate nucleus, prefrontal cortex, and frontal eye fields of female macaque monkeys engaged in 60-180 sessions of a free scan task that induces motor habits. The theory relies on the determination of effective connectivity between recording channels, and on the stipulation that a brain state is taken to be the trial-specific firing rate across those channels. The theory then predicts how much energy will be required to transition from one state into another, given the constraint that activity can spread solely through effective connections. Consistent with the theory's predictions, we observed smaller energy requirements for transitions between more similar and more complex trial saccade patterns, and for sessions characterized by less entropic selection of saccade patterns. Using a virtual lesioning approach, we demonstrate the resilience of the observed relationships between minimum control energy and behavior to significant disruptions in the inferred effective connectivity. Our theoretically principled approach to the study of habit learning paves the way for future efforts examining how behavior arises from changing patterns of activity in distributed neural circuitry.
1901.00405
Grzegorz A Rempala
Wasiur R. KhudaBukhsh and Boseung Choi and Eben Kenah and Grzegorz A. Rempala
Survival Dynamical Systems for the Population-level Analysis of Epidemics
27 pages and 6 figures
null
null
null
q-bio.PE math.DS stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivated by the classical Susceptible-Infected-Recovered (SIR) epidemic models proposed by Kermack and McKendrick, we consider a class of stochastic compartmental dynamical systems with a notion of partial ordering among the compartments. We call such systems unidirectional Mass Transfer Models (MTMs). We show that there is a natural way of interpreting a unidirectional MTM as a Survival Dynamical System (SDS) that is described in terms of survival functions instead of population counts. This SDS interpretation allows us to employ tools from survival analysis to address various issues with data collection and statistical inference of unidirectional MTMs. In particular, we propose and numerically validate a statistical inference procedure based on SDS-likelihoods. We use the SIR model as a running example throughout the paper to illustrate the ideas.
[ { "created": "Wed, 2 Jan 2019 14:57:34 GMT", "version": "v1" } ]
2019-01-03
[ [ "KhudaBukhsh", "Wasiur R.", "" ], [ "Choi", "Boseung", "" ], [ "Kenah", "Eben", "" ], [ "Rempala", "Grzegorz A.", "" ] ]
Motivated by the classical Susceptible-Infected-Recovered (SIR) epidemic models proposed by Kermack and McKendrick, we consider a class of stochastic compartmental dynamical systems with a notion of partial ordering among the compartments. We call such systems unidirectional Mass Transfer Models (MTMs). We show that there is a natural way of interpreting a unidirectional MTM as a Survival Dynamical System (SDS) that is described in terms of survival functions instead of population counts. This SDS interpretation allows us to employ tools from survival analysis to address various issues with data collection and statistical inference of unidirectional MTMs. In particular, we propose and numerically validate a statistical inference procedure based on SDS-likelihoods. We use the SIR model as a running example throughout the paper to illustrate the ideas.
2101.07350
Tajudeen Yahaya Dr.
Tajudeen Yahaya, Titilola Salisu, Yusuf Abdulrahman, Abdulrazak Umar
Update on the genetic and epigenetic etiology of gestational diabetes mellitus: a review
14 pages
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by/4.0/
Background: Many studies have been conducted on the genetic and epigenetic etiology of gestational diabetes mellitus (GDM) in the last two decades because of the disease's increasing prevalence and role in the global diabetes mellitus (DM) explosion. An update on the genetic and epigenetic etiology of GDM then becomes imperative to better understand and stem the rising incidence of the disease. This review, therefore, articulated GDM candidate genes and their pathophysiology for the awareness of stakeholders. Main body (genetic and epigenetic etiology, GDM): The search discovered 83 GDM candidate genes, of which TCF7L2, MTNR1B, CDKAL1, IRS1, and KCNQ1 are the most prevalent. Certain polymorphisms of these genes can modulate beta-cell dysfunction, adiposity, obesity, and insulin resistance through several mechanisms. Environmental triggers such as diets, pollutants, and microbes may also cause epigenetic changes in these genes, resulting in a loss of insulin-boosting and glucose metabolism functions. Early detection and adequate management may resolve the condition after delivery; otherwise, it will progress to maternal type 2 diabetes mellitus (T2DM) and fetal configuration to future obesity and DM. This shows that GDM is a strong risk factor for T2DM and, in rare cases, type 1 diabetes mellitus (T1DM) and maturity-onset diabetes of the young (MODY). This further shows that GDM significantly contributes to the rising incidence and burden of DM worldwide and its prevention may reverse the trend. Conclusion: Mutations and epigenetic changes in certain genes are strong risk factors for GDM. For affected individuals with such etiologies, medical practitioners should formulate drugs and treatment procedures that target these genes and their pathophysiology.
[ { "created": "Mon, 18 Jan 2021 22:33:47 GMT", "version": "v1" } ]
2021-01-20
[ [ "Yahaya", "Tajudeen", "" ], [ "Salisu", "Titilola", "" ], [ "Abdulrahman", "Yusuf", "" ], [ "Umar", "Abdulrazak", "" ] ]
Background: Many studies have been conducted on the genetic and epigenetic etiology of gestational diabetes mellitus (GDM) in the last two decades because of the disease's increasing prevalence and role in the global diabetes mellitus (DM) explosion. An update on the genetic and epigenetic etiology of GDM then becomes imperative to better understand and stem the rising incidence of the disease. This review, therefore, articulated GDM candidate genes and their pathophysiology for the awareness of stakeholders. Main body (genetic and epigenetic etiology, GDM): The search discovered 83 GDM candidate genes, of which TCF7L2, MTNR1B, CDKAL1, IRS1, and KCNQ1 are the most prevalent. Certain polymorphisms of these genes can modulate beta-cell dysfunction, adiposity, obesity, and insulin resistance through several mechanisms. Environmental triggers such as diets, pollutants, and microbes may also cause epigenetic changes in these genes, resulting in a loss of insulin-boosting and glucose metabolism functions. Early detection and adequate management may resolve the condition after delivery; otherwise, it will progress to maternal type 2 diabetes mellitus (T2DM) and fetal configuration to future obesity and DM. This shows that GDM is a strong risk factor for T2DM and, in rare cases, type 1 diabetes mellitus (T1DM) and maturity-onset diabetes of the young (MODY). This further shows that GDM significantly contributes to the rising incidence and burden of DM worldwide and its prevention may reverse the trend. Conclusion: Mutations and epigenetic changes in certain genes are strong risk factors for GDM. For affected individuals with such etiologies, medical practitioners should formulate drugs and treatment procedures that target these genes and their pathophysiology.
2209.01498
Andrew Lizarraga
Andrew Lizarraga, Katherine L. Narr, Kirsten A. Donald, Shantanu H. Joshi
StreamNet: A WAE for White Matter Streamline Analysis
null
null
null
null
q-bio.QM cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present StreamNet, an autoencoder architecture for the analysis of the highly heterogeneous geometry of large collections of white matter streamlines. This proposed framework takes advantage of geometry-preserving properties of the Wasserstein-1 metric in order to achieve direct encoding and reconstruction of entire bundles of streamlines. We show that the model not only accurately captures the distributive structures of streamlines in the population, but is also able to achieve superior reconstruction performance between real and synthetic streamlines. Experimental model performance is evaluated on white matter streamlines resulting from T1-weighted diffusion imaging of 40 healthy controls using a recent state-of-the-art bundle comparison metric that measures fiber-shape similarities.
[ { "created": "Sat, 3 Sep 2022 20:51:07 GMT", "version": "v1" }, { "created": "Sun, 16 Oct 2022 19:39:04 GMT", "version": "v2" }, { "created": "Wed, 19 Oct 2022 16:49:41 GMT", "version": "v3" } ]
2022-10-20
[ [ "Lizarraga", "Andrew", "" ], [ "Narr", "Katherine L.", "" ], [ "Donald", "Kirsten A.", "" ], [ "Joshi", "Shantanu H.", "" ] ]
We present StreamNet, an autoencoder architecture for the analysis of the highly heterogeneous geometry of large collections of white matter streamlines. This proposed framework takes advantage of geometry-preserving properties of the Wasserstein-1 metric in order to achieve direct encoding and reconstruction of entire bundles of streamlines. We show that the model not only accurately captures the distributive structures of streamlines in the population, but is also able to achieve superior reconstruction performance between real and synthetic streamlines. Experimental model performance is evaluated on white matter streamlines resulting from T1-weighted diffusion imaging of 40 healthy controls using a recent state-of-the-art bundle comparison metric that measures fiber-shape similarities.
2305.14376
Yi Yang
Yi Yang, Hejie Cui, Carl Yang
PTGB: Pre-Train Graph Neural Networks for Brain Network Analysis
Accepted to CHIL 2023, 19 pages
null
null
null
q-bio.NC cs.LG
http://creativecommons.org/licenses/by/4.0/
The human brain is the central hub of the neurobiological system, controlling behavior and cognition in complex ways. Recent advances in neuroscience and neuroimaging analysis have shown a growing interest in the interactions between brain regions of interest (ROIs) and their impact on neural development and disorder diagnosis. As a powerful deep model for analyzing graph-structured data, Graph Neural Networks (GNNs) have been applied for brain network analysis. However, training deep models requires large amounts of labeled data, which is often scarce in brain network datasets due to the complexities of data acquisition and sharing restrictions. To make the most out of available training data, we propose PTGB, a GNN pre-training framework that captures intrinsic brain network structures, regardless of clinical outcomes, and is easily adaptable to various downstream tasks. PTGB comprises two key components: (1) an unsupervised pre-training technique designed specifically for brain networks, which enables learning from large-scale datasets without task-specific labels; (2) a data-driven parcellation atlas mapping pipeline that facilitates knowledge transfer across datasets with different ROI systems. Extensive evaluations using various GNN models have demonstrated the robust and superior performance of PTGB compared to baseline methods.
[ { "created": "Sat, 20 May 2023 21:07:47 GMT", "version": "v1" } ]
2023-05-25
[ [ "Yang", "Yi", "" ], [ "Cui", "Hejie", "" ], [ "Yang", "Carl", "" ] ]
The human brain is the central hub of the neurobiological system, controlling behavior and cognition in complex ways. Recent advances in neuroscience and neuroimaging analysis have shown a growing interest in the interactions between brain regions of interest (ROIs) and their impact on neural development and disorder diagnosis. As a powerful deep model for analyzing graph-structured data, Graph Neural Networks (GNNs) have been applied for brain network analysis. However, training deep models requires large amounts of labeled data, which is often scarce in brain network datasets due to the complexities of data acquisition and sharing restrictions. To make the most out of available training data, we propose PTGB, a GNN pre-training framework that captures intrinsic brain network structures, regardless of clinical outcomes, and is easily adaptable to various downstream tasks. PTGB comprises two key components: (1) an unsupervised pre-training technique designed specifically for brain networks, which enables learning from large-scale datasets without task-specific labels; (2) a data-driven parcellation atlas mapping pipeline that facilitates knowledge transfer across datasets with different ROI systems. Extensive evaluations using various GNN models have demonstrated the robust and superior performance of PTGB compared to baseline methods.
1603.06538
Gao-De Li Dr
Gao-De Li
Natural site-directed mutagenesis might exist in eukaryotic cells
5 pages
null
null
null
q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Site-directed mutagenesis refers to a man-made molecular biology method that is used to make genetic alterations in the DNA sequence of a gene of interest. However, based on our recently published experimental findings, we propose that natural site-directed mutagenesis might exist in eukaryotic cells, which is triggered by harmful agents and co-directed by special transcription hotspots and mutation-contained intranuclear primers.
[ { "created": "Mon, 21 Mar 2016 19:08:03 GMT", "version": "v1" }, { "created": "Mon, 18 Apr 2016 17:10:18 GMT", "version": "v2" } ]
2016-04-19
[ [ "Li", "Gao-De", "" ] ]
Site-directed mutagenesis refers to a man-made molecular biology method that is used to make genetic alterations in the DNA sequence of a gene of interest. However, based on our recently published experimental findings, we propose that natural site-directed mutagenesis might exist in eukaryotic cells, which is triggered by harmful agents and co-directed by special transcription hotspots and mutation-contained intranuclear primers.
1403.3310
Marianne Rooman
Marianne Rooman, Jaroslav Albert, Mitia Duerinckx
Stochastic noise reduction upon complexification: positively correlated birth-death type systems
38 pages, 5 figures, to appear in J. Theor. Biol
null
null
null
q-bio.BM q-bio.CB q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cell systems consist of a huge number of various molecules that display specific patterns of interactions, which have a determining influence on the cell's functioning. In general, such complexity is seen to increase with the complexity of the organism, with a concomitant increase of the accuracy and specificity of the cellular processes. The question thus arises how the complexification of systems - modeled here by simple interacting birth-death type processes - can lead to a reduction of the noise - described by the variance of the number of molecules. To gain understanding of this issue, we investigated the difference between a single system containing molecules that are produced and degraded, and the same system - with the same average number of molecules - connected to a buffer. We modeled these systems using Ito stochastic differential equations in discrete time, as they allow straightforward analytical developments. In general, when the molecules in the system and the buffer are positively correlated, the variance on the number of molecules in the system is found to decrease compared to the equivalent system without a buffer. Only buffers that are too noisy by themselves tend to increase the noise in the main system. We tested this result on two model cases, in which the system and the buffer contain proteins in their active and inactive state, or protein monomers and homodimers. We found that in the second test case, where the interconversion terms are non-linear in the number of molecules, the noise reduction is much more pronounced; it reaches up to 20% reduction of the Fano factor with the parameter values tested in numerical simulations on an unperturbed birth-death model. We extended our analysis to two arbitrary interconnected systems.
[ { "created": "Thu, 13 Mar 2014 16:02:48 GMT", "version": "v1" } ]
2014-03-14
[ [ "Rooman", "Marianne", "" ], [ "Albert", "Jaroslav", "" ], [ "Duerinckx", "Mitia", "" ] ]
Cell systems consist of a huge number of various molecules that display specific patterns of interactions, which have a determining influence on the cell's functioning. In general, such complexity is seen to increase with the complexity of the organism, with a concomitant increase of the accuracy and specificity of the cellular processes. The question thus arises how the complexification of systems - modeled here by simple interacting birth-death type processes - can lead to a reduction of the noise - described by the variance of the number of molecules. To gain understanding of this issue, we investigated the difference between a single system containing molecules that are produced and degraded, and the same system - with the same average number of molecules - connected to a buffer. We modeled these systems using Ito stochastic differential equations in discrete time, as they allow straightforward analytical developments. In general, when the molecules in the system and the buffer are positively correlated, the variance on the number of molecules in the system is found to decrease compared to the equivalent system without a buffer. Only buffers that are too noisy by themselves tend to increase the noise in the main system. We tested this result on two model cases, in which the system and the buffer contain proteins in their active and inactive state, or protein monomers and homodimers. We found that in the second test case, where the interconversion terms are non-linear in the number of molecules, the noise reduction is much more pronounced; it reaches up to 20% reduction of the Fano factor with the parameter values tested in numerical simulations on an unperturbed birth-death model. We extended our analysis to two arbitrary interconnected systems.
1106.3600
Liane Gabora
Liane Gabora and Apara Ranjan
How Insight Emerges in a Distributed, Content-addressable Memory
17 pages; 2 figures
In A. Bristol, O. Vartanian, & J. Kaufman (Eds.), The neuroscience of creativity (pp. 19-43). Cambridge, MA: MIT Press (2013)
10.7551/mitpress/9780262019583.003.0002
null
q-bio.NC cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We begin this chapter with the bold claim that it provides a neuroscientific explanation of the magic of creativity. Creativity presents a formidable challenge for neuroscience. Neuroscience generally involves studying what happens in the brain when someone engages in a task that involves responding to a stimulus, or retrieving information from memory and using it the right way, or at the right time. If the relevant information is not already encoded in memory, the task generally requires that the individual make systematic use of information that is encoded in memory. But creativity is different. It paradoxically involves studying how someone pulls out of their brain something that was never put into it! Moreover, it must be something both new and useful, or appropriate to the task at hand. The ability to pull out of memory something new and appropriate that was never stored there in the first place is what we refer to as the magic of creativity. Even if we are so fortunate as to determine which areas of the brain are active and how these areas interact during creative thought, we will not have an answer to the question of how the brain comes up with solutions and artworks that are new and appropriate. On the other hand, since the representational capacity of neurons emerges at a level that is higher than that of the individual neurons themselves, the inner workings of neurons are too low a level to explain the magic of creativity. Thus we look to a level that is midway between gross brain regions and neurons. Since creativity generally involves combining concepts from different domains, or seeing old ideas from new perspectives, we focus our efforts on the neural mechanisms underlying the representation of concepts and ideas. Thus we ask questions about the brain at the level that accounts for its representational capacity, i.e. at the level of distributed aggregates of neurons.
[ { "created": "Sat, 18 Jun 2011 00:26:40 GMT", "version": "v1" }, { "created": "Sun, 30 Jun 2019 01:41:58 GMT", "version": "v2" }, { "created": "Fri, 5 Jul 2019 22:03:09 GMT", "version": "v3" } ]
2019-07-09
[ [ "Gabora", "Liane", "" ], [ "Ranjan", "Apara", "" ] ]
We begin this chapter with the bold claim that it provides a neuroscientific explanation of the magic of creativity. Creativity presents a formidable challenge for neuroscience. Neuroscience generally involves studying what happens in the brain when someone engages in a task that involves responding to a stimulus, or retrieving information from memory and using it the right way, or at the right time. If the relevant information is not already encoded in memory, the task generally requires that the individual make systematic use of information that is encoded in memory. But creativity is different. It paradoxically involves studying how someone pulls out of their brain something that was never put into it! Moreover, it must be something both new and useful, or appropriate to the task at hand. The ability to pull out of memory something new and appropriate that was never stored there in the first place is what we refer to as the magic of creativity. Even if we are so fortunate as to determine which areas of the brain are active and how these areas interact during creative thought, we will not have an answer to the question of how the brain comes up with solutions and artworks that are new and appropriate. On the other hand, since the representational capacity of neurons emerges at a level that is higher than that of the individual neurons themselves, the inner workings of neurons are too low a level to explain the magic of creativity. Thus we look to a level that is midway between gross brain regions and neurons. Since creativity generally involves combining concepts from different domains, or seeing old ideas from new perspectives, we focus our efforts on the neural mechanisms underlying the representation of concepts and ideas. Thus we ask questions about the brain at the level that accounts for its representational capacity, i.e. at the level of distributed aggregates of neurons.
1912.07567
Samuel Bobholz
Samuel Bobholz, Allison Lowman, Alexander Barrington, Michael Brehler, Sean McGarry, Elizabeth J. Cochran, Jennifer Connelly, Wade M. Mueller, Mohit Agarwal, Darren O'Neill, Anjishnu Banerjee, Peter S. LaViolette
Radiomic features of multi-parametric MRI present stable associations with analogous histological features in brain cancer patients
16 pages, 1 table, 5 figures, 1 supplemental figure
null
null
null
q-bio.QM physics.med-ph q-bio.NC q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
MR-derived radiomic features have demonstrated substantial predictive utility in modeling different prognostic factors of glioblastomas and other brain cancers. However, the biological relationship underpinning these predictive models has been largely unstudied, with the generalizability of these models also called into question. Here, we examine the localized relationship between MR-derived radiomic features and histology-derived histomic features using a dataset of 16 brain cancer patients. Tile-based radiomics features were collected on T1W, post-contrast T1W, FLAIR, and DWI-derived ADC images acquired prior to patient death, with analogous histomic features collected for autopsy samples co-registered to the MRI. Features were collected for each original image, as well as a 3D wavelet decomposition of each image, resulting in 837 features per MR image and histology image. Correlative analyses were used to assess the degree of association between radiomic-histomic pairs for each MRI. The influence of several confounds was also assessed using linear mixed effect models for the normalized radiomic-histomic distance, testing for main effects of scanners from different vendors and acquisition field strength. Results as a whole were largely heterogeneous, but several features demonstrated substantial associations with their histomic analogs, particularly those derived from the FLAIR and post-contrast T1W images. These most-associated features typically presented as stable across confounding factors as well. These data suggest that a subset of radiomic features are able to consistently capture texture information about the underlying tissue histology.
[ { "created": "Mon, 16 Dec 2019 18:30:46 GMT", "version": "v1" } ]
2019-12-17
[ [ "Bobholz", "Samuel", "" ], [ "Lowman", "Allison", "" ], [ "Barrington", "Alexander", "" ], [ "Brehler", "Michael", "" ], [ "McGarry", "Sean", "" ], [ "Cochran", "Elizabeth J.", "" ], [ "Connelly", "Jennifer", "" ], [ "Mueller", "Wade M.", "" ], [ "Agarwal", "Mohit", "" ], [ "O'Neill", "Darren", "" ], [ "Banerjee", "Anjishnu", "" ], [ "LaViolette", "Peter S.", "" ] ]
MR-derived radiomic features have demonstrated substantial predictive utility in modeling different prognostic factors of glioblastomas and other brain cancers. However, the biological relationship underpinning these predictive models has been largely unstudied, with the generalizability of these models also called into question. Here, we examine the localized relationship between MR-derived radiomic features and histology-derived histomic features using a dataset of 16 brain cancer patients. Tile-based radiomics features were collected on T1W, post-contrast T1W, FLAIR, and DWI-derived ADC images acquired prior to patient death, with analogous histomic features collected for autopsy samples co-registered to the MRI. Features were collected for each original image, as well as a 3D wavelet decomposition of each image, resulting in 837 features per MR image and histology image. Correlative analyses were used to assess the degree of association between radiomic-histomic pairs for each MRI. The influence of several confounds was also assessed using linear mixed effect models for the normalized radiomic-histomic distance, testing for main effects of scanners from different vendors and acquisition field strength. Results as a whole were largely heterogeneous, but several features demonstrated substantial associations with their histomic analogs, particularly those derived from the FLAIR and post-contrast T1W images. These most-associated features typically presented as stable across confounding factors as well. These data suggest that a subset of radiomic features are able to consistently capture texture information about the underlying tissue histology.
2405.20619
Md Nurul Anwar
Md Nurul Anwar, James M. McCaw, Alexander E. Zarebski, Roslyn I. Hickson, Jennifer A. Flegg
Investigation of P. Vivax Elimination via Mass Drug Administration
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Plasmodium vivax is the most geographically widespread malaria parasite due to its ability to remain dormant (as a hypnozoite) in the human liver and subsequently reactivate. Given the majority of P. vivax infections are due to hypnozoite reactivation, targeting the hypnozoite reservoir with a radical cure is crucial for achieving P. vivax elimination. Stochastic effects can strongly influence dynamics when disease prevalence is low or when the population size is small. Hence, it is important to account for this when modelling malaria elimination. We use a stochastic multiscale model of P. vivax transmission to study the impacts of multiple rounds of mass drug administration (MDA) with a radical cure, accounting for superinfection and hypnozoite dynamics. Our results indicate multiple rounds of MDA with a high-efficacy drug are needed to achieve a substantial probability of elimination. This work has the potential to help guide P. vivax elimination strategies by quantifying elimination probabilities for an MDA approach.
[ { "created": "Fri, 31 May 2024 04:58:06 GMT", "version": "v1" } ]
2024-06-03
[ [ "Anwar", "Md Nurul", "" ], [ "McCaw", "James M.", "" ], [ "Zarebski", "Alexander E.", "" ], [ "Hickson", "Roslyn I.", "" ], [ "Flegg", "Jennifer A.", "" ] ]
Plasmodium vivax is the most geographically widespread malaria parasite due to its ability to remain dormant (as a hypnozoite) in the human liver and subsequently reactivate. Given the majority of P. vivax infections are due to hypnozoite reactivation, targeting the hypnozoite reservoir with a radical cure is crucial for achieving P. vivax elimination. Stochastic effects can strongly influence dynamics when disease prevalence is low or when the population size is small. Hence, it is important to account for this when modelling malaria elimination. We use a stochastic multiscale model of P. vivax transmission to study the impacts of multiple rounds of mass drug administration (MDA) with a radical cure, accounting for superinfection and hypnozoite dynamics. Our results indicate multiple rounds of MDA with a high-efficacy drug are needed to achieve a substantial probability of elimination. This work has the potential to help guide P. vivax elimination strategies by quantifying elimination probabilities for an MDA approach.
2301.10748
Dominic Giles
Dominic Giles, Robert Gray, Chris Foulon, Guilherme Pombo, Tianbo Xu, H. Rolf J\"ager, Jorge Cardoso, Sebastien Ourselin, Geraint Rees, Ashwani Jha, Parashkev Nachev
Individualized prescriptive inference in ischaemic stroke
131 pages
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
The gold standard in the treatment of ischaemic stroke is set by evidence from randomized controlled trials. Yet the manifest complexity of the brain's functional, connective, and vascular architectures introduces heterogeneity in treatment susceptibility that violates the underlying statistical premisses, potentially leading to substantial errors at both individual and population levels. The counterfactual nature of therapeutic inference has made quantifying the impact of this defect difficult. Combining large-scale meta-analytic connective, functional, genetic expression, and receptor distribution data with high-resolution maps of 4 119 acute ischaemic lesions, here we conduct a comprehensive series of semi-synthetic virtual interventional trials, quantifying the fidelity of the traditional approach in inferring individual treatment effects against biologically plausible, empirically informed ground truths, across 103 628 800 distinct simulations. Combining deep generative models expressive enough to capture the observed lesion heterogeneity with flexible causal modelling, we find that the richness of the lesion representation is decisive in determining individual-level fidelity, even where freedom from treatment allocation bias cannot be guaranteed. Our results indicate that complex modelling with richly represented lesion data is critical to individualized prescriptive inference in ischaemic stroke.
[ { "created": "Wed, 25 Jan 2023 18:11:02 GMT", "version": "v1" }, { "created": "Tue, 27 Feb 2024 19:26:48 GMT", "version": "v2" } ]
2024-02-29
[ [ "Giles", "Dominic", "" ], [ "Gray", "Robert", "" ], [ "Foulon", "Chris", "" ], [ "Pombo", "Guilherme", "" ], [ "Xu", "Tianbo", "" ], [ "Jäger", "H. Rolf", "" ], [ "Cardoso", "Jorge", "" ], [ "Ourselin", "Sebastien", "" ], [ "Rees", "Geraint", "" ], [ "Jha", "Ashwani", "" ], [ "Nachev", "Parashkev", "" ] ]
The gold standard in the treatment of ischaemic stroke is set by evidence from randomized controlled trials. Yet the manifest complexity of the brain's functional, connective, and vascular architectures introduces heterogeneity in treatment susceptibility that violates the underlying statistical premisses, potentially leading to substantial errors at both individual and population levels. The counterfactual nature of therapeutic inference has made quantifying the impact of this defect difficult. Combining large-scale meta-analytic connective, functional, genetic expression, and receptor distribution data with high-resolution maps of 4 119 acute ischaemic lesions, here we conduct a comprehensive series of semi-synthetic virtual interventional trials, quantifying the fidelity of the traditional approach in inferring individual treatment effects against biologically plausible, empirically informed ground truths, across 103 628 800 distinct simulations. Combining deep generative models expressive enough to capture the observed lesion heterogeneity with flexible causal modelling, we find that the richness of the lesion representation is decisive in determining individual-level fidelity, even where freedom from treatment allocation bias cannot be guaranteed. Our results indicate that complex modelling with richly represented lesion data is critical to individualized prescriptive inference in ischaemic stroke.
1405.4774
Thierry Emonet
Yann S. Dufour, Xiongfei Fu, Luis Hernandez-Nunez, Thierry Emonet
Limits of feedback control in bacterial chemotaxis
Corrected one typo. First two authors contributed equally. Notably, there were various typos in the values of the parameters in the model of motor adaptation. The results remain unchanged
PLoS Computational Biology 10(6): e1003694 (2014)
10.1371/journal.pcbi.1003694
null
q-bio.MN q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inputs to signaling pathways can have complex statistics that depend on the environment and on the behavioral response to previous stimuli. Such behavioral feedback is particularly important in navigation. Successful navigation relies on proper coupling between sensors, which gather information during motion, and actuators, which control behavior. Because reorientation conditions future inputs, behavioral feedback can place sensors and actuators in an operational regime different from the resting state. How then can organisms maintain proper information transfer through the pathway while navigating diverse environments? In bacterial chemotaxis, robust performance is often attributed to the zero integral feedback control of the sensor, which guarantees that activity returns to resting state when the input remains constant. While this property provides sensitivity over a wide range of signal intensities, it remains unclear how other parameters affect chemotactic performance, especially when considering that the swimming behavior of the cell determines the input signal. Using analytical models and simulations that incorporate recent experimental evidence about behavioral feedback and flagellar motor adaptation, we identify an operational regime of the pathway that maximizes drift velocity for various environments and sensor adaptation rates. This optimal regime is outside the dynamic range of the motor response, but maximizes the contrast between run duration up and down gradients. In steep gradients, the feedback from chemotactic drift can push the system through a bifurcation. This creates a non-chemotactic state that traps cells unless the motor is allowed to adapt. Although motor adaptation helps, we find that as the strength of the feedback increases individual phenotypes cannot maintain the optimal operational regime in all environments, suggesting that diversity could be beneficial.
[ { "created": "Mon, 19 May 2014 15:49:00 GMT", "version": "v1" }, { "created": "Tue, 11 Nov 2014 15:41:21 GMT", "version": "v2" }, { "created": "Mon, 1 Dec 2014 16:21:07 GMT", "version": "v3" } ]
2014-12-02
[ [ "Dufour", "Yann S.", "" ], [ "Fu", "Xiongfei", "" ], [ "Hernandez-Nunez", "Luis", "" ], [ "Emonet", "Thierry", "" ] ]
Inputs to signaling pathways can have complex statistics that depend on the environment and on the behavioral response to previous stimuli. Such behavioral feedback is particularly important in navigation. Successful navigation relies on proper coupling between sensors, which gather information during motion, and actuators, which control behavior. Because reorientation conditions future inputs, behavioral feedback can place sensors and actuators in an operational regime different from the resting state. How then can organisms maintain proper information transfer through the pathway while navigating diverse environments? In bacterial chemotaxis, robust performance is often attributed to the zero integral feedback control of the sensor, which guarantees that activity returns to resting state when the input remains constant. While this property provides sensitivity over a wide range of signal intensities, it remains unclear how other parameters affect chemotactic performance, especially when considering that the swimming behavior of the cell determines the input signal. Using analytical models and simulations that incorporate recent experimental evidence about behavioral feedback and flagellar motor adaptation, we identify an operational regime of the pathway that maximizes drift velocity for various environments and sensor adaptation rates. This optimal regime is outside the dynamic range of the motor response, but maximizes the contrast between run duration up and down gradients. In steep gradients, the feedback from chemotactic drift can push the system through a bifurcation. This creates a non-chemotactic state that traps cells unless the motor is allowed to adapt. Although motor adaptation helps, we find that as the strength of the feedback increases individual phenotypes cannot maintain the optimal operational regime in all environments, suggesting that diversity could be beneficial.
2203.14886
Giovanni Bussi
Valerio Piomponi, Thorben Fr\"ohlking, Mattia Bernetti, Giovanni Bussi
Molecular simulations matching denaturation experiments for N6-Methyladenosine
Supporting information available as ancillary material. Version to be submitted to journal
ACS Cent. Sci. 2022, 8, 8, 1218-1228
10.1021/acscentsci.2c00565
null
q-bio.BM physics.bio-ph physics.chem-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Post-transcriptional modifications are crucial for RNA function and can affect its structure and dynamics. Force-field based classical molecular dynamics simulations are a fundamental tool to characterize biomolecular dynamics and their application to RNA is flourishing. Here we show that the set of force-field parameters for N$^6$-methyladenosine (m$^6$A) developed for the commonly used AMBER force field does not reproduce duplex denaturation experiments and, specifically, cannot be used to describe both paired and unpaired states. Then we use reweighting techniques to derive new parameters matching available experimental data. The resulting force field can be used to properly describe paired and unpaired m$^6$A in both syn and anti conformation, and thus opens the way to the use of molecular simulations to investigate the effects of N6 methylations on RNA structural dynamics.
[ { "created": "Mon, 28 Mar 2022 16:41:45 GMT", "version": "v1" }, { "created": "Mon, 2 May 2022 18:38:45 GMT", "version": "v2" } ]
2022-08-26
[ [ "Piomponi", "Valerio", "" ], [ "Fröhlking", "Thorben", "" ], [ "Bernetti", "Mattia", "" ], [ "Bussi", "Giovanni", "" ] ]
Post-transcriptional modifications are crucial for RNA function and can affect its structure and dynamics. Force-field based classical molecular dynamics simulations are a fundamental tool to characterize biomolecular dynamics and their application to RNA is flourishing. Here we show that the set of force-field parameters for N$^6$-methyladenosine (m$^6$A) developed for the commonly used AMBER force field does not reproduce duplex denaturation experiments and, specifically, cannot be used to describe both paired and unpaired states. Then we use reweighting techniques to derive new parameters matching available experimental data. The resulting force field can be used to properly describe paired and unpaired m$^6$A in both syn and anti conformation, and thus opens the way to the use of molecular simulations to investigate the effects of N6 methylations on RNA structural dynamics.
0712.4382
William Bialek
Samuel F. Taylor, Naftali Tishby and William Bialek
Information and fitness
null
null
null
null
q-bio.PE
null
The growth rate of organisms depends both on external conditions and on internal states, such as the expression levels of various genes. We show that to achieve a criterion mean growth rate over an ensemble of conditions, the internal variables must carry a minimum number of bits of information about those conditions. Evolutionary competition thus can select for cellular mechanisms that are more efficient in an abstract, information theoretic sense. Estimates based on recent experiments suggest that the minimum information required for reasonable growth rates is close to the maximum information that can be conveyed through biologically realistic regulatory mechanisms. These ideas are applicable most directly to unicellular organisms, but there are analogies to problems in higher organisms, and we suggest new experiments for both cases.
[ { "created": "Fri, 28 Dec 2007 18:08:37 GMT", "version": "v1" } ]
2007-12-31
[ [ "Taylor", "Samuel F.", "" ], [ "Tishby", "Naftali", "" ], [ "Bialek", "William", "" ] ]
The growth rate of organisms depends both on external conditions and on internal states, such as the expression levels of various genes. We show that to achieve a criterion mean growth rate over an ensemble of conditions, the internal variables must carry a minimum number of bits of information about those conditions. Evolutionary competition thus can select for cellular mechanisms that are more efficient in an abstract, information theoretic sense. Estimates based on recent experiments suggest that the minimum information required for reasonable growth rates is close to the maximum information that can be conveyed through biologically realistic regulatory mechanisms. These ideas are applicable most directly to unicellular organisms, but there are analogies to problems in higher organisms, and we suggest new experiments for both cases.
2311.00592
Mohamed El Khalifi
Mohamed El Khalifi and Tom Britton
SIRS epidemics with individual heterogeneity of immunity waning
null
null
null
null
q-bio.PE physics.soc-ph
http://creativecommons.org/licenses/by/4.0/
We analyse an extended SIRS epidemic model in which immunity at the individual level wanes gradually at exponential rate, but where the waning rate may differ between individuals, for instance as an effect of differences in immune systems. The model also includes vaccination schemes aimed to reach and maintain herd immunity. We consider both the informed situation where the individual waning parameters are known, thus allowing selection of vaccinees being based on both time since last vaccination as well as on the individual waning rate, and the more likely uninformed situation where individual waning parameters are unobserved, thus only allowing vaccination schemes to depend on time since last vaccination. The optimal vaccination policies for both the informed and uninformed heterogeneous situation are derived and compared with the homogeneous waning model (meaning all individuals have the same immunity waning rate), as well as to the classic SIRS model where immunity at the individual level drops from complete immunity to complete susceptibility in one leap. It is shown that the classic SIRS model requires the fewest vaccines, followed by the SIRS model with homogeneous gradual waning, followed by the informed situation for the model with heterogeneous gradual waning. The situation requiring the most vaccines for herd immunity is the most likely scenario: that immunity wanes gradually with unobserved individual heterogeneity. For parameter values chosen to mimic COVID-19 and assuming perfect initial immunity and cumulative immunity of 12 months, the classic homogeneous SIRS epidemic suggests that vaccinating individuals every 15 months is sufficient to reach and maintain herd immunity, whereas the uninformed case for exponential waning with rate heterogeneity corresponding to a coefficient of variation of 0.5 requires that individuals instead be vaccinated every 4.4 months.
[ { "created": "Wed, 1 Nov 2023 15:37:54 GMT", "version": "v1" } ]
2023-11-02
[ [ "Khalifi", "Mohamed El", "" ], [ "Britton", "Tom", "" ] ]
We analyse an extended SIRS epidemic model in which immunity at the individual level wanes gradually at exponential rate, but where the waning rate may differ between individuals, for instance as an effect of differences in immune systems. The model also includes vaccination schemes aimed to reach and maintain herd immunity. We consider both the informed situation where the individual waning parameters are known, thus allowing selection of vaccinees being based on both time since last vaccination as well as on the individual waning rate, and the more likely uninformed situation where individual waning parameters are unobserved, thus only allowing vaccination schemes to depend on time since last vaccination. The optimal vaccination policies for both the informed and uninformed heterogeneous situation are derived and compared with the homogeneous waning model (meaning all individuals have the same immunity waning rate), as well as to the classic SIRS model where immunity at the individual level drops from complete immunity to complete susceptibility in one leap. It is shown that the classic SIRS model requires the fewest vaccines, followed by the SIRS model with homogeneous gradual waning, followed by the informed situation for the model with heterogeneous gradual waning. The situation requiring the most vaccines for herd immunity is the most likely scenario: that immunity wanes gradually with unobserved individual heterogeneity. For parameter values chosen to mimic COVID-19 and assuming perfect initial immunity and cumulative immunity of 12 months, the classic homogeneous SIRS epidemic suggests that vaccinating individuals every 15 months is sufficient to reach and maintain herd immunity, whereas the uninformed case for exponential waning with rate heterogeneity corresponding to a coefficient of variation of 0.5 requires that individuals instead be vaccinated every 4.4 months.
1601.07867
John Medaglia
John D. Medaglia, Weiyu Huang, Santiago Segarra, Christopher Olm, James Gee, Murray Grossman, Alejandro Ribeiro, Corey T. McMillan, Danielle S. Bassett
Brain network efficiency is influenced by pathological source of corticobasal syndrome
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multimodal neuroimaging studies of corticobasal syndrome using volumetric MRI and DTI successfully discriminate between Alzheimer's disease and frontotemporal lobar degeneration but this evidence has typically included clinically heterogeneous patient cohorts and has rarely assessed the network structure of these distinct sources of pathology. Using structural MRI data, we identify areas in fronto-temporo-parietal cortex with reduced gray matter density in corticobasal syndrome relative to age matched controls. A support vector machine procedure demonstrates that gray matter density poorly discriminates between frontotemporal lobar degeneration and Alzheimer's disease pathology subgroups with low sensitivity and specificity. In contrast, a statistic of local network efficiency demonstrates excellent discriminatory power, with high sensitivity and specificity. Our results indicate that the underlying pathological sources of corticobasal syndrome can be classified more accurately using graph theoretical statistics of white matter microstructure in association cortex than by regional gray matter density alone. These results highlight the importance of a multimodal neuroimaging approach to diagnostic analyses of corticobasal syndrome and suggest that distinct sources of pathology mediate the circuitry of brain regions affected by corticobasal syndrome.
[ { "created": "Thu, 28 Jan 2016 19:12:44 GMT", "version": "v1" } ]
2016-01-29
[ [ "Medaglia", "John D.", "" ], [ "Huang", "Weiyu", "" ], [ "Segarra", "Santiago", "" ], [ "Olm", "Christopher", "" ], [ "Gee", "James", "" ], [ "Grossman", "Murray", "" ], [ "Ribeiro", "Alejandro", "" ], [ "McMillan", "Corey T.", "" ], [ "Bassett", "Danielle S.", "" ] ]
Multimodal neuroimaging studies of corticobasal syndrome using volumetric MRI and DTI successfully discriminate between Alzheimer's disease and frontotemporal lobar degeneration but this evidence has typically included clinically heterogeneous patient cohorts and has rarely assessed the network structure of these distinct sources of pathology. Using structural MRI data, we identify areas in fronto-temporo-parietal cortex with reduced gray matter density in corticobasal syndrome relative to age matched controls. A support vector machine procedure demonstrates that gray matter density poorly discriminates between frontotemporal lobar degeneration and Alzheimer's disease pathology subgroups with low sensitivity and specificity. In contrast, a statistic of local network efficiency demonstrates excellent discriminatory power, with high sensitivity and specificity. Our results indicate that the underlying pathological sources of corticobasal syndrome can be classified more accurately using graph theoretical statistics of white matter microstructure in association cortex than by regional gray matter density alone. These results highlight the importance of a multimodal neuroimaging approach to diagnostic analyses of corticobasal syndrome and suggest that distinct sources of pathology mediate the circuitry of brain regions affected by corticobasal syndrome.
2010.10516
Christian Kuehn
Leonhard Horstmeyer and Christian Kuehn and Stefan Thurner
Balancing quarantine and self-distancing measures in adaptive epidemic networks
abstract slightly shortened in preview due to maximum character count of arXiv abstracts
null
null
null
q-bio.PE nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the relative importance of two key control measures for epidemic spreading: endogenous social self-distancing and exogenous imposed quarantine. We use the framework of adaptive networks, moment-closure, and ordinary differential equations (ODEs) to introduce several novel models based upon susceptible-infected-recovered (SIR) dynamics. First, we compare computationally expensive, adaptive network simulations with their corresponding computationally highly efficient ODE equivalents and find excellent agreement. Second, we discover that there exists a relatively simple critical curve in parameter space for the epidemic threshold, which strongly suggests that there is a mutual compensation effect between the two mitigation strategies: as long as social distancing and quarantine measures are both sufficiently strong, large outbreaks are prevented. Third, we study the total number of infected and the maximum peak during large outbreaks using a combination of analytical estimates and numerical simulations. Also for large outbreaks we find a similar compensation effect as for the epidemic threshold. This suggests that if there is little incentive for social distancing within a population, drastic quarantining is required, and vice versa. Both pure scenarios are unrealistic in practice. Our models show that only a combination of measures is likely to succeed to control epidemic spreading. Fourth, we analytically compute an upper bound for the total number of infected on adaptive networks, using integral estimates in combination with the moment-closure approximation on the level of an observable. This is a methodological innovation. Our method allows us to elegantly and quickly check and cross-validate various conjectures about the relevance of different network control measures.
[ { "created": "Tue, 20 Oct 2020 13:35:50 GMT", "version": "v1" } ]
2020-10-22
[ [ "Horstmeyer", "Leonhard", "" ], [ "Kuehn", "Christian", "" ], [ "Thurner", "Stefan", "" ] ]
We study the relative importance of two key control measures for epidemic spreading: endogenous social self-distancing and exogenous imposed quarantine. We use the framework of adaptive networks, moment-closure, and ordinary differential equations (ODEs) to introduce several novel models based upon susceptible-infected-recovered (SIR) dynamics. First, we compare computationally expensive, adaptive network simulations with their corresponding computationally highly efficient ODE equivalents and find excellent agreement. Second, we discover that there exists a relatively simple critical curve in parameter space for the epidemic threshold, which strongly suggests that there is a mutual compensation effect between the two mitigation strategies: as long as social distancing and quarantine measures are both sufficiently strong, large outbreaks are prevented. Third, we study the total number of infected and the maximum peak during large outbreaks using a combination of analytical estimates and numerical simulations. Also for large outbreaks we find a similar compensation effect as for the epidemic threshold. This suggests that if there is little incentive for social distancing within a population, drastic quarantining is required, and vice versa. Both pure scenarios are unrealistic in practice. Our models show that only a combination of measures is likely to succeed to control epidemic spreading. Fourth, we analytically compute an upper bound for the total number of infected on adaptive networks, using integral estimates in combination with the moment-closure approximation on the level of an observable. This is a methodological innovation. Our method allows us to elegantly and quickly check and cross-validate various conjectures about the relevance of different network control measures.
2003.07778
Aboul Ella Hassanien Abo
Haytham H. Elmousalami and Aboul Ella Hassanien (Scientific Research Group in Egypt -- SRGE)
Day Level Forecasting for Coronavirus Disease (COVID-19) Spread: Analysis, Modeling and Recommendations
19 pages, 19 figures, 2 tables
null
null
null
q-bio.PE stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In mid-March 2020, coronavirus disease (COVID-19) was declared an international epidemic. More than 125,000 confirmed cases and 4,607 deaths have been recorded across more than 118 countries. Unfortunately, a coronavirus vaccine is expected to take at least 18 months, if it works at all. Moreover, COVID-19 epidemics can mutate into a more aggressive form. Day-level information about the COVID-19 spread is crucial to measure the behavior of this new virus globally. Therefore, this study presents a comparison of day-level forecasting models of COVID-19 affected cases using time series models and mathematical formulation. The forecasting models and data strongly suggest that the number of coronavirus cases grows exponentially in countries that do not mandate quarantines, restrictions on travel and public gatherings, and closing of schools, universities, and workplaces (social distancing).
[ { "created": "Sun, 15 Mar 2020 16:07:09 GMT", "version": "v1" } ]
2020-03-18
[ [ "Elmousalami", "Haytham H.", "", "Scientific Research\n Group in Egypt -- SRGE" ], [ "Hassanien", "Aboul Ella", "", "Scientific Research\n Group in Egypt -- SRGE" ] ]
In mid-March 2020, coronavirus disease (COVID-19) was declared an international epidemic. More than 125,000 confirmed cases and 4,607 deaths have been recorded across more than 118 countries. Unfortunately, a coronavirus vaccine is expected to take at least 18 months, if it works at all. Moreover, COVID-19 epidemics can mutate into a more aggressive form. Day-level information about the COVID-19 spread is crucial to measure the behavior of this new virus globally. Therefore, this study presents a comparison of day-level forecasting models of COVID-19 affected cases using time series models and mathematical formulation. The forecasting models and data strongly suggest that the number of coronavirus cases grows exponentially in countries that do not mandate quarantines, restrictions on travel and public gatherings, and closing of schools, universities, and workplaces (social distancing).
1402.4287
Canan Atilgan
Gokce Guven, Ali Rana Atilgan, Canan Atilgan
Protonation States of Remote Residues Affect Binding-Release Dynamics of the Ligand but not the Conformation of apo Ferric Binding Protein
26 pages, 4 figures
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We have studied the apo (Fe3+ free) form of periplasmic ferric binding protein (FbpA) under different conditions and we have monitored the changes in the binding and release dynamics of H2PO4- that acts as a synergistic anion in the presence of Fe3+. Our simulations predict a dissociation constant of 2.2$\pm$0.2 mM which is in remarkable agreement with the experimentally measured value of 2.3$\pm$0.3 mM under the same ionic strength and pH conditions. We apply perturbations relevant for changes in environmental conditions, namely (i) different values of ionic strength (IS), and (ii) protonation of a group of residues to mimic a different pH environment. Local perturbations are also studied by protonation or mutation of a site distal to the binding region that is known to mechanically manipulate the hinge-like motions of FbpA. We find that while the average conformation of the protein is intact in all simulations, the H2PO4- dynamics may be substantially altered by the changing conditions. In particular, the bound fraction which is 20$\%$ for the wild type system is increased to 50$\%$ with a D52A mutation/protonation and further to over 90$\%$ at the protonation conditions mimicking those at pH 5.5. The change in the dynamics is traced to the altered electrostatic distribution on the surface of the protein which in turn affects hydrogen bonding patterns at the active site. The observations are quantified by rigorous free energy calculations. Our results lend clues as to how the environment versus single residue perturbations may be utilized for regulation of binding modes in hFbpA systems in the absence of conformational changes.
[ { "created": "Tue, 18 Feb 2014 10:49:37 GMT", "version": "v1" } ]
2014-02-19
[ [ "Guven", "Gokce", "" ], [ "Atilgan", "Ali Rana", "" ], [ "Atilgan", "Canan", "" ] ]
We have studied the apo (Fe3+ free) form of periplasmic ferric binding protein (FbpA) under different conditions and we have monitored the changes in the binding and release dynamics of H2PO4- that acts as a synergistic anion in the presence of Fe3+. Our simulations predict a dissociation constant of 2.2$\pm$0.2 mM which is in remarkable agreement with the experimentally measured value of 2.3$\pm$0.3 mM under the same ionic strength and pH conditions. We apply perturbations relevant for changes in environmental conditions, namely (i) different values of ionic strength (IS), and (ii) protonation of a group of residues to mimic a different pH environment. Local perturbations are also studied by protonation or mutation of a site distal to the binding region that is known to mechanically manipulate the hinge-like motions of FbpA. We find that while the average conformation of the protein is intact in all simulations, the H2PO4- dynamics may be substantially altered by the changing conditions. In particular, the bound fraction which is 20$\%$ for the wild type system is increased to 50$\%$ with a D52A mutation/protonation and further to over 90$\%$ at the protonation conditions mimicking those at pH 5.5. The change in the dynamics is traced to the altered electrostatic distribution on the surface of the protein which in turn affects hydrogen bonding patterns at the active site. The observations are quantified by rigorous free energy calculations. Our results lend clues as to how the environment versus single residue perturbations may be utilized for regulation of binding modes in hFbpA systems in the absence of conformational changes.
q-bio/0607039
Matthew Berryman
Matthew J. Berryman
Mathematic principles underlying genetic structures
12 pages, 4 figures, corresponding paper for talk at Manning Clark House conference held in honour of Paul Davies: "From Stars to Brains"
null
null
null
q-bio.GN
null
Many people are familiar with the physico-chemical properties of gene sequences. In this paper I present a mathematical perspective: how do mathematical principles such as information theory, coding theory, and combinatorics influence the beginnings of life and the formation of the genetic codes we observe today? What constraints on possible life forms are imposed by information-theoretical concepts? Further, I detail how mathematical principles can help us to analyse the genetic sequences we observe in the world today.
[ { "created": "Sat, 22 Jul 2006 23:44:37 GMT", "version": "v1" } ]
2007-05-23
[ [ "Berryman", "Matthew J.", "" ] ]
Many people are familiar with the physico-chemical properties of gene sequences. In this paper I present a mathematical perspective: how do mathematical principles such as information theory, coding theory, and combinatorics influence the beginnings of life and the formation of the genetic codes we observe today? What constraints on possible life forms are imposed by information-theoretical concepts? Further, I detail how mathematical principles can help us to analyse the genetic sequences we observe in the world today.
2201.02675
Shannon Cartwright
S. L. Cartwright, J Schmied, A Livernois, B. A. Mallard
Effect of In-vivo Heat Challenge on Physiological Parameters and Function of Peripheral Blood Mononuclear Cells in Immune Phenotyped Dairy Cattle
Submitted to Veterinary Immunology and Immunopathology
null
null
null
q-bio.CB
http://creativecommons.org/licenses/by-nc-nd/4.0/
The frequency of heat waves is increasing due to climate change, which leads to an increase in the occurrence of heat stress in dairy cattle. Previous studies have shown that dairy cattle identified as high immune responders have a reduced incidence of disease and improved vaccine response compared to average and low responders. Additionally, it has been observed that when cells from immune phenotyped cattle are exposed to in-vitro heat challenge, high immune responders exhibit increased heat tolerance compared to average and low responders. Therefore, the objective of this study was to evaluate physiological parameters and the function of blood mononuclear cells in immune phenotyped dairy cattle exposed to in-vivo heat challenge. A total of 24 immune phenotyped lactating dairy cattle (8 high, 8 average and 8 low) were housed in the tie-stall area of the barn and exposed to an in-vivo heat challenge for 4 hours on 2 subsequent days. Blood samples were taken both pre- and post-challenge, and respiration rates and rectal temperatures were recorded. Temperature and humidity measurements were taken in correspondence with all respiration rate and rectal temperature measurements to calculate the temperature humidity index. Blood mononuclear cells were isolated from blood collected pre- and post-challenge, and the concentration of heat shock protein 70 and cell proliferation were assessed. Results showed that average and low responders had significantly greater respiration rates compared to high responders at a temperature humidity index of 77 and above. High responders had a higher heat shock protein 70 concentration and greater cell proliferation after in-vivo heat challenge compared to average and low responders. These results paralleled those found during in-vitro heat challenge, confirming that high responders may be more resilient to heat stress compared to average and low responders.
[ { "created": "Fri, 7 Jan 2022 20:50:09 GMT", "version": "v1" } ]
2022-01-11
[ [ "Cartwright", "S. L.", "" ], [ "Schmied", "J", "" ], [ "Livernois", "A", "" ], [ "Mallard", "B. A.", "" ] ]
The frequency of heat waves is increasing due to climate change, which leads to an increase in the occurrence of heat stress in dairy cattle. Previous studies have shown that dairy cattle identified as high immune responders have a reduced incidence of disease and improved vaccine response compared to average and low responders. Additionally, it has been observed that when cells from immune phenotyped cattle are exposed to in-vitro heat challenge, high immune responders exhibit increased heat tolerance compared to average and low responders. Therefore, the objective of this study was to evaluate physiological parameters and the function of blood mononuclear cells in immune phenotyped dairy cattle exposed to in-vivo heat challenge. A total of 24 immune phenotyped lactating dairy cattle (8 high, 8 average and 8 low) were housed in the tie-stall area of the barn and exposed to an in-vivo heat challenge for 4 hours on 2 subsequent days. Blood samples were taken both pre- and post-challenge, and respiration rates and rectal temperatures were recorded. Temperature and humidity measurements were taken in correspondence with all respiration rate and rectal temperature measurements to calculate the temperature humidity index. Blood mononuclear cells were isolated from blood collected pre- and post-challenge, and the concentration of heat shock protein 70 and cell proliferation were assessed. Results showed that average and low responders had significantly greater respiration rates compared to high responders at a temperature humidity index of 77 and above. High responders had a higher heat shock protein 70 concentration and greater cell proliferation after in-vivo heat challenge compared to average and low responders. These results paralleled those found during in-vitro heat challenge, confirming that high responders may be more resilient to heat stress compared to average and low responders.
0709.2762
Sylvain Hanneton
B. Durette (GIPSA-lab, LPN), L. Gamond (GIPSA-lab), Sylvain Hanneton (NPSM), D. Alleysson (LPN), J. H\'erault (GIPSA-lab)
Biomimetic Space-Variant Sampling in a Vision Prosthesis Improves the User's Skill in a Localization Task
null
CVHI 2007, Conference & Workshop on Assistive Technologies for People with Vision & Hearing Impairments, Granada : Espagne (2007)
null
null
q-bio.NC
null
In this experiment, we test the hypothesis that a 'retina-like' space-variant sampling pattern can improve the efficiency of a visual prosthesis. Subjects wearing a visuo-auditory substitution system were tested for their ability to point at visual targets. The test group (space-variant sampling) performed significantly better than the control group (uniform sampling). The pointing accuracy was enhanced, as was the speed to find the target. Surprisingly, the time taken to complete the training was also reduced, suggesting that this space-variant sampling scheme facilitates the mastering of sensorimotor contingencies.
[ { "created": "Tue, 18 Sep 2007 06:52:58 GMT", "version": "v1" } ]
2007-09-19
[ [ "Durette", "B.", "", "GIPSA-lab, LPN" ], [ "Gamond", "L.", "", "GIPSA-lab" ], [ "Hanneton", "Sylvain", "", "NPSM" ], [ "Alleysson", "D.", "", "LPN" ], [ "Hérault", "J.", "", "GIPSA-lab" ] ]
In this experiment, we test the hypothesis that a 'retina-like' space-variant sampling pattern can improve the efficiency of a visual prosthesis. Subjects wearing a visuo-auditory substitution system were tested for their ability to point at visual targets. The test group (space-variant sampling) performed significantly better than the control group (uniform sampling). The pointing accuracy was enhanced, as was the speed to find the target. Surprisingly, the time taken to complete the training was also reduced, suggesting that this space-variant sampling scheme facilitates the mastering of sensorimotor contingencies.
1003.0031
Steven Frank
Steven A. Frank and D. Eric Smith
Measurement Invariance, Entropy, and Probability
null
Entropy 12:289-303 (2010)
10.3390/e12030289
null
q-bio.QM cond-mat.stat-mech physics.data-an
http://creativecommons.org/licenses/by/3.0/
We show that the natural scaling of measurement for a particular problem defines the most likely probability distribution of observations taken from that measurement scale. Our approach extends the method of maximum entropy to use measurement scale as a type of information constraint. We argue that a very common measurement scale is linear at small magnitudes grading into logarithmic at large magnitudes, leading to observations that often follow Student's probability distribution which has a Gaussian shape for small fluctuations from the mean and a power law shape for large fluctuations from the mean. An inverse scaling often arises in which measures naturally grade from logarithmic to linear as one moves from small to large magnitudes, leading to observations that often follow a gamma probability distribution. A gamma distribution has a power law shape for small magnitudes and an exponential shape for large magnitudes. The two measurement scales are natural inverses connected by the Laplace integral transform. This inversion connects the two major scaling patterns commonly found in nature. We also show that superstatistics is a special case of an integral transform, and thus can be understood as a particular way in which to change the scale of measurement. Incorporating information about measurement scale into maximum entropy provides a general approach to the relations between measurement, information and probability.
[ { "created": "Fri, 26 Feb 2010 22:49:24 GMT", "version": "v1" } ]
2010-03-02
[ [ "Frank", "Steven A.", "" ], [ "Smith", "D. Eric", "" ] ]
We show that the natural scaling of measurement for a particular problem defines the most likely probability distribution of observations taken from that measurement scale. Our approach extends the method of maximum entropy to use measurement scale as a type of information constraint. We argue that a very common measurement scale is linear at small magnitudes grading into logarithmic at large magnitudes, leading to observations that often follow Student's probability distribution which has a Gaussian shape for small fluctuations from the mean and a power law shape for large fluctuations from the mean. An inverse scaling often arises in which measures naturally grade from logarithmic to linear as one moves from small to large magnitudes, leading to observations that often follow a gamma probability distribution. A gamma distribution has a power law shape for small magnitudes and an exponential shape for large magnitudes. The two measurement scales are natural inverses connected by the Laplace integral transform. This inversion connects the two major scaling patterns commonly found in nature. We also show that superstatistics is a special case of an integral transform, and thus can be understood as a particular way in which to change the scale of measurement. Incorporating information about measurement scale into maximum entropy provides a general approach to the relations between measurement, information and probability.
q-bio/0608028
Arni S.R. Srinivasa Rao
Arni S.R. Srinivasa Rao
Incubation periods under various anti-retroviral therapies in homogeneous mixing and age-structured dynamical models: A theoretical approach
53 pages
Rocky Mountain Journal of Mathematics, (2015), 45, 3: 973-1031
10.1216/RMJ-2015-45-3-973
null
q-bio.QM math.PR q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the launch of second line anti-retroviral therapy for HIV infected individuals, there has been an increased expectation of the survival period of people with HIV. We consider previously well-known models in HIV epidemiology where the parameter for incubation period is used as one of the important components to explain the dynamics of the variables. Such models are extended here to explain the dynamics with respect to a given therapy that prolongs the life of an HIV infected individual. A deconvolution method is demonstrated for estimation of parameters in situations when no therapy or multiple therapies are given to the infected population. The models and deconvolution method are extended in order to study the impact of therapy in age-structured populations. A generalization for a situation when n types of therapies are available is given. Models are demonstrated using hypothetical data, and sensitivities of the parameters are also computed.
[ { "created": "Tue, 15 Aug 2006 14:04:22 GMT", "version": "v1" }, { "created": "Sun, 9 Mar 2008 06:37:58 GMT", "version": "v2" }, { "created": "Thu, 2 May 2013 22:25:31 GMT", "version": "v3" } ]
2021-06-15
[ [ "Rao", "Arni S. R. Srinivasa", "" ] ]
With the launch of second-line anti-retroviral therapy for HIV-infected individuals, the life expectancy of people with HIV has increased. We consider previously well-known models in HIV epidemiology in which the incubation-period parameter is one of the important components explaining the dynamics of the variables. Such models are extended here to describe the dynamics under a given therapy that prolongs the life of an HIV-infected individual. A deconvolution method is demonstrated for estimating parameters in situations where no therapy or multiple therapies are given to the infected population. The models and the deconvolution method are extended in order to study the impact of therapy in age-structured populations. A generalization to the situation when n types of therapies are available is given. The models are demonstrated using hypothetical data, and the sensitivity of the parameters is also computed.
1206.4476
Simon Powers
Simon T. Powers, Daniel J. Taylor, Joanna J. Bryson
Punishment can promote defection in group-structured populations
Text updated to match accepted version. 21 pages, 3 figures
Journal of Theoretical Biology (2012), vol. 311, pp. 107-116
10.1016/j.jtbi.2012.07.010
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pro-social punishment, whereby cooperators punish defectors, is often suggested as a mechanism that maintains cooperation in large human groups. Importantly, models that support this idea have to date only allowed defectors to be the target of punishment. However, recent empirical work has demonstrated the existence of anti-social punishment in public goods games. That is, individuals that defect have been found to also punish cooperators. Some recent theoretical studies have found that such anti-social punishment can prevent the evolution of pro-social punishment and cooperation. However, the evolution of anti-social punishment in group-structured populations has not been formally addressed. Previous work has informally argued that group structure must favour pro-social punishment. Here we formally investigate how two demographic factors, group size and dispersal frequency, affect selection pressures on pro- and anti-social punishment. Contrary to the suggestions of previous work, we find that anti-social punishment can prevent the evolution of pro-social punishment and cooperation under a range of group structures. Given that anti-social punishment has now been found in all studied extant human cultures, the claims of previous models showing the co-evolution of pro-social punishment and cooperation in group-structured populations should be re-evaluated.
[ { "created": "Wed, 20 Jun 2012 12:35:49 GMT", "version": "v1" }, { "created": "Wed, 22 Aug 2012 13:09:50 GMT", "version": "v2" } ]
2015-03-20
[ [ "Powers", "Simon T.", "" ], [ "Taylor", "Daniel J.", "" ], [ "Bryson", "Joanna J.", "" ] ]
Pro-social punishment, whereby cooperators punish defectors, is often suggested as a mechanism that maintains cooperation in large human groups. Importantly, models that support this idea have to date only allowed defectors to be the target of punishment. However, recent empirical work has demonstrated the existence of anti-social punishment in public goods games. That is, individuals that defect have been found to also punish cooperators. Some recent theoretical studies have found that such anti-social punishment can prevent the evolution of pro-social punishment and cooperation. However, the evolution of anti-social punishment in group-structured populations has not been formally addressed. Previous work has informally argued that group structure must favour pro-social punishment. Here we formally investigate how two demographic factors, group size and dispersal frequency, affect selection pressures on pro- and anti-social punishment. Contrary to the suggestions of previous work, we find that anti-social punishment can prevent the evolution of pro-social punishment and cooperation under a range of group structures. Given that anti-social punishment has now been found in all studied extant human cultures, the claims of previous models showing the co-evolution of pro-social punishment and cooperation in group-structured populations should be re-evaluated.
1407.3998
Farhad Taher Ghahramani
Arash Tirandaz, Farhad Taher Ghahramani, Afshin Shafiee
Emergence of molecular chirality due to chiral interactions in a biological environment
J.Biol.Phys.2014
J. Bio. Phys. 40, 369 (2014)
10.1007/s10867-014-9356-x
null
q-bio.BM quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We explore the interplay between the tunneling process and chiral interactions in the discrimination of chiral states for an ensemble of molecules in a biological environment. Each molecule is described by an asymmetric double-well potential and the environment is modeled as a bath of harmonic oscillators. We carefully analyze the different time-scales appearing in the resulting master equation at both the weak- and strong-coupling limits. The corresponding results are accompanied by a set of coupled differential equations characterizing the optical activity of the molecules. We show that, at the weak-coupling limit, chiral interactions prohibit the coherent racemization induced by decoherence effects and thus preserve the initial chiral state. At the strong-coupling limit, considering the memory effects of the environment, Markovian behavior is observed at long times.
[ { "created": "Sat, 12 Jul 2014 16:26:36 GMT", "version": "v1" } ]
2017-04-28
[ [ "Tirandaz", "Arash", "" ], [ "Ghahramani", "Farhad Taher", "" ], [ "Shafiee", "Afshin", "" ] ]
We explore the interplay between the tunneling process and chiral interactions in the discrimination of chiral states for an ensemble of molecules in a biological environment. Each molecule is described by an asymmetric double-well potential and the environment is modeled as a bath of harmonic oscillators. We carefully analyze the different time-scales appearing in the resulting master equation at both the weak- and strong-coupling limits. The corresponding results are accompanied by a set of coupled differential equations characterizing the optical activity of the molecules. We show that, at the weak-coupling limit, chiral interactions prohibit the coherent racemization induced by decoherence effects and thus preserve the initial chiral state. At the strong-coupling limit, considering the memory effects of the environment, Markovian behavior is observed at long times.
q-bio/0605021
Artur Luczak
Artur Luczak
Spatial embedding of neuronal trees modeled by diffusive growth
9 pages, 6 figures
null
null
null
q-bio.NC q-bio.QM
null
The relative importance of the intrinsic and extrinsic factors determining the variety of geometric shapes exhibited by dendritic trees remains unclear. This question was addressed by developing a model of the growth of dendritic trees based on a diffusion-limited aggregation process. The model reproduces diverse neuronal shapes (i.e., granule cells, Purkinje cells, the basal and apical dendrites of pyramidal cells, and the axonal trees of interneurons) by changing only the size of the growth area, the time span of pruning, and the spatial concentration of 'neurotrophic particles'. Moreover, the presented model shows how competition between neurons can affect the shape of the dendritic trees. The model reveals that the creation of complex (but reproducible) dendrite-like trees does not require precise guidance or an intrinsic plan of the dendrite geometry. Instead, basic environmental factors and the simple rules of diffusive growth adequately account for the spatial embedding of different types of dendrites observed in the cortex. An example demonstrating the broad applicability of the algorithm to model diverse types of tree structures is also presented. Key words: Diffusion-limited aggregation; Neuronal morphology; Dendrites; Growth model; tree; dendritic geometry.
[ { "created": "Sun, 14 May 2006 21:58:16 GMT", "version": "v1" } ]
2007-05-23
[ [ "Luczak", "Artur", "" ] ]
The relative importance of the intrinsic and extrinsic factors determining the variety of geometric shapes exhibited by dendritic trees remains unclear. This question was addressed by developing a model of the growth of dendritic trees based on a diffusion-limited aggregation process. The model reproduces diverse neuronal shapes (i.e., granule cells, Purkinje cells, the basal and apical dendrites of pyramidal cells, and the axonal trees of interneurons) by changing only the size of the growth area, the time span of pruning, and the spatial concentration of 'neurotrophic particles'. Moreover, the presented model shows how competition between neurons can affect the shape of the dendritic trees. The model reveals that the creation of complex (but reproducible) dendrite-like trees does not require precise guidance or an intrinsic plan of the dendrite geometry. Instead, basic environmental factors and the simple rules of diffusive growth adequately account for the spatial embedding of different types of dendrites observed in the cortex. An example demonstrating the broad applicability of the algorithm to model diverse types of tree structures is also presented. Key words: Diffusion-limited aggregation; Neuronal morphology; Dendrites; Growth model; tree; dendritic geometry.
2210.02684
Dehong Xu
Dehong Xu, Ruiqi Gao, Wen-Hao Zhang, Xue-Xin Wei, Ying Nian Wu
Conformal Isometry of Lie Group Representation in Recurrent Network of Grid Cells
null
null
null
null
q-bio.NC cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The activity of the grid cell population in the medial entorhinal cortex (MEC) of the mammalian brain forms a vector representation of the self-position of the animal. Recurrent neural networks have been proposed to explain the properties of the grid cells by updating the neural activity vector based on the velocity input of the animal. In doing so, the grid cell system effectively performs path integration. In this paper, we investigate the algebraic, geometric, and topological properties of grid cells using recurrent network models. Algebraically, we study the Lie group and Lie algebra of the recurrent transformation as a representation of self-motion. Geometrically, we study the conformal isometry of the Lie group representation where the local displacement of the activity vector in the neural space is proportional to the local displacement of the agent in the 2D physical space. Topologically, the compact abelian Lie group representation automatically leads to the torus topology commonly assumed and observed in neuroscience. We then focus on a simple non-linear recurrent model that underlies the continuous attractor neural networks of grid cells. Our numerical experiments show that conformal isometry leads to hexagon periodic patterns in the grid cell responses and our model is capable of accurate path integration. Code is available at \url{https://github.com/DehongXu/grid-cell-rnn}.
[ { "created": "Thu, 6 Oct 2022 05:26:49 GMT", "version": "v1" }, { "created": "Mon, 7 Nov 2022 05:55:57 GMT", "version": "v2" } ]
2022-11-08
[ [ "Xu", "Dehong", "" ], [ "Gao", "Ruiqi", "" ], [ "Zhang", "Wen-Hao", "" ], [ "Wei", "Xue-Xin", "" ], [ "Wu", "Ying Nian", "" ] ]
The activity of the grid cell population in the medial entorhinal cortex (MEC) of the mammalian brain forms a vector representation of the self-position of the animal. Recurrent neural networks have been proposed to explain the properties of the grid cells by updating the neural activity vector based on the velocity input of the animal. In doing so, the grid cell system effectively performs path integration. In this paper, we investigate the algebraic, geometric, and topological properties of grid cells using recurrent network models. Algebraically, we study the Lie group and Lie algebra of the recurrent transformation as a representation of self-motion. Geometrically, we study the conformal isometry of the Lie group representation where the local displacement of the activity vector in the neural space is proportional to the local displacement of the agent in the 2D physical space. Topologically, the compact abelian Lie group representation automatically leads to the torus topology commonly assumed and observed in neuroscience. We then focus on a simple non-linear recurrent model that underlies the continuous attractor neural networks of grid cells. Our numerical experiments show that conformal isometry leads to hexagon periodic patterns in the grid cell responses and our model is capable of accurate path integration. Code is available at \url{https://github.com/DehongXu/grid-cell-rnn}.
2004.08897
Abd AlRahman AlMomani
Abd AlRahman AlMomani and Erik Bollt
Informative Ranking of Stand Out Collections of Symptoms: A New Data-Driven Approach to Identify the Strong Warning Signs of COVID 19
15 pages, 10 Figures
null
null
null
q-bio.PE stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop here a data-driven approach for disease recognition based on given symptoms, to serve as an efficient tool for anomaly detection. In a clinical setting, when presented with a patient with a combination of traits, a doctor may wonder whether a certain combination of symptoms is especially predictive, as in the question, "Are fevers more informative in women than in men?" The answer to this question is yes. We develop here a methodology to enumerate such questions and to learn which are the stronger warning signs when attempting to diagnose a disease, called Conditional Predictive Informativity (CPI), whose ranking we call CPIR. This simple-to-use process allows us to identify particularly informative combinations of symptoms and traits that may help medical-field analysis in general, and possibly become a new data-driven approach advising individual medical diagnosis, as well as broader public-policy discussion. In particular, we have been motivated to develop this tool by the current pressing world crisis due to the COVID-19 pandemic. We apply the methods here to data collected from national, provincial, and municipal health reports, as well as additional information from online sources, curated into a publicly available GitHub repository.
[ { "created": "Sun, 19 Apr 2020 16:22:17 GMT", "version": "v1" }, { "created": "Thu, 30 Apr 2020 17:11:21 GMT", "version": "v2" } ]
2020-05-01
[ [ "AlMomani", "Abd AlRahman", "" ], [ "Bollt", "Erik", "" ] ]
We develop here a data-driven approach for disease recognition based on given symptoms, to serve as an efficient tool for anomaly detection. In a clinical setting, when presented with a patient with a combination of traits, a doctor may wonder whether a certain combination of symptoms is especially predictive, as in the question, "Are fevers more informative in women than in men?" The answer to this question is yes. We develop here a methodology to enumerate such questions and to learn which are the stronger warning signs when attempting to diagnose a disease, called Conditional Predictive Informativity (CPI), whose ranking we call CPIR. This simple-to-use process allows us to identify particularly informative combinations of symptoms and traits that may help medical-field analysis in general, and possibly become a new data-driven approach advising individual medical diagnosis, as well as broader public-policy discussion. In particular, we have been motivated to develop this tool by the current pressing world crisis due to the COVID-19 pandemic. We apply the methods here to data collected from national, provincial, and municipal health reports, as well as additional information from online sources, curated into a publicly available GitHub repository.
2208.04196
Kaizhang Wang
Xiaoxian Tang and Kaizhang Wang
Hopf Bifurcations of Reaction Networks with Zero-One Stoichiometric Coefficients
27 pages, 5 figures
null
null
null
q-bio.MN math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For reaction networks with zero-one stoichiometric coefficients (or simply zero-one networks), we prove that if a network admits a Hopf bifurcation, then the rank of the stoichiometric matrix is at least four. As a corollary, we show that if a zero-one network admits a Hopf bifurcation, then it contains at least four species and five reactions. As applications, we show that there exist rank-four subnetworks, which have the capacity for Hopf bifurcations/oscillations, in two biologically significant networks: the MAPK cascades and the ERK network. We provide a computational tool for computing all four-species, five-reaction, zero-one networks that have the capacity for Hopf bifurcations.
[ { "created": "Mon, 8 Aug 2022 14:53:17 GMT", "version": "v1" } ]
2022-08-09
[ [ "Tang", "Xiaoxian", "" ], [ "Wang", "Kaizhang", "" ] ]
For reaction networks with zero-one stoichiometric coefficients (or simply zero-one networks), we prove that if a network admits a Hopf bifurcation, then the rank of the stoichiometric matrix is at least four. As a corollary, we show that if a zero-one network admits a Hopf bifurcation, then it contains at least four species and five reactions. As applications, we show that there exist rank-four subnetworks, which have the capacity for Hopf bifurcations/oscillations, in two biologically significant networks: the MAPK cascades and the ERK network. We provide a computational tool for computing all four-species, five-reaction, zero-one networks that have the capacity for Hopf bifurcations.
q-bio/0609010
Jonn Miritzis
PGL Leach and J Miritzis
Analytic Behaviour of Competition among Three Species
14 pages, to appear in Journal of Nonlinear Mathematical Physics
null
10.2991/jnmp.2006.13.4.8
null
q-bio.PE
null
We analyse the classical model of competition between three species studied by May and Leonard ({\it SIAM J Appl Math} \textbf{29} (1975) 243-256) with the approaches of singularity analysis and symmetry analysis to identify values of the parameters for which the system is integrable. We observe some striking relations between critical values arising from the approach of dynamical systems and the singularity and symmetry analyses.
[ { "created": "Thu, 7 Sep 2006 16:10:28 GMT", "version": "v1" } ]
2015-06-26
[ [ "Leach", "PGL", "" ], [ "Miritzis", "J", "" ] ]
We analyse the classical model of competition between three species studied by May and Leonard ({\it SIAM J Appl Math} \textbf{29} (1975) 243-256) with the approaches of singularity analysis and symmetry analysis to identify values of the parameters for which the system is integrable. We observe some striking relations between critical values arising from the approach of dynamical systems and the singularity and symmetry analyses.
1005.5305
Luke Heaton Mr
Luke Heaton, Eduardo Lopez, Philip K. Maini, Mark D. Fricker and Nick S. Jones
Growth-induced mass flows in fungal networks
To be published in PRSB. 20 pages, plus 8 pages of supplementary information, and 3 page bibliography
null
null
null
q-bio.TO physics.bio-ph physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cord-forming fungi form extensive networks that continuously adapt to maintain an efficient transport system. As osmotically driven water uptake is often distal from the tips, and aqueous fluids are incompressible, we propose that growth induces mass flows across the mycelium, whether or not there are intrahyphal concentration gradients. We imaged the temporal evolution of networks formed by Phanerochaete velutina, and at each stage calculated the unique set of currents that account for the observed changes in cord volume, while minimising the work required to overcome viscous drag. Predicted speeds were in reasonable agreement with experimental data, and the pressure gradients needed to produce these flows are small. Furthermore, cords that were predicted to carry fast-moving or large currents were significantly more likely to increase in size than cords with slow-moving or small currents. The incompressibility of the fluids within fungi means there is a rapid global response to local fluid movements. Hence velocity of fluid flow is a local signal that conveys quasi-global information about the role of a cord within the mycelium. We suggest that fluid incompressibility and the coupling of growth and mass flow are critical physical features that enable the development of efficient, adaptive, biological transport networks.
[ { "created": "Fri, 28 May 2010 14:48:34 GMT", "version": "v1" } ]
2010-05-31
[ [ "Heaton", "Luke", "" ], [ "Lopez", "Eduardo", "" ], [ "Maini", "Philip K.", "" ], [ "Fricker", "Mark D.", "" ], [ "Jones", "Nick S.", "" ] ]
Cord-forming fungi form extensive networks that continuously adapt to maintain an efficient transport system. As osmotically driven water uptake is often distal from the tips, and aqueous fluids are incompressible, we propose that growth induces mass flows across the mycelium, whether or not there are intrahyphal concentration gradients. We imaged the temporal evolution of networks formed by Phanerochaete velutina, and at each stage calculated the unique set of currents that account for the observed changes in cord volume, while minimising the work required to overcome viscous drag. Predicted speeds were in reasonable agreement with experimental data, and the pressure gradients needed to produce these flows are small. Furthermore, cords that were predicted to carry fast-moving or large currents were significantly more likely to increase in size than cords with slow-moving or small currents. The incompressibility of the fluids within fungi means there is a rapid global response to local fluid movements. Hence velocity of fluid flow is a local signal that conveys quasi-global information about the role of a cord within the mycelium. We suggest that fluid incompressibility and the coupling of growth and mass flow are critical physical features that enable the development of efficient, adaptive, biological transport networks.
1403.6839
Brian Williams Dr
Brian Williams, Eleanor Gouws, David Ginsburg
Ending AIDS in Gabon: How long will it take? How much will it cost?
arXiv admin note: substantial text overlap with arXiv:1311.1815, arXiv:1401.6430
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The prevalence of HIV in West Africa is lower than elsewhere in Africa but Gabon has one of the highest rates of HIV in that region. Gabon has a small population and a high per capita gross domestic product making it an ideal place to carry out a programme of early treatment for HIV. The effectiveness, availability and affordability of triple combination therapy make it possible to contemplate ending AIDS deaths and HIV transmission in the short term and HIV prevalence in the long term. Here we consider what would have happened in Gabon without the development of potent anti-retroviral therapy (ART), the impact that the current roll-out of ART has had on HIV, and what might be possible if early treatment with ART becomes available to all. We fit a dynamic transmission model to trends in the adult prevalence of HIV and infer trends in incidence, mortality and the impact of ART. The availability of ART has reduced the prevalence of HIV among adults not on ART from 4.2% to 2.9%, annual incidence from 0.43% to 0.27%, and the proportion of adults dying from AIDS illnesses each year from 0.36% to 0.13% saving the lives of 2.3 thousand people in 2013 alone. The provision of ART has been highly cost effective saving the country at least $18 million up to 2013.
[ { "created": "Wed, 26 Mar 2014 20:02:43 GMT", "version": "v1" } ]
2014-03-28
[ [ "Williams", "Brian", "" ], [ "Gouws", "Eleanor", "" ], [ "Ginsburg", "David", "" ] ]
The prevalence of HIV in West Africa is lower than elsewhere in Africa but Gabon has one of the highest rates of HIV in that region. Gabon has a small population and a high per capita gross domestic product making it an ideal place to carry out a programme of early treatment for HIV. The effectiveness, availability and affordability of triple combination therapy make it possible to contemplate ending AIDS deaths and HIV transmission in the short term and HIV prevalence in the long term. Here we consider what would have happened in Gabon without the development of potent anti-retroviral therapy (ART), the impact that the current roll-out of ART has had on HIV, and what might be possible if early treatment with ART becomes available to all. We fit a dynamic transmission model to trends in the adult prevalence of HIV and infer trends in incidence, mortality and the impact of ART. The availability of ART has reduced the prevalence of HIV among adults not on ART from 4.2% to 2.9%, annual incidence from 0.43% to 0.27%, and the proportion of adults dying from AIDS illnesses each year from 0.36% to 0.13% saving the lives of 2.3 thousand people in 2013 alone. The provision of ART has been highly cost effective saving the country at least $18 million up to 2013.
2012.12532
Charleston Chiang
Charleston W.K. Chiang
The opportunities and challenges of integrating population histories into genetic studies of diverse populations: a motivating example from Native Hawaiians
6 pages, 2 figures
Front. Genet., 27 September 2021
10.3389/fgene.2021.643883
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-nd/4.0/
There is an urgent and well-recognized need to extend genetic studies to diverse populations, but several obstacles continue to be prohibitive, including (but not limited to) the difficulty of recruiting individuals from diverse populations in large numbers and the lack of representation in available genomic references. These obstacles notwithstanding, studying multiple diverse populations would provide informative, population-specific insights. Using Native Hawaiians as an example of an understudied population with a unique evolutionary history, I will argue that by developing key genomic resources and integrating evolutionary thinking into genetic epidemiology, we will have the opportunity to efficiently advance our knowledge of the genetic risk factors, ameliorate health disparity, and improve healthcare in this underserved population.
[ { "created": "Wed, 23 Dec 2020 08:04:17 GMT", "version": "v1" } ]
2021-09-28
[ [ "Chiang", "Charleston W. K.", "" ] ]
There is an urgent and well-recognized need to extend genetic studies to diverse populations, but several obstacles continue to be prohibitive, including (but not limited to) the difficulty of recruiting individuals from diverse populations in large numbers and the lack of representation in available genomic references. These obstacles notwithstanding, studying multiple diverse populations would provide informative, population-specific insights. Using Native Hawaiians as an example of an understudied population with a unique evolutionary history, I will argue that by developing key genomic resources and integrating evolutionary thinking into genetic epidemiology, we will have the opportunity to efficiently advance our knowledge of the genetic risk factors, ameliorate health disparity, and improve healthcare in this underserved population.
1403.7484
Bj{\o}rn Panyella Pedersen Ph.D.
Bj{\o}rn Panyella Pedersen, Pontus Gourdon, Xiangyu Liu, Jesper Lykkegaard Karlsen, Poul Nissen (Centre for Membrane Pumps in Cells and Disease, Dept. of Molecular Biology, Aarhus University)
Initiating Heavy-atom Based Phasing by Multi-Dimensional Molecular Replacement
19 pages total, main paper: 6 pages (2 figures), supplementary material: 13 pages (2 figures, 9 tables)
Acta Cryst D (2016)
10.1107/S2059798315022482
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To obtain an electron-density map from a macromolecular crystal, the phase problem needs to be solved, which often involves the use of heavy-atom derivative crystals and, concomitantly, the determination of the heavy-atom substructure. This is customarily done by direct methods or Patterson-based approaches, which, however, may fail when only poorly diffracting derivative crystals are available, as is often the case for, e.g., membrane proteins. Here we present an approach for heavy-atom site identification based on a Molecular Replacement Parameter Matrix (MRPM) search. It involves an n-dimensional search to test a wide spectrum of molecular-replacement parameters, such as clusters of different conformations. The result is scored by the ability to identify heavy-atom positions, from anomalous difference Fourier maps, that allow meaningful phases to be determined. The strategy was successfully applied in the determination of a membrane protein structure, the CopA Cu+-ATPase, when other methods had failed to resolve the heavy-atom substructure. MRPM is particularly suited for proteins undergoing large conformational changes, where multiple search models should be generated, and it enables the identification of weak but correct molecular-replacement solutions with maximum contrast to prime experimental phasing efforts.
[ { "created": "Fri, 28 Mar 2014 18:56:35 GMT", "version": "v1" } ]
2018-08-20
[ [ "Pedersen", "Bjørn Panyella", "", "Centre for Membrane Pumps in Cells and\n Disease, Dept. of Molecular Biology, Aarhus University" ], [ "Gourdon", "Pontus", "", "Centre for Membrane Pumps in Cells and\n Disease, Dept. of Molecular Biology, Aarhus University" ], [ "Liu", "Xiangyu", "", "Centre for Membrane Pumps in Cells and\n Disease, Dept. of Molecular Biology, Aarhus University" ], [ "Karlsen", "Jesper Lykkegaard", "", "Centre for Membrane Pumps in Cells and\n Disease, Dept. of Molecular Biology, Aarhus University" ], [ "Nissen", "Poul", "", "Centre for Membrane Pumps in Cells and\n Disease, Dept. of Molecular Biology, Aarhus University" ] ]
To obtain an electron-density map from a macromolecular crystal, the phase problem needs to be solved, which often involves the use of heavy-atom derivative crystals and, concomitantly, the determination of the heavy-atom substructure. This is customarily done by direct methods or Patterson-based approaches, which, however, may fail when only poorly diffracting derivative crystals are available, as is often the case for, e.g., membrane proteins. Here we present an approach for heavy-atom site identification based on a Molecular Replacement Parameter Matrix (MRPM) search. It involves an n-dimensional search to test a wide spectrum of molecular-replacement parameters, such as clusters of different conformations. The result is scored by the ability to identify heavy-atom positions, from anomalous difference Fourier maps, that allow meaningful phases to be determined. The strategy was successfully applied in the determination of a membrane protein structure, the CopA Cu+-ATPase, when other methods had failed to resolve the heavy-atom substructure. MRPM is particularly suited for proteins undergoing large conformational changes, where multiple search models should be generated, and it enables the identification of weak but correct molecular-replacement solutions with maximum contrast to prime experimental phasing efforts.
2408.01817
Jing Yan
Jing Yan, Yunxuan Feng, Wei Dai and Yaoyu Zhang
State-dependent Filtering of the Ring Model
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Robustness is a measure of the functional reliability of a system against perturbations. To achieve good and robust performance, a system must filter out external perturbations using its internal priors. These priors are usually distilled in the structure and the states of the system. Biophysical neural networks are known to be robust, but the exact mechanisms are still elusive. In this paper, we probe how orientation-selective neurons organized on a 1-D ring network respond to perturbations, in the hope of gaining some insights into the robustness of the visual system in the brain. We analyze the steady state of the rate-based network and prove that the activation state of the neurons, rather than their firing rates, determines how the model responds to perturbations. We then identify specific perturbation patterns that induce the largest responses for different configurations of activation states, and find them to be sinusoidal or sinusoidal-like, while other patterns are largely attenuated. Similar results are observed in a spiking ring model. Finally, we remap the perturbations in orientation back into the 2-D image space using Gabor functions. The resulting optimal perturbation patterns mirror adversarial attacks in deep learning that exploit the priors of the system. Our results suggest that, depending on the state configuration, these priors could underlie some illusory experiences as the cost of visual robustness.
[ { "created": "Sat, 3 Aug 2024 16:22:19 GMT", "version": "v1" } ]
2024-08-06
[ [ "Yan", "Jing", "" ], [ "Feng", "Yunxuan", "" ], [ "Dai", "Wei", "" ], [ "Zhang", "Yaoyu", "" ] ]
Robustness is a measure of the functional reliability of a system against perturbations. To achieve good and robust performance, a system must filter out external perturbations using its internal priors. These priors are usually distilled in the structure and the states of the system. Biophysical neural networks are known to be robust, but the exact mechanisms are still elusive. In this paper, we probe how orientation-selective neurons organized on a 1-D ring network respond to perturbations, in the hope of gaining some insight into the robustness of the visual system in the brain. We analyze the steady state of the rate-based network and prove that the activation state of neurons, rather than their firing rates, determines how the model responds to perturbations. We then identify specific perturbation patterns that induce the largest responses for different configurations of activation states, and find them to be sinusoidal or sinusoidal-like, while other patterns are largely attenuated. Similar results are observed in a spiking ring model. Finally, we remap the perturbations in orientation back into the 2-D image space using Gabor functions. The resulting optimal perturbation patterns mirror adversarial attacks in deep learning that exploit the priors of the system. Our results suggest that, depending on the state configuration, these priors could underlie some illusory experiences as the cost of visual robustness.
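The selective amplification of sinusoidal perturbations described in this abstract can be illustrated with a minimal linear rate model on a ring: with cosine-tuned connectivity, only the first Fourier mode of a perturbation aligns with the recurrent weights, so it is amplified while higher modes pass through unamplified. This is a sketch under assumed parameters (`J0`, `J1`, the uniform drive `h`, and the all-active linear regime are illustrative choices, not taken from the paper):

```python
import numpy as np

N = 100
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
J0, J1 = -0.5, 1.8  # illustrative coupling strengths (hypothetical values)
# Cosine-tuned recurrent weights between orientation-selective units on the ring
W = (J0 + J1 * np.cos(theta[:, None] - theta[None, :])) / N

h = np.ones(N)  # uniform drive keeping every unit active (linear regime)

def steady_state(inp):
    # Fixed point of dr/dt = -r + W r + inp when all units are active
    return np.linalg.solve(np.eye(N) - W, inp)

r0 = steady_state(h)
eps = 0.01
p1 = eps * np.cos(theta)      # first Fourier mode: aligned with the connectivity
p4 = eps * np.cos(4 * theta)  # higher mode: orthogonal to the connectivity

def gain(p):
    # Size of the steady-state response relative to the perturbation
    return np.linalg.norm(steady_state(h + p) - r0) / np.linalg.norm(p)

print(gain(p1), gain(p4))  # the sinusoidal mode is amplified ~10x here
```

Here the first mode has recurrent eigenvalue J1/2 = 0.9, giving a linear gain of 1/(1 - 0.9) = 10, while cos(4θ) lies in the null space of W and has gain 1.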
1112.2464
Bruno. Cessac
J. C. Vasquez, O. Marre, A. G. Palacios, M. J. Berry II, B. Cessac
Gibbs distribution analysis of temporal correlations structure in retina ganglion cells
To appear in J. Physiol. Paris
Journal of Physiology Paris, Vol. 106, Issue 3-4, pp 120-127, (2012)
10.1016/j.jphysparis.2011.11.001
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a method to estimate Gibbs distributions with \textit{spatio-temporal} constraints on spike train statistics. We apply this method to spike trains recorded from ganglion cells of the salamander retina in response to natural movies. Our analysis, restricted to a few neurons, describes the statistics of spatio-temporal spike patterns more accurately than pairwise synchronization models (Ising) or 1-time-step Markov models (\cite{marre-boustani-etal:09}), and emphasizes the role of higher-order spatio-temporal interactions.
[ { "created": "Mon, 12 Dec 2011 07:43:33 GMT", "version": "v1" } ]
2013-01-11
[ [ "Vasquez", "J. C.", "" ], [ "Marre", "O.", "" ], [ "Palacios", "A. G.", "" ], [ "Berry", "M. J.", "II" ], [ "Cessac", "B.", "" ] ]
We present a method to estimate Gibbs distributions with \textit{spatio-temporal} constraints on spike train statistics. We apply this method to spike trains recorded from ganglion cells of the salamander retina in response to natural movies. Our analysis, restricted to a few neurons, describes the statistics of spatio-temporal spike patterns more accurately than pairwise synchronization models (Ising) or 1-time-step Markov models (\cite{marre-boustani-etal:09}), and emphasizes the role of higher-order spatio-temporal interactions.
2211.02660
Divya Nori
Divya Nori, Connor W. Coley, Roc\'io Mercado
De novo PROTAC design using graph-based deep generative models
Presented at NeurIPS 2022 AI4Science Workshop
null
null
null
q-bio.QM cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
PROteolysis TArgeting Chimeras (PROTACs) are an emerging therapeutic modality for degrading a protein of interest (POI) by marking it for degradation by the proteasome. Recent developments in artificial intelligence (AI) suggest that deep generative models can assist with the de novo design of molecules with desired properties, and their application to PROTAC design remains largely unexplored. We show that a graph-based generative model can be used to propose novel PROTAC-like structures from empty graphs. Our model can be guided towards the generation of large molecules (30--140 heavy atoms) predicted to degrade a POI through policy-gradient reinforcement learning (RL). Rewards during RL are applied using a boosted tree surrogate model that predicts a molecule's degradation potential for each POI. Using this approach, we steer the generative model towards compounds with higher likelihoods of predicted degradation activity. Despite being trained on sparse public data, the generative model proposes molecules with substructures found in known degraders. After fine-tuning, predicted activity against a challenging POI increases from 50% to >80% with near-perfect chemical validity for sampled compounds, suggesting this is a promising approach for the optimization of large, PROTAC-like molecules for targeted protein degradation.
[ { "created": "Fri, 4 Nov 2022 15:34:45 GMT", "version": "v1" } ]
2022-11-08
[ [ "Nori", "Divya", "" ], [ "Coley", "Connor W.", "" ], [ "Mercado", "Rocío", "" ] ]
PROteolysis TArgeting Chimeras (PROTACs) are an emerging therapeutic modality for degrading a protein of interest (POI) by marking it for degradation by the proteasome. Recent developments in artificial intelligence (AI) suggest that deep generative models can assist with the de novo design of molecules with desired properties, and their application to PROTAC design remains largely unexplored. We show that a graph-based generative model can be used to propose novel PROTAC-like structures from empty graphs. Our model can be guided towards the generation of large molecules (30--140 heavy atoms) predicted to degrade a POI through policy-gradient reinforcement learning (RL). Rewards during RL are applied using a boosted tree surrogate model that predicts a molecule's degradation potential for each POI. Using this approach, we steer the generative model towards compounds with higher likelihoods of predicted degradation activity. Despite being trained on sparse public data, the generative model proposes molecules with substructures found in known degraders. After fine-tuning, predicted activity against a challenging POI increases from 50% to >80% with near-perfect chemical validity for sampled compounds, suggesting this is a promising approach for the optimization of large, PROTAC-like molecules for targeted protein degradation.
q-bio/0412038
Jacek Miekisz
Jacek Miekisz
Equilibrium transitions in finite populations of players
8 pages
null
null
null
q-bio.PE
null
We discuss stochastic dynamics of finite populations of individuals playing games. We review recent results concerning the dependence of the long-run behavior of such systems on the number of players and the noise level. In the case of two-player games with two symmetric Nash equilibria, when the number of players increases, the population undergoes multiple transitions between its equilibria.
[ { "created": "Mon, 20 Dec 2004 13:07:58 GMT", "version": "v1" } ]
2007-05-23
[ [ "Miekisz", "Jacek", "" ] ]
We discuss stochastic dynamics of finite populations of individuals playing games. We review recent results concerning the dependence of the long-run behavior of such systems on the number of players and the noise level. In the case of two-player games with two symmetric Nash equilibria, when the number of players increases, the population undergoes multiple transitions between its equilibria.
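The stochastic dynamics this abstract reviews can be sketched with a standard simulation of log-linear (noisy best-response) updating in a symmetric 2x2 coordination game with two strict Nash equilibria. All concrete numbers below (the payoff matrix, population size `n`, inverse noise `beta`) are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Symmetric 2x2 coordination game with two strict Nash equilibria (all-A, all-B).
payoff = np.array([[2.0, 0.0],   # row: my action (A, B); column: co-player's action
                   [0.0, 1.0]])

def simulate(n, beta, steps):
    """Log-linear (noisy best-response) dynamics; beta is the inverse noise level."""
    state = rng.integers(0, 2, size=n)  # 0 = A, 1 = B
    frac_A = np.empty(steps)
    for t in range(steps):
        i = rng.integers(n)                  # pick a random player to revise
        others = np.delete(state, i)
        freq = np.array([(others == 0).mean(), (others == 1).mean()])
        pi_A, pi_B = payoff @ freq           # expected payoffs vs a random co-player
        p_A = 1.0 / (1.0 + np.exp(-beta * (pi_A - pi_B)))  # logit choice rule
        state[i] = 0 if rng.random() < p_A else 1
        frac_A[t] = (state == 0).mean()
    return frac_A

traj = simulate(n=20, beta=4.0, steps=20000)
time_near_A = (traj > 0.8).mean()   # time spent near the all-A equilibrium
time_near_B = (traj < 0.2).mean()   # time spent near the all-B equilibrium
```

With low noise the small population spends most of its time pinned near one equilibrium; transitions between equilibria become rare events whose frequency depends on `n` and `beta`, which is the dependence the paper analyzes.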
q-bio/0607051
Jesus M. Cortes
J.M.A.M. Kusters, J.M. Cortes, W.P.M. van Meerwijk, D.L. Ypey, A.P.R. Theuvenet and C.C.A.M. Gielen
Hysteresis and bi-stability by an interplay of calcium oscillations and action potential firing
29 pages, 6 figures
null
10.1103/PhysRevLett.98.098107
null
q-bio.CB
null
Many cell types exhibit oscillatory activity, such as repetitive action potential firing due to the Hodgkin-Huxley dynamics of ion channels in the cell membrane, or intracellular inositol triphosphate (IP$_3$)-mediated calcium oscillations (CaOs) generated by calcium-induced calcium release channels (IP$_3$-receptors) in the membrane of the endoplasmic reticulum (ER). The dynamics of the excitable membrane and of the IP$_3$-mediated CaOs have each been the subject of many studies. However, the interaction between the excitable cell membrane and IP$_3$-mediated CaOs, which are coupled by cytosolic calcium affecting the dynamics of both, has not been studied. This study for the first time applied stability analysis to investigate the dynamic behavior of a model that includes both an excitable membrane and an intracellular IP$_3$-mediated calcium oscillator. Taking the IP$_3$ concentration as a control parameter, the model exhibits a novel, rich spectrum of stable and unstable states with hysteresis. The four stable states of the model correspond in detail to previously reported growth-state-dependent states of the membrane potential of normal rat kidney fibroblasts in cell culture. The hysteresis is most pronounced for experimentally observed parameter values of the model, suggesting a functional importance of hysteresis. This study shows that the four growth-dependent cell states may not reflect the behavior of cells that have differentiated into different cell types with different properties, but simply reflect four different states of a single cell type that is characterized by a single model.
[ { "created": "Mon, 31 Jul 2006 10:40:29 GMT", "version": "v1" } ]
2015-06-26
[ [ "Kusters", "J. M. A. M.", "" ], [ "Cortes", "J. M.", "" ], [ "van Meerwijk", "W. P. M.", "" ], [ "Ypey", "D. L.", "" ], [ "Theuvenet", "A. P. R.", "" ], [ "Gielen", "C. C. A. M.", "" ] ]
Many cell types exhibit oscillatory activity, such as repetitive action potential firing due to the Hodgkin-Huxley dynamics of ion channels in the cell membrane, or intracellular inositol triphosphate (IP$_3$)-mediated calcium oscillations (CaOs) generated by calcium-induced calcium release channels (IP$_3$-receptors) in the membrane of the endoplasmic reticulum (ER). The dynamics of the excitable membrane and of the IP$_3$-mediated CaOs have each been the subject of many studies. However, the interaction between the excitable cell membrane and IP$_3$-mediated CaOs, which are coupled by cytosolic calcium affecting the dynamics of both, has not been studied. This study for the first time applied stability analysis to investigate the dynamic behavior of a model that includes both an excitable membrane and an intracellular IP$_3$-mediated calcium oscillator. Taking the IP$_3$ concentration as a control parameter, the model exhibits a novel, rich spectrum of stable and unstable states with hysteresis. The four stable states of the model correspond in detail to previously reported growth-state-dependent states of the membrane potential of normal rat kidney fibroblasts in cell culture. The hysteresis is most pronounced for experimentally observed parameter values of the model, suggesting a functional importance of hysteresis. This study shows that the four growth-dependent cell states may not reflect the behavior of cells that have differentiated into different cell types with different properties, but simply reflect four different states of a single cell type that is characterized by a single model.
1002.0409
Michael B\"orsch
Stefan Ernst, Claire Batisse, Nawid Zarrabi, Bettina Boettcher, Michael Boersch
Regulatory assembly of the vacuolar proton pump VOV1-ATPase in yeast cells by FLIM-FRET
8 pages, 3 figures
null
10.1117/12.841169
null
q-bio.SC q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the reversible disassembly of the VOV1-ATPase in live yeast cells by time-resolved confocal FRET imaging. The VOV1-ATPase in the vacuolar membrane pumps protons from the cytosol into the vacuole. It is a rotary biological nanomotor driven by ATP hydrolysis. The emerging proton gradient is used for transport processes as well as for pH and Ca2+ homoeostasis in the cell. Activity of the VOV1-ATPase is regulated through assembly/disassembly processes. During starvation the two parts of the VOV1-ATPase start to disassemble; this process is reversed after the addition of glucose. The exact mechanisms are unknown. To follow the disassembly/reassembly in vivo we tagged the two subunits C and E with different fluorescent proteins. Cellular distributions of C and E were monitored using a duty-cycle-optimized alternating laser excitation scheme (DCO-ALEX) for time-resolved confocal FRET-FLIM measurements.
[ { "created": "Tue, 2 Feb 2010 07:52:12 GMT", "version": "v1" } ]
2015-05-18
[ [ "Ernst", "Stefan", "" ], [ "Batisse", "Claire", "" ], [ "Zarrabi", "Nawid", "" ], [ "Boettcher", "Bettina", "" ], [ "Boersch", "Michael", "" ] ]
We investigate the reversible disassembly of the VOV1-ATPase in live yeast cells by time-resolved confocal FRET imaging. The VOV1-ATPase in the vacuolar membrane pumps protons from the cytosol into the vacuole. It is a rotary biological nanomotor driven by ATP hydrolysis. The emerging proton gradient is used for transport processes as well as for pH and Ca2+ homoeostasis in the cell. Activity of the VOV1-ATPase is regulated through assembly/disassembly processes. During starvation the two parts of the VOV1-ATPase start to disassemble; this process is reversed after the addition of glucose. The exact mechanisms are unknown. To follow the disassembly/reassembly in vivo we tagged the two subunits C and E with different fluorescent proteins. Cellular distributions of C and E were monitored using a duty-cycle-optimized alternating laser excitation scheme (DCO-ALEX) for time-resolved confocal FRET-FLIM measurements.
1807.02763
Yun S. Song
Jeffrey P. Spence, Matthias Steinr\"ucken, Jonathan Terhorst, and Yun S. Song
Inference of Population History using Coalescent HMMs: Review and Outlook
12 pages, 2 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Studying how diverse human populations are related is of historical and anthropological interest, in addition to providing a realistic null model for testing for signatures of natural selection or disease associations. Furthermore, understanding the demographic histories of other species is playing an increasingly important role in conservation genetics. A number of statistical methods have been developed to infer population demographic histories using whole-genome sequence data, with recent advances focusing on allowing for more flexible modeling choices, scaling to larger data sets, and increasing statistical power. Here we review coalescent hidden Markov models, a powerful class of population genetic inference methods that can effectively utilize linkage disequilibrium information. We highlight recent advances, give advice for practitioners, point out potential pitfalls, and present possible future research directions.
[ { "created": "Sun, 8 Jul 2018 06:32:27 GMT", "version": "v1" } ]
2018-07-10
[ [ "Spence", "Jeffrey P.", "" ], [ "Steinrücken", "Matthias", "" ], [ "Terhorst", "Jonathan", "" ], [ "Song", "Yun S.", "" ] ]
Studying how diverse human populations are related is of historical and anthropological interest, in addition to providing a realistic null model for testing for signatures of natural selection or disease associations. Furthermore, understanding the demographic histories of other species is playing an increasingly important role in conservation genetics. A number of statistical methods have been developed to infer population demographic histories using whole-genome sequence data, with recent advances focusing on allowing for more flexible modeling choices, scaling to larger data sets, and increasing statistical power. Here we review coalescent hidden Markov models, a powerful class of population genetic inference methods that can effectively utilize linkage disequilibrium information. We highlight recent advances, give advice for practitioners, point out potential pitfalls, and present possible future research directions.
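The computational core of the coalescent HMMs reviewed in this abstract is the standard scaled forward recursion, with hidden states representing discretised coalescence times along the genome and emissions representing sequence differences. The toy example below is only a sketch of that recursion; the transition matrix, initial distribution, emission probabilities, and observation sequence are all made-up illustrative numbers, not any real method's parameters:

```python
import numpy as np

# Hidden states: three discretised TMRCA classes along the genome.
A = np.array([[0.90, 0.07, 0.03],   # transitions between TMRCA classes (illustrative)
              [0.05, 0.90, 0.05],
              [0.03, 0.07, 0.90]])
pi = np.array([0.40, 0.35, 0.25])   # assumed initial distribution
# P(site type | state): older coalescence -> more mutations -> more mismatches
emit = np.array([[0.999, 0.001],
                 [0.995, 0.005],
                 [0.980, 0.020]])

obs = np.array([0, 0, 1, 0, 0, 0, 1, 1, 0, 0])  # 1 = mismatch between two haplotypes

def forward_loglik(obs):
    """Scaled forward recursion; returns log P(obs) under the HMM."""
    alpha = pi * emit[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()            # rescale to avoid numerical underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * emit[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

ll = forward_loglik(obs)
```

Real coalescent HMMs use many more hidden states and genome-scale observation sequences, which is why the scaling and algorithmic advances discussed in the review matter.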
1110.5375
Andrew Black
Andrew J. Black and Alan J. McKane
WKB calculation of an epidemic outbreak distribution
Updated with some minor corrections
J. Stat. Mech. (2011) P12006
10.1088/1742-5468/2011/12/P12006
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We calculate both the exponential and pre-factor contributions in a WKB approximation of the master equation for a stochastic SIR model with highly oscillatory dynamics. Fixing the basic parameters of the model we investigate how the outbreak distribution changes with the population size. We show that this distribution rapidly becomes highly non-Gaussian, acquiring large tails indicating the presence of rare, but large outbreaks, as the population is made smaller. The analytic results are found to be in excellent agreement with simulations until the systems become so small that the dynamics are dominated by fade-out of the disease.
[ { "created": "Mon, 24 Oct 2011 23:22:37 GMT", "version": "v1" }, { "created": "Thu, 8 Dec 2011 02:18:39 GMT", "version": "v2" } ]
2011-12-14
[ [ "Black", "Andrew J.", "" ], [ "McKane", "Alan J.", "" ] ]
We calculate both the exponential and pre-factor contributions in a WKB approximation of the master equation for a stochastic SIR model with highly oscillatory dynamics. Fixing the basic parameters of the model we investigate how the outbreak distribution changes with the population size. We show that this distribution rapidly becomes highly non-Gaussian, acquiring large tails indicating the presence of rare, but large outbreaks, as the population is made smaller. The analytic results are found to be in excellent agreement with simulations until the systems become so small that the dynamics are dominated by fade-out of the disease.
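The non-Gaussian, fade-out-dominated outbreak distribution described in this abstract can be seen directly by simulation. The sketch below simulates the embedded jump chain of a plain closed stochastic SIR model (the paper's model additionally has demography and oscillatory dynamics; `N`, `R0`, and `gamma` here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def final_size(N, R0=2.0, gamma=1.0):
    """Final size of a closed stochastic SIR epidemic.

    The outbreak-size distribution depends only on the embedded jump chain,
    so no explicit event times are needed."""
    beta = R0 * gamma / N
    S, I = N - 1, 1
    while I > 0:
        # Probability that the next event is an infection rather than a recovery
        p_inf = beta * S * I / (beta * S * I + gamma * I)
        if rng.random() < p_inf:
            S -= 1; I += 1   # infection event
        else:
            I -= 1           # recovery event
    return N - S             # everyone ever infected, including the index case

N = 200
sizes = np.array([final_size(N) for _ in range(500)])
minor = (sizes < 0.1 * N).mean()  # early fade-outs: one mode of the bimodal distribution
```

For R0 = 2 roughly half of the runs fade out after a handful of cases while the rest produce large outbreaks, so the distribution is strongly bimodal rather than Gaussian, and the fade-out mode grows as the population shrinks.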
q-bio/0403007
Mark Ya. Azbel'
Mark Ya. Azbel'
Dynamics of mortality in protected populations
Invited talk at Conference on old age
null
null
null
q-bio.QM q-bio.PE
null
Demographic data and recent experiments verify earlier predictions that mortality has a short memory (a few percent of the life span) of the previous life history, may be significantly decreased, reset to its value at a much younger age, and (until a certain age) eliminated. Such mortality dynamics are demonstrated to be characteristic only of evolutionarily unprecedented protected populations. When conditions improve, their mortality decreases stepwise. At crossovers the rate of decrease changes rapidly. The crossovers manifest the edges of the stairs in the universal ladder of rapid mortality adjustment to changing conditions. Mortality is dominated by the established universal law, which reduces it to a few biologically explicit parameters and which is verified with human and fly mortality data. Specific experiments to test the universality of the law for other animals, and to unravel the mechanism of stepwise life extension, are suggested.
[ { "created": "Thu, 4 Mar 2004 13:09:59 GMT", "version": "v1" } ]
2007-05-23
[ [ "Azbel'", "Mark Ya.", "" ] ]
Demographic data and recent experiments verify earlier predictions that mortality has a short memory (a few percent of the life span) of the previous life history, may be significantly decreased, reset to its value at a much younger age, and (until a certain age) eliminated. Such mortality dynamics are demonstrated to be characteristic only of evolutionarily unprecedented protected populations. When conditions improve, their mortality decreases stepwise. At crossovers the rate of decrease changes rapidly. The crossovers manifest the edges of the stairs in the universal ladder of rapid mortality adjustment to changing conditions. Mortality is dominated by the established universal law, which reduces it to a few biologically explicit parameters and which is verified with human and fly mortality data. Specific experiments to test the universality of the law for other animals, and to unravel the mechanism of stepwise life extension, are suggested.
1708.08426
Th\'eo Michelot
Th\'eo Michelot, Paul G. Blackwell, Jason Matthiopoulos
Linking resource selection and step selection models for habitat preferences in animals
null
null
null
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The two dominant approaches for the analysis of species-habitat associations in animals have been shown to reach divergent conclusions. Models fitted from the viewpoint of an individual (step selection functions), once scaled up, do not agree with models fitted from a population viewpoint (resource selection functions). We explain this fundamental incompatibility, and propose a solution by introducing to the animal movement field a novel use for the well-known family of Markov chain Monte Carlo (MCMC) algorithms. By design, the step selection rules of MCMC lead to a steady-state distribution that coincides with a given underlying function: the target distribution. We therefore propose an analogy between the movements of an animal and the movements of an MCMC sampler, to guarantee convergence of the step selection rules to the parameters underlying the population's utilisation distribution. We introduce a rejection-free MCMC algorithm, the local Gibbs sampler, that better resembles real animal movement, and discuss the wide range of biological assumptions that it can accommodate. We illustrate our method with simulations on a known utilisation distribution, and show theoretically and empirically that locations simulated from the local Gibbs sampler give rise to the correct resource selection function. Using simulated data, we demonstrate how this framework can be used to estimate resource selection and movement parameters.
[ { "created": "Mon, 28 Aug 2017 17:16:28 GMT", "version": "v1" }, { "created": "Tue, 26 Jun 2018 13:07:26 GMT", "version": "v2" } ]
2018-06-27
[ [ "Michelot", "Théo", "" ], [ "Blackwell", "Paul G.", "" ], [ "Matthiopoulos", "Jason", "" ] ]
The two dominant approaches for the analysis of species-habitat associations in animals have been shown to reach divergent conclusions. Models fitted from the viewpoint of an individual (step selection functions), once scaled up, do not agree with models fitted from a population viewpoint (resource selection functions). We explain this fundamental incompatibility, and propose a solution by introducing to the animal movement field a novel use for the well-known family of Markov chain Monte Carlo (MCMC) algorithms. By design, the step selection rules of MCMC lead to a steady-state distribution that coincides with a given underlying function: the target distribution. We therefore propose an analogy between the movements of an animal and the movements of an MCMC sampler, to guarantee convergence of the step selection rules to the parameters underlying the population's utilisation distribution. We introduce a rejection-free MCMC algorithm, the local Gibbs sampler, that better resembles real animal movement, and discuss the wide range of biological assumptions that it can accommodate. We illustrate our method with simulations on a known utilisation distribution, and show theoretically and empirically that locations simulated from the local Gibbs sampler give rise to the correct resource selection function. Using simulated data, we demonstrate how this framework can be used to estimate resource selection and movement parameters.
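The movement-as-MCMC analogy in this abstract can be sketched with the simplest member of the MCMC family: a random-walk Metropolis sampler whose local "step selection rule" accepts candidate steps in proportion to habitat quality, so the track's long-run distribution matches the target utilisation distribution. Note this is plain Metropolis, not the rejection-free local Gibbs sampler the paper introduces, and the Gaussian utilisation surface is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(2)

def target(x, y):
    """Hypothetical utilisation distribution: unnormalised 2-D Gaussian
    'resource' surface with x-sd 1 and y-sd 2."""
    return np.exp(-0.5 * (x ** 2 + (y / 2.0) ** 2))

def metropolis_track(n_steps, step_sd=1.0):
    """Animal track simulated as a random-walk Metropolis chain."""
    pos = np.zeros(2)
    track = np.empty((n_steps, 2))
    for t in range(n_steps):
        prop = pos + rng.normal(scale=step_sd, size=2)      # candidate local step
        if rng.random() < target(*prop) / target(*pos):     # step selection rule
            pos = prop                                      # step taken
        track[t] = pos                                      # else stay put
    return track

track = metropolis_track(50000)
```

Because Metropolis leaves the target invariant, the empirical spread of the simulated locations recovers the utilisation distribution, which is the property the paper exploits to reconcile step selection with resource selection.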
1610.01114
Peter St. John PhD
Peter C. St. John, Michael F. Crowley, and Yannick J. Bomble
Efficient estimation of the maximum metabolic productivity of batch systems
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Production of chemicals from engineered organisms in a batch culture involves an inherent trade-off between productivity, yield, and titer. Existing strategies for strain design typically focus on designing mutations that achieve the highest yield possible while maintaining growth viability. While these methods are computationally tractable, an optimum productivity could be achieved by a dynamic strategy in which the intracellular division of resources is permitted to change with time. New methods for the design and implementation of dynamic microbial processes, both computational and experimental, have therefore been explored to maximize productivity. However, solving for the optimal metabolic behavior under the assumption that all fluxes in the cell are free to vary is a challenging numerical task. This work presents an efficient method for the calculation of a maximum theoretical productivity of a batch culture system using a dynamic optimization framework. This metric is analogous to the maximum theoretical yield, a measure that is well established in the metabolic engineering literature and whose use helps guide strain and pathway selection. The proposed method follows traditional assumptions of dynamic flux balance analysis: (1) that internal metabolite fluxes are governed by a pseudo-steady state, and (2) that external metabolite fluxes are dynamically bounded. The optimization is achieved via collocation on finite elements, and accounts explicitly for an arbitrary number of flux changes. The method can be further extended to explicitly solve for the trade-off curve between maximum productivity and yield. We demonstrate the method on succinate production in two common microbial hosts, Escherichia coli and Actinobacillus succinogenes, revealing that nearly optimal yields and productivities can be achieved with only two discrete flux stages.
[ { "created": "Tue, 4 Oct 2016 18:10:25 GMT", "version": "v1" } ]
2016-10-05
[ [ "John", "Peter C. St.", "" ], [ "Crowley", "Michael F.", "" ], [ "Bomble", "Yannick J.", "" ] ]
Production of chemicals from engineered organisms in a batch culture involves an inherent trade-off between productivity, yield, and titer. Existing strategies for strain design typically focus on designing mutations that achieve the highest yield possible while maintaining growth viability. While these methods are computationally tractable, an optimum productivity could be achieved by a dynamic strategy in which the intracellular division of resources is permitted to change with time. New methods for the design and implementation of dynamic microbial processes, both computational and experimental, have therefore been explored to maximize productivity. However, solving for the optimal metabolic behavior under the assumption that all fluxes in the cell are free to vary is a challenging numerical task. This work presents an efficient method for the calculation of a maximum theoretical productivity of a batch culture system using a dynamic optimization framework. This metric is analogous to the maximum theoretical yield, a measure that is well established in the metabolic engineering literature and whose use helps guide strain and pathway selection. The proposed method follows traditional assumptions of dynamic flux balance analysis: (1) that internal metabolite fluxes are governed by a pseudo-steady state, and (2) that external metabolite fluxes are dynamically bounded. The optimization is achieved via collocation on finite elements, and accounts explicitly for an arbitrary number of flux changes. The method can be further extended to explicitly solve for the trade-off curve between maximum productivity and yield. We demonstrate the method on succinate production in two common microbial hosts, Escherichia coli and Actinobacillus succinogenes, revealing that nearly optimal yields and productivities can be achieved with only two discrete flux stages.
q-bio/0607014
Georgy Karev
Georgy P. Karev, Artem S. Novozhilov, Eugene V. Koonin
Mathematical modeling of tumor therapy with oncolytic viruses: Effects of parametric heterogeneity on cell dynamics
45 pages, 6 figures; submitted to Biology Direct
null
null
null
q-bio.TO q-bio.QM
null
One of the mechanisms that ensure cancer robustness is tumor heterogeneity, and its effects on tumor cell dynamics have to be taken into account when studying cancer progression. There is no unifying theoretical framework in mathematical modeling of carcinogenesis that would account for parametric heterogeneity. Here we formulate a modeling approach that naturally takes stock of inherent cancer cell heterogeneity and illustrate it with a model of the interaction between a tumor and an oncolytic virus. We show that several phenomena that are absent in homogeneous models, such as cancer recurrence, tumor dormancy, and others, appear in the heterogeneous setting. We also demonstrate that, within the applied modeling framework, to overcome the adverse effect of tumor cell heterogeneity on cancer progression, a heterogeneous population of an oncolytic virus must be used. Heterogeneity in parameters of the model, such as tumor cell susceptibility to virus infection and virus replication rate, can lead to complex, time-dependent behaviors of the tumor. Thus, irregular, quasi-chaotic behavior of the tumor-virus system can be caused not only by random perturbations but also by the heterogeneity of the tumor and the virus. The modeling approach described here reveals the importance of tumor cell and virus heterogeneity for the outcome of cancer therapy. It should be straightforward to apply these techniques to mathematical modeling of other types of anticancer therapy.
[ { "created": "Mon, 10 Jul 2006 17:31:13 GMT", "version": "v1" } ]
2007-05-23
[ [ "Karev", "Georgy P.", "" ], [ "Novozhilov", "Artem S.", "" ], [ "Koonin", "Eugene V.", "" ] ]
One of the mechanisms that ensure cancer robustness is tumor heterogeneity, and its effects on tumor cell dynamics have to be taken into account when studying cancer progression. There is no unifying theoretical framework in mathematical modeling of carcinogenesis that would account for parametric heterogeneity. Here we formulate a modeling approach that naturally takes stock of inherent cancer cell heterogeneity and illustrate it with a model of the interaction between a tumor and an oncolytic virus. We show that several phenomena that are absent in homogeneous models, such as cancer recurrence, tumor dormancy, and others, appear in the heterogeneous setting. We also demonstrate that, within the applied modeling framework, to overcome the adverse effect of tumor cell heterogeneity on cancer progression, a heterogeneous population of an oncolytic virus must be used. Heterogeneity in parameters of the model, such as tumor cell susceptibility to virus infection and virus replication rate, can lead to complex, time-dependent behaviors of the tumor. Thus, irregular, quasi-chaotic behavior of the tumor-virus system can be caused not only by random perturbations but also by the heterogeneity of the tumor and the virus. The modeling approach described here reveals the importance of tumor cell and virus heterogeneity for the outcome of cancer therapy. It should be straightforward to apply these techniques to mathematical modeling of other types of anticancer therapy.
2405.12815
Valentin Puente
Valentin Puente-Varona
Could a Computer Architect Understand our Brain?
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
This paper presents a highly speculative model encompassing the cortex, thalamus, and hippocampus of the mammalian brain. While the majority of computational neuroscience models are founded upon empirical evidence, this model is predicated upon a hardware proposal for a machine learning accelerator. Such a device was designed to perform a specific task, such as speech recognition. The design process employed the principles and techniques typically used by computer architects in the design of devices such as processors. However, it also sought to maintain plausibility with biological systems in accordance with the current understanding of the mammalian brain. In the course of our research, we have identified a functional framework that may help to fill the gaps in current neuroscience, thereby facilitating explanations for many elusive cognitive-level effects. This paper does not describe the device itself or the rationale behind the design decisions, but instead presents a concise description of the derived model. In brief, the model provides a functional definition of the cortical column and its structural definition by the minicolumns. It also offers a descriptive model for the corticothalamic and corticostriatal loops, a functional proposal for the hippocampal complex, and a simplified view of the brainstem circuitry involved in auditory processing. The proposed model appears to provide an explanation for a number of cognitive phenomena, including some ERP effects, bottom-up and top-down attention, and the relationship between phenomena such as the cocktail party effect, anterograde and retrograde amnesia following hippocampal complex damage, and so forth.
[ { "created": "Tue, 21 May 2024 14:15:52 GMT", "version": "v1" } ]
2024-05-22
[ [ "Puente-Varona", "Valentin", "" ] ]
This paper presents a highly speculative model encompassing the cortex, thalamus, and hippocampus of the mammalian brain. While the majority of computational neuroscience models are founded upon empirical evidence, this model is predicated upon a hardware proposal for a machine learning accelerator. Such a device was designed to perform a specific task, such as speech recognition. The design process employed the principles and techniques typically used by computer architects in the design of devices such as processors. However, it also sought to maintain plausibility with biological systems in accordance with the current understanding of the mammalian brain. In the course of our research, we have identified a functional framework that may help to fill the gaps in current neuroscience, thereby facilitating the explanations for many elusive cognitive-level effects. This paper does not describe the device itself or the rationale behind the design decision, but instead, it presents a concise description of the derived model. In brief, the model provides a functional definition of the cortical column and its structural definition by the minicolumns. It also offers a descriptive model for the corticothalamic and corticostriatal loops, a functional proposal for the hippocampal complex, and a simplified view of the brainstem circuitry involved in auditory processing. The proposed model appears to provide an explanation for a number of cognitive phenomena, including some ERP effects, bottom-up and top-down attention, and the relationship between phenomena such as the cocktail party effect, anterograde and retrograde amnesia following hippocampal complex damage, and so forth.
1305.3556
Jason Perlmutter
Jason D. Perlmutter, Cong Qiao, Michael F. Hagan
Viral genome structures are optimal for capsid assembly
25 pages, 14 figures. Accepted for publication in eLife
eLife 2013;2:e00632
10.7554/eLife.00632
null
q-bio.BM cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding how virus capsids assemble around their nucleic acid (NA) genomes could promote efforts to block viral propagation or to reengineer capsids for gene therapy applications. We develop a coarse-grained model of capsid proteins and NAs with which we investigate assembly dynamics and thermodynamics. In contrast to recent theoretical models, we find that capsids spontaneously `overcharge'; i.e., the negative charge of the NA exceeds the positive charge on the capsid. When applied to specific viruses, the optimal NA lengths closely correspond to the natural genome lengths. Calculations based on linear polyelectrolytes rather than base-paired NAs underpredict the optimal length, demonstrating the importance of NA structure to capsid assembly. These results suggest that electrostatics, excluded volume, and NA tertiary structure are sufficient to predict assembly thermodynamics and that the ability of viruses to selectively encapsidate their genomic NAs can be explained, at least in part, on a thermodynamic basis.
[ { "created": "Wed, 15 May 2013 17:26:57 GMT", "version": "v1" } ]
2014-05-15
[ [ "Perlmutter", "Jason D.", "" ], [ "Qiao", "Cong", "" ], [ "Hagan", "Michael F.", "" ] ]
Understanding how virus capsids assemble around their nucleic acid (NA) genomes could promote efforts to block viral propagation or to reengineer capsids for gene therapy applications. We develop a coarse-grained model of capsid proteins and NAs with which we investigate assembly dynamics and thermodynamics. In contrast to recent theoretical models, we find that capsids spontaneously `overcharge'; i.e., the negative charge of the NA exceeds the positive charge on capsid. When applied to specific viruses, the optimal NA lengths closely correspond to the natural genome lengths. Calculations based on linear polyelectrolytes rather than base-paired NAs underpredict the optimal length, demonstrating the importance of NA structure to capsid assembly. These results suggest that electrostatics, excluded volume, and NA tertiary structure are sufficient to predict assembly thermodynamics and that the ability of viruses to selectively encapsidate their genomic NAs can be explained, at least in part, on a thermodynamic basis.
1306.3619
Marco Kienzle
Marco Kienzle, Anthony J. Courtney and Michael F. O'Neill
Environmental and fishing effects on the dynamic of brown tiger prawn (Penaeus esculentus) in Moreton Bay (Australia)
revised manuscript following reviewers comments + adding data and code for readers
Fisheries Research 155 (2014) 138-148
10.1016/j.fishres.2014.02.030
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This analysis of the variations of brown tiger prawn (Penaeus esculentus) catch in the Moreton Bay multispecies trawl fishery estimated catchability using a delay difference model. It integrated several factors responsible for variations in catchability: targeting of fishing effort, increasing fishing power and changing availability. An analysis of covariance was used to define fishing events targeted at brown tiger prawns. A general linear model estimated inter-annual variations of fishing power. Temperature induced changes in prawn behaviour played an important role in the dynamic of this fishery. Maximum likelihood estimates of targeted catchability ($3.92 \pm 0.40 \ 10^{-4}$ boat-days$^{-1}$) were twice as large as non-targeted catchability ($1.91 \pm 0.24 \ 10^{-4}$ boat-days$^{-1}$). The causes of recent decline in fishing effort in this fishery were discussed.
[ { "created": "Sun, 16 Jun 2013 01:59:32 GMT", "version": "v1" }, { "created": "Wed, 4 Dec 2013 06:03:42 GMT", "version": "v2" }, { "created": "Wed, 18 Dec 2013 10:55:18 GMT", "version": "v3" } ]
2014-05-30
[ [ "Kienzle", "Marco", "" ], [ "Courtney", "Anthony J.", "" ], [ "O'Neill", "Michael F.", "" ] ]
This analysis of the variations of brown tiger prawn (Penaeus esculentus) catch in the Moreton Bay multispecies trawl fishery estimated catchability using a delay difference model. It integrated several factors responsible for variations in catchability: targeting of fishing effort, increasing fishing power and changing availability. An analysis of covariance was used to define fishing events targeted at brown tiger prawns. A general linear model estimated inter-annual variations of fishing power. Temperature induced changes in prawn behaviour played an important role in the dynamic of this fishery. Maximum likelihood estimates of targeted catchability ($3.92 \pm 0.40 \ 10^{-4}$ boat-days$^{-1}$) were twice as large as non-targeted catchability ($1.91 \pm 0.24 \ 10^{-4}$ boat-days$^{-1}$). The causes of recent decline in fishing effort in this fishery were discussed.
1204.5725
Martin Burger
Natalie Emken, Andreas P\"uschel, Martin Burger
Mathematical Modelling of Polarizing GTPases in Developing Axons
null
null
null
null
q-bio.SC math.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The aim of this paper is to contribute to the basic understanding of neuronal polarization mechanisms by developing and studying a reaction-diffusion model for protein activation and inactivation. In particular we focus on a feedback loop between PI3 kinase and certain GTPases, and study its behaviour in dependence on neurite length. We find that if an ultrasensitive activation is included, the model can produce polarization at a critical length as observed in experiments. Symmetry breaking to polarization in the longer neurite is found only if active transport of a substance, in our case active PI3 kinase, is included into the model.
[ { "created": "Wed, 25 Apr 2012 18:14:18 GMT", "version": "v1" }, { "created": "Tue, 29 May 2012 14:49:15 GMT", "version": "v2" } ]
2012-05-30
[ [ "Emken", "Natalie", "" ], [ "Püschel", "Andreas", "" ], [ "Burger", "Martin", "" ] ]
The aim of this paper is to contribute to the basic understanding of neuronal polarization mechanisms by developing and studying a reaction-diffusion model for protein activation and inactivation. In particular we focus on a feedback loop between PI3 kinase and certain GTPases, and study its behaviour in dependence of neurite lengths. We find that if an ultrasensitive activation is included, the model can produce polarization at a critical length as observed in experiments. Symmetry breaking to polarization in the longer neurite is found only if active transport of a substance, in our case active PI3 kinase, is included into the model.
1508.03719
Sora Yoon
Sora Yoon, Dougu Nam
Biases in differential expression analysis of RNA-seq data: A matter of replicate type
18 pages, 6 figures
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In differential expression (DE) analysis of RNA-seq count data, it is known that genes with a larger read number are more likely to be differentially expressed. This bias has a profound effect on the subsequent Gene Ontology (GO) analysis by perturbing the ranks of gene-sets. Another known bias is that the commonly used parametric DE analysis methods (e.g., edgeR, DESeq and baySeq) tend to yield more DE genes as the sequencing depth is increased. We nevertheless show that these biases are in fact confined to data of the technical replicate type. We also show that the GO or gene-set enrichment analysis methods applied to technical replicate data result in a considerable number of false positives. In conclusion, the current DE and enrichment analysis methods can be confidently used for biological replicate count data, while caution should be exercised when analysing technical replicate data.
[ { "created": "Sat, 15 Aug 2015 10:27:47 GMT", "version": "v1" } ]
2015-08-18
[ [ "Yoon", "Sora", "" ], [ "Nam", "Dougu", "" ] ]
In differential expression (DE) analysis of RNA-seq count data, it is known that genes with a larger read number are more likely to be differentially expressed. This bias has a profound effect on the subsequent Gene Ontology (GO) analysis by perturbing the ranks of gene-sets. Another known bias is that the commonly used parametric DE analysis methods (e.g., edgeR, DESeq and baySeq) tend to yield more DE genes as the sequencing depth is increased. We nevertheless show that these biases are in fact confined to data of the technical replicate type. We also show that the GO or gene-set enrichment analysis methods applied to technical replicate data result in a considerable number of false positives. In conclusion, the current DE and enrichment analysis methods can be confidently used for biological replicate count data, while caution should be exercised when analysing technical replicate data.
1310.0558
Liane Gabora
Liane Gabora, Eric O. Scott, and Stuart Kauffman
A Quantum Model of Exaptation: Incorporating Potentiality into Evolutionary Theory
27 pages
Progress in Biophysics & Molecular Biology, 113(1), 108-116
null
null
q-bio.PE quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The phenomenon of preadaptation, or exaptation (wherein a trait that originally evolved to solve one problem is co-opted to solve a new problem) presents a formidable challenge to efforts to describe biological phenomena using a classical (Kolmogorovian) mathematical framework. We develop a quantum framework for exaptation with examples from both biological and cultural evolution. The state of a trait is written as a linear superposition of a set of basis states, or possible forms the trait could evolve into, in a complex Hilbert space. These basis states are represented by mutually orthogonal unit vectors, each weighted by an amplitude term. The choice of possible forms (basis states) depends on the adaptive function of interest (e.g., ability to metabolize lactose or thermoregulate), which plays the role of the observable. Observables are represented by self-adjoint operators on the Hilbert space. The possible forms (basis states) corresponding to this adaptive function (observable) are called eigenstates. The framework incorporates key features of exaptation: potentiality, contextuality, nonseparability, and emergence of new features. However, since it requires that one enumerate all possible contexts, its predictive value is limited, consistent with the assertion that there exists no biological equivalent to "laws of motion" by which we can predict the evolution of the biosphere.
[ { "created": "Wed, 2 Oct 2013 03:35:45 GMT", "version": "v1" } ]
2013-10-03
[ [ "Gabora", "Liane", "" ], [ "Scott", "Eric O.", "" ], [ "Kauffman", "Stuart", "" ] ]
The phenomenon of preadaptation, or exaptation (wherein a trait that originally evolved to solve one problem is co-opted to solve a new problem) presents a formidable challenge to efforts to describe biological phenomena using a classical (Kolmogorovian) mathematical framework. We develop a quantum framework for exaptation with examples from both biological and cultural evolution. The state of a trait is written as a linear superposition of a set of basis states, or possible forms the trait could evolve into, in a complex Hilbert space. These basis states are represented by mutually orthogonal unit vectors, each weighted by an amplitude term. The choice of possible forms (basis states) depends on the adaptive function of interest (e.g., ability to metabolize lactose or thermoregulate), which plays the role of the observable. Observables are represented by self-adjoint operators on the Hilbert space. The possible forms (basis states) corresponding to this adaptive function (observable) are called eigenstates. The framework incorporates key features of exaptation: potentiality, contextuality, nonseparability, and emergence of new features. However, since it requires that one enumerate all possible contexts, its predictive value is limited, consistent with the assertion that there exists no biological equivalent to "laws of motion" by which we can predict the evolution of the biosphere.
0711.1069
Roberto De Leo
Roberto De Leo, Sergio Demelio
Numerical analysis of solitons profiles in a composite model for DNA torsion dynamics
16 pages, 9 figures
null
null
null
q-bio.BM q-bio.QM
null
We present the results of our numerical analysis of a "composite" model of DNA which generalizes a well-known elementary torsional model of Yakushevich by allowing bases to move independently from the backbone. The model shares with the Yakushevich model many features and results but it represents an improvement from both the conceptual and the phenomenological point of view. It provides a more realistic description of DNA and possibly a justification for the use of models which consider the DNA chain as uniform. It shows that the existence of solitons is a generic feature of the underlying nonlinear dynamics and is to a large extent independent of the detailed modelling of DNA. As opposed to the Yakushevich model, where an unphysical value for the torsion must be used in order to induce the correct velocity of sound, the model we consider supports solitonic solutions, qualitatively and quantitatively very similar to the Yakushevich solitons, in a fully realistic range of all the physical parameters characterizing the DNA.
[ { "created": "Wed, 7 Nov 2007 11:35:13 GMT", "version": "v1" } ]
2007-11-08
[ [ "De Leo", "Roberto", "" ], [ "Demelio", "Sergio", "" ] ]
We present the results of our numerical analysis of a "composite" model of DNA which generalizes a well-known elementary torsional model of Yakushevich by allowing bases to move independently from the backbone. The model shares with the Yakushevich model many features and results but it represents an improvement from both the conceptual and the phenomenological point of view. It provides a more realistic description of DNA and possibly a justification for the use of models which consider the DNA chain as uniform. It shows that the existence of solitons is a generic feature of the underlying nonlinear dynamics and is to a large extent independent of the detailed modelling of DNA. As opposed to the Yakushevich model, where an unphysical value for the torsion must be used in order to induce the correct velocity of sound, the model we consider supports solitonic solutions, qualitatively and quantitatively very similar to the Yakushevich solitons, in a fully realistic range of all the physical parameters characterizing the DNA.
1006.1103
Alexey Mazur K
Alexey K. Mazur
Anharmonic Torsional Stiffness of DNA Revealed under Small External Torques
8 pages, 5 figures, to appear in Phys. Rev. Lett
Phys. Rev. Lett. 105, 018102, 2010
10.1103/PhysRevLett.105.018102
null
q-bio.BM cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
DNA supercoiling plays an important role in a variety of cellular processes. The torsional stress related to supercoiling may also be involved in gene regulation through the local structure and dynamics of the double helix. To check this possibility, steady torsional stress was applied to DNA in the course of all-atom molecular dynamics simulations. It is found that small static untwisting significantly reduces the torsional persistence length ($l_t$) of GC-alternating DNA. For the AT-alternating sequence a smaller effect of the opposite sign is observed. As a result, the measured $l_t$ values are similar under zero stress, but diverge with untwisting. The effect is traced to sequence-specific asymmetry of local torsional fluctuations, and it should be small in long random DNA due to compensation. In contrast, the stiffness of special short sequences can vary significantly, which gives a simple possibility of gene regulation via probabilities of strong fluctuations. These results have important implications for the role of local DNA twisting in complexes with transcription factors.
[ { "created": "Sun, 6 Jun 2010 13:52:06 GMT", "version": "v1" } ]
2015-05-19
[ [ "Mazur", "Alexey K.", "" ] ]
DNA supercoiling plays an important role in a variety of cellular processes. The torsional stress related to supercoiling may also be involved in gene regulation through the local structure and dynamics of the double helix. To check this possibility, steady torsional stress was applied to DNA in the course of all-atom molecular dynamics simulations. It is found that small static untwisting significantly reduces the torsional persistence length ($l_t$) of GC-alternating DNA. For the AT-alternating sequence a smaller effect of the opposite sign is observed. As a result, the measured $l_t$ values are similar under zero stress, but diverge with untwisting. The effect is traced to sequence-specific asymmetry of local torsional fluctuations, and it should be small in long random DNA due to compensation. In contrast, the stiffness of special short sequences can vary significantly, which gives a simple possibility of gene regulation via probabilities of strong fluctuations. These results have important implications for the role of local DNA twisting in complexes with transcription factors.
2303.02981
J\'ozsef Z. Farkas
Carles Barril, \`Angel Calsina, Odo Diekmann, J\'ozsef Z. Farkas
On competition through growth reduction
null
Journal of Mathematical Biology 88, 66 (2024)
10.1007/s00285-024-02084-x
null
q-bio.PE math.AP
http://creativecommons.org/licenses/by/4.0/
We consider a population organised hierarchically with respect to size in such a way that the growth rate of each individual depends only on the presence of larger individuals. As a concrete example one might think of a forest, in which the incidence of light on a tree (and hence how fast it grows) is affected by shading of taller trees. The model is formulated as a delay equation, more specifically a scalar renewal equation, for the population birth rate. After discussing the well-posedness of the model, we analyse how many stationary birth rates the equation can have in terms of the functional parameters of the model. In particular we show that, under reasonable and rather general assumptions, only one stationary birth rate can exist besides the trivial one (associated to the state in which there are no individuals and the population birth rate is zero). We give conditions for this non-trivial stationary birth rate to exist and we analyse its stability using the principle of linearised stability for delay equations. Finally we relate the results to an alternative formulation of the model taking the form of a quasilinear partial differential equation for the population size-density.
[ { "created": "Mon, 6 Mar 2023 09:15:52 GMT", "version": "v1" }, { "created": "Mon, 22 Apr 2024 10:02:17 GMT", "version": "v2" } ]
2024-04-23
[ [ "Barril", "Carles", "" ], [ "Calsina", "Àngel", "" ], [ "Diekmann", "Odo", "" ], [ "Farkas", "József Z.", "" ] ]
We consider a population organised hierarchically with respect to size in such a way that the growth rate of each individual depends only on the presence of larger individuals. As a concrete example one might think of a forest, in which the incidence of light on a tree (and hence how fast it grows) is affected by shading of taller trees. The model is formulated as a delay equation, more specifically a scalar renewal equation, for the population birth rate. After discussing the well-posedness of the model, we analyse how many stationary birth rates the equation can have in terms of the functional parameters of the model. In particular we show that, under reasonable and rather general assumptions, only one stationary birth rate can exist besides the trivial one (associated to the state in which there are no individuals and the population birth rate is zero). We give conditions for this non-trivial stationary birth rate to exist and we analyse its stability using the principle of linearised stability for delay equations. Finally we relate the results to an alternative formulation of the model taking the form of a quasilinear partial differential equation for the population size-density.
2301.10370
Dzmitry Rumiantsau
Dzmitry Rumiantsau (1), Annick Lesne (2 and 3), Marc-Thorsten H\"utt (1) ((1) Department of Life Sciences and Chemistry, Constructor University, Bremen, Germany, (2) Sorbonne Universit\'e, CNRS, Laboratoire de Physique Th\'eorique de la Mati\`ere Condens\'ee, LPTMC, Paris, France, (3) Institut de G\'en\'etique Mol\'eculaire de Montpellier, University of Montpellier, CNRS, Montpellier, France)
Predicting attractors from spectral properties of stylized gene regulatory networks
Main text: 14 pages, 11 figures. Supplements: 22 pages, 16 figures. Submitted to: Physical Review E
null
10.1103/PhysRevE.108.014402
null
q-bio.MN stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
How the architecture of gene regulatory networks ultimately shapes gene expression patterns is an open question, which has been approached from a multitude of angles. The dominant strategy has been to identify non-random features in these networks and then argue for the function of these features using mechanistic modelling. Here we establish the foundation of an alternative approach by studying the correlation of eigenvectors with synthetic gene expression data simulated with a basic and popular model of gene expression dynamics -- attractors of Boolean threshold dynamics in signed directed graphs. Eigenvectors of the graph Laplacian are known to explain collective dynamical states (stationary patterns) in Turing dynamics on graphs. In this study, we show that eigenvectors can also predict collective states (attractors) for a markedly different type of dynamics, Boolean threshold dynamics, and category of graphs, signed directed graphs. However, the overall predictive power depends on details of the network architecture, in a predictable fashion. Our results are a set of statistical observations, providing the first systematic step towards a further theoretical understanding of the role of eigenvectors in dynamics on graphs.
[ { "created": "Wed, 25 Jan 2023 00:57:22 GMT", "version": "v1" } ]
2023-07-19
[ [ "Rumiantsau", "Dzmitry", "", "2 and 3" ], [ "Lesne", "Annick", "", "2 and 3" ], [ "Hütt", "Marc-Thorsten", "" ] ]
How the architecture of gene regulatory networks ultimately shapes gene expression patterns is an open question, which has been approached from a multitude of angles. The dominant strategy has been to identify non-random features in these networks and then argue for the function of these features using mechanistic modelling. Here we establish the foundation of an alternative approach by studying the correlation of eigenvectors with synthetic gene expression data simulated with a basic and popular model of gene expression dynamics -- attractors of Boolean threshold dynamics in signed directed graphs. Eigenvectors of the graph Laplacian are known to explain collective dynamical states (stationary patterns) in Turing dynamics on graphs. In this study, we show that eigenvectors can also predict collective states (attractors) for a markedly different type of dynamics, Boolean threshold dynamics, and category of graphs, signed directed graphs. However, the overall predictive power depends on details of the network architecture, in a predictable fashion. Our results are a set of statistical observations, providing the first systematic step towards a further theoretical understanding of the role of eigenvectors in dynamics on graphs.
1409.5045
Patrick St-Amant
Patrick St-Amant
A Unified Mathematical Language for Medicine and Science
98 pages
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A unified mathematical language for medicine and science will be presented. Using this language, models for DNA replication, protein synthesis, chemical reactions, neurons and a cardiac cycle of a heart have been built. Models for Turing machines, cellular automata, fractals and physical systems are also represented with the use of this language. Interestingly, the language comes with a way to represent probability theory concepts and also programming statements. With this language, questions and processes in medicine can be represented as systems of equations; and solutions to these equations are viewed as treatments or previously unknown processes. This language can serve as the framework for the creation of a large interactive open-access scientific database that allows extensive mathematical medicine computations. It can also serve as a basis for exploring ideas related to what could be called metascience.
[ { "created": "Mon, 15 Sep 2014 03:24:12 GMT", "version": "v1" } ]
2014-09-18
[ [ "St-Amant", "Patrick", "" ] ]
A unified mathematical language for medicine and science will be presented. Using this language, models for DNA replication, protein synthesis, chemical reactions, neurons and a cardiac cycle of a heart have been built. Models for Turing machines, cellular automata, fractals and physical systems are also represented with the use of this language. Interestingly, the language comes with a way to represent probability theory concepts and also programming statements. With this language, questions and processes in medicine can be represented as systems of equations; and solutions to these equations are viewed as treatments or previously unknown processes. This language can serve as the framework for the creation of a large interactive open-access scientific database that allows extensive mathematical medicine computations. It can also serve as a basis for exploring ideas related to what could be called metascience.
2404.04858
Tony Lindeberg
Tony Lindeberg
Do the receptive fields in the primary visual cortex span a variability over the degree of elongation of the receptive fields?
22 pages, 15 figures. Note: Companion paper regarding theoretical analysis in arXiv:2304.11920
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents results of combining (i) theoretical analysis regarding connections between the orientation selectivity and the elongation of receptive fields for the affine Gaussian derivative model with (ii) biological measurements of orientation selectivity in the primary visual cortex, to investigate if (iii) the receptive fields can be regarded as spanning a variability in the degree of elongation. From an in-depth theoretical analysis of idealized models for the receptive fields of simple and complex cells in the primary visual cortex, we have established that the directional selectivity becomes more narrow with increasing elongation of the receptive fields. By comparison with previously established biological results, concerning broad vs. sharp orientation tuning of visual neurons in the primary visual cortex, we demonstrate that those underlying theoretical predictions, in combination with these biological results, are consistent with a previously formulated biological hypothesis, stating that the biological receptive field shapes should span the degrees of freedom in affine image transformations, to support affine covariance over the population of receptive fields in the primary visual cortex. Based on this possible indirect support for the working hypothesis concerning affine covariance, we formulate a set of testable predictions that could be used to, with neurophysiological experiments, judge if the receptive fields in the primary visual cortex of higher mammals could be regarded as spanning a variability over the eccentricity or the elongation of the receptive fields, and, if so, then also characterize if such a variability would, in a structured way, be related to the pinwheel structure in the visual cortex.
[ { "created": "Sun, 7 Apr 2024 08:06:12 GMT", "version": "v1" }, { "created": "Tue, 9 Apr 2024 05:29:13 GMT", "version": "v2" }, { "created": "Thu, 11 Apr 2024 10:44:55 GMT", "version": "v3" }, { "created": "Fri, 3 May 2024 05:03:37 GMT", "version": "v4" }, { "created": "Tue, 21 May 2024 14:15:04 GMT", "version": "v5" }, { "created": "Mon, 10 Jun 2024 06:52:07 GMT", "version": "v6" } ]
2024-06-11
[ [ "Lindeberg", "Tony", "" ] ]
This paper presents results of combining (i) theoretical analysis regarding connections between the orientation selectivity and the elongation of receptive fields for the affine Gaussian derivative model with (ii) biological measurements of orientation selectivity in the primary visual cortex, to investigate if (iii) the receptive fields can be regarded as spanning a variability in the degree of elongation. From an in-depth theoretical analysis of idealized models for the receptive fields of simple and complex cells in the primary visual cortex, we have established that the directional selectivity becomes more narrow with increasing elongation of the receptive fields. By comparison with previously established biological results, concerning broad vs. sharp orientation tuning of visual neurons in the primary visual cortex, we demonstrate that those underlying theoretical predictions, in combination with these biological results, are consistent with a previously formulated biological hypothesis, stating that the biological receptive field shapes should span the degrees of freedom in affine image transformations, to support affine covariance over the population of receptive fields in the primary visual cortex. Based on this possible indirect support for the working hypothesis concerning affine covariance, we formulate a set of testable predictions that could be used to, with neurophysiological experiments, judge if the receptive fields in the primary visual cortex of higher mammals could be regarded as spanning a variability over the eccentricity or the elongation of the receptive fields, and, if so, then also characterize if such a variability would, in a structured way, be related to the pinwheel structure in the visual cortex.
1606.00786
Stephen Quake
Stephanie Tzouanas Schmidt, Stephanie M. Zimmerman, Jianbin Wang, Stuart K. Kim, Stephen R. Quake
Cell lineage tracing using nuclease barcoding
null
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Lineage tracing, the determination and mapping of progeny arising from single cells, is an important approach enabling the elucidation of mechanisms underlying diverse biological processes ranging from development to disease. We developed a dynamic sequence-based barcode for lineage tracing and have demonstrated its performance in C. elegans, a model organism whose lineage tree is well established. The strategy we use creates lineage trees based upon the introduction of specific mutations into cells and the propagation of these mutations to daughter cells at each cell division. We present an experimental proof of concept along with a corresponding simulation and analytical model for deeper understanding of the coding capacity of the system. By introducing mutations in a predictable manner using CRISPR/Cas9, our technology will enable more complete investigations of cellular processes.
[ { "created": "Thu, 2 Jun 2016 18:19:05 GMT", "version": "v1" } ]
2016-06-03
[ [ "Schmidt", "Stephanie Tzouanas", "" ], [ "Zimmerman", "Stephanie M.", "" ], [ "Wang", "Jianbin", "" ], [ "Kim", "Stuart K.", "" ], [ "Quake", "Stephen R.", "" ] ]
Lineage tracing, the determination and mapping of progeny arising from single cells, is an important approach enabling the elucidation of mechanisms underlying diverse biological processes ranging from development to disease. We developed a dynamic sequence-based barcode for lineage tracing and have demonstrated its performance in C. elegans, a model organism whose lineage tree is well established. The strategy we use creates lineage trees based upon the introduction of specific mutations into cells and the propagation of these mutations to daughter cells at each cell division. We present an experimental proof of concept along with a corresponding simulation and analytical model for deeper understanding of the coding capacity of the system. By introducing mutations in a predictable manner using CRISPR/Cas9, our technology will enable more complete investigations of cellular processes.
q-bio/0606013
Abdul Salam Jarrah
Abdul Salam Jarrah, Blessilda Raposa, Reinhard Laubenbacher
Nested Canalyzing, unate cascade, and polynomial functions
To appear in Physica D: Nonlinear Phenomena
null
null
null
q-bio.QM math.AC
null
This paper focuses on the study of certain classes of Boolean functions that have appeared in several different contexts. Nested canalyzing functions have been studied recently in the context of Boolean network models of gene regulatory networks. In the same context, polynomial functions over finite fields have been used to develop network inference methods for gene regulatory networks. Finally, unate cascade functions have been studied in the design of logic circuits and binary decision diagrams. This paper shows that the class of nested canalyzing functions is equal to that of unate cascade functions. Furthermore, it provides a description of nested canalyzing functions as a certain type of Boolean polynomial function. Using the polynomial framework one can show that the class of nested canalyzing functions, or, equivalently, the class of unate cascade functions, forms an algebraic variety which makes their analysis amenable to the use of techniques from algebraic geometry and computational algebra. As a corollary of the functional equivalence derived here, a formula in the literature for the number of unate cascade functions provides such a formula for the number of nested canalyzing functions.
[ { "created": "Mon, 12 Jun 2006 19:56:44 GMT", "version": "v1" }, { "created": "Mon, 12 Jun 2006 20:01:46 GMT", "version": "v2" }, { "created": "Thu, 26 Jul 2007 02:56:38 GMT", "version": "v3" } ]
2007-07-26
[ [ "Jarrah", "Abdul Salam", "" ], [ "Raposa", "Blessilda", "" ], [ "Laubenbacher", "Reinhard", "" ] ]
This paper focuses on the study of certain classes of Boolean functions that have appeared in several different contexts. Nested canalyzing functions have been studied recently in the context of Boolean network models of gene regulatory networks. In the same context, polynomial functions over finite fields have been used to develop network inference methods for gene regulatory networks. Finally, unate cascade functions have been studied in the design of logic circuits and binary decision diagrams. This paper shows that the class of nested canalyzing functions is equal to that of unate cascade functions. Furthermore, it provides a description of nested canalyzing functions as a certain type of Boolean polynomial function. Using the polynomial framework one can show that the class of nested canalyzing functions, or, equivalently, the class of unate cascade functions, forms an algebraic variety which makes their analysis amenable to the use of techniques from algebraic geometry and computational algebra. As a corollary of the functional equivalence derived here, a formula in the literature for the number of unate cascade functions provides such a formula for the number of nested canalyzing functions.
2010.15081
Xilin Liu
Xilin Liu, Hongjie Zhu, Tian Qiu, Srihari Y. Sritharan, Dengteng Ge, Shu Yang, Milin Zhang, Andrew G. Richardson, Timothy H. Lucas, Nader Engheta, and Jan Van der Spiegel
A Fully Integrated Sensor-Brain-Machine Interface System for Restoring Somatosensation
12 pages, 17 figures
IEEE Sensors Journal, 2020
10.1109/JSEN.2020.3030899
null
q-bio.NC cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sensory feedback is critical to the performance of neural prostheses that restore movement control after neurological injury. Recent advances in direct neural control of paralyzed arms present new requirements for miniaturized, low-power sensor systems. To address this challenge, we developed a fully-integrated wireless sensor-brain-machine interface (SBMI) system for communicating key somatosensory signals, fingertip forces and limb joint angles, to the brain. The system consists of a tactile force sensor, an electrogoniometer, and a neural interface. The tactile force sensor features a novel optical waveguide on CMOS design for sensing. The electrogoniometer integrates an ultra low-power digital signal processor (DSP) for real-time joint angle measurement. The neural interface enables bidirectional neural stimulation and recording. Innovative designs of sensors and sensing interfaces, analog-to-digital converters (ADC) and ultra wide-band (UWB) wireless transceivers have been developed. The prototypes have been fabricated in 180nm standard CMOS technology and tested on the bench and in vivo. The developed system provides a novel solution for providing somatosensory feedback to next-generation neural prostheses.
[ { "created": "Sat, 17 Oct 2020 04:58:45 GMT", "version": "v1" } ]
2020-10-29
[ [ "Liu", "Xilin", "" ], [ "Zhu", "Hongjie", "" ], [ "Qiu", "Tian", "" ], [ "Sritharan", "Srihari Y.", "" ], [ "Ge", "Dengteng", "" ], [ "Yang", "Shu", "" ], [ "Zhang", "Milin", "" ], [ "Richardson", "Andrew G.", "" ], [ "Lucas", "Timothy H.", "" ], [ "Engheta", "Nader", "" ], [ "Van der Spiegel", "Jan", "" ] ]
Sensory feedback is critical to the performance of neural prostheses that restore movement control after neurological injury. Recent advances in direct neural control of paralyzed arms present new requirements for miniaturized, low-power sensor systems. To address this challenge, we developed a fully-integrated wireless sensor-brain-machine interface (SBMI) system for communicating key somatosensory signals, fingertip forces and limb joint angles, to the brain. The system consists of a tactile force sensor, an electrogoniometer, and a neural interface. The tactile force sensor features a novel optical waveguide on CMOS design for sensing. The electrogoniometer integrates an ultra low-power digital signal processor (DSP) for real-time joint angle measurement. The neural interface enables bidirectional neural stimulation and recording. Innovative designs of sensors and sensing interfaces, analog-to-digital converters (ADC) and ultra wide-band (UWB) wireless transceivers have been developed. The prototypes have been fabricated in 180nm standard CMOS technology and tested on the bench and in vivo. The developed system provides a novel solution for providing somatosensory feedback to next-generation neural prostheses.
1004.1212
Mike Steel Prof.
Olivier Gascuel and Mike Steel
Inferring ancestral sequences in taxon-rich phylogenies
32 pages, 5 figures, 1 table.
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Statistical consistency in phylogenetics has traditionally referred to the accuracy of estimating phylogenetic parameters for a fixed number of species as we increase the number of characters. However, as sequences are often of fixed length (e.g. for a gene) while we are often able to sample more taxa, it is useful to consider a dual type of statistical consistency where we increase the number of species, rather than characters. This raises some basic questions: what can we learn about the evolutionary process as we increase the number of species? In particular, does having more species allow us to infer the ancestral state of characters accurately? This question is particularly relevant when sequence site evolution varies in a complex way from character to character, as well as for reconstructing ancestral sequences. In this paper, we assemble a collection of results to analyse various approaches for inferring ancestral information with increasing accuracy as the number of taxa increases.
[ { "created": "Thu, 8 Apr 2010 00:22:26 GMT", "version": "v1" } ]
2010-04-09
[ [ "Gascuel", "Olivier", "" ], [ "Steel", "Mike", "" ] ]
Statistical consistency in phylogenetics has traditionally referred to the accuracy of estimating phylogenetic parameters for a fixed number of species as we increase the number of characters. However, as sequences are often of fixed length (e.g. for a gene) while we are often able to sample more taxa, it is useful to consider a dual type of statistical consistency where we increase the number of species, rather than characters. This raises some basic questions: what can we learn about the evolutionary process as we increase the number of species? In particular, does having more species allow us to infer the ancestral state of characters accurately? This question is particularly relevant when sequence site evolution varies in a complex way from character to character, as well as for reconstructing ancestral sequences. In this paper, we assemble a collection of results to analyse various approaches for inferring ancestral information with increasing accuracy as the number of taxa increases.
2012.10115
L\'eonard Dekens
L\'eonard Dekens
Evolutionary dynamics of complex traits in sexual populations in a strongly heterogeneous environment: how normal?
null
null
null
null
q-bio.PE math.AP
http://creativecommons.org/licenses/by/4.0/
When studying the dynamics of trait distribution of populations in a heterogeneous environment, classical models from quantitative genetics choose to look at its system of moments, specifically the first two. Additionally, in order to close the resulting system of equations, they often assume that the trait distribution is Gaussian (see for instance Ronce and Kirkpatrick 2001). The aim of this paper is to introduce a mathematical framework that follows the whole trait distribution (without prior assumption) to study evolutionary dynamics of sexually reproducing populations. Specifically, it focuses on complex traits, whose inheritance can be encoded by the infinitesimal model of segregation (Fisher 1919). We show that it allows us to derive a regime in which our model gives the same dynamics as when assuming a Gaussian trait distribution. To support that, we compare the stationary problems of the system of moments derived from our model with the one given in Ronce and Kirkpatrick 2001 and show that they are equivalent under this regime and do not need to be otherwise. Moreover, under this regime of equivalence, we show that a separation between ecological and evolutionary time scales arises. A fast relaxation toward monomorphism allows us to reduce the complexity of the system of moments, using a slow-fast analysis. This reduction leads us to complete, still in this regime, the analytical description of the bistable asymmetrical equilibria numerically found in Ronce and Kirkpatrick 2001. More globally, we provide explicit modelling hypotheses that allow for such local adaptation patterns to occur.
[ { "created": "Fri, 18 Dec 2020 09:15:23 GMT", "version": "v1" } ]
2020-12-21
[ [ "Dekens", "Léonard", "" ] ]
When studying the dynamics of trait distribution of populations in a heterogeneous environment, classical models from quantitative genetics choose to look at its system of moments, specifically the first two. Additionally, in order to close the resulting system of equations, they often assume that the trait distribution is Gaussian (see for instance Ronce and Kirkpatrick 2001). The aim of this paper is to introduce a mathematical framework that follows the whole trait distribution (without prior assumption) to study evolutionary dynamics of sexually reproducing populations. Specifically, it focuses on complex traits, whose inheritance can be encoded by the infinitesimal model of segregation (Fisher 1919). We show that it allows us to derive a regime in which our model gives the same dynamics as when assuming a Gaussian trait distribution. To support that, we compare the stationary problems of the system of moments derived from our model with the one given in Ronce and Kirkpatrick 2001 and show that they are equivalent under this regime and do not need to be otherwise. Moreover, under this regime of equivalence, we show that a separation between ecological and evolutionary time scales arises. A fast relaxation toward monomorphism allows us to reduce the complexity of the system of moments, using a slow-fast analysis. This reduction leads us to complete, still in this regime, the analytical description of the bistable asymmetrical equilibria numerically found in Ronce and Kirkpatrick 2001. More globally, we provide explicit modelling hypotheses that allow for such local adaptation patterns to occur.
1712.02943
Allen Tannenbaum
Maryam Pouryahya, James Mathews, Allen Tannenbaum
Comparing Three Notions of Discrete Ricci Curvature on Biological Networks
23 pages
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the present work, we study the properties of biological networks by applying analogous notions of fundamental concepts in Riemannian geometry and optimal mass transport to discrete networks described by weighted graphs. Specifically, we employ possible generalizations of the notion of Ricci curvature on a Riemannian manifold to discrete spaces in order to infer certain robustness properties of the networks of interest. We compare three possible discrete notions of Ricci curvature (Ollivier-Ricci curvature, Bakry-\'Emery Ricci curvature, and Forman-Ricci curvature) on some model and biological networks. While the exact relationship of each of the three definitions of curvature to one another is still not known, they do yield similar results on our biological networks of interest. These notions are initially defined on positively weighted graphs; however, Forman-Ricci curvature can also be defined on directed positively weighted networks. We will generalize this notion of directed Forman-Ricci curvature on the network to a form that also considers the signs of the directions (e.g., activator and repressor in transcription networks). We call this notion the \emph{signed-control Ricci curvature}. Given that real biological networks are almost always directed and possess positive and negative controls, the notion of signed-control curvature can elucidate the network-based study of these complex networks. Finally, we compare the results of these notions of Ricci curvature on networks to some known network measures, namely, betweenness centrality and clustering coefficients on graphs.
[ { "created": "Fri, 8 Dec 2017 05:17:03 GMT", "version": "v1" } ]
2017-12-11
[ [ "Pouryahya", "Maryam", "" ], [ "Mathews", "James", "" ], [ "Tannenbaum", "Allen", "" ] ]
In the present work, we study the properties of biological networks by applying analogous notions of fundamental concepts in Riemannian geometry and optimal mass transport to discrete networks described by weighted graphs. Specifically, we employ possible generalizations of the notion of Ricci curvature on a Riemannian manifold to discrete spaces in order to infer certain robustness properties of the networks of interest. We compare three possible discrete notions of Ricci curvature (Ollivier-Ricci curvature, Bakry-\'Emery Ricci curvature, and Forman-Ricci curvature) on some model and biological networks. While the exact relationship of each of the three definitions of curvature to one another is still not known, they do yield similar results on our biological networks of interest. These notions are initially defined on positively weighted graphs; however, Forman-Ricci curvature can also be defined on directed positively weighted networks. We will generalize this notion of directed Forman-Ricci curvature on the network to a form that also considers the signs of the directions (e.g., activator and repressor in transcription networks). We call this notion the \emph{signed-control Ricci curvature}. Given that real biological networks are almost always directed and possess positive and negative controls, the notion of signed-control curvature can elucidate the network-based study of these complex networks. Finally, we compare the results of these notions of Ricci curvature on networks to some known network measures, namely, betweenness centrality and clustering coefficients on graphs.
2307.13004
Changkun Jiang
Zihao Li, Changkun Jiang, and Jianqiang Li
DeepGATGO: A Hierarchical Pretraining-Based Graph-Attention Model for Automatic Protein Function Prediction
Accepted in BIOKDD'23
null
null
null
q-bio.QM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic protein function prediction (AFP) is classified as a large-scale multi-label classification problem aimed at automating protein enrichment analysis to eliminate the current reliance on labor-intensive wet-lab methods. Currently, popular methods primarily combine protein-related information and Gene Ontology (GO) terms to generate final functional predictions. For example, protein sequences, structural information, and protein-protein interaction networks are integrated as prior knowledge to fuse with GO term embeddings and generate the ultimate prediction results. However, these methods are limited by the difficulty in obtaining structural information or network topology information, as well as the accuracy of such data. Therefore, an increasing number of methods that use only protein sequences for protein function prediction have been proposed, a more reliable and computationally cheaper approach. However, the existing methods fail to fully extract feature information from protein sequences or label data because they do not adequately consider the intrinsic characteristics of the data itself. Therefore, we propose a sequence-based hierarchical prediction method, DeepGATGO, which processes protein sequences and GO term labels hierarchically, and utilizes graph attention networks (GATs) and contrastive learning for protein function prediction. Specifically, we compute embeddings of the sequence and label data using pre-trained models to reduce computational costs and improve the embedding accuracy. Then, we use GATs to dynamically extract the structural information of non-Euclidean data, and learn general features of the label dataset with contrastive learning by constructing positive and negative example samples. Experimental results demonstrate that our proposed model exhibits better scalability in GO term enrichment analysis on large-scale datasets.
[ { "created": "Mon, 24 Jul 2023 07:01:32 GMT", "version": "v1" } ]
2023-07-26
[ [ "Li", "Zihao", "" ], [ "Jiang", "Changkun", "" ], [ "Li", "Jianqiang", "" ] ]
Automatic protein function prediction (AFP) is classified as a large-scale multi-label classification problem aimed at automating protein enrichment analysis to eliminate the current reliance on labor-intensive wet-lab methods. Currently, popular methods primarily combine protein-related information and Gene Ontology (GO) terms to generate final functional predictions. For example, protein sequences, structural information, and protein-protein interaction networks are integrated as prior knowledge to fuse with GO term embeddings and generate the ultimate prediction results. However, these methods are limited by the difficulty in obtaining structural information or network topology information, as well as the accuracy of such data. Therefore, an increasing number of methods that use only protein sequences for protein function prediction have been proposed, a more reliable and computationally cheaper approach. However, the existing methods fail to fully extract feature information from protein sequences or label data because they do not adequately consider the intrinsic characteristics of the data itself. Therefore, we propose a sequence-based hierarchical prediction method, DeepGATGO, which processes protein sequences and GO term labels hierarchically, and utilizes graph attention networks (GATs) and contrastive learning for protein function prediction. Specifically, we compute embeddings of the sequence and label data using pre-trained models to reduce computational costs and improve the embedding accuracy. Then, we use GATs to dynamically extract the structural information of non-Euclidean data, and learn general features of the label dataset with contrastive learning by constructing positive and negative example samples. Experimental results demonstrate that our proposed model exhibits better scalability in GO term enrichment analysis on large-scale datasets.
0805.1289
Vasile Morariu
Vasile V. Morariu
A limiting rule for the variability of coding sequence length in microbial genomes
null
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The mean length and the variability of coding sequences for 48 genomes of bacteria and archaea were analyzed. It was found that the plotted data can be described by an angular area. This suggests the following: a) The variability of a genome increases as the mean length increases; b) There is an upper and a lower limit for variability for a given mean length; c) Extrapolation of the upper and lower limits to lower mean values converges to a single point which might be assimilated to a primordial cell. The whole picture is reminiscent of a process which starts from a single cell and evolves into more and more species which, in turn, show more and more variability.
[ { "created": "Fri, 9 May 2008 07:13:36 GMT", "version": "v1" } ]
2008-05-12
[ [ "Morariu", "Vasile V.", "" ] ]
The mean length and the variability of coding sequences for 48 genomes of bacteria and archaea were analyzed. It was found that the plotted data can be described by an angular area. This suggests the following: a) The variability of a genome increases as the mean length increases; b) There is an upper and a lower limit for variability for a given mean length; c) Extrapolation of the upper and lower limits to lower mean values converges to a single point which might be assimilated to a primordial cell. The whole picture is reminiscent of a process which starts from a single cell and evolves into more and more species which, in turn, show more and more variability.
2103.12808
John Lang
J.C. Lang
Use of mathematical modelling to assess respiratory syncytial virus epidemiology and interventions: A literature review
24 pages, 2 figures
null
null
null
q-bio.PE math.DS
http://creativecommons.org/licenses/by/4.0/
Respiratory syncytial virus (RSV) is a leading cause of acute lower respiratory tract infection worldwide, resulting in approximately sixty thousand annual hospitalizations of <5-year-olds in the United States alone and three million annual hospitalizations globally. The development of over 40 vaccines and immunoprophylactic interventions targeting RSV has the potential to significantly reduce the disease burden from RSV infection in the near future. In the context of RSV, a highly contagious pathogen, dynamic transmission models (DTMs) are valuable tools in the evaluation and comparison of the effectiveness of different interventions. This review, the first of its kind for RSV DTMs, provides a valuable foundation for future modelling efforts and highlights important gaps in our understanding of RSV epidemics. Specifically, we have searched the literature using Web of Science, Scopus, Embase, and PubMed to identify all published manuscripts reporting the development of DTMs focused on the population transmission of RSV. We reviewed the resulting studies and summarized the structure, parameterization, and results of the models developed therein. We anticipate that future RSV DTMs, combined with cost-effectiveness evaluations, will play a significant role in shaping decision making in the development and implementation of intervention programs.
[ { "created": "Tue, 23 Mar 2021 19:34:08 GMT", "version": "v1" } ]
2021-03-25
[ [ "Lang", "J. C.", "" ] ]
Respiratory syncytial virus (RSV) is a leading cause of acute lower respiratory tract infection worldwide, resulting in approximately sixty thousand annual hospitalizations of <5-year-olds in the United States alone and three million annual hospitalizations globally. The development of over 40 vaccines and immunoprophylactic interventions targeting RSV has the potential to significantly reduce the disease burden from RSV infection in the near future. In the context of RSV, a highly contagious pathogen, dynamic transmission models (DTMs) are valuable tools in the evaluation and comparison of the effectiveness of different interventions. This review, the first of its kind for RSV DTMs, provides a valuable foundation for future modelling efforts and highlights important gaps in our understanding of RSV epidemics. Specifically, we have searched the literature using Web of Science, Scopus, Embase, and PubMed to identify all published manuscripts reporting the development of DTMs focused on the population transmission of RSV. We reviewed the resulting studies and summarized the structure, parameterization, and results of the models developed therein. We anticipate that future RSV DTMs, combined with cost-effectiveness evaluations, will play a significant role in shaping decision making in the development and implementation of intervention programs.
1306.2584
Dimitris Anastassiou
Wei-Yi Cheng, Tai-Hsien Ou Yang, Hui Shen, Peter W. Laird, Dimitris Anastassiou and the Cancer Genome Atlas Research Network
Multi-cancer molecular signatures and their interrelationships
[07.11.2013 v2] Additional authors and acknowledgements for people who contributed to the interpretation of attractor signatures. Summarized table for all 18 signatures. Comments on possible functions
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although cancer is known to be characterized by several unifying biological hallmarks, systems biology has had limited success in identifying molecular signatures present in all types of cancer. The current availability of rich data sets from many different cancer types provides an opportunity for thorough computational data mining in search of such common patterns. Here we report the identification of 18 "pan-cancer" molecular signatures resulting from analysis of data sets containing values from mRNA expression, microRNA expression, DNA methylation, and protein activity, from twelve different cancer types. The membership of many of these signatures points to particular biological mechanisms related to cancer progression, suggesting that they represent important attributes of cancer in need of being elucidated for potential applications in diagnostic, prognostic and therapeutic products applicable to multiple cancer types.
[ { "created": "Tue, 11 Jun 2013 17:07:17 GMT", "version": "v1" }, { "created": "Thu, 11 Jul 2013 23:30:29 GMT", "version": "v2" } ]
2013-07-15
[ [ "Cheng", "Wei-Yi", "" ], [ "Yang", "Tai-Hsien Ou", "" ], [ "Shen", "Hui", "" ], [ "Laird", "Peter W.", "" ], [ "Anastassiou", "Dimitris", "" ], [ "Network", "the Cancer Genome Atlas Research", "" ] ]
Although cancer is known to be characterized by several unifying biological hallmarks, systems biology has had limited success in identifying molecular signatures present in all types of cancer. The current availability of rich data sets from many different cancer types provides an opportunity for thorough computational data mining in search of such common patterns. Here we report the identification of 18 "pan-cancer" molecular signatures resulting from analysis of data sets containing values from mRNA expression, microRNA expression, DNA methylation, and protein activity, from twelve different cancer types. The membership of many of these signatures points to particular biological mechanisms related to cancer progression, suggesting that they represent important attributes of cancer in need of being elucidated for potential applications in diagnostic, prognostic and therapeutic products applicable to multiple cancer types.
1902.01481
Constantinos Siettos
Katiana Kontolati and Constantinos Siettos
Numerical analysis of a mechanotransduction dynamical model reveals homoclinic bifurcations of extracellular matrix mediated oscillations of the mesenchymal stem cell fate
null
International Journal of Non-Linear Mechanics, 113, 146-157, 2019
10.1016/j.ijnonlinmec.2019.04.001
null
q-bio.CB math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We perform one and two-parameter numerical bifurcation analysis of a mechanotransduction model approximating the dynamics of mesenchymal stem cell differentiation into neurons, adipocytes, myocytes and osteoblasts. For our analysis, we use as bifurcation parameters the stiffness of the extracellular matrix and parameters linked with the positive feedback mechanisms that up-regulate the production of the YAP/TAZ transcriptional regulators (TRs) and the cell adhesion area. Our analysis reveals a rich nonlinear behaviour of the cell differentiation including regimes of hysteresis and multistability, stable oscillations of the effective adhesion area, the YAP/TAZ TRs and the PPAR$\gamma$ receptors associated with the adipogenic fate, as well as homoclinic bifurcations that interrupt relatively high-amplitude oscillations abruptly. The two-parameter bifurcation analysis of the Andronov-Hopf points that give birth to the oscillating patterns predicts their existence for soft extracellular substrates ($<1kPa$), a regime that favours the neurogenic and the adipogenic cell fate. Furthermore, in these regimes, the analysis reveals the presence of homoclinic bifurcations that result in the sudden loss of the stable oscillations of the cell-substrate adhesion towards weaker adhesion and high expression levels of the gene encoding Tubulin beta-3 chain, thus favouring the phase transition from the adipogenic to the neurogenic fate.
[ { "created": "Mon, 4 Feb 2019 22:22:11 GMT", "version": "v1" } ]
2023-03-16
[ [ "Kontolati", "Katiana", "" ], [ "Siettos", "Constantinos", "" ] ]
We perform one and two-parameter numerical bifurcation analysis of a mechanotransduction model approximating the dynamics of mesenchymal stem cell differentiation into neurons, adipocytes, myocytes and osteoblasts. For our analysis, we use as bifurcation parameters the stiffness of the extracellular matrix and parameters linked with the positive feedback mechanisms that up-regulate the production of the YAP/TAZ transcriptional regulators (TRs) and the cell adhesion area. Our analysis reveals a rich nonlinear behaviour of the cell differentiation including regimes of hysteresis and multistability, stable oscillations of the effective adhesion area, the YAP/TAZ TRs and the PPAR$\gamma$ receptors associated with the adipogenic fate, as well as homoclinic bifurcations that interrupt relatively high-amplitude oscillations abruptly. The two-parameter bifurcation analysis of the Andronov-Hopf points that give birth to the oscillating patterns predicts their existence for soft extracellular substrates ($<1kPa$), a regime that favours the neurogenic and the adipogenic cell fate. Furthermore, in these regimes, the analysis reveals the presence of homoclinic bifurcations that result in the sudden loss of the stable oscillations of the cell-substrate adhesion towards weaker adhesion and high expression levels of the gene encoding Tubulin beta-3 chain, thus favouring the phase transition from the adipogenic to the neurogenic fate.
0904.0506
Zhou Tianshou
Jiajun Zhang, Zhanjiang Yuan, Tianshou Zhou
Synchronization and clustering of synthetic genetic networks: A role for cis-regulatory modules
30 pages, 8 figures
Phys. Rev. E 79, 041903 (2009)
10.1103/PhysRevE.79.041903
null
q-bio.MN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The effect of signal integration through cis-regulatory modules (CRMs) on the synchronization and clustering of populations of two-component genetic oscillators coupled by quorum sensing is investigated in detail. We find that the CRMs play an important role in achieving synchronization and clustering. To this end, we investigate six possible cis-regulatory input functions (CRIFs) with AND, OR, ANDN, ORN, XOR, and EQU types of responses in two possible kinds of cell-to-cell communication: activator-regulated communication (i.e., the autoinducer regulates the activator) and repressor-regulated communication (i.e., the autoinducer regulates the repressor). Both theoretical analysis and numerical simulation show that different CRMs drive fundamentally different cellular patterns, such as complete synchronization, various cluster-balanced states, and several cluster-nonbalanced states.
[ { "created": "Fri, 3 Apr 2009 03:44:12 GMT", "version": "v1" } ]
2009-04-06
[ [ "Zhang", "Jiajun", "" ], [ "Yuan", "Zhanjiang", "" ], [ "Zhou", "Tianshou", "" ] ]
The effect of signal integration through cis-regulatory modules (CRMs) on the synchronization and clustering of populations of two-component genetic oscillators coupled by quorum sensing is investigated in detail. We find that the CRMs play an important role in achieving synchronization and clustering. To this end, we investigate six possible cis-regulatory input functions (CRIFs) with AND, OR, ANDN, ORN, XOR, and EQU types of responses in two possible kinds of cell-to-cell communication: activator-regulated communication (i.e., the autoinducer regulates the activator) and repressor-regulated communication (i.e., the autoinducer regulates the repressor). Both theoretical analysis and numerical simulation show that different CRMs drive fundamentally different cellular patterns, such as complete synchronization, various cluster-balanced states, and several cluster-nonbalanced states.
1607.06980
Siddharth Mehrotra
Siddharth Mehrotra, Anuj Shukla, Dipanjan Roy
Neurophysiological Investigation of Context Modulation based on Musical Stimulus
null
International Conference On Music Perception And Cognition. San Francisco, CA: ICMPC, 2016. 243-246
10.13140/RG.2.1.3091.3524
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Numerous studies suggest that music may truly be the language of emotions. Music seems to have an almost willful, evasive quality, defying simple explanation, and a better understanding requires deeper neurophysiological investigation. The current study makes an attempt in that direction by exploring the effect of context on music perception. To this end, we measured Galvanic Skin Responses (GSR) and self-reported emotion from 18 participants while they listened to different Ragas (musical stimuli) conveying different Rasas (emotional expressions) in different contexts (Neutral, Pleasant, and Unpleasant). IAPS pictures were used to induce the emotional context in participants. Our results suggest that context can modulate the emotional response in music perception, but only on a short time scale. Interestingly, by combining GSR and self-reports we demonstrate that this effect gradually vanishes over time, showing emotional adaptation irrespective of context. The overall findings suggest that context effects on music perception are transitory in nature and saturate on longer time scales.
[ { "created": "Sat, 23 Jul 2016 22:55:00 GMT", "version": "v1" } ]
2016-07-26
[ [ "Mehrotra", "Siddharth", "" ], [ "Shukla", "Anuj", "" ], [ "Roy", "Dipanjan", "" ] ]
Numerous studies suggest that music may truly be the language of emotions. Music seems to have an almost willful, evasive quality, defying simple explanation, and a better understanding requires deeper neurophysiological investigation. The current study makes an attempt in that direction by exploring the effect of context on music perception. To this end, we measured Galvanic Skin Responses (GSR) and self-reported emotion from 18 participants while they listened to different Ragas (musical stimuli) conveying different Rasas (emotional expressions) in different contexts (Neutral, Pleasant, and Unpleasant). IAPS pictures were used to induce the emotional context in participants. Our results suggest that context can modulate the emotional response in music perception, but only on a short time scale. Interestingly, by combining GSR and self-reports we demonstrate that this effect gradually vanishes over time, showing emotional adaptation irrespective of context. The overall findings suggest that context effects on music perception are transitory in nature and saturate on longer time scales.
2212.09705
Vincent Painchaud
Vincent Painchaud, Patrick Desrosiers and Nicolas Doyon
The determining role of covariances in large networks of stochastic neurons
33 pages, 9 figures
null
null
null
q-bio.NC math.DS physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
Biological neural networks are notoriously hard to model due to their stochastic behavior and high dimensionality. We tackle this problem by constructing a dynamical model of both the expectations and covariances of the fractions of active and refractory neurons in the network's populations. We do so by describing the evolution of the states of individual neurons with a continuous-time Markov chain, from which we formally derive a low-dimensional dynamical system. This is done by solving a moment closure problem in a way that is compatible with the nonlinearity and boundedness of the activation function. Our dynamical system captures the behavior of the high-dimensional stochastic model even in cases where the mean-field approximation fails to do so. Taking into account the second-order moments modifies the solutions that would be obtained with the mean-field approximation, and can lead to the appearance or disappearance of fixed points and limit cycles. We moreover perform numerical experiments where the mean-field approximation leads to periodically oscillating solutions, while the solutions of the second-order model can be interpreted as an average taken over many realizations of the stochastic model. Altogether, our results highlight the importance of including higher moments when studying stochastic networks and deepen our understanding of correlated neuronal activity.
[ { "created": "Mon, 19 Dec 2022 18:34:06 GMT", "version": "v1" }, { "created": "Mon, 17 Apr 2023 03:49:02 GMT", "version": "v2" }, { "created": "Mon, 20 Nov 2023 16:35:13 GMT", "version": "v3" } ]
2023-11-21
[ [ "Painchaud", "Vincent", "" ], [ "Desrosiers", "Patrick", "" ], [ "Doyon", "Nicolas", "" ] ]
Biological neural networks are notoriously hard to model due to their stochastic behavior and high dimensionality. We tackle this problem by constructing a dynamical model of both the expectations and covariances of the fractions of active and refractory neurons in the network's populations. We do so by describing the evolution of the states of individual neurons with a continuous-time Markov chain, from which we formally derive a low-dimensional dynamical system. This is done by solving a moment closure problem in a way that is compatible with the nonlinearity and boundedness of the activation function. Our dynamical system captures the behavior of the high-dimensional stochastic model even in cases where the mean-field approximation fails to do so. Taking into account the second-order moments modifies the solutions that would be obtained with the mean-field approximation, and can lead to the appearance or disappearance of fixed points and limit cycles. We moreover perform numerical experiments where the mean-field approximation leads to periodically oscillating solutions, while the solutions of the second-order model can be interpreted as an average taken over many realizations of the stochastic model. Altogether, our results highlight the importance of including higher moments when studying stochastic networks and deepen our understanding of correlated neuronal activity.
1408.4912
Steffen Rulands Dr
Steffen Rulands, David Jahn, Erwin Frey
Specialization and Bet Hedging in Heterogeneous Populations
null
Phys. Rev. Lett. 113, 108102 (2014)
10.1103/PhysRevLett.113.108102
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Phenotypic heterogeneity is a strategy commonly used by bacteria to rapidly adapt to changing environmental conditions. Here, we study the interplay between phenotypic heterogeneity and genetic diversity in spatially extended populations. By analyzing the spatio-temporal dynamics, we show that the level of mobility and the type of competition qualitatively influence the persistence of phenotypic heterogeneity. While direct competition generally promotes persistence of phenotypic heterogeneity, specialization dominates in models with indirect competition irrespective of the degree of mobility.
[ { "created": "Thu, 21 Aug 2014 08:23:37 GMT", "version": "v1" }, { "created": "Thu, 4 Sep 2014 15:30:39 GMT", "version": "v2" } ]
2014-09-05
[ [ "Rulands", "Steffen", "" ], [ "Jahn", "David", "" ], [ "Frey", "Erwin", "" ] ]
Phenotypic heterogeneity is a strategy commonly used by bacteria to rapidly adapt to changing environmental conditions. Here, we study the interplay between phenotypic heterogeneity and genetic diversity in spatially extended populations. By analyzing the spatio-temporal dynamics, we show that the level of mobility and the type of competition qualitatively influence the persistence of phenotypic heterogeneity. While direct competition generally promotes persistence of phenotypic heterogeneity, specialization dominates in models with indirect competition irrespective of the degree of mobility.
1510.04306
Corey S. O'Hern
Jennifer C. Gaines, W. Wendell Smith, Lynne Regan, and Corey S. O'Hern
Random close packing in protein cores
5 pages, 4 figures
Phys. Rev. E 93, 032415 (2016)
10.1103/PhysRevE.93.032415
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Shortly after the determination of the first protein x-ray crystal structures, researchers analyzed their cores and reported packing fractions $\phi \approx 0.75$, a value similar to that of close-packed equal-sized spheres. A limitation of these analyses was the use of `extended atom' models, rather than the more physically accurate `explicit hydrogen' model. The validity of using the explicit hydrogen model is demonstrated by its ability to predict the side-chain dihedral angle distributions observed in proteins. We employ the explicit hydrogen model to calculate the packing fraction of the cores of over $200$ high-resolution protein structures. We find that these protein cores have $\phi \approx 0.55$, which is comparable to the random close packing of non-spherical particles. This result provides a deeper understanding of the physical basis of protein structure that will enable predictions of the effects of amino acid mutations and the design of new functional proteins.
[ { "created": "Wed, 14 Oct 2015 20:46:03 GMT", "version": "v1" } ]
2016-04-06
[ [ "Gaines", "Jennifer C.", "" ], [ "Smith", "W. Wendell", "" ], [ "Regan", "Lynne", "" ], [ "O'Hern", "Corey S.", "" ] ]
Shortly after the determination of the first protein x-ray crystal structures, researchers analyzed their cores and reported packing fractions $\phi \approx 0.75$, a value similar to that of close-packed equal-sized spheres. A limitation of these analyses was the use of `extended atom' models, rather than the more physically accurate `explicit hydrogen' model. The validity of using the explicit hydrogen model is demonstrated by its ability to predict the side-chain dihedral angle distributions observed in proteins. We employ the explicit hydrogen model to calculate the packing fraction of the cores of over $200$ high-resolution protein structures. We find that these protein cores have $\phi \approx 0.55$, which is comparable to the random close packing of non-spherical particles. This result provides a deeper understanding of the physical basis of protein structure that will enable predictions of the effects of amino acid mutations and the design of new functional proteins.
1106.6320
Joel Miller
Joel C. Miller and Anja C. Slim and Erik M. Volz
Edge-Based Compartmental Modeling for Infectious Disease Spread Part I: An Overview
null
J. R. Soc. Interface (2012) vol. 9 no. 70 890-906
10.1098/rsif.2011.0403
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The primary tool for predicting infectious disease spread and intervention effectiveness is the mass action Susceptible-Infected-Recovered model of Kermack and McKendrick. Its usefulness derives largely from its conceptual and mathematical simplicity; however, it incorrectly assumes all individuals have the same contact rate and contacts are fleeting. This paper is the first of three investigating edge-based compartmental modeling, a technique eliminating these assumptions. In this paper, we derive simple ordinary differential equation models capturing social heterogeneity (heterogeneous contact rates) while explicitly considering the impact of contact duration. We introduce a graphical interpretation allowing for easy derivation and communication of the model. This paper focuses on the technique and how to apply it in different contexts. The companion papers investigate choosing the appropriate level of complexity for a model and how to apply edge-based compartmental modeling to populations with various sub-structures.
[ { "created": "Thu, 30 Jun 2011 18:10:31 GMT", "version": "v1" } ]
2015-09-03
[ [ "Miller", "Joel C.", "" ], [ "Slim", "Anja C.", "" ], [ "Volz", "Erik M.", "" ] ]
The primary tool for predicting infectious disease spread and intervention effectiveness is the mass action Susceptible-Infected-Recovered model of Kermack and McKendrick. Its usefulness derives largely from its conceptual and mathematical simplicity; however, it incorrectly assumes all individuals have the same contact rate and contacts are fleeting. This paper is the first of three investigating edge-based compartmental modeling, a technique eliminating these assumptions. In this paper, we derive simple ordinary differential equation models capturing social heterogeneity (heterogeneous contact rates) while explicitly considering the impact of contact duration. We introduce a graphical interpretation allowing for easy derivation and communication of the model. This paper focuses on the technique and how to apply it in different contexts. The companion papers investigate choosing the appropriate level of complexity for a model and how to apply edge-based compartmental modeling to populations with various sub-structures.
0804.3128
Filippo Menolascina
Filippo Menolascina, Vitoantonio Bevilacqua, Caterina Ciminelli, Stefania Tommasi and Angelo Paradiso
Developing a Theoretical Framework for Optofluidic Device Designing for System Identification in Systems Biology: the EGFR Study Case
null
null
null
null
q-bio.OT q-bio.MN
http://creativecommons.org/licenses/by/3.0/
Identification of the dynamics underlying biochemical pathways of interest in oncology is a primary goal in current systems biology. Understanding the structures and interactions that govern the evolution of such systems is believed to be a cornerstone of this research. Systems theory and system identification theory are primary resources for this task, since they both provide a self-consistent framework for modelling and manipulating models of dynamical systems that are best suited for the problem under investigation. We address herein the issue of obtaining an informative dataset ZN to be used as a starting point for the identification of EGFR pathway dynamics. In order to match experimental identifiability criteria, we propose a theoretical framework for input stimulus design based on the dynamical properties of the system under investigation. A feasible optofluidic device has been designed on the basis of the spectral properties of the driving inputs that maximize information content, following the theoretical studies.
[ { "created": "Sat, 19 Apr 2008 07:21:07 GMT", "version": "v1" } ]
2009-09-29
[ [ "Menolascina", "Filippo", "" ], [ "Bevilacqua", "Vitoantonio", "" ], [ "Ciminelli", "Caterina", "" ], [ "Tommasi", "Stefania", "" ], [ "Paradiso", "Angelo", "" ] ]
Identification of the dynamics underlying biochemical pathways of interest in oncology is a primary goal in current systems biology. Understanding the structures and interactions that govern the evolution of such systems is believed to be a cornerstone of this research. Systems theory and system identification theory are primary resources for this task, since they both provide a self-consistent framework for modelling and manipulating models of dynamical systems that are best suited for the problem under investigation. We address herein the issue of obtaining an informative dataset ZN to be used as a starting point for the identification of EGFR pathway dynamics. In order to match experimental identifiability criteria, we propose a theoretical framework for input stimulus design based on the dynamical properties of the system under investigation. A feasible optofluidic device has been designed on the basis of the spectral properties of the driving inputs that maximize information content, following the theoretical studies.
2210.15549
Nicolas Privault
Nicolas Privault and Mich\`ele Thieullen
Closed-form modeling of neuronal spike train statistics using multivariate Hawkes cumulants
null
null
10.1103/PhysRevE.106.054410
null
q-bio.NC math.PR math.ST stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We derive exact analytical expressions for the cumulants of all orders of neuronal membrane potentials driven by spike trains in a multivariate Hawkes process model with excitation and inhibition. Such expressions can be used for the prediction and sensitivity analysis of the statistical behavior of the model over time, and to estimate the probability densities of neuronal membrane potentials using Gram-Charlier expansions. Our results are shown to provide a better alternative to Monte Carlo estimates via stochastic simulations, and computer codes based on combinatorial recursions are included.
[ { "created": "Thu, 27 Oct 2022 15:40:46 GMT", "version": "v1" } ]
2022-11-30
[ [ "Privault", "Nicolas", "" ], [ "Thieullen", "Michèle", "" ] ]
We derive exact analytical expressions for the cumulants of all orders of neuronal membrane potentials driven by spike trains in a multivariate Hawkes process model with excitation and inhibition. Such expressions can be used for the prediction and sensitivity analysis of the statistical behavior of the model over time, and to estimate the probability densities of neuronal membrane potentials using Gram-Charlier expansions. Our results are shown to provide a better alternative to Monte Carlo estimates via stochastic simulations, and computer codes based on combinatorial recursions are included.
2003.14152
Malay Banerjee
Malay Banerjee, Alexey Tokarev, Vitaly Volpert
Immuno-epidemiological model of two-stage epidemic growth
12 pages, 6 figures
null
null
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Epidemiological data on seasonal influenza show that the growth rate of the number of infected individuals can increase, passing from one exponential growth rate to another with a larger exponent. Such behavior is not described by conventional epidemiological models. In this work, an immuno-epidemiological model is proposed to describe this two-stage growth. It takes into account that the growth in the number of infected individuals increases the initial viral load and provides a passage from the first stage of the epidemic, where only people with a weak immune response are infected, to the second stage, where people with a strong immune response are also infected. This scenario may be viewed as an increase in the effective number of susceptibles, which increases the effective growth rate of the infected.
[ { "created": "Tue, 31 Mar 2020 12:38:11 GMT", "version": "v1" } ]
2020-04-01
[ [ "Banerjee", "Malay", "" ], [ "Tokarev", "Alexey", "" ], [ "Volpert", "Vitaly", "" ] ]
Epidemiological data on seasonal influenza show that the growth rate of the number of infected individuals can increase, passing from one exponential growth rate to another with a larger exponent. Such behavior is not described by conventional epidemiological models. In this work, an immuno-epidemiological model is proposed to describe this two-stage growth. It takes into account that the growth in the number of infected individuals increases the initial viral load and provides a passage from the first stage of the epidemic, where only people with a weak immune response are infected, to the second stage, where people with a strong immune response are also infected. This scenario may be viewed as an increase in the effective number of susceptibles, which increases the effective growth rate of the infected.
1509.02958
Blake Stacey
Blake C. Stacey
Multiscale Structure in Eco-Evolutionary Dynamics
PhD thesis, 274 pages. Includes and updates material from arXiv:1110.3845
null
null
null
q-bio.PE cond-mat.stat-mech nlin.CG quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a complex system, the individual components are neither so tightly coupled or correlated that they can all be treated as a single unit, nor so uncorrelated that they can be approximated as independent entities. Instead, patterns of interdependency lead to structure at multiple scales of organization. Evolution excels at producing such complex structures. In turn, the existence of these complex interrelationships within a biological system affects the evolutionary dynamics of that system. I present a mathematical formalism for multiscale structure, grounded in information theory, which makes these intuitions quantitative, and I show how dynamics defined in terms of population genetics or evolutionary game theory can lead to multiscale organization. For complex systems, "more is different," and I address this from several perspectives. Spatial host--consumer models demonstrate the importance of the structures which can arise due to dynamical pattern formation. Evolutionary game theory reveals the novel effects which can result from multiplayer games, nonlinear payoffs and ecological stochasticity. Replicator dynamics in an environment with mesoscale structure relates to generalized conditionalization rules in probability theory. The idea of natural selection "acting at multiple levels" has been mathematized in a variety of ways, not all of which are equivalent. We will face down the confusion, using the experience developed over the course of this thesis to clarify the situation.
[ { "created": "Wed, 9 Sep 2015 21:27:58 GMT", "version": "v1" } ]
2015-09-15
[ [ "Stacey", "Blake C.", "" ] ]
In a complex system, the individual components are neither so tightly coupled or correlated that they can all be treated as a single unit, nor so uncorrelated that they can be approximated as independent entities. Instead, patterns of interdependency lead to structure at multiple scales of organization. Evolution excels at producing such complex structures. In turn, the existence of these complex interrelationships within a biological system affects the evolutionary dynamics of that system. I present a mathematical formalism for multiscale structure, grounded in information theory, which makes these intuitions quantitative, and I show how dynamics defined in terms of population genetics or evolutionary game theory can lead to multiscale organization. For complex systems, "more is different," and I address this from several perspectives. Spatial host--consumer models demonstrate the importance of the structures which can arise due to dynamical pattern formation. Evolutionary game theory reveals the novel effects which can result from multiplayer games, nonlinear payoffs and ecological stochasticity. Replicator dynamics in an environment with mesoscale structure relates to generalized conditionalization rules in probability theory. The idea of natural selection "acting at multiple levels" has been mathematized in a variety of ways, not all of which are equivalent. We will face down the confusion, using the experience developed over the course of this thesis to clarify the situation.
1407.6675
Yi-Kuo Yu
Gelio Alves and Yi-Kuo Yu
Mass spectrometry based protein identification with accurate statistical significance assignment
23 pages, 15 figures
null
null
null
q-bio.QM
http://creativecommons.org/licenses/publicdomain/
Motivation: Assigning statistical significance accurately has become increasingly important as metadata of many types, often assembled in hierarchies, are constructed and combined for further biological analyses. Statistical inaccuracy of metadata at any level may propagate to downstream analyses, undermining the validity of the scientific conclusions thus drawn. From the perspective of mass spectrometry based proteomics, even though accurate statistics for peptide identification can now be achieved, accurate protein-level statistics remain challenging. Results: We have constructed a protein ID method that combines the peptide evidence of a candidate protein based on a rigorous formula derived earlier; in this formula the database $P$-value of every peptide is weighted, prior to the final combination, according to the number of proteins it maps to. We have also shown that this protein ID method provides an accurate protein-level $E$-value, eliminating the need for empirical post-processing methods for type-I error control. Using a known protein mixture, we find that this protein ID method, when combined with the Soric formula, yields accurate values for the proportion of false discoveries. In terms of retrieval efficacy, the results from our method are comparable with those of the other methods tested. Availability: The source code, implemented in C++ on a Linux system, is available for download at ftp://ftp.ncbi.nlm.nih.gov/pub/qmbp/qmbp_ms/RAId/RAId_Linux_64Bit
[ { "created": "Thu, 24 Jul 2014 18:14:51 GMT", "version": "v1" } ]
2014-07-25
[ [ "Alves", "Gelio", "" ], [ "Yu", "Yi-Kuo", "" ] ]
Motivation: Assigning statistical significance accurately has become increasingly important as metadata of many types, often assembled in hierarchies, are constructed and combined for further biological analyses. Statistical inaccuracy of metadata at any level may propagate to downstream analyses, undermining the validity of the scientific conclusions thus drawn. From the perspective of mass spectrometry based proteomics, even though accurate statistics for peptide identification can now be achieved, accurate protein-level statistics remain challenging. Results: We have constructed a protein ID method that combines the peptide evidence of a candidate protein based on a rigorous formula derived earlier; in this formula the database $P$-value of every peptide is weighted, prior to the final combination, according to the number of proteins it maps to. We have also shown that this protein ID method provides an accurate protein-level $E$-value, eliminating the need for empirical post-processing methods for type-I error control. Using a known protein mixture, we find that this protein ID method, when combined with the Soric formula, yields accurate values for the proportion of false discoveries. In terms of retrieval efficacy, the results from our method are comparable with those of the other methods tested. Availability: The source code, implemented in C++ on a Linux system, is available for download at ftp://ftp.ncbi.nlm.nih.gov/pub/qmbp/qmbp_ms/RAId/RAId_Linux_64Bit
q-bio/0605012
Meng-Ru Li
Meng-Ru Li, Henry Greenside
Stable Propagation of a Burst Through a One-Dimensional Homogeneous Excitatory Chain Model of Songbird Nucleus HVC
13 pages, 6 figures
null
10.1103/PhysRevE.74.011918
null
q-bio.NC
null
We demonstrate numerically that a brief burst consisting of two to six spikes can propagate in a stable manner through a one-dimensional homogeneous feedforward chain of non-bursting neurons with excitatory synaptic connections. Our results are obtained for two kinds of neuronal models, leaky integrate-and-fire (LIF) neurons and Hodgkin-Huxley (HH) neurons with five conductances. Over a range of parameters such as the maximum synaptic conductance, both kinds of chains are found to have multiple attractors of propagating bursts, with each attractor being distinguished by the number of spikes and total duration of the propagating burst. These results make plausible the hypothesis that sparse precisely-timed sequential bursts observed in projection neurons of nucleus HVC of a singing zebra finch are intrinsic and causally related.
[ { "created": "Tue, 9 May 2006 02:53:30 GMT", "version": "v1" } ]
2009-11-13
[ [ "Li", "Meng-Ru", "" ], [ "Greenside", "Henry", "" ] ]
We demonstrate numerically that a brief burst consisting of two to six spikes can propagate in a stable manner through a one-dimensional homogeneous feedforward chain of non-bursting neurons with excitatory synaptic connections. Our results are obtained for two kinds of neuronal models, leaky integrate-and-fire (LIF) neurons and Hodgkin-Huxley (HH) neurons with five conductances. Over a range of parameters such as the maximum synaptic conductance, both kinds of chains are found to have multiple attractors of propagating bursts, with each attractor being distinguished by the number of spikes and total duration of the propagating burst. These results make plausible the hypothesis that sparse precisely-timed sequential bursts observed in projection neurons of nucleus HVC of a singing zebra finch are intrinsic and causally related.
2302.00652
Xingang Wang Professor
Ya Wang, Liang Wang, Huawei Fan, Jun Ma, Hui Cao, and Xingang Wang
Breathing cluster in complex neuron-astrocyte networks
14 pages, 6 figures
null
null
null
q-bio.NC nlin.PS
http://creativecommons.org/licenses/by-nc-sa/4.0/
Brain activity is characterized by spatially distributed neural clusters of coherent firing and a spontaneous switching of the clusters between synchrony and asynchrony states. Evidence from {\it in vivo} experiments suggests that astrocytes, a type of glial cell previously regarded as providing only structural and metabolic support to neurons, participate actively in brain functions and play a crucial role in regulating neural firing activity, yet the mechanism remains unknown. Introducing the astrocyte as a reservoir of the glutamate released from neuronal synapses, here we propose a model of complex neuron-astrocyte networks and employ it to explore the roles of astrocytes in regulating the synchronization behavior of networked neurons. It is found that a fraction of the neurons on the network can be synchronized as a cluster, while the remaining neurons stay desynchronized. Moreover, during the course of the network evolution, the cluster switches intermittently between the synchrony and asynchrony states, hence the phenomenon of a ``breathing cluster". By the method of symmetry-based analysis, we conduct a theoretical investigation of the stability of the cluster and the mechanism generating the breathing activity. It is revealed that the contents of the cluster are determined by the network symmetry and that the breathing activity is due to the interplay between the neural network and the astrocyte. The breathing phenomenon is demonstrated in network models of different structures and neural dynamics. These studies give insight into the cellular mechanism of astrocytes in regulating neural activity, and shed light on the spontaneous state switching of the neocortex.
[ { "created": "Thu, 26 Jan 2023 10:23:08 GMT", "version": "v1" } ]
2023-02-02
[ [ "Wang", "Ya", "" ], [ "Wang", "Liang", "" ], [ "Fan", "Huawei", "" ], [ "Ma", "Jun", "" ], [ "Cao", "Hui", "" ], [ "Wang", "Xingang", "" ] ]
Brain activity is characterized by spatially distributed neural clusters of coherent firing and a spontaneous switching of the clusters between synchrony and asynchrony states. Evidence from {\it in vivo} experiments suggests that astrocytes, a type of glial cell previously regarded as providing only structural and metabolic support to neurons, participate actively in brain functions and play a crucial role in regulating neural firing activity, yet the mechanism remains unknown. Introducing the astrocyte as a reservoir of the glutamate released from neuronal synapses, here we propose a model of complex neuron-astrocyte networks and employ it to explore the roles of astrocytes in regulating the synchronization behavior of networked neurons. It is found that a fraction of the neurons on the network can be synchronized as a cluster, while the remaining neurons stay desynchronized. Moreover, during the course of the network evolution, the cluster switches intermittently between the synchrony and asynchrony states, hence the phenomenon of a ``breathing cluster". By the method of symmetry-based analysis, we conduct a theoretical investigation of the stability of the cluster and the mechanism generating the breathing activity. It is revealed that the contents of the cluster are determined by the network symmetry and that the breathing activity is due to the interplay between the neural network and the astrocyte. The breathing phenomenon is demonstrated in network models of different structures and neural dynamics. These studies give insight into the cellular mechanism of astrocytes in regulating neural activity, and shed light on the spontaneous state switching of the neocortex.