Dataset schema (column, type, observed length range or number of classes):

  id              string   length 9 to 13
  submitter       string   length 4 to 48
  authors         string   length 4 to 9.62k
  title           string   length 4 to 343
  comments        string   length 2 to 480
  journal-ref     string   length 9 to 309
  doi             string   length 12 to 138
  report-no       string   277 distinct values
  categories      string   length 8 to 87
  license         string   9 distinct values
  orig_abstract   string   length 27 to 3.76k
  versions        list     length 1 to 15
  update_date     string   length 10 (fixed)
  authors_parsed  list     length 1 to 147
  abstract        string   length 24 to 3.75k
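Each record below carries these fields in this order. A minimal loading sketch in Python, assuming the rows are stored one JSON object per line; the file name arxiv_qbio.jsonl is a placeholder, not part of the dataset.

```python
import json

# Minimal loader, assuming one JSON object per line with the fields of
# the schema above; the file name is a placeholder.
records = []
with open("arxiv_qbio.jsonl") as fh:
    for line in fh:
        records.append(json.loads(line))

for rec in records[:3]:
    # "versions" is a list of {"created": ..., "version": ...} dicts and
    # "authors_parsed" a list of [last, first, suffix] triples.
    print(rec["id"], rec["title"][:60],
          len(rec["versions"]), rec["authors_parsed"][0])
```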
q-bio/0410032
Paul van der Schoot
P. van der Schoot and R. Bruinsma
Electrostatics and the Assembly of an RNA Virus
41 pages, 4 figures
null
10.1103/PhysRevE.71.061928
null
q-bio.BM q-bio.SC
null
Electrostatic interactions play a central role in the assembly of single-stranded RNA viruses. Under physiological conditions of salinity and acidity, virus capsid assembly requires the presence of genomic material that is oppositely charged to the core proteins. In this paper we apply basic polymer physics and statistical mechanics methods to the self-assembly of a synthetic virus encapsidating generic polyelectrolyte molecules. We find that (i) the mean concentration of the encapsidated polyelectrolyte material depends on the surface charge density, the radius of the capsid, and the linear charge density of the polymer, but not on the salt concentration or the Kuhn length, and (ii) the total charge of the capsid interior is equal in magnitude but opposite in sign to that of the empty capsid, a form of charge reversal. Unlike natural viruses, synthetic viruses are predicted not to be under an osmotic swelling pressure. The design condition that self-assembly only produces filled capsids is shown to coincide with the condition that the capsid surface charge exceeds the desorption threshold of polymer surface adsorption. We compare our results with studies on the self-assembly of both synthetic and natural viruses.
[ { "created": "Wed, 27 Oct 2004 20:02:04 GMT", "version": "v1" } ]
2009-11-10
[ [ "van der Schoot", "P.", "" ], [ "Bruinsma", "R.", "" ] ]
Electrostatic interactions play a central role in the assembly of single-stranded RNA viruses. Under physiological conditions of salinity and acidity, virus capsid assembly requires the presence of genomic material that is oppositely charged to the core proteins. In this paper we apply basic polymer physics and statistical mechanics methods to the self-assembly of a synthetic virus encapsidating generic polyelectrolyte molecules. We find that (i) the mean concentration of the encapsidated polyelectrolyte material depends on the surface charge density, the radius of the capsid, and the linear charge density of the polymer, but not on the salt concentration or the Kuhn length, and (ii) the total charge of the capsid interior is equal in magnitude but opposite in sign to that of the empty capsid, a form of charge reversal. Unlike natural viruses, synthetic viruses are predicted not to be under an osmotic swelling pressure. The design condition that self-assembly only produces filled capsids is shown to coincide with the condition that the capsid surface charge exceeds the desorption threshold of polymer surface adsorption. We compare our results with studies on the self-assembly of both synthetic and natural viruses.
2207.00584
Siyuan Shan
Vishal Athreya Baskaran, Jolene Ranek, Siyuan Shan, Natalie Stanley, Junier B. Oliva
Distribution-based Sketching of Single-Cell Samples
Accepted by ACM-BCB 2022
null
10.1145/3535508.3545539
null
q-bio.QM cs.LG
http://creativecommons.org/publicdomain/zero/1.0/
Modern high-throughput single-cell immune profiling technologies, such as flow and mass cytometry and single-cell RNA sequencing, can readily measure the expression of a large number of protein or gene features across the millions of cells in a multi-patient cohort. While bioinformatics approaches can be used to link immune cell heterogeneity to external variables of interest, such as clinical outcome or experimental label, they often struggle to accommodate such a large number of profiled cells. To ease this computational burden, a limited number of cells are typically \emph{sketched} or subsampled from each patient. However, existing sketching approaches fail to adequately subsample rare cells from rare cell-populations, or fail to preserve the true frequencies of particular immune cell-types. Here, we propose a novel sketching approach based on Kernel Herding that selects a limited subsample of all cells while preserving the underlying frequencies of immune cell-types. We tested our approach on three flow and mass cytometry datasets and on one single-cell RNA sequencing dataset and demonstrate that the sketched cells (1) more accurately represent the overall cellular landscape and (2) facilitate increased performance in downstream analysis tasks, such as classifying patients according to their clinical outcome. An implementation of sketching with Kernel Herding is publicly available at \url{https://github.com/vishalathreya/Set-Summarization}.
[ { "created": "Thu, 30 Jun 2022 19:43:06 GMT", "version": "v1" } ]
2022-07-05
[ [ "Baskaran", "Vishal Athreya", "" ], [ "Ranek", "Jolene", "" ], [ "Shan", "Siyuan", "" ], [ "Stanley", "Natalie", "" ], [ "Oliva", "Junier B.", "" ] ]
Modern high-throughput single-cell immune profiling technologies, such as flow and mass cytometry and single-cell RNA sequencing, can readily measure the expression of a large number of protein or gene features across the millions of cells in a multi-patient cohort. While bioinformatics approaches can be used to link immune cell heterogeneity to external variables of interest, such as clinical outcome or experimental label, they often struggle to accommodate such a large number of profiled cells. To ease this computational burden, a limited number of cells are typically \emph{sketched} or subsampled from each patient. However, existing sketching approaches fail to adequately subsample rare cells from rare cell-populations, or fail to preserve the true frequencies of particular immune cell-types. Here, we propose a novel sketching approach based on Kernel Herding that selects a limited subsample of all cells while preserving the underlying frequencies of immune cell-types. We tested our approach on three flow and mass cytometry datasets and on one single-cell RNA sequencing dataset and demonstrate that the sketched cells (1) more accurately represent the overall cellular landscape and (2) facilitate increased performance in downstream analysis tasks, such as classifying patients according to their clinical outcome. An implementation of sketching with Kernel Herding is publicly available at \url{https://github.com/vishalathreya/Set-Summarization}.
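The Kernel Herding selection this record describes can be sketched generically: greedily pick the point whose addition keeps the subsample's kernel mean embedding closest to the full sample's. The numpy sketch below is a minimal illustration with an RBF kernel on synthetic features, not the authors' Set-Summarization code linked above; the feature dimension and bandwidth are arbitrary.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel between rows of X and rows of Y.
    d2 = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * d2)

def kernel_herding(X, m, gamma=1.0):
    # Greedily pick m rows of X whose kernel mean embedding tracks the
    # full sample's mean embedding: at step t, maximize
    # mu(x) - (1/(t+1)) * sum_j k(x, x_j) over unchosen candidates x.
    K = rbf_kernel(X, X, gamma)
    mu = K.mean(axis=1)              # mean embedding evaluated at each point
    chosen, running = [], np.zeros(len(X))
    for t in range(m):
        scores = mu - running / (t + 1)
        scores[chosen] = -np.inf     # sample without replacement
        idx = int(np.argmax(scores))
        chosen.append(idx)
        running += K[:, idx]
    return np.array(chosen)

rng = np.random.default_rng(0)
cells = rng.normal(size=(1000, 5))   # stand-in for per-cell feature vectors
sketch_idx = kernel_herding(cells, m=100)
print(sketch_idx[:10])
```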
2403.03234
Yair Schiff
Yair Schiff, Chia-Hsiang Kao, Aaron Gokaslan, Tri Dao, Albert Gu, and Volodymyr Kuleshov
Caduceus: Bi-Directional Equivariant Long-Range DNA Sequence Modeling
ICML 2024; Code to reproduce our experiments is available at https://github.com/kuleshov-group/caduceus
null
null
null
q-bio.GN cs.LG
http://creativecommons.org/licenses/by/4.0/
Large-scale sequence modeling has sparked rapid advances that now extend into biology and genomics. However, modeling genomic sequences introduces challenges such as the need to model long-range token interactions, the effects of upstream and downstream regions of the genome, and the reverse complementarity (RC) of DNA. Here, we propose an architecture motivated by these challenges that builds off the long-range Mamba block, and extends it to a BiMamba component that supports bi-directionality, and to a MambaDNA block that additionally supports RC equivariance. We use MambaDNA as the basis of Caduceus, the first family of RC equivariant bi-directional long-range DNA language models, and we introduce pre-training and fine-tuning strategies that yield Caduceus DNA foundation models. Caduceus outperforms previous long-range models on downstream benchmarks; on a challenging long-range variant effect prediction task, Caduceus exceeds the performance of 10x larger models that do not leverage bi-directionality or equivariance.
[ { "created": "Tue, 5 Mar 2024 01:42:51 GMT", "version": "v1" }, { "created": "Wed, 5 Jun 2024 21:02:37 GMT", "version": "v2" } ]
2024-06-07
[ [ "Schiff", "Yair", "" ], [ "Kao", "Chia-Hsiang", "" ], [ "Gokaslan", "Aaron", "" ], [ "Dao", "Tri", "" ], [ "Gu", "Albert", "" ], [ "Kuleshov", "Volodymyr", "" ] ]
Large-scale sequence modeling has sparked rapid advances that now extend into biology and genomics. However, modeling genomic sequences introduces challenges such as the need to model long-range token interactions, the effects of upstream and downstream regions of the genome, and the reverse complementarity (RC) of DNA. Here, we propose an architecture motivated by these challenges that builds off the long-range Mamba block, and extends it to a BiMamba component that supports bi-directionality, and to a MambaDNA block that additionally supports RC equivariance. We use MambaDNA as the basis of Caduceus, the first family of RC equivariant bi-directional long-range DNA language models, and we introduce pre-training and fine-tuning strategies that yield Caduceus DNA foundation models. Caduceus outperforms previous long-range models on downstream benchmarks; on a challenging long-range variant effect prediction task, Caduceus exceeds the performance of 10x larger models that do not leverage bi-directionality or equivariance.
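Reverse-complement (RC) equivariance, the property Caduceus builds into its architecture, can be checked on toy code: a per-position model is RC-equivariant when feeding it the reverse complement of a sequence reverses its outputs. The numpy sketch below is not the Caduceus/MambaDNA architecture; it only demonstrates the property itself, using an arbitrary linear per-position scorer symmetrized over the RC action.

```python
import numpy as np

BASES = "ACGT"
COMP = [3, 2, 1, 0]          # A<->T, C<->G in channel order A, C, G, T

def one_hot(seq):
    x = np.zeros((len(seq), 4))
    for i, b in enumerate(seq):
        x[i, BASES.index(b)] = 1.0
    return x

def rc(x):
    # Reverse complement of a one-hot sequence: flip positions, swap channels.
    return x[::-1, COMP]

def backbone(x, W):
    # An arbitrary, deliberately non-equivariant per-position scorer.
    return x @ W

def rc_equivariant(x, W):
    # Symmetrize the backbone over the RC action, so that
    # f(rc(x)) == f(x)[::-1] holds by construction.
    return 0.5 * (backbone(x, W) + backbone(rc(x), W)[::-1])

rng = np.random.default_rng(0)
W = rng.normal(size=4)
x = one_hot("ACGGTAAC")
print(np.allclose(rc_equivariant(rc(x), W),
                  rc_equivariant(x, W)[::-1]))   # True
```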
1810.06844
Kristina Wicke
Mareike Fischer and Michelle Galla and Lina Herbst and Yangjing Long and Kristina Wicke
Classes of tree-based networks
45 pages, 26 figures
null
null
null
q-bio.PE math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, so-called tree-based phylogenetic networks have gained considerable interest in the literature, where a tree-based network is a network that can be constructed from a phylogenetic tree, called the base tree, by adding additional edges. The main aim of this manuscript is to provide some sufficient criteria for tree-basedness by reducing phylogenetic networks to related graph structures. While it is generally known that deciding whether a network is tree-based is NP-complete, one of these criteria, namely edge-basedness, can be verified in linear time. Surprisingly, the class of edge-based networks is closely related to a well-known family of graphs, namely the class of generalized series-parallel graphs, and we will explore this relationship in full detail. Additionally, we introduce further classes of tree-based networks and analyze their relationships.
[ { "created": "Tue, 16 Oct 2018 07:22:35 GMT", "version": "v1" }, { "created": "Wed, 17 Oct 2018 10:11:33 GMT", "version": "v2" }, { "created": "Fri, 16 Aug 2019 14:03:08 GMT", "version": "v3" }, { "created": "Wed, 27 Nov 2019 08:32:38 GMT", "version": "v4" } ]
2019-11-28
[ [ "Fischer", "Mareike", "" ], [ "Galla", "Michelle", "" ], [ "Herbst", "Lina", "" ], [ "Long", "Yangjing", "" ], [ "Wicke", "Kristina", "" ] ]
Recently, so-called tree-based phylogenetic networks have gained considerable interest in the literature, where a tree-based network is a network that can be constructed from a phylogenetic tree, called the base tree, by adding additional edges. The main aim of this manuscript is to provide some sufficient criteria for tree-basedness by reducing phylogenetic networks to related graph structures. While it is generally known that deciding whether a network is tree-based is NP-complete, one of these criteria, namely edge-basedness, can be verified in linear time. Surprisingly, the class of edge-based networks is closely related to a well-known family of graphs, namely the class of generalized series-parallel graphs, and we will explore this relationship in full detail. Additionally, we introduce further classes of tree-based networks and analyze their relationships.
1703.05755
Guillermo Abramson
Laila D. Kazimierski, Marcelo N. Kuperman, Horacio S. Wio and Guillermo Abramson
Waves of seed propagation induced by delayed animal dispersion
Accepted in Journal of Theoretical Biology
null
10.1016/j.jtbi.2017.09.030
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study a model of seed dispersal that considers the inclusion of an animal disperser moving diffusively, feeding on fruits and transporting the seeds, which are later deposited and capable of germination. The dynamics depends on several population parameters of growth, decay, harvesting, transport, digestion and germination. In particular, the deposition of transported seeds at places away from their collection sites produces a delay in the dynamics, whose effects are the focus of this work. Analytical and numerical solutions of different simplified scenarios show the existence of travelling waves. The effect of zoochory is apparent in the increase of the velocity of these waves. The results support the hypothesis of the relevance of animal-mediated seed dispersal when trying to understand the origin of the high rates of plant invasion observed in real systems.
[ { "created": "Thu, 16 Mar 2017 17:56:39 GMT", "version": "v1" }, { "created": "Mon, 2 Oct 2017 12:58:13 GMT", "version": "v2" } ]
2017-10-03
[ [ "Kazimierski", "Laila D.", "" ], [ "Kuperman", "Marcelo N.", "" ], [ "Wio", "Horacio S.", "" ], [ "Abramson", "Guillermo", "" ] ]
We study a model of seed dispersal that considers the inclusion of an animal disperser moving diffusively, feeding on fruits and transporting the seeds, which are later deposited and capable of germination. The dynamics depends on several population parameters of growth, decay, harvesting, transport, digestion and germination. In particular, the deposition of transported seeds at places away from their collection sites produces a delay in the dynamics, whose effects are the focus of this work. Analytical and numerical solutions of different simplified scenarios show the existence of travelling waves. The effect of zoochory is apparent in the increase of the velocity of these waves. The results support the hypothesis of the relevance of animal-mediated seed dispersal when trying to understand the origin of the high rates of plant invasion observed in real systems.
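The authors' model couples plant growth to a diffusing disperser with a deposition delay; reproducing it faithfully here is out of scope. As a much simpler illustration of the travelling-wave phenomenology the abstract describes, the sketch below integrates the textbook single-species Fisher-KPP front, whose speed approaches 2*sqrt(r*D); all parameters are arbitrary, and the disperser field and delay are omitted.

```python
import numpy as np

# Explicit integration of u_t = D u_xx + r u (1 - u); dt < dx^2/(2D).
D, r, dx, dt = 1.0, 1.0, 0.5, 0.05
x = np.arange(0.0, 400.0, dx)
u = (x < 10).astype(float)           # population confined to the left edge

times, positions = [], []
for step in range(3001):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    lap[0] = lap[-1] = 0.0           # crude boundary handling
    u += dt * (D * lap + r * u * (1 - u))
    if step % 500 == 0:
        times.append(step * dt)
        positions.append(x[np.argmax(u < 0.5)])   # front location

speed = np.polyfit(times[2:], positions[2:], 1)[0]
print(f"measured front speed {speed:.2f}, asymptotic value 2*sqrt(r*D) = 2.0")
```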
1912.03934
Lionel Gil
Dora Matzakos-Karvouniari, Bruno Cessac and L. Gil
Noise driven broadening of the neural synchronisation transition in stage II retinal waves
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Based on a biophysical model of retinal Starburst Amacrine Cells (SACs) \cite{karvouniari-gil-etal:19}, we analyse here the dynamics of retinal waves arising during visual system development. Waves are induced by spontaneous bursting of SACs and their coupling via acetylcholine. We show that, although the acetylcholine coupling intensity has been experimentally observed to change during development \cite{zheng-lee-etal:04}, SAC retinal waves can nevertheless stay in a regime with power law distributions, reminiscent of a critical regime. Thus, this regime occurs over a range of coupling parameters instead of at a single point, as in usual phase transitions. We explain this phenomenon by a coherence-resonance mechanism, where noise is responsible for the broadening of the critical coupling strength range.
[ { "created": "Mon, 9 Dec 2019 09:58:51 GMT", "version": "v1" } ]
2019-12-10
[ [ "Matzakos-Karvouniari", "Dora", "" ], [ "Cessac", "Bruno", "" ], [ "Gil", "L.", "" ] ]
Based on a biophysical model of retinal Starburst Amacrine Cells (SACs) \cite{karvouniari-gil-etal:19}, we analyse here the dynamics of retinal waves arising during visual system development. Waves are induced by spontaneous bursting of SACs and their coupling via acetylcholine. We show that, although the acetylcholine coupling intensity has been experimentally observed to change during development \cite{zheng-lee-etal:04}, SAC retinal waves can nevertheless stay in a regime with power law distributions, reminiscent of a critical regime. Thus, this regime occurs over a range of coupling parameters instead of at a single point, as in usual phase transitions. We explain this phenomenon by a coherence-resonance mechanism, where noise is responsible for the broadening of the critical coupling strength range.
1301.6931
Marco M\"oller
Marco M\"oller and Barbara Drossel
Scaling laws in critical random Boolean networks with general in- and out-degree distributions
null
null
10.1103/PhysRevE.87.052106
null
q-bio.MN cond-mat.stat-mech physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We evaluate analytically and numerically the size of the frozen core and various scaling laws for critical Boolean networks that have a power-law in- and/or out-degree distribution. For this purpose, we generalize an efficient method that has previously been used for conventional random Boolean networks and for networks with power-law in-degree distributions. With this generalization, we can also deal with power-law out-degree distributions. When the power-law exponent is between 2 and 3, the second moment of the distribution diverges with network size, and the scaling exponent of the nonfrozen nodes depends on the degree distribution exponent. Furthermore, the exponent depends also on the dependence of the cutoff of the degree distribution on the system size. Altogether, we obtain an impressive number of different scaling laws depending on the type of cutoff as well as on the exponents of the in- and out-degree distributions. We confirm our scaling arguments and analytical considerations by numerical investigations.
[ { "created": "Tue, 29 Jan 2013 14:32:55 GMT", "version": "v1" } ]
2015-06-12
[ [ "Möller", "Marco", "" ], [ "Drossel", "Barbara", "" ] ]
We evaluate analytically and numerically the size of the frozen core and various scaling laws for critical Boolean networks that have a power-law in- and/or out-degree distribution. For this purpose, we generalize an efficient method that has previously been used for conventional random Boolean networks and for networks with power-law in-degree distributions. With this generalization, we can also deal with power-law out-degree distributions. When the power-law exponent is between 2 and 3, the second moment of the distribution diverges with network size, and the scaling exponent of the nonfrozen nodes depends on the degree distribution exponent. Furthermore, the exponent depends also on the dependence of the cutoff of the degree distribution on the system size. Altogether, we obtain an impressive number of different scaling laws depending on the type of cutoff as well as on the exponents of the in- and out-degree distributions. We confirm our scaling arguments and analytical considerations by numerical investigations.
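A crude numerical counterpart to the frozen-core analysis in this record: simulate a random Boolean network and count nodes that never flip over a long window. The sketch below uses a fixed in-degree K = 2 with unbiased random functions (a standard critical ensemble) rather than the paper's power-law degree distributions, and a dynamical freeze estimate rather than the authors' analytical method.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 2000, 2                                   # K = 2, unbiased: critical

inputs = rng.integers(0, N, size=(N, K))         # in-neighbours of each node
tables = rng.integers(0, 2, size=(N, 2 ** K))    # random Boolean functions

def step(state):
    # Each node looks up its truth-table entry from its K inputs' states.
    idx = (state[inputs] * 2 ** np.arange(K)).sum(axis=1)
    return tables[np.arange(N), idx]

state = rng.integers(0, 2, size=N)
for _ in range(2000):                            # discard a transient
    state = step(state)

changed = np.zeros(N, dtype=bool)
for _ in range(2000):                            # watch for flips
    nxt = step(state)
    changed |= nxt != state
    state = nxt

print(f"dynamically frozen fraction ~ {np.mean(~changed):.3f}")
```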
2310.15194
Jie Ruan
Shiang Hu, Jie Ruan, Juan Hou, Pedro Antonio Valdes-Sosa, Zhao Lv
How do the resting EEG preprocessing states affect the outcomes of postprocessing?
null
null
null
null
q-bio.NC cs.HC eess.SP q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many artifact removal tools and pipelines have been developed to correct EEG recordings and uncover the values hidden in the waveforms. Without visual inspection by experts, it is easy to arrive at improper preprocessing states, such as insufficiently preprocessed EEG (IPE) and excessively preprocessed EEG (EPE). However, little is known about the impact of IPE or EPE on postprocessing in the frequency, spatial and temporal domains, particularly on spectral and functional connectivity (FC) analysis. Here, clean EEG (CE) was synthesized as the ground truth based on the New York head model and a multivariate autoregressive model. The IPE and the EPE were then simulated by injecting Gaussian noise and removing brain activity, respectively. The impact on postprocessing was quantified by the deviation of the IPE or EPE from the CE with respect to four temporal statistics, multichannel power, cross spectra, the dispersion of source imaging, and the properties of the scalp EEG network. Lastly, an association analysis was performed between the PaLOSi metric and the trends of the postprocessing outcomes as the preprocessing state evolved. This study sheds light on how postprocessing outcomes are affected by preprocessing states, and suggests that PaLOSi may be an effective quality metric.
[ { "created": "Sun, 22 Oct 2023 08:08:46 GMT", "version": "v1" }, { "created": "Tue, 12 Dec 2023 14:53:48 GMT", "version": "v2" } ]
2023-12-13
[ [ "Hu", "Shiang", "" ], [ "Ruan", "Jie", "" ], [ "Hou", "Juan", "" ], [ "Valdes-Sosa", "Pedro Antonio", "" ], [ "Lv", "Zhao", "" ] ]
Many artifact removal tools and pipelines have been developed to correct EEG recordings and uncover the values hidden in the waveforms. Without visual inspection by experts, it is easy to arrive at improper preprocessing states, such as insufficiently preprocessed EEG (IPE) and excessively preprocessed EEG (EPE). However, little is known about the impact of IPE or EPE on postprocessing in the frequency, spatial and temporal domains, particularly on spectral and functional connectivity (FC) analysis. Here, clean EEG (CE) was synthesized as the ground truth based on the New York head model and a multivariate autoregressive model. The IPE and the EPE were then simulated by injecting Gaussian noise and removing brain activity, respectively. The impact on postprocessing was quantified by the deviation of the IPE or EPE from the CE with respect to four temporal statistics, multichannel power, cross spectra, the dispersion of source imaging, and the properties of the scalp EEG network. Lastly, an association analysis was performed between the PaLOSi metric and the trends of the postprocessing outcomes as the preprocessing state evolved. This study sheds light on how postprocessing outcomes are affected by preprocessing states, and suggests that PaLOSi may be an effective quality metric.
1105.2362
Liaofu Luo
Liaofu Luo
Protein Photo-folding and Quantum Folding Theory
17 pages, 1 figure
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The rates of protein folding with photon absorption or emission and the cross section of photon-protein inelastic scattering are calculated from the quantum folding theory by use of standard field-theoretical methods. All these protein photo-folding processes are compared with common protein folding without the interaction of photons (nonradiative folding). It is demonstrated that there exists a common factor (the thermo-averaged overlap integral of the vibration wave function, TAOI) for protein folding and protein photo-folding. Based on this finding it is predicted that: 1) the stimulated photo-folding rates show the same temperature dependence as protein folding; 2) the spectral line of the electronic transition is broadened to a band that includes an abundant vibration spectrum with and without conformational transition, and the width of the vibration spectral line is largely reduced; 3) the resonance fluorescence cross section changes with temperature obeying the same law (Luo-Lu's law). The particular form of the folding rate-temperature relation and the abundant spectral structure imply the existence of a set of quantum oscillators in the transition process, mainly low-frequency torsional oscillators; they also imply that quantum tunneling between protein conformations does exist in folding and photo-folding processes and that the tunneling is rooted deeply in the coherent motion of the conformational-electronic system.
[ { "created": "Thu, 12 May 2011 03:18:55 GMT", "version": "v1" } ]
2011-05-13
[ [ "Luo", "Liaofu", "" ] ]
The rates of protein folding with photon absorption or emission and the cross section of photon-protein inelastic scattering are calculated from the quantum folding theory by use of standard field-theoretical methods. All these protein photo-folding processes are compared with common protein folding without the interaction of photons (nonradiative folding). It is demonstrated that there exists a common factor (the thermo-averaged overlap integral of the vibration wave function, TAOI) for protein folding and protein photo-folding. Based on this finding it is predicted that: 1) the stimulated photo-folding rates show the same temperature dependence as protein folding; 2) the spectral line of the electronic transition is broadened to a band that includes an abundant vibration spectrum with and without conformational transition, and the width of the vibration spectral line is largely reduced; 3) the resonance fluorescence cross section changes with temperature obeying the same law (Luo-Lu's law). The particular form of the folding rate-temperature relation and the abundant spectral structure imply the existence of a set of quantum oscillators in the transition process, mainly low-frequency torsional oscillators; they also imply that quantum tunneling between protein conformations does exist in folding and photo-folding processes and that the tunneling is rooted deeply in the coherent motion of the conformational-electronic system.
1503.07796
Joachim Krug
Stefan Nowak and Joachim Krug
Analysis of adaptive walks on NK fitness landscapes with different interaction schemes
29 pages, 9 figures
J. Stat. Mech. (2015) P06014
10.1088/1742-5468/2015/06/P06014
null
q-bio.PE cond-mat.dis-nn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fitness landscapes are genotype-to-fitness mappings commonly used in evolutionary biology and computer science, which are closely related to spin glass models. In this paper, we study the NK model for fitness landscapes where the interaction scheme between genes can be explicitly defined. The focus is on how this scheme influences the overall shape of the landscape. Our main tool for the analysis are adaptive walks, an idealized dynamics by which the population moves uphill in fitness and terminates at a local fitness maximum. We use three different types of walks and investigate how their length (the number of steps required to reach a local peak) and height (the fitness at the endpoint of the walk) depend on the dimensionality and structure of the landscape. We find that the distribution of local maxima over the landscape is particularly sensitive to the choice of interaction pattern. Most quantities that we measure are simply correlated to the rank of the scheme, which is equal to the number of nonzero coefficients in the expansion of the fitness landscape in terms of Walsh functions.
[ { "created": "Thu, 26 Mar 2015 17:24:20 GMT", "version": "v1" }, { "created": "Thu, 11 Jun 2015 10:36:25 GMT", "version": "v2" } ]
2015-06-12
[ [ "Nowak", "Stefan", "" ], [ "Krug", "Joachim", "" ] ]
Fitness landscapes are genotype-to-fitness mappings commonly used in evolutionary biology and computer science, which are closely related to spin glass models. In this paper, we study the NK model for fitness landscapes where the interaction scheme between genes can be explicitly defined. The focus is on how this scheme influences the overall shape of the landscape. Our main tool for the analysis are adaptive walks, an idealized dynamics by which the population moves uphill in fitness and terminates at a local fitness maximum. We use three different types of walks and investigate how their length (the number of steps required to reach a local peak) and height (the fitness at the endpoint of the walk) depend on the dimensionality and structure of the landscape. We find that the distribution of local maxima over the landscape is particularly sensitive to the choice of interaction pattern. Most quantities that we measure are simply correlated to the rank of the scheme, which is equal to the number of nonzero coefficients in the expansion of the fitness landscape in terms of Walsh functions.
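The NK model and a greedy adaptive walk are simple enough to sketch directly. Below is a minimal numpy version with uniformly random interaction partners (one of several schemes the paper compares); walk length is the number of accepted steps and height is the fitness of the local peak reached. Parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 14, 4                        # N loci; each interacts with K others

partners = np.array([rng.choice([j for j in range(N) if j != i], K,
                                replace=False) for i in range(N)])
tables = rng.random(size=(N, 2 ** (K + 1)))   # random fitness contributions

def fitness(g):
    # Index each locus's table by the joint state of (locus, partners).
    idx = g * 2 ** K
    for b in range(K):
        idx = idx + g[partners[:, b]] * 2 ** b
    return tables[np.arange(N), idx].mean()

def greedy_walk(g):
    # Take the best single-locus flip until no flip increases fitness.
    steps = 0
    while True:
        f0 = fitness(g)
        trials = []
        for i in range(N):
            h = g.copy(); h[i] ^= 1
            trials.append((fitness(h), i))
        fbest, ibest = max(trials)
        if fbest <= f0:
            return steps, f0
        g[ibest] ^= 1
        steps += 1

walks = [greedy_walk(rng.integers(0, 2, N)) for _ in range(50)]
lengths, heights = zip(*walks)
print(f"mean length {np.mean(lengths):.2f}, mean height {np.mean(heights):.3f}")
```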
1207.2242
Claus Metzner
F. Stadler, C. Metzner, J. Steinwachs, B. Fabry
Inhomogeneous ensembles of correlated random walkers
null
null
null
null
q-bio.QM cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Discrete time random walks, in which a step of random sign but constant length $\delta x$ is performed after each time interval $\delta t$, are widely used models for stochastic processes. In the case of a correlated random walk, the next step has the same sign as the previous one with a probability $q \neq 1/2$. We extend this model to an inhomogeneous ensemble of random walkers with a given distribution of persistence probabilities $p(q)$ and show that remarkable statistical properties can result from this inhomogeneity: Depending on the distribution $p(q)$, we find that the probability density $p(\Delta x, \Delta t)$ for a displacement $\Delta x$ after lag time $\Delta t$ can have a leptokurtic shape and that mean squared displacements can increase approximately like a fractional power law with $\Delta t$. For the special case of persistence parameters distributed uniformly in the full range $q \in [0,1]$, the mean squared displacement is derived analytically. The model is further extended by allowing different step lengths $\delta x_j$ for each member $j$ of the ensemble. We show that two ensembles $[\delta t, {(q_j,\delta x_j)}]$ and $[\delta t^{\prime}, {(q^{\prime}_j,\delta x^{\prime}_j)}]$ defined at different time intervals $\delta t\neq\delta t^{\prime}$ can have the same statistical properties at long lag times $\Delta t$, if their parameters are related by a certain scaling transformation. Finally, we argue that similar statistical properties are expected for homogeneous ensembles, in which the parameters $(q_j(t),\delta x_j(t))$ of each individual walker fluctuate in time, provided the parameters can be considered constant for time periods $T\gg\Delta t$ longer than the considered lag time $\Delta t$.
[ { "created": "Tue, 10 Jul 2012 06:58:58 GMT", "version": "v1" } ]
2012-07-11
[ [ "Stadler", "F.", "" ], [ "Metzner", "C.", "" ], [ "Steinwachs", "J.", "" ], [ "Fabry", "B.", "" ] ]
Discrete time random walks, in which a step of random sign but constant length $\delta x$ is performed after each time interval $\delta t$, are widely used models for stochastic processes. In the case of a correlated random walk, the next step has the same sign as the previous one with a probability $q \neq 1/2$. We extend this model to an inhomogeneous ensemble of random walkers with a given distribution of persistence probabilities $p(q)$ and show that remarkable statistical properties can result from this inhomogeneity: Depending on the distribution $p(q)$, we find that the probability density $p(\Delta x, \Delta t)$ for a displacement $\Delta x$ after lag time $\Delta t$ can have a leptokurtic shape and that mean squared displacements can increase approximately like a fractional power law with $\Delta t$. For the special case of persistence parameters distributed uniformly in the full range $q \in [0,1]$, the mean squared displacement is derived analytically. The model is further extended by allowing different step lengths $\delta x_j$ for each member $j$ of the ensemble. We show that two ensembles $[\delta t, {(q_j,\delta x_j)}]$ and $[\delta t^{\prime}, {(q^{\prime}_j,\delta x^{\prime}_j)}]$ defined at different time intervals $\delta t\neq\delta t^{\prime}$ can have the same statistical properties at long lag times $\Delta t$, if their parameters are related by a certain scaling transformation. Finally, we argue that similar statistical properties are expected for homogeneous ensembles, in which the parameters $(q_j(t),\delta x_j(t))$ of each individual walker fluctuate in time, provided the parameters can be considered constant for time periods $T\gg\Delta t$ longer than the considered lag time $\Delta t$.
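The inhomogeneous ensemble itself is easy to simulate. The sketch below draws a persistence probability q uniformly from [0, 1] for each walker, the special case the abstract treats analytically, and estimates effective mean-squared-displacement exponents over a few lag times; ensemble size and step counts are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
walkers, steps = 2000, 1024

q = rng.random(walkers)                  # persistence q ~ Uniform[0, 1]
signs = np.empty((walkers, steps))
signs[:, 0] = rng.choice([-1.0, 1.0], walkers)
for t in range(1, steps):
    keep = rng.random(walkers) < q       # same sign as before w.p. q
    signs[:, t] = np.where(keep, signs[:, t - 1], -signs[:, t - 1])

x = np.cumsum(signs, axis=1)             # positions with delta x = 1
lags = np.array([2, 8, 32, 128, 512])
msd = np.array([np.mean((x[:, lag:] - x[:, :-lag]) ** 2) for lag in lags])
alpha = np.diff(np.log(msd)) / np.diff(np.log(lags))
print("MSD:", np.round(msd, 1))
print("effective exponents between lags:", np.round(alpha, 2))
```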
1612.07425
Petter Holme
Petter Holme, Nelly Litvak
Cost-efficient vaccination protocols for network epidemiology
null
null
10.1371/journal.pcbi.1005696
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate methods to vaccinate contact networks -- i.e., removing nodes in such a way that disease spreading is hindered as much as possible -- with respect to their cost-efficiency. Any real implementation of such protocols would come with costs related both to the vaccination itself and to gathering information about the network. Disregarding this, we argue, would lead to erroneous evaluation of vaccination protocols. We use the susceptible-infected-recovered model -- the generic model for diseases making patients immune upon recovery -- as our disease-spreading scenario, and analyze outbreaks on both empirical and model networks. For different relative costs, different protocols dominate. For high vaccination costs and low costs of gathering information, the so-called acquaintance vaccination is the most cost-efficient. For other parameter values, protocols designed for query-efficient identification of the network's largest degrees are most efficient.
[ { "created": "Thu, 22 Dec 2016 03:07:34 GMT", "version": "v1" }, { "created": "Sat, 20 May 2017 05:10:34 GMT", "version": "v2" } ]
2017-11-01
[ [ "Holme", "Petter", "" ], [ "Litvak", "Nelly", "" ] ]
We investigate methods to vaccinate contact networks -- i.e., removing nodes in such a way that disease spreading is hindered as much as possible -- with respect to their cost-efficiency. Any real implementation of such protocols would come with costs related both to the vaccination itself and to gathering information about the network. Disregarding this, we argue, would lead to erroneous evaluation of vaccination protocols. We use the susceptible-infected-recovered model -- the generic model for diseases making patients immune upon recovery -- as our disease-spreading scenario, and analyze outbreaks on both empirical and model networks. For different relative costs, different protocols dominate. For high vaccination costs and low costs of gathering information, the so-called acquaintance vaccination is the most cost-efficient. For other parameter values, protocols designed for query-efficient identification of the network's largest degrees are most efficient.
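A minimal sketch of the comparison, assuming networkx: discrete-time SIR outbreaks on a scale-free graph under random, degree-targeted, and acquaintance vaccination. The paper's central ingredient, the cost accounting for vaccine doses versus information-gathering queries, is omitted here, and all parameters are arbitrary.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
G = nx.barabasi_albert_graph(3000, 3, seed=0)
nodes = list(G)

def outbreak_size(G, vaccinated, beta=0.3, seeds=5):
    # Discrete-time SIR; vaccinated nodes start as removed.
    status = {n: "S" for n in G}
    for n in vaccinated:
        status[n] = "R"
    susceptible = [n for n in G if status[n] == "S"]
    infected = set(int(n) for n in rng.choice(susceptible, seeds, replace=False))
    for n in infected:
        status[n] = "I"
    total = len(infected)
    while infected:
        new = set()
        for n in infected:
            for m in G[n]:
                if status[m] == "S" and rng.random() < beta:
                    status[m] = "I"
                    new.add(m)
            status[n] = "R"
        total += len(new)
        infected = new
    return total

n_vac = 300
random_vac = [int(n) for n in rng.choice(nodes, n_vac, replace=False)]
degree_vac = sorted(G, key=G.degree, reverse=True)[:n_vac]
acq_vac = set()                          # vaccinate a random neighbour of a
while len(acq_vac) < n_vac:              # random node: reaches hubs cheaply
    acq_vac.add(int(rng.choice(list(G[int(rng.choice(nodes))]))))

for name, vac in [("random", random_vac), ("degree-targeted", degree_vac),
                  ("acquaintance", acq_vac)]:
    sizes = [outbreak_size(G, vac) for _ in range(10)]
    print(f"{name:16s} mean outbreak size {np.mean(sizes):7.1f}")
```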
0708.0426
Patricia Faisca
Rui D.M. Travasso, M.M. Telo da Gama and P.F.N. Faisca
Pathways to folding, nucleation events and native geometry
Accepted in J. Chem. Phys
null
10.1063/1.2777150
null
q-bio.BM
null
We perform extensive Monte Carlo simulations of a lattice model and the Go potential to investigate the existence of folding pathways at the level of contact cluster formation for two native structures with markedly different geometries. Our analysis of folding pathways revealed a common underlying folding mechanism, based on nucleation phenomena, for both protein models. However, folding to the more complex geometry (i.e. that with more non-local contacts) is driven by a folding nucleus whose geometric traits more closely resemble those of the native fold. For this geometry folding is clearly a more cooperative process.
[ { "created": "Thu, 2 Aug 2007 21:35:18 GMT", "version": "v1" } ]
2009-11-13
[ [ "Travasso", "Rui D. M.", "" ], [ "da Gama", "M. M. Telo", "" ], [ "Faisca", "P. F. N.", "" ] ]
We perform extensive Monte Carlo simulations of a lattice model and the Go potential to investigate the existence of folding pathways at the level of contact cluster formation for two native structures with markedly different geometries. Our analysis of folding pathways revealed a common underlying folding mechanism, based on nucleation phenomena, for both protein models. However, folding to the more complex geometry (i.e. that with more non-local contacts) is driven by a folding nucleus whose geometric traits more closely resemble those of the native fold. For this geometry folding is clearly a more cooperative process.
1911.11840
Mahsa Yazdani
Mahsa Yazdani and Omid Tavakoli
The Effect of Salt Shock on Growth and Pigment Accumulation of Dunaliella salina
null
The 5th International Symposium on Biological Engineering and Natural Sciences, August 14-16, 2017, Osaka, Japan
null
null
q-bio.QM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Dunaliella salina is a halotolerant microalga with great pharmaceutical and industrial potential, which commonly exists in hypersaline environments. Moreover, it is the best commercial source of beta-carotene (which has high anti-oxidant properties) in comparison to other microalgae. In this study, we investigated growth and the accumulation of chlorophyll a and b, beta-carotene, and carotenoids after salt shock at NaCl concentrations of 1, 1.5, 2, 2.5, and 3 M. The highest cell growth rate was observed for the 1 M salt shock at 22-25 centigrade, with a light intensity of 2.084 mW.cm^(-2), a 12:12 light period, and an initial pH of about 7.1. Although cell growth was enhanced at 1 and 1.5 M, further increases in salt content harmed cell growth. The largest beta-carotene quantity was attained after the 1 M salt shock. According to the experimental observations, salt shock at certain concentrations is a practical approach to improve the accumulation of pigments. Keywords: beta-carotene, Dunaliella salina, salt shock, pigment accumulation.
[ { "created": "Tue, 26 Nov 2019 21:31:09 GMT", "version": "v1" } ]
2019-11-28
[ [ "Yazdani", "Mahsa", "" ], [ "Tavakoli", "Omid", "" ] ]
Dunaliella salina is a halotolerant microalga with great pharmaceutical and industrial potential, which commonly exists in hypersaline environments. Moreover, it is the best commercial source of beta-carotene (which has high anti-oxidant properties) in comparison to other microalgae. In this study, we investigated growth and the accumulation of chlorophyll a and b, beta-carotene, and carotenoids after salt shock at NaCl concentrations of 1, 1.5, 2, 2.5, and 3 M. The highest cell growth rate was observed for the 1 M salt shock at 22-25 centigrade, with a light intensity of 2.084 mW.cm^(-2), a 12:12 light period, and an initial pH of about 7.1. Although cell growth was enhanced at 1 and 1.5 M, further increases in salt content harmed cell growth. The largest beta-carotene quantity was attained after the 1 M salt shock. According to the experimental observations, salt shock at certain concentrations is a practical approach to improve the accumulation of pigments. Keywords: beta-carotene, Dunaliella salina, salt shock, pigment accumulation.
q-bio/0608010
Chunguang Li
Chunguang Li, Luonan Chen, Kazuyuki Aihara
Transient Resetting: A Novel Mechanism for Synchrony and Its Biological Examples
17 pages, 7 figures
PLoS Computational Biology 2 (8): e103, 2006
10.1371/journal.pcbi.0020103
null
q-bio.MN nlin.CD
null
The study of synchronization in biological systems is essential for the understanding of the rhythmic phenomena of living organisms at both molecular and cellular levels. In this paper, by using simple dynamical systems theory, we present a novel mechanism, named transient resetting, for the synchronization of uncoupled biological oscillators with stimuli. This mechanism not only can unify and extend many existing results on (deterministic and stochastic) stimulus-induced synchrony, but also may actually play an important role in biological rhythms. We argue that transient resetting is a possible mechanism for the synchronization in many biological organisms, which might also be further used in medical therapy of rhythmic disorders. Examples on the synchronization of neural and circadian oscillators are presented to verify our hypothesis.
[ { "created": "Fri, 4 Aug 2006 03:02:30 GMT", "version": "v1" } ]
2007-05-23
[ [ "Li", "Chunguang", "" ], [ "Chen", "Luonan", "" ], [ "Aihara", "Kazuyuki", "" ] ]
The study of synchronization in biological systems is essential for the understanding of the rhythmic phenomena of living organisms at both molecular and cellular levels. In this paper, by using simple dynamical systems theory, we present a novel mechanism, named transient resetting, for the synchronization of uncoupled biological oscillators with stimuli. This mechanism not only can unify and extend many existing results on (deterministic and stochastic) stimulus-induced synchrony, but also may actually play an important role in biological rhythms. We argue that transient resetting is a possible mechanism for the synchronization in many biological organisms, which might also be further used in medical therapy of rhythmic disorders. Examples on the synchronization of neural and circadian oscillators are presented to verify our hypothesis.
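The mechanism in this record can be caricatured in a few lines: uncoupled oscillators with heterogeneous frequencies are desynchronized, a common stimulus transiently resets their phases, and the Kuramoto order parameter spikes before frequency dispersion pulls it back down. This is a cartoon of stimulus-induced synchrony, not the authors' neural or circadian models; all parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
omega = rng.normal(1.0, 0.1, n)           # heterogeneous frequencies
theta = rng.uniform(0, 2 * np.pi, n)      # uncoupled, random initial phases
dt, t_stim = 0.01, 20.0

for step in range(int(40.0 / dt) + 1):
    t = step * dt
    if abs(t - t_stim) < dt / 2:
        theta[:] = 0.0                    # stimulus transiently resets phases
    if step % 500 == 0:
        r = abs(np.exp(1j * theta).mean())
        print(f"t = {t:5.1f}   order parameter r = {r:.2f}")
    theta += omega * dt
```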
2206.12240
Sirui Liu
Sirui Liu, Jun Zhang, Haotian Chu, Min Wang, Boxin Xue, Ningxi Ni, Jialiang Yu, Yuhao Xie, Zhenyu Chen, Mengyun Chen, Yuan Liu, Piya Patra, Fan Xu, Jie Chen, Zidong Wang, Lijiang Yang, Fan Yu, Lei Chen, Yi Qin Gao
PSP: Million-level Protein Sequence Dataset for Protein Structure Prediction
null
null
null
null
q-bio.BM cs.LG
http://creativecommons.org/licenses/by/4.0/
Proteins are an essential component of human life, and their structures are important for function and mechanism analysis. Recent work has shown the potential of AI-driven methods for protein structure prediction. However, the development of new models is restricted by the lack of datasets and benchmark training procedures. To the best of our knowledge, the existing open source datasets are far from sufficient for the needs of modern protein sequence-structure related research. To solve this problem, we present the first million-level protein structure prediction dataset with high coverage and diversity, named PSP. This dataset consists of 570k true structure sequences (10TB) and 745k complementary distillation sequences (15TB). In addition, we provide a benchmark training procedure for a SOTA protein structure prediction model on this dataset. We validate the utility of this dataset for training by participating in the CAMEO contest, in which our model won first place. We hope that our PSP dataset, together with the training benchmark, can enable a broader community of AI/biology researchers to pursue AI-driven protein related research.
[ { "created": "Fri, 24 Jun 2022 14:08:44 GMT", "version": "v1" } ]
2022-06-27
[ [ "Liu", "Sirui", "" ], [ "Zhang", "Jun", "" ], [ "Chu", "Haotian", "" ], [ "Wang", "Min", "" ], [ "Xue", "Boxin", "" ], [ "Ni", "Ningxi", "" ], [ "Yu", "Jialiang", "" ], [ "Xie", "Yuhao", "" ], [ "Chen", "Zhenyu", "" ], [ "Chen", "Mengyun", "" ], [ "Liu", "Yuan", "" ], [ "Patra", "Piya", "" ], [ "Xu", "Fan", "" ], [ "Chen", "Jie", "" ], [ "Wang", "Zidong", "" ], [ "Yang", "Lijiang", "" ], [ "Yu", "Fan", "" ], [ "Chen", "Lei", "" ], [ "Gao", "Yi Qin", "" ] ]
Proteins are an essential component of human life, and their structures are important for function and mechanism analysis. Recent work has shown the potential of AI-driven methods for protein structure prediction. However, the development of new models is restricted by the lack of datasets and benchmark training procedures. To the best of our knowledge, the existing open source datasets are far from sufficient for the needs of modern protein sequence-structure related research. To solve this problem, we present the first million-level protein structure prediction dataset with high coverage and diversity, named PSP. This dataset consists of 570k true structure sequences (10TB) and 745k complementary distillation sequences (15TB). In addition, we provide a benchmark training procedure for a SOTA protein structure prediction model on this dataset. We validate the utility of this dataset for training by participating in the CAMEO contest, in which our model won first place. We hope that our PSP dataset, together with the training benchmark, can enable a broader community of AI/biology researchers to pursue AI-driven protein related research.
1304.0479
Wendy Ingram
Wendy Marie Ingram, Leeanne M Goodrich, Ellen A Robey, Michael B Eisen
Mice Infected with Low-virulence Strains of Toxoplasma gondii Lose their Innate Aversion to Cat Urine, Even after Extensive Parasite Clearance
14 pages, 3 figures
null
10.1371/journal.pone.0075246
null
q-bio.TO q-bio.NC
http://creativecommons.org/licenses/by/3.0/
Toxoplasma gondii chronic infection in rodent secondary hosts has been reported to lead to a loss of innate, hard-wired fear toward cats, its primary host. However, the generality of this response across T. gondii strains and the underlying mechanism for this pathogen-mediated behavioral change remain unknown. To begin exploring these questions, we evaluated the effects of infection with two previously uninvestigated isolates from the three major North American clonal lineages of T. gondii, Type III and an attenuated strain of Type I. Using an hour-long open field activity assay optimized for this purpose, we measured mouse aversion toward predator and non-predator urines. We show that loss of innate aversion to cat urine is a general trait caused by infection with any of the three major clonal lineages of parasite. Surprisingly, we found that infection with the attenuated Type I parasite results in sustained loss of aversion at times post infection when neither parasite nor ongoing brain inflammation were detectable. This suggests that T. gondii-mediated interruption of mouse innate aversion toward cat urine may occur during early acute infection in a permanent manner, not requiring persistence of parasite cysts or continuing brain inflammation.
[ { "created": "Mon, 1 Apr 2013 20:53:18 GMT", "version": "v1" }, { "created": "Thu, 11 Jul 2013 21:08:20 GMT", "version": "v2" } ]
2014-03-05
[ [ "Ingram", "Wendy Marie", "" ], [ "Goodrich", "Leeanne M", "" ], [ "Robey", "Ellen A", "" ], [ "Eisen", "Michael B", "" ] ]
Toxoplasma gondii chronic infection in rodent secondary hosts has been reported to lead to a loss of innate, hard-wired fear toward cats, its primary host. However, the generality of this response across T. gondii strains and the underlying mechanism for this pathogen-mediated behavioral change remain unknown. To begin exploring these questions, we evaluated the effects of infection with two previously uninvestigated isolates from the three major North American clonal lineages of T. gondii, Type III and an attenuated strain of Type I. Using an hour-long open field activity assay optimized for this purpose, we measured mouse aversion toward predator and non-predator urines. We show that loss of innate aversion to cat urine is a general trait caused by infection with any of the three major clonal lineages of parasite. Surprisingly, we found that infection with the attenuated Type I parasite results in sustained loss of aversion at times post infection when neither parasite nor ongoing brain inflammation were detectable. This suggests that T. gondii-mediated interruption of mouse innate aversion toward cat urine may occur during early acute infection in a permanent manner, not requiring persistence of parasite cysts or continuing brain inflammation.
1603.02414
Christos Skiadas H
Christos H Skiadas
The Health-Mortality Approach in Estimating the Healthy Life Years Lost Compared to the Global Burden of Disease Studies and Applications
26 pages, 11 figures, 6 tables. arXiv admin note: substantial text overlap with arXiv:1510.07346
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a series of methods and models to explore the Global Burden of Disease Study and the healthy life expectancy (HALE) estimates provided by the World Health Organization (WHO), based on the mortality m_x of a population provided in a classical life table and a mortality diagram. Our estimates are compared with the HALE estimates for the World territories and the WHO regions, along with comparative results against the findings of Chang, Molla, Truman et al. (2015) on the differences in healthy life expectancy for the US population by sex, race or ethnicity and geographic region in 2008, and of Yong and Saito (2009) regarding trends in healthy life expectancy in Japan. From the mortality point of view, we have developed a simple model for the estimation of a characteristic parameter b related to the healthy life years lost to disability, providing full application details along with characteristic parameter selection and the stability of the coefficients. We also provide a direct estimation method for the parameter b from the life tables. We underline the importance of our methodology by proposing and applying estimates of the parameter b using the Gompertz and the Weibull models. From the health state point of view, we summarize the main points of the first exit time theory applied to life table data and present the basic models, starting from the first related model published by Janssen and Skiadas (1995). Moreover, we develop a simpler 2-parameter health state model and extend a model expressing infant mortality to a 4-parameter model, the simplest model providing a very good fit to the logarithm of the force of mortality. More importantly, we use the health state function and the relative impact on mortality to estimate the healthy life years lost to disability.
[ { "created": "Tue, 8 Mar 2016 08:31:51 GMT", "version": "v1" } ]
2016-03-09
[ [ "Skiadas", "Christos H", "" ] ]
We propose a series of methods and models to explore the Global Burden of Disease Study and the healthy life expectancy (HALE) estimates provided by the World Health Organization (WHO), based on the mortality m_x of a population provided in a classical life table and a mortality diagram. Our estimates are compared with the HALE estimates for the World territories and the WHO regions, along with comparative results against the findings of Chang, Molla, Truman et al. (2015) on the differences in healthy life expectancy for the US population by sex, race or ethnicity and geographic region in 2008, and of Yong and Saito (2009) regarding trends in healthy life expectancy in Japan. From the mortality point of view, we have developed a simple model for the estimation of a characteristic parameter b related to the healthy life years lost to disability, providing full application details along with characteristic parameter selection and the stability of the coefficients. We also provide a direct estimation method for the parameter b from the life tables. We underline the importance of our methodology by proposing and applying estimates of the parameter b using the Gompertz and the Weibull models. From the health state point of view, we summarize the main points of the first exit time theory applied to life table data and present the basic models, starting from the first related model published by Janssen and Skiadas (1995). Moreover, we develop a simpler 2-parameter health state model and extend a model expressing infant mortality to a 4-parameter model, the simplest model providing a very good fit to the logarithm of the force of mortality. More importantly, we use the health state function and the relative impact on mortality to estimate the healthy life years lost to disability.
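One ingredient of this record, estimating a parameter from a Gompertz mortality law mu(x) = a*exp(b*x), reduces to a log-linear fit, since ln mu(x) = ln a + b*x. The sketch below recovers b from synthetic death rates; the values of a and b are illustrative, and the paper's own health-state models are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
ages = np.arange(40, 91)
a_true, b_true = 2e-5, 0.095              # illustrative Gompertz parameters
mx = a_true * np.exp(b_true * ages) * rng.lognormal(0.0, 0.05, ages.size)

# ln mu(x) = ln a + b x, so b is the slope of a log-linear fit.
b_hat, log_a_hat = np.polyfit(ages, np.log(mx), 1)
print(f"estimated b = {b_hat:.4f}  (true value {b_true})")
```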
2306.13429
Leonardo Novelli
Leonardo Novelli, Karl Friston, Adeel Razi
Spectral Dynamic Causal Modelling: A Didactic Introduction and its Relationship with Functional Connectivity
null
null
null
null
q-bio.NC cs.CE
http://creativecommons.org/licenses/by/4.0/
We present a didactic introduction to spectral Dynamic Causal Modelling (DCM), a Bayesian state-space modelling approach used to infer effective connectivity from non-invasive neuroimaging data. Spectral DCM is currently the most widely applied DCM variant for resting-state functional MRI analysis. Our aim is to explain its technical foundations to an audience with limited expertise in state-space modelling and spectral data analysis. Particular attention will be paid to cross-spectral density, which is the most distinctive feature of spectral DCM and is closely related to functional connectivity, as measured by (zero-lag) Pearson correlations. In fact, the model parameters estimated by spectral DCM are those that best reproduce the cross-correlations between all measurements--at all time lags--including the zero-lag correlations that are usually interpreted as functional connectivity. We derive the functional connectivity matrix from the model equations and show how changing a single effective connectivity parameter can affect all pairwise correlations. To complicate matters, the pairs of brain regions showing the largest changes in functional connectivity do not necessarily coincide with those presenting the largest changes in effective connectivity. We discuss the implications and conclude with a comprehensive summary of the assumptions and limitations of spectral DCM.
[ { "created": "Fri, 23 Jun 2023 10:46:39 GMT", "version": "v1" }, { "created": "Wed, 6 Sep 2023 02:24:23 GMT", "version": "v2" } ]
2023-09-07
[ [ "Novelli", "Leonardo", "" ], [ "Friston", "Karl", "" ], [ "Razi", "Adeel", "" ] ]
We present a didactic introduction to spectral Dynamic Causal Modelling (DCM), a Bayesian state-space modelling approach used to infer effective connectivity from non-invasive neuroimaging data. Spectral DCM is currently the most widely applied DCM variant for resting-state functional MRI analysis. Our aim is to explain its technical foundations to an audience with limited expertise in state-space modelling and spectral data analysis. Particular attention will be paid to cross-spectral density, which is the most distinctive feature of spectral DCM and is closely related to functional connectivity, as measured by (zero-lag) Pearson correlations. In fact, the model parameters estimated by spectral DCM are those that best reproduce the cross-correlations between all measurements--at all time lags--including the zero-lag correlations that are usually interpreted as functional connectivity. We derive the functional connectivity matrix from the model equations and show how changing a single effective connectivity parameter can affect all pairwise correlations. To complicate matters, the pairs of brain regions showing the largest changes in functional connectivity do not necessarily coincide with those presenting the largest changes in effective connectivity. We discuss the implications and conclude with a comprehensive summary of the assumptions and limitations of spectral DCM.
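The abstract's central point, that zero-lag functional connectivity follows from the effective-connectivity matrix of a linear state-space model, can be reproduced with a stationary-covariance computation (scipy assumed). This is not spectral DCM itself, which works with cross-spectral densities and Bayesian model inversion; it only illustrates how changing one effective-connectivity parameter moves all pairwise correlations.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def functional_connectivity(A, Q):
    # Stationary covariance S of dx = A x dt + noise (cov Q) solves
    # A S + S A^T = -Q; normalizing S gives the model's zero-lag FC.
    S = solve_continuous_lyapunov(A, -Q)
    d = np.sqrt(np.diag(S))
    return S / np.outer(d, d)

A = np.array([[-1.0,  0.3,  0.0],         # a stable 3-region effective-
              [ 0.2, -1.0,  0.4],         # connectivity matrix
              [ 0.0,  0.1, -1.0]])
Q = np.eye(3)

fc0 = functional_connectivity(A, Q)
A2 = A.copy()
A2[0, 1] += 0.3                           # perturb one EC parameter
fc1 = functional_connectivity(A2, Q)
print(np.round(fc1 - fc0, 3))             # all off-diagonal FC entries shift
```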
1711.07258
Marie-Constance Corsi
Marie-Constance Corsi, Mario Chavez, Denis Schwartz, Laurent Hugueville, Ankit N. Khambhati, Danielle S. Bassett, Fabrizio De Vico Fallani
Integrating EEG and MEG signals to improve motor imagery classification in brain-computer interfaces
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a fusion approach that combines features from simultaneously recorded electroencephalographic (EEG) and magnetoencephalographic (MEG) signals to improve classification performance in motor imagery-based brain-computer interfaces (BCIs). We applied our approach to a group of 15 healthy subjects and found a significant classification performance enhancement compared to standard single-modality approaches in the alpha and beta bands. Taken together, our findings demonstrate the advantage of considering multimodal approaches as complementary tools for improving the impact of non-invasive BCIs.
[ { "created": "Mon, 20 Nov 2017 11:30:15 GMT", "version": "v1" }, { "created": "Mon, 26 Mar 2018 13:27:13 GMT", "version": "v2" } ]
2018-03-28
[ [ "Corsi", "Marie-Constance", "" ], [ "Chavez", "Mario", "" ], [ "Schwartz", "Denis", "" ], [ "Hugueville", "Laurent", "" ], [ "Khambhati", "Ankit N.", "" ], [ "Bassett", "Danielle S.", "" ], [ "Fallani", "Fabrizio De Vico", "" ] ]
We propose a fusion approach that combines features from simultaneously recorded electroencephalographic (EEG) and magnetoencephalographic (MEG) signals to improve classification performance in motor imagery-based brain-computer interfaces (BCIs). We applied our approach to a group of 15 healthy subjects and found a significant classification performance enhancement compared to standard single-modality approaches in the alpha and beta bands. Taken together, our findings demonstrate the advantage of considering multimodal approaches as complementary tools for improving the impact of non-invasive BCIs.
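The fusion idea reduces, in its simplest form, to concatenating per-trial features from both modalities before classification. The sketch below does this on synthetic band-power-like features with scikit-learn's LDA; it is not the authors' pipeline, and the synthetic signal strengths are arbitrary.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials = 200
y = rng.integers(0, 2, n_trials)          # imagery-vs-rest labels

# Synthetic per-trial features; each modality carries a weak, partly
# independent class signature (stand-ins for real band-power features).
eeg = rng.normal(size=(n_trials, 32)); eeg[:, 0] += 0.8 * y
meg = rng.normal(size=(n_trials, 64)); meg[:, 0] += 0.8 * y

clf = LinearDiscriminantAnalysis()
for name, X in [("EEG alone", eeg), ("MEG alone", meg),
                ("EEG+MEG fusion", np.hstack([eeg, meg]))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name:16s} accuracy {acc:.2f}")
```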
0910.4077
Marcin Zag\'orski
Z. Burda, A. Krzywicki, O. C. Martin, M. Zagorski
Sparse essential interactions in model networks of gene regulation
9 pages, 5 figures
null
null
null
q-bio.MN cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gene regulatory networks typically have low in-degrees, whereby any given gene is regulated by few of the genes in the network. What mechanisms might be responsible for these low in-degrees? Starting with an accepted framework of the binding of transcription factors to DNA, we consider a simple model of gene regulatory dynamics. In this model, we show that the constraint of having a given function leads to the emergence of minimum connectivities compatible with function. We exhibit mathematically this behavior within a limit of our model and show that it also arises in the full model. As a consequence, functionality in these gene networks is parsimonious, i.e., is concentrated on a sparse number of interactions as measured for instance by their essentiality. Our model thus provides a simple mechanism for the emergence of sparse regulatory networks, and leads to very heterogeneous effects of mutations.
[ { "created": "Wed, 21 Oct 2009 12:58:49 GMT", "version": "v1" } ]
2009-10-22
[ [ "Burda", "Z.", "" ], [ "Krzywicki", "A.", "" ], [ "Martin", "O. C.", "" ], [ "Zagorski", "M.", "" ] ]
Gene regulatory networks typically have low in-degrees, whereby any given gene is regulated by few of the genes in the network. What mechanisms might be responsible for these low in-degrees? Starting with an accepted framework of the binding of transcription factors to DNA, we consider a simple model of gene regulatory dynamics. In this model, we show that the constraint of having a given function leads to the emergence of the minimum connectivities compatible with that function. We exhibit this behavior mathematically in a limiting case of our model and show that it also arises in the full model. As a consequence, functionality in these gene networks is parsimonious, i.e., concentrated on a small number of interactions, as measured for instance by their essentiality. Our model thus provides a simple mechanism for the emergence of sparse regulatory networks and leads to very heterogeneous effects of mutations.
1711.11161
Pedro M. F. Pereira
Pedro M. F. Pereira
Can Complex Collective Behaviour Be Generated Through Randomness, Memory and a Pinch of Luck?
null
null
null
null
q-bio.PE nlin.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine Learning techniques have been used to teach computer programs how to play games as complicated as Chess and Go. These feats were achieved using powerful tools such as Neural Networks and Parallel Computing on supercomputers. In this paper, we define a model of population growth and evolution based on the idea of Reinforcement Learning, but using only the three ingredients stated in the title, processed on a low-tier laptop. The model correctly predicts the development of a population around food sources and its migration in search of a new one when the known ones become saturated. Additionally, we compared our model to a purely random one and fitted the population size to a logistic function for two interesting evolutions of the system.
[ { "created": "Wed, 29 Nov 2017 23:56:29 GMT", "version": "v1" } ]
2017-12-01
[ [ "Pereira", "Pedro M. F.", "" ] ]
Machine Learning techniques have been used to teach computer programs how to play games as complicated as Chess and Go. These feats were achieved using powerful tools such as Neural Networks and Parallel Computing on supercomputers. In this paper, we define a model of population growth and evolution based on the idea of Reinforcement Learning, but using only the three ingredients stated in the title, processed on a low-tier laptop. The model correctly predicts the development of a population around food sources and its migration in search of a new one when the known ones become saturated. Additionally, we compared our model to a purely random one and fitted the population size to a logistic function for two interesting evolutions of the system.
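Editorial note: a toy combining the three stated ingredients can be written in a few lines. This sketch is not the authors' code; positions, rates, and the "luck" rule are invented for illustration.

```python
import random

# Toy: agents random-walk on a line (randomness), remember where they last
# found food (memory), and only register a find with some probability (luck).
FOOD = {30, 70}                               # hypothetical food-source positions
agents = [{"pos": 50, "memory": None} for _ in range(100)]

for step in range(1000):
    for a in agents:
        if a["memory"] is None or random.random() < 0.2:   # random exploration
            a["pos"] += random.choice([-1, 1])
        else:                                              # memory-guided move
            a["pos"] += 1 if a["memory"] > a["pos"] else -1
        if a["pos"] in FOOD and random.random() < 0.5:     # a pinch of luck
            a["memory"] = a["pos"]

print(sum(a["memory"] is not None for a in agents), "agents found food")
```

With these ingredients alone the population clusters around the food positions, echoing the aggregation behaviour described in the abstract.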
1008.1063
Attila Szolnoki
Gyorgy Szabo, Attila Szolnoki, Melinda Varga, Livia Hanusovszky
Ordering in spatial evolutionary games for pairwise collective strategy updates
9 pages, 6 figures; accepted for publication in Physical Review E
Physical Review E 82 (2010) 026110
10.1103/PhysRevE.82.026110
null
q-bio.PE cond-mat.stat-mech physics.bio-ph physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Evolutionary $2 \times 2$ games are studied with players located on a square lattice. During the evolution, randomly chosen neighboring players try to maximize their collective income by adopting a random strategy pair with a probability dependent on the difference in their summed payoffs between the final and initial states, assuming quenched strategies in their neighborhood. In the case of the anti-coordination game this system behaves like an anti-ferromagnetic kinetic Ising model. Within a wide region of social dilemmas this dynamical rule supports the formation of a similar spatial arrangement of cooperators and defectors, ensuring the optimum total payoff, if the temptation to choose defection exceeds a threshold value dependent on the sucker's payoff. Comparison of the results with those obtained for pairwise imitation and myopic strategy updates indicates the clear advantage of the pairwise collective strategy update in the maintenance of cooperation.
[ { "created": "Thu, 5 Aug 2010 20:01:31 GMT", "version": "v1" } ]
2010-08-23
[ [ "Szabo", "Gyorgy", "" ], [ "Szolnoki", "Attila", "" ], [ "Varga", "Melinda", "" ], [ "Hanusovszky", "Livia", "" ] ]
Evolutionary $2 \times 2$ games are studied with players located on a square lattice. During the evolution, randomly chosen neighboring players try to maximize their collective income by adopting a random strategy pair with a probability dependent on the difference in their summed payoffs between the final and initial states, assuming quenched strategies in their neighborhood. In the case of the anti-coordination game this system behaves like an anti-ferromagnetic kinetic Ising model. Within a wide region of social dilemmas this dynamical rule supports the formation of a similar spatial arrangement of cooperators and defectors, ensuring the optimum total payoff, if the temptation to choose defection exceeds a threshold value dependent on the sucker's payoff. Comparison of the results with those obtained for pairwise imitation and myopic strategy updates indicates the clear advantage of the pairwise collective strategy update in the maintenance of cooperation.
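Editorial note: a Monte Carlo sketch of the pairwise collective update follows. The payoff values, the restriction of the pair to a right-hand neighbor, and the Fermi-type acceptance probability are illustrative choices, not necessarily the paper's exact rule.

```python
import numpy as np

# Pairwise collective strategy update on an L x L lattice (parameters invented).
L, K = 50, 0.1                      # lattice size, noise level
T, R, P, S = 1.3, 1.0, 0.0, -0.2    # temptation, reward, punishment, sucker's payoff
payoff = {(1, 1): R, (1, 0): S, (0, 1): T, (0, 0): P}   # 1 = cooperate, 0 = defect
strat = np.random.randint(0, 2, (L, L))
rng = np.random.default_rng(1)

def income(x, y):
    s = strat[x, y]
    return sum(payoff[(s, strat[(x + dx) % L, (y + dy) % L])]
               for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)])

for _ in range(100000):
    x, y = rng.integers(0, L, 2)
    x2, y2 = (x + 1) % L, y                    # a neighboring pair (right neighbor)
    old = income(x, y) + income(x2, y2)        # summed payoff, quenched neighborhood
    s1, s2 = strat[x, y], strat[x2, y2]
    strat[x, y], strat[x2, y2] = rng.integers(0, 2, 2)   # propose a random pair
    new = income(x, y) + income(x2, y2)
    if rng.random() >= 1 / (1 + np.exp(-(new - old) / K)):  # Fermi-type acceptance
        strat[x, y], strat[x2, y2] = s1, s2                 # reject: restore
print("cooperator fraction:", strat.mean())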
1609.04316
Carlo Nicolini
Carlo Nicolini, C\'ecile Bordier, Angelo Bifone
Community detection in weighted brain connectivity networks beyond the resolution limit
27 pages with 6 figures and 1 table. Conference version for CCS2016
null
null
null
q-bio.NC physics.soc-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph theory provides a powerful framework to investigate brain functional connectivity networks and their modular organization. However, most graph-based methods suffer from a fundamental resolution limit that may have affected previous studies and prevented detection of modules, or communities, that are smaller than a specific scale. Surprise, a resolution-limit-free function rooted in discrete probability theory, has been recently introduced and applied to brain networks, revealing a wide size distribution of functional modules, in contrast with many previous reports. However, the use of Surprise is limited to binary networks, while brain networks are intrinsically weighted, reflecting a continuous distribution of connectivity strengths between different brain regions. Here, we propose Asymptotical Surprise, a continuous version of Surprise, for the study of weighted brain connectivity networks, and validate this approach in synthetic networks endowed with a ground-truth modular structure. We compare Asymptotical Surprise with leading community detection methods currently in use and show its superior sensitivity in the detection of small modules, even in the presence of noise and intersubject variability such as those observed in fMRI data. Finally, we apply our novel approach to functional connectivity networks from resting-state fMRI experiments, and demonstrate a heterogeneous modular organization, with a wide distribution of clusters spanning multiple scales.
[ { "created": "Wed, 14 Sep 2016 15:36:49 GMT", "version": "v1" } ]
2016-09-15
[ [ "Nicolini", "Carlo", "" ], [ "Bordier", "Cécile", "" ], [ "Bifone", "Angelo", "" ] ]
Graph theory provides a powerful framework to investigate brain functional connectivity networks and their modular organization. However, most graph-based methods suffer from a fundamental resolution limit that may have affected previous studies and prevented detection of modules, or communities, that are smaller than a specific scale. Surprise, a resolution-limit-free function rooted in discrete probability theory, has been recently introduced and applied to brain networks, revealing a wide size distribution of functional modules, in contrast with many previous reports. However, the use of Surprise is limited to binary networks, while brain networks are intrinsically weighted, reflecting a continuous distribution of connectivity strengths between different brain regions. Here, we propose Asymptotical Surprise, a continuous version of Surprise, for the study of weighted brain connectivity networks, and validate this approach in synthetic networks endowed with a ground-truth modular structure. We compare Asymptotical Surprise with leading community detection methods currently in use and show its superior sensitivity in the detection of small modules, even in the presence of noise and intersubject variability such as those observed in fMRI data. Finally, we apply our novel approach to functional connectivity networks from resting-state fMRI experiments, and demonstrate a heterogeneous modular organization, with a wide distribution of clusters spanning multiple scales.
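Editorial note: for reference, one common closed form of Asymptotical Surprise (reconstructed here from the general literature, not quoted from this paper) is the Kullback-Leibler divergence between the observed and expected intra-community weight fractions,

\[
S_a = m\, D_{\mathrm{KL}}\!\left(q \,\middle\|\, \langle q\rangle\right)
    = m\left[\, q \ln\frac{q}{\langle q\rangle} + (1-q)\ln\frac{1-q}{1-\langle q\rangle} \right],
\]

where $m$ is the total edge weight of the network, $q$ the fraction of that weight falling within communities, and $\langle q\rangle$ the expected intra-community fraction of node pairs under the null model. Maximizing $S_a$ over partitions then yields the community structure.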
1809.09450
Alican Ozkan
Alican Ozkan, Neda Ghousifam, P. Jack Hoopes, Marissa Nichole Rylander
In Vitro Vascularized Liver and Tumor Tissue Microenvironments on a Chip for Dynamic Determination of Nanoparticle Transport and Toxicity
42 pages, 10 figures, 1 table
null
null
null
q-bio.TO physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents the development of vascularized breast tumor and healthy or tumorigenic liver microenvironments-on-a-chip connected in series. This is the first description of a vascularized multi-tissue-on-a-chip microenvironment for modeling cancerous breast and cancerous/healthy liver microenvironments, allowing the study of dynamic and spatial transport of particles. This device enables the dynamic determination of vessel permeability, the measurement of drug and nanoparticle transport, and the assessment of the associated efficacy and toxicity to the liver. The platform is utilized to determine the effect of particle size on the spatiotemporal diffusion of particles through each microenvironment, both independently and in response to the circulation of particles through varying sequences of microenvironments. The results show that, when breast cancer cells were cultured in the microenvironments, vessel porosity was 2.62-fold higher than in vessels within healthy liver microenvironments. Hence, the permeability of the tumor microenvironment increased by 2.35- and 2.77-fold compared to a healthy liver for small and large particles, respectively. The ECM accumulation rate of larger particles was 2.57-fold lower than that of smaller particles in a healthy liver. However, the accumulation rate was 5.57-fold greater in the breast tumor microenvironment. These results are in agreement with comparable in vivo studies. Ultimately, the platform could be utilized to determine the impact of the tissue or tumor microenvironment, or of drug and nanoparticle properties, on transport, efficacy, selectivity, and toxicity in a dynamic and high-throughput manner for use in treatment optimization.
[ { "created": "Tue, 25 Sep 2018 13:02:54 GMT", "version": "v1" }, { "created": "Wed, 28 Nov 2018 03:05:35 GMT", "version": "v2" } ]
2018-11-29
[ [ "Ozkan", "Alican", "" ], [ "Ghousifam", "Neda", "" ], [ "Hoopes", "P. Jack", "" ], [ "Rylander", "Marissa Nichole", "" ] ]
This paper presents the development of vascularized breast tumor and healthy or tumorigenic liver microenvironments-on-a-chip connected in series. This is the first description of a vascularized multi-tissue-on-a-chip microenvironment for modeling cancerous breast and cancerous/healthy liver microenvironments, allowing the study of dynamic and spatial transport of particles. This device enables the dynamic determination of vessel permeability, the measurement of drug and nanoparticle transport, and the assessment of the associated efficacy and toxicity to the liver. The platform is utilized to determine the effect of particle size on the spatiotemporal diffusion of particles through each microenvironment, both independently and in response to the circulation of particles through varying sequences of microenvironments. The results show that, when breast cancer cells were cultured in the microenvironments, vessel porosity was 2.62-fold higher than in vessels within healthy liver microenvironments. Hence, the permeability of the tumor microenvironment increased by 2.35- and 2.77-fold compared to a healthy liver for small and large particles, respectively. The ECM accumulation rate of larger particles was 2.57-fold lower than that of smaller particles in a healthy liver. However, the accumulation rate was 5.57-fold greater in the breast tumor microenvironment. These results are in agreement with comparable in vivo studies. Ultimately, the platform could be utilized to determine the impact of the tissue or tumor microenvironment, or of drug and nanoparticle properties, on transport, efficacy, selectivity, and toxicity in a dynamic and high-throughput manner for use in treatment optimization.
2403.13851
Lucas B\"ottcher
Lucas B\"ottcher, Luis L. Fonseca, Reinhard C. Laubenbacher
Control of Medical Digital Twins with Artificial Neural Networks
13 pages, 5 figures
null
null
null
q-bio.QM cs.LG cs.SY eess.SY math.DS math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The objective of personalized medicine is to tailor interventions to an individual patient's unique characteristics. A key technology for this purpose involves medical digital twins, computational models of human biology that can be personalized and dynamically updated to incorporate patient-specific data collected over time. Certain aspects of human biology, such as the immune system, are not easily captured with physics-based models, such as differential equations. Instead, they are often multi-scale, stochastic, and hybrid. This poses a challenge to existing model-based control and optimization approaches that cannot be readily applied to such models. Recent advances in automatic differentiation and neural-network control methods hold promise in addressing complex control problems. However, the application of these approaches to biomedical systems is still in its early stages. This work introduces dynamics-informed neural-network controllers as an alternative approach to control of medical digital twins. As a first use case for this method, the focus is on agent-based models, a versatile and increasingly common modeling platform in biomedicine. The effectiveness of the proposed neural-network control method is illustrated and benchmarked against other methods with two widely-used agent-based model types. The relevance of the method introduced here extends beyond medical digital twins to other complex dynamical systems.
[ { "created": "Mon, 18 Mar 2024 19:30:46 GMT", "version": "v1" } ]
2024-03-22
[ [ "Böttcher", "Lucas", "" ], [ "Fonseca", "Luis L.", "" ], [ "Laubenbacher", "Reinhard C.", "" ] ]
The objective of personalized medicine is to tailor interventions to an individual patient's unique characteristics. A key technology for this purpose involves medical digital twins, computational models of human biology that can be personalized and dynamically updated to incorporate patient-specific data collected over time. Certain aspects of human biology, such as the immune system, are not easily captured with physics-based models, such as differential equations. Instead, they are often multi-scale, stochastic, and hybrid. This poses a challenge to existing model-based control and optimization approaches that cannot be readily applied to such models. Recent advances in automatic differentiation and neural-network control methods hold promise in addressing complex control problems. However, the application of these approaches to biomedical systems is still in its early stages. This work introduces dynamics-informed neural-network controllers as an alternative approach to control of medical digital twins. As a first use case for this method, the focus is on agent-based models, a versatile and increasingly common modeling platform in biomedicine. The effectiveness of the proposed neural-network control method is illustrated and benchmarked against other methods with two widely-used agent-based model types. The relevance of the method introduced here extends beyond medical digital twins to other complex dynamical systems.
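Editorial note: the core mechanism, training a neural controller by differentiating through a rollout of the dynamics, can be sketched generically. The surrogate dynamic, the "dose" control, and all parameter values below are invented for illustration; the paper's agent-based models are far richer.

```python
import torch

# Dynamics-informed neural control sketch: a small network maps state to a
# dose, trained by backpropagating through a differentiable surrogate
# rollout of x' = x(1 - x) - dose * x (hypothetical "burden" dynamics).
policy = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(),
                             torch.nn.Linear(16, 1), torch.nn.Sigmoid())
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)
dt, steps, target = 0.1, 50, 0.2    # horizon and target burden (invented)

for epoch in range(300):
    x = torch.tensor([[0.9]])       # initial burden
    loss = 0.0
    for _ in range(steps):
        dose = policy(x)
        x = x + dt * (x * (1 - x) - dose * x)   # differentiable surrogate step
        loss = loss + (x - target).pow(2).mean() + 0.01 * dose.pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print("final burden:", float(x))
```

The dose penalty term stands in for treatment cost; stochastic or agent-based dynamics would require a smoothed surrogate, which is part of what makes the biomedical setting hard.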
1304.8045
Buhm Han
Buhm Han, Jae Hoon Sul, Eleazar Eskin, Paul I. W. de Bakker, Soumya Raychaudhuri
A general framework for meta-analyzing dependent studies with overlapping subjects in association mapping
1/17/14: Minor text changes
null
null
null
q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Meta-analysis of genome-wide association studies is increasingly popular and many meta-analytic methods have been recently proposed. A majority of meta-analytic methods combine information from multiple studies by assuming that studies are independent since individuals collected in one study are unlikely to be collected again by another study. However, it has become increasingly common to utilize the same control individuals among multiple studies to reduce genotyping or sequencing cost. This causes those studies that share the same individuals to be dependent, and spurious associations may arise if overlapping subjects are not taken into account in a meta-analysis. In this paper, we propose a general framework for meta-analyzing dependent studies with overlapping subjects. Given dependent studies, our approach "decouples" the studies into independent studies such that meta-analysis methods assuming independent studies can be applied. This enables many meta-analysis methods, such as the random effects model, to account for overlapping subjects. Another advantage is that one can continue to use preferred software in the analysis pipeline which may not support overlapping subjects. Using simulations and the Wellcome Trust Case Control Consortium data, we show that our decoupling approach allows both the fixed and the random effects models to account for overlapping subjects while retaining desirable false positive rate and power.
[ { "created": "Tue, 30 Apr 2013 16:01:14 GMT", "version": "v1" }, { "created": "Mon, 7 Oct 2013 21:53:36 GMT", "version": "v2" }, { "created": "Fri, 17 Jan 2014 17:59:19 GMT", "version": "v3" } ]
2014-01-20
[ [ "Han", "Buhm", "" ], [ "Sul", "Jae Hoon", "" ], [ "Eskin", "Eleazar", "" ], [ "de Bakker", "Paul I. W.", "" ], [ "Raychaudhuri", "Soumya", "" ] ]
Meta-analysis of genome-wide association studies is increasingly popular and many meta-analytic methods have been recently proposed. A majority of meta-analytic methods combine information from multiple studies by assuming that studies are independent since individuals collected in one study are unlikely to be collected again by another study. However, it has become increasingly common to utilize the same control individuals among multiple studies to reduce genotyping or sequencing cost. This causes those studies that share the same individuals to be dependent, and spurious associations may arise if overlapping subjects are not taken into account in a meta-analysis. In this paper, we propose a general framework for meta-analyzing dependent studies with overlapping subjects. Given dependent studies, our approach "decouples" the studies into independent studies such that meta-analysis methods assuming independent studies can be applied. This enables many meta-analysis methods, such as the random effects model, to account for overlapping subjects. Another advantage is that one can continue to use preferred software in the analysis pipeline which may not support overlapping subjects. Using simulations and the Wellcome Trust Case Control Consortium data, we show that our decoupling approach allows both the fixed and the random effects models to account for overlapping subjects while retaining desirable false positive rate and power.
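Editorial note: the decoupling idea can be illustrated with a whitening transform. This is a sketch in the spirit of the approach (the paper's exact transform may differ); the effect sizes and covariance below are invented.

```python
import numpy as np

# Effect estimates from studies sharing controls are correlated; whiten away
# the off-diagonal covariance so independent-study machinery applies.
beta = np.array([0.30, 0.25, 0.40])          # hypothetical per-study effects
C = np.array([[0.010, 0.004, 0.004],         # covariance: off-diagonals induced
              [0.004, 0.010, 0.004],         # by shared control individuals
              [0.004, 0.004, 0.010]])

Linv = np.linalg.inv(np.linalg.cholesky(C))  # C = L L^T  ->  Linv C Linv^T = I
beta_dec = Linv @ beta                       # "decoupled", unit-variance effects

# Fixed-effects estimate of the shared effect from the decoupled data:
design = Linv @ np.ones(3)                   # how the common effect enters
est = design @ beta_dec / (design @ design)  # equals the GLS estimate under C
print(est)
```

After this transform, any downstream tool that assumes independent studies (fixed or random effects) can be run on the decoupled estimates.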
2002.00245
Mandev Gill
Mandev S. Gill, Philippe Lemey, Marc A. Suchard, Andrew Rambaut, Guy Baele
Online Bayesian phylodynamic inference in BEAST with application to epidemic reconstruction
20 pages, 3 figures
null
null
null
q-bio.PE stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reconstructing pathogen dynamics from genetic data as they become available during an outbreak or epidemic represents an important statistical scenario in which observations arrive sequentially in time and one is interested in performing inference in an 'online' fashion. Widely-used Bayesian phylogenetic inference packages are not set up for this purpose, generally requiring one to recompute trees and evolutionary model parameters de novo when new data arrive. To accommodate increasing data flow in a Bayesian phylogenetic framework, we introduce a methodology to efficiently update the posterior distribution with newly available genetic data. Our procedure is implemented in the BEAST 1.10 software package, and relies on a distance-based measure to insert new taxa into the current estimate of the phylogeny and imputes plausible values for new model parameters to accommodate growing dimensionality. This augmentation creates informed starting values and re-uses optimally tuned transition kernels for posterior exploration of growing data sets, reducing the time necessary to converge to target posterior distributions. We apply our framework to data from the recent West African Ebola virus epidemic and demonstrate a considerable reduction in time required to obtain posterior estimates at different time points of the outbreak. Beyond epidemic monitoring, this framework easily finds other applications within the phylogenetics community, where changes in the data -- in terms of alignment changes, sequence addition or removal -- present common scenarios that can benefit from online inference.
[ { "created": "Sat, 1 Feb 2020 17:30:59 GMT", "version": "v1" } ]
2020-02-04
[ [ "Gill", "Mandev S.", "" ], [ "Lemey", "Philippe", "" ], [ "Suchard", "Marc A.", "" ], [ "Rambaut", "Andrew", "" ], [ "Baele", "Guy", "" ] ]
Reconstructing pathogen dynamics from genetic data as they become available during an outbreak or epidemic represents an important statistical scenario in which observations arrive sequentially in time and one is interested in performing inference in an 'online' fashion. Widely-used Bayesian phylogenetic inference packages are not set up for this purpose, generally requiring one to recompute trees and evolutionary model parameters de novo when new data arrive. To accommodate increasing data flow in a Bayesian phylogenetic framework, we introduce a methodology to efficiently update the posterior distribution with newly available genetic data. Our procedure is implemented in the BEAST 1.10 software package, and relies on a distance-based measure to insert new taxa into the current estimate of the phylogeny and imputes plausible values for new model parameters to accommodate growing dimensionality. This augmentation creates informed starting values and re-uses optimally tuned transition kernels for posterior exploration of growing data sets, reducing the time necessary to converge to target posterior distributions. We apply our framework to data from the recent West African Ebola virus epidemic and demonstrate a considerable reduction in time required to obtain posterior estimates at different time points of the outbreak. Beyond epidemic monitoring, this framework easily finds other applications within the phylogenetics community, where changes in the data -- in terms of alignment changes, sequence addition or removal -- present common scenarios that can benefit from online inference.
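Editorial note: a toy version of distance-based taxon insertion follows. BEAST 1.10's actual procedure operates on the full tree and model state; here the distance measure, sequences, and midpoint placement are simplifications.

```python
# Toy distance-based taxon insertion: attach a new sequence next to its
# nearest neighbour among the current taxa, with a crude pendant branch.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b)) / len(a)

current = {"taxonA": "ACGTACGTAC", "taxonB": "ACGTTCGTAC", "taxonC": "AGGTACGAAC"}
new_name, new_seq = "taxonD", "ACGTTCGAAC"

dists = {name: hamming(seq, new_seq) for name, seq in current.items()}
nearest = min(dists, key=dists.get)
pendant = dists[nearest] / 2          # crude midpoint placement
print(f"attach {new_name} next to {nearest}, pendant branch ~ {pendant:.3f}")
# The augmented tree then serves as an informed starting state for MCMC,
# instead of re-estimating the phylogeny de novo when new data arrive.
```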
1912.10489
Tai Sing Lee
Siming Yan, Xuyang Fang, Bowen Xiao, Harold Rockwell, Yimeng Zhang, Tai Sing Lee
Recurrent Feedback Improves Feedforward Representations in Deep Neural Networks
10 pages, 5 figures
null
null
null
q-bio.NC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The abundant recurrent horizontal and feedback connections in the primate visual cortex are thought to play an important role in bringing global and semantic contextual information to early visual areas during perceptual inference, helping to resolve local ambiguity and fill in missing details. In this study, we find that introducing feedback loops and horizontal recurrent connections to a deep convolutional neural network (VGG16) allows the network to become more robust against noise and occlusion during inference, even in the initial feedforward pass. This suggests that recurrent feedback and contextual modulation transform the feedforward representations of the network in a meaningful and interesting way. We study the population codes of neurons in the network before and after learning with feedback, and find that learning with feedback yields an increase in discriminability (measured by d-prime) between the different object classes in the population codes of the neurons in the feedforward path, even at the earliest layer that receives feedback. We find that recurrent feedback, by injecting top-down semantic meaning into the population activities, helps the network learn better feedforward paths that robustly map noisy image patches to the latent representations corresponding to important visual concepts of each object class, resulting in greater robustness of the network against noise and occlusion as well as better fine-grained recognition.
[ { "created": "Sun, 22 Dec 2019 17:40:19 GMT", "version": "v1" } ]
2019-12-24
[ [ "Yan", "Siming", "" ], [ "Fang", "Xuyang", "" ], [ "Xiao", "Bowen", "" ], [ "Rockwell", "Harold", "" ], [ "Zhang", "Yimeng", "" ], [ "Lee", "Tai Sing", "" ] ]
The abundant recurrent horizontal and feedback connections in the primate visual cortex are thought to play an important role in bringing global and semantic contextual information to early visual areas during perceptual inference, helping to resolve local ambiguity and fill in missing details. In this study, we find that introducing feedback loops and horizontal recurrent connections to a deep convolutional neural network (VGG16) allows the network to become more robust against noise and occlusion during inference, even in the initial feedforward pass. This suggests that recurrent feedback and contextual modulation transform the feedforward representations of the network in a meaningful and interesting way. We study the population codes of neurons in the network before and after learning with feedback, and find that learning with feedback yields an increase in discriminability (measured by d-prime) between the different object classes in the population codes of the neurons in the feedforward path, even at the earliest layer that receives feedback. We find that recurrent feedback, by injecting top-down semantic meaning into the population activities, helps the network learn better feedforward paths that robustly map noisy image patches to the latent representations corresponding to important visual concepts of each object class, resulting in greater robustness of the network against noise and occlusion as well as better fine-grained recognition.
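Editorial note: the structural idea of a feedback loop can be shown schematically. This is a drastic simplification of the paper's VGG16-based model; layer sizes and the additive modulation rule are assumptions.

```python
import torch

# Schematic recurrent feedback: a higher layer sends a top-down signal that
# additively modulates the input to an earlier layer over a few time steps.
conv1 = torch.nn.Conv2d(3, 16, 3, padding=1)
conv2 = torch.nn.Conv2d(16, 32, 3, padding=1)
feedback = torch.nn.Conv2d(32, 3, 3, padding=1)   # top-down projection

x = torch.randn(1, 3, 32, 32)        # noisy input image
top_down = torch.zeros_like(x)
for t in range(4):                   # unrolled recurrent iterations
    h1 = torch.relu(conv1(x + top_down))   # contextual modulation of layer 1
    h2 = torch.relu(conv2(h1))
    top_down = feedback(h2)                # feed higher-level activity back
print(h2.shape)
```

Training the whole loop end-to-end is what reshapes the feedforward weights themselves, which is the paper's central observation.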
2003.00328
Gerhard Mayer
Gerhard Mayer
Mass spectrometry for semi-experimental protein structure determination and modeling
28 pages
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The structure of a protein is essential for its function. Protein structures can be determined experimentally or predicted by computational methods, and a combination of both approaches is also possible. Here, we first give an overview of experimental structure determination methods with their pros and cons. We then describe how mass spectrometry is useful for semi-experimental integrative protein structure determination. We review the methodology and describe software programs supporting such integrated protein structure prediction approaches, making use of distance constraints obtained from mass spectrometry cross-linking experiments.
[ { "created": "Sat, 29 Feb 2020 18:43:09 GMT", "version": "v1" } ]
2020-03-03
[ [ "Mayer", "Gerhard", "" ] ]
The structure of a protein is essential for its function. Protein structures can be determined experimentally or predicted by computational methods, and a combination of both approaches is also possible. Here, we first give an overview of experimental structure determination methods with their pros and cons. We then describe how mass spectrometry is useful for semi-experimental integrative protein structure determination. We review the methodology and describe software programs supporting such integrated protein structure prediction approaches, making use of distance constraints obtained from mass spectrometry cross-linking experiments.
2307.11033
Roberto Corral L\'opez
Roberto Corral L\'opez and Samir Suweis and Sandro Azaele and Miguel A. Mu\~noz
Stochastic trade-offs and the emergence of diversification in E. coli evolution experiments
null
null
null
null
q-bio.PE physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
Laboratory experiments with bacterial colonies under well-controlled conditions often lead to evolutionary diversification, where at least two ecotypes emerge from an initially monomorphic population. Empirical evidence suggests that such "evolutionary branching" occurs stochastically, even under fixed and stable conditions. This stochastic nature is characterized by: (i) occurrence in a significant fraction, but not all, of experimental settings, (ii) emergence at widely varying times, and (iii) variable relative abundances of the resulting subpopulations across experiments. Theoretical approaches to understanding evolutionary branching under these conditions have been previously developed within the (deterministic) framework of "adaptive dynamics." Here, we advance the understanding of the stochastic nature of evolutionary outcomes by introducing the concept of "stochastic trade-offs" as opposed to "hard" ones. The key idea is that the stochasticity of mutations occurs in a high-dimensional trait space and this translates into variability that is constrained to a flexible trade-off curve. By incorporating this additional source of stochasticity, we are able to account for the observed empirical variability and make predictions regarding the likelihood of evolutionary branching under different conditions. This approach effectively bridges the gap between theoretical predictions and experimental observations, providing insights into when and how evolutionary branching is more likely to occur in laboratory experiments.
[ { "created": "Thu, 20 Jul 2023 17:08:05 GMT", "version": "v1" }, { "created": "Tue, 23 Jul 2024 06:19:04 GMT", "version": "v2" } ]
2024-07-24
[ [ "López", "Roberto Corral", "" ], [ "Suweis", "Samir", "" ], [ "Azaele", "Sandro", "" ], [ "Muñoz", "Miguel A.", "" ] ]
Laboratory experiments with bacterial colonies under well-controlled conditions often lead to evolutionary diversification, where at least two ecotypes emerge from an initially monomorphic population. Empirical evidence suggests that such "evolutionary branching" occurs stochastically, even under fixed and stable conditions. This stochastic nature is characterized by: (i) occurrence in a significant fraction, but not all, of experimental settings, (ii) emergence at widely varying times, and (iii) variable relative abundances of the resulting subpopulations across experiments. Theoretical approaches to understanding evolutionary branching under these conditions have been previously developed within the (deterministic) framework of "adaptive dynamics." Here, we advance the understanding of the stochastic nature of evolutionary outcomes by introducing the concept of "stochastic trade-offs" as opposed to "hard" ones. The key idea is that the stochasticity of mutations occurs in a high-dimensional trait space and this translates into variability that is constrained to a flexible trade-off curve. By incorporating this additional source of stochasticity, we are able to account for the observed empirical variability and make predictions regarding the likelihood of evolutionary branching under different conditions. This approach effectively bridges the gap between theoretical predictions and experimental observations, providing insights into when and how evolutionary branching is more likely to occur in laboratory experiments.
1312.5492
Shinya Kuroda
Takuya Koumura, Hidetoshi Urakubo, Kaoru Ohashi, Masashi Fujii and Shinya Kuroda
Stochasticity in Ca$^{2+}$ increase in spines enables robust and sensitive information coding
47 pages, 4 figures, 8 supplementary figures
null
10.1371/journal.pone.0099040
null
q-bio.MN q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A dendritic spine is a very small structure (~0.1 {\mu}m$^3$) of a neuron that processes input timing information. Why are spines so small? Here, we provide functional reasons; the size of spines is optimal for information coding. Spines code input timing information by the probability of Ca$^{2+}$ increases, which makes robust and sensitive information coding possible. We created a stochastic simulation model of input timing-dependent Ca$^{2+}$ increases in a cerebellar Purkinje cell's spine. Spines used probability coding of Ca$^{2+}$ increases rather than amplitude coding for input timing detection via stochastic facilitation by utilizing the small number of molecules in a spine volume, where information per volume appeared optimal. Probability coding of Ca$^{2+}$ increases in a spine volume was more robust against input fluctuation and more sensitive to input numbers than amplitude coding of Ca$^{2+}$ increases in a cell volume. Thus, stochasticity is a strategy by which neurons robustly and sensitively code information.
[ { "created": "Thu, 19 Dec 2013 11:49:19 GMT", "version": "v1" }, { "created": "Tue, 25 Feb 2014 11:55:53 GMT", "version": "v2" } ]
2014-06-18
[ [ "Koumura", "Takuya", "" ], [ "Urakubo", "Hidetoshi", "" ], [ "Ohashi", "Kaoru", "" ], [ "Fujii", "Masashi", "" ], [ "Kuroda", "Shinya", "" ] ]
A dendritic spine is a very small structure (~0.1 {\mu}m$^3$) of a neuron that processes input timing information. Why are spines so small? Here, we provide functional reasons; the size of spines is optimal for information coding. Spines code input timing information by the probability of Ca$^{2+}$ increases, which makes robust and sensitive information coding possible. We created a stochastic simulation model of input timing-dependent Ca$^{2+}$ increases in a cerebellar Purkinje cell's spine. Spines used probability coding of Ca$^{2+}$ increases rather than amplitude coding for input timing detection via stochastic facilitation by utilizing the small number of molecules in a spine volume, where information per volume appeared optimal. Probability coding of Ca$^{2+}$ increases in a spine volume was more robust against input fluctuation and more sensitive to input numbers than amplitude coding of Ca$^{2+}$ increases in a cell volume. Thus, stochasticity is a strategy by which neurons robustly and sensitively code information.
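Editorial note: the contrast between probability and amplitude coding can be illustrated with a binomial toy model. The event rates, molecule count, and threshold below are invented; the paper's biophysical model is far more detailed.

```python
import numpy as np

# Toy: with few molecules, an all-or-none Ca2+ event whose *probability*
# depends on input timing can code timing robustly, even though the
# *amplitude* of each event fluctuates strongly at such small numbers.
rng = np.random.default_rng(0)
n_trials = 2000
p_event = {"coincident": 0.6, "offset": 0.3}  # hypothetical timing-dependent rates

for timing, p in p_event.items():
    n_mol = 30                                 # few molecules: spine-sized volume
    opened = rng.binomial(n_mol, p, n_trials)  # stochastic channel openings
    events = opened > n_mol / 2                # regenerative, all-or-none increase
    print(timing, "event probability:", events.mean())
# Reading out the event probability across trials discards the noisy amplitude,
# which is the essence of probability coding in a small volume.
```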
2305.01580
Li Kai
Li Kai, Li Ning, Zhang Wei, Gao Ming
Molecular design method based on novel molecular representation and variational auto-encoder
13 pages, 7 figures, conference: NIAI
4th International Conference on Natural Language Processing, Information Retrieval and AI (NIAI 2023), Volume 13, Number 03, February 2023, pp. 23-35, 2023. CS & IT - CSCP 2023
10.5121/csit.2023.130303
null
q-bio.BM cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Based on the traditional VAE, a novel neural network model is presented that uses the latest molecular representation, SELFIES, to improve the generation of new molecules. In this model, a multi-layer convolutional network and Fisher information are added to the original encoding layer to learn the data characteristics and guide the encoding process, which makes the features of the hidden layer more aggregated; a Long Short-Term Memory neural network (LSTM) is integrated into the decoding layer for better data generation, which effectively addresses the degradation phenomenon produced by the encoding and decoding layers of the original VAE model. Through experiments on the ZINC molecular data set, the similarity achieved by the new VAE is found to be 8.47% higher than that of the original one. SELFIES is better at generating a variety of molecules than the traditional molecular representation, SMILES. Experiments show that using SELFIES together with the new VAE model presented in this paper improves the effectiveness of generating new molecules.
[ { "created": "Mon, 20 Feb 2023 05:11:53 GMT", "version": "v1" } ]
2023-05-03
[ [ "Kai", "Li", "" ], [ "Ning", "Li", "" ], [ "Wei", "Zhang", "" ], [ "Ming", "Gao", "" ] ]
Based on the traditional VAE, a novel neural network model is presented that uses the latest molecular representation, SELFIES, to improve the generation of new molecules. In this model, a multi-layer convolutional network and Fisher information are added to the original encoding layer to learn the data characteristics and guide the encoding process, which makes the features of the hidden layer more aggregated; a Long Short-Term Memory neural network (LSTM) is integrated into the decoding layer for better data generation, which effectively addresses the degradation phenomenon produced by the encoding and decoding layers of the original VAE model. Through experiments on the ZINC molecular data set, the similarity achieved by the new VAE is found to be 8.47% higher than that of the original one. SELFIES is better at generating a variety of molecules than the traditional molecular representation, SMILES. Experiments show that using SELFIES together with the new VAE model presented in this paper improves the effectiveness of generating new molecules.
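Editorial note: the reason SELFIES suits VAE decoders is its robustness guarantee: every well-formed SELFIES string decodes to a valid molecule. The sketch below assumes the open-source `selfies` Python package; the exact token string printed may vary between package versions.

```python
import selfies as sf

# SMILES <-> SELFIES round trip for benzene. A decoder sampling SELFIES
# tokens cannot emit an invalid molecule, unlike a decoder emitting SMILES.
benzene_smiles = "c1ccccc1"
benzene_selfies = sf.encoder(benzene_smiles)    # SMILES -> SELFIES
print(benzene_selfies)                          # e.g. [C][=C][C][=C][C][=C][Ring1][=Branch1]
print(sf.decoder(benzene_selfies))              # SELFIES -> SMILES round trip
```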
1606.03235
Pablo Villegas G\'ongora
Pablo Villegas, Jos\'e Ruiz-Franco, Jorge Hidalgo, Miguel A. Mu\~noz
Intrinsic noise and deviations from criticality in Boolean gene-regulatory networks
14 pages, 6 figures and 1 table. Submitted to Scientific Reports
Scientific Reports 6, 34743 (2016)
10.1038/srep34743
null
q-bio.MN cond-mat.dis-nn nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gene regulatory networks can be successfully modeled as Boolean networks. A much discussed hypothesis says that such model networks best reproduce empirical findings if they are tuned to operate at criticality, i.e. at the borderline between their ordered and disordered phases. Critical networks have been argued to confer a number of functional advantages, such as maximal dynamical range, maximal sensitivity to environmental changes, and an excellent trade-off between stability and flexibility. Here, we study the effect of noise within the context of Boolean networks trained to learn complex tasks under supervision. We verify that quasi-critical networks are the ones that learn in the fastest possible way --even for asynchronous updating rules-- and that the larger the task complexity, the smaller the distance to criticality. On the other hand, when additional sources of intrinsic noise in the network states and/or in its wiring pattern are introduced, the optimally performing networks become clearly subcritical. These results suggest that, in order to compensate for inherent stochasticity, regulatory and other types of biological networks might become subcritical rather than critical, all the more so if the task to be performed has limited complexity.
[ { "created": "Fri, 10 Jun 2016 09:06:11 GMT", "version": "v1" } ]
2016-10-12
[ [ "Villegas", "Pablo", "" ], [ "Ruiz-Franco", "José", "" ], [ "Hidalgo", "Jorge", "" ], [ "Muñoz", "Miguel A.", "" ] ]
Gene regulatory networks can be successfully modeled as Boolean networks. A much discussed hypothesis says that such model networks best reproduce empirical findings if they are tuned to operate at criticality, i.e. at the borderline between their ordered and disordered phases. Critical networks have been argued to confer a number of functional advantages, such as maximal dynamical range, maximal sensitivity to environmental changes, and an excellent trade-off between stability and flexibility. Here, we study the effect of noise within the context of Boolean networks trained to learn complex tasks under supervision. We verify that quasi-critical networks are the ones that learn in the fastest possible way --even for asynchronous updating rules-- and that the larger the task complexity, the smaller the distance to criticality. On the other hand, when additional sources of intrinsic noise in the network states and/or in its wiring pattern are introduced, the optimally performing networks become clearly subcritical. These results suggest that, in order to compensate for inherent stochasticity, regulatory and other types of biological networks might become subcritical rather than critical, all the more so if the task to be performed has limited complexity.
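Editorial note: "distance to criticality" for random Boolean networks is conventionally measured by the average sensitivity $\lambda = 2Kp(1-p)$, critical at $\lambda = 1$, where $K$ is the in-degree and $p$ the bias of the random update functions. A minimal check of this standard diagnostic:

```python
import numpy as np

# Order/chaos diagnostic for annealed random Boolean networks:
#   lambda = 2 * K * p * (1 - p);  lambda < 1 ordered, = 1 critical, > 1 chaotic.
def avg_sensitivity(K, p):
    return 2 * K * p * (1 - p)

for K in (1, 2, 3, 4):
    for p in (0.5, 0.25):
        lam = avg_sensitivity(K, p)
        regime = ("critical" if np.isclose(lam, 1)
                  else "ordered" if lam < 1 else "chaotic")
        print(f"K={K}, p={p}: lambda={lam:.2f} ({regime})")
```

In these terms, the paper's finding is that noise pushes optimal networks to $\lambda$ values below 1.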
0911.1843
Cecile Fauvelot
Cecile Fauvelot (COREUS), Francesca Bertozzi (CIRSA), Federica Costantini (CIRSA), Laura Airoldi (CIRSA), Marco Abbiati (CIRSA)
Lower genetic diversity in the limpet Patella caerulea on urban coastal structures compared to natural rocky habitats
null
Marine Biology 156 (2009) 2313
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Human-made structures are increasingly found in marine coastal habitats. The aim of the present study was to explore whether urban coastal structures can affect the genetic variation of hard-bottom species. We conducted a population genetic analysis on the limpet Patella caerulea sampled in both natural and artificial habitats along the Adriatic coast. Five microsatellite loci were used to test for differences in genetic diversity and structure among samples. Three microsatellite loci showed strong Hardy-Weinberg disequilibrium, likely linked to the presence of null alleles. Genetic diversity was significantly higher in the natural habitat than in the artificial habitat. A weak but significant differentiation over all limpet samples was observed, but it was not related to the type of habitat. While the exact causes of the differences in genetic diversity deserve further investigation, these results clearly indicate that the expansion of urban structures can lead to genetic diversity loss at regional scales.
[ { "created": "Tue, 10 Nov 2009 07:16:54 GMT", "version": "v1" } ]
2009-11-11
[ [ "Fauvelot", "Cecile", "", "COREUS" ], [ "Bertozzi", "Francesca", "", "CIRSA" ], [ "Costantini", "Federica", "", "CIRSA" ], [ "Airoldi", "Laura", "", "CIRSA" ], [ "Abbiati", "Marco", "", "CIRSA" ] ]
Human-made structures are increasingly found in marine coastal habitats. The aim of the present study was to explore whether urban coastal structures can affect the genetic variation of hard-bottom species. We conducted a population genetic analysis on the limpet Patella caerulea sampled in both natural and artificial habitats along the Adriatic coast. Five microsatellite loci were used to test for differences in genetic diversity and structure among samples. Three microsatellite loci showed strong Hardy-Weinberg disequilibrium, likely linked to the presence of null alleles. Genetic diversity was significantly higher in the natural habitat than in the artificial habitat. A weak but significant differentiation over all limpet samples was observed, but it was not related to the type of habitat. While the exact causes of the differences in genetic diversity deserve further investigation, these results clearly indicate that the expansion of urban structures can lead to genetic diversity loss at regional scales.
2109.11985
Matt Holzer
Ashley Armbruster, Matt Holzer, Noah Roselli, Lena Underwood
Epidemic spreading on complex networks as front propagation into an unstable state
null
null
null
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study epidemic arrival times in meta-population disease models through the lens of front propagation into unstable states. We demonstrate that several features of invasion fronts in the PDE context are also relevant to the network case. We show that the susceptible-infected-recovered model on a network is linearly determined, in the sense that the arrival times in the nonlinear system are approximated by the arrival times of the instability in the system linearized near the disease-free state. Arrival time predictions are extended to a susceptible-exposed-infected-recovered model. We then study a recent model of social epidemics where higher-order interactions of individuals lead to faster invasion speeds. For these pushed fronts we compute corrections to the estimated arrival times. Finally, we show how inhomogeneities in local infection rates lead to faster average arrival times.
[ { "created": "Fri, 24 Sep 2021 14:17:44 GMT", "version": "v1" }, { "created": "Tue, 18 Oct 2022 17:41:02 GMT", "version": "v2" } ]
2022-10-19
[ [ "Armbruster", "Ashley", "" ], [ "Holzer", "Matt", "" ], [ "Roselli", "Noah", "" ], [ "Underwood", "Lena", "" ] ]
We study epidemic arrival times in meta-population disease models through the lens of front propagation into unstable states. We demonstrate that several features of invasion fronts in the PDE context are also relevant to the network case. We show that the susceptible-infected-recovered model on a network is linearly determined, in the sense that the arrival times in the nonlinear system are approximated by the arrival times of the instability in the system linearized near the disease-free state. Arrival time predictions are extended to a susceptible-exposed-infected-recovered model. We then study a recent model of social epidemics where higher-order interactions of individuals lead to faster invasion speeds. For these pushed fronts we compute corrections to the estimated arrival times. Finally, we show how inhomogeneities in local infection rates lead to faster average arrival times.
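Editorial note: the linear-determinacy idea is easy to demonstrate numerically. The sketch below estimates arrival times from the linearization of a metapopulation SIR about the disease-free state; the network (a chain), rates, and threshold are invented for illustration.

```python
import numpy as np
from scipy.linalg import expm

# Linearized infected dynamics near the disease-free state:
#   dI/dt = (beta - gamma) I + M I,   M a mobility matrix.
n = 20
beta, gamma, eps = 1.5, 1.0, 0.01
M = np.zeros((n, n))
for i in range(n - 1):                       # a simple chain of populations
    M[i, i + 1] = M[i + 1, i] = eps
M -= np.diag(M.sum(axis=1))                  # conservative movement
J = (beta - gamma) * np.eye(n) + M           # Jacobian at the disease-free state

I = np.zeros(n); I[0] = 1e-6                 # seed infection at node 0
threshold, dt, t = 1e-3, 0.05, 0.0
arrival = np.full(n, np.nan)                 # arrival time = first threshold crossing
step = expm(J * dt)
for _ in range(4000):
    I, t = step @ I, t + dt
    newly = np.isnan(arrival) & (I > threshold)
    arrival[newly] = t
print(np.round(arrival, 1))                  # roughly linear in distance from the seed
```

Linear determinacy means these times track the arrival times of the full nonlinear SIR model.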
1704.08851
Sebastian Weichwald
Sebastian Weichwald and Moritz Grosse-Wentrup
The right tool for the right question --- beyond the encoding versus decoding dichotomy
preprint
null
null
null
q-bio.NC stat.AP stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There are two major questions that neuroimaging studies attempt to answer: First, how are sensory stimuli represented in the brain (which we term the stimulus-based setting)? And, second, how does the brain generate cognition (termed the response-based setting)? There has been a lively debate in the neuroimaging community about whether encoding and decoding models can provide insights into these questions. In this commentary, we construct two simple and analytically tractable examples to demonstrate that while an encoding model analysis helps with the former, neither type of model is appropriate to satisfactorily answer the latter question. Consequently, we argue that if we want to understand how the brain generates cognition, we need to move beyond the encoding versus decoding dichotomy and instead discuss and develop tools that are specifically tailored to our endeavour.
[ { "created": "Fri, 28 Apr 2017 08:56:47 GMT", "version": "v1" } ]
2017-05-01
[ [ "Weichwald", "Sebastian", "" ], [ "Grosse-Wentrup", "Moritz", "" ] ]
There are two major questions that neuroimaging studies attempt to answer: First, how are sensory stimuli represented in the brain (which we term the stimulus-based setting)? And, second, how does the brain generate cognition (termed the response-based setting)? There has been a lively debate in the neuroimaging community about whether encoding and decoding models can provide insights into these questions. In this commentary, we construct two simple and analytically tractable examples to demonstrate that while an encoding model analysis helps with the former, neither type of model is appropriate to satisfactorily answer the latter question. Consequently, we argue that if we want to understand how the brain generates cognition, we need to move beyond the encoding versus decoding dichotomy and instead discuss and develop tools that are specifically tailored to our endeavour.
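Editorial note: a toy in the spirit of such analytically tractable examples (constructed here, not copied from the paper) shows why decoding weights alone can mislead. Channel 1 carries signal plus shared noise, channel 2 only the shared noise; the optimal decoder still weights channel 2, to cancel the noise.

```python
import numpy as np

# Two-channel toy: X[:, 0] = s + n (signal + noise), X[:, 1] = n (noise only).
rng = np.random.default_rng(0)
s = rng.normal(size=100000)          # stimulus
n = rng.normal(size=100000)          # shared noise
X = np.column_stack([s + n, n])      # observed channels

w, *_ = np.linalg.lstsq(X, s, rcond=None)   # least-squares decoding weights
print(np.round(w, 2))                # ~[1, -1]: nonzero weight on the noise channel
```

Interpreting the nonzero weight on channel 2 as "channel 2 encodes the stimulus" would be wrong, which is the kind of pitfall the commentary is about.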
1808.04458
Mohammadsadegh Ghiasi
Mohammad S. Ghiasi, Jason E. Chen, Edward K. Rodriguez, Ashkan Vaziri, Ara Nazarian
Computational Modeling of the Effects of Inflammatory Response and Granulation Tissue Properties on Human Bone Fracture Healing
25 Pages, 7 Figures
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The bone healing process includes four phases: inflammatory response, soft callus formation, hard callus development, and remodeling. Mechanobiological models have been used to investigate the role of various mechanical and biological factors in bone healing. However, the initial phase of healing, which includes the inflammatory response, granulation tissue formation and initial callus formation during the first few days post-fracture, is generally neglected in such studies. In this study, we developed a finite-element-based model to simulate different levels of the diffusion coefficient for mesenchymal stem cell (MSC) migration, the Young's modulus of granulation tissue, callus thickness and interfragmentary gap size, in order to understand the modulatory effects of these initial-phase parameters on bone healing. The results showed that faster MSC migration, stiffer granulation tissue, a thicker callus and a smaller interfragmentary gap enhanced healing to some extent. Beyond a certain threshold, a state of saturation was reached for MSC migration rate, granulation tissue stiffness and callus thickness. A parametric study verified that the callus formed in the initial phase, in agreement with experimental observations, has an ideal range of geometry and material properties to yield the most efficient healing time. Findings from this paper quantify the effects of the initial phase on healing outcome, supporting a better understanding of the underlying biological and mechanobiological mechanisms and their utilization in the design and optimization of treatment strategies. Simulation outcomes also demonstrated that for fractures where bone segments are in close proximity, callus development is not required. This finding is consistent with the concepts of primary and secondary bone healing.
[ { "created": "Mon, 13 Aug 2018 20:37:01 GMT", "version": "v1" } ]
2018-08-15
[ [ "Ghiasi", "Mohammad S.", "" ], [ "Chen", "Jason E.", "" ], [ "Rodriguez", "Edward K.", "" ], [ "Vaziri", "Ashkan", "" ], [ "Nazarian", "Ara", "" ] ]
The bone healing process includes four phases: inflammatory response, soft callus formation, hard callus development, and remodeling. Mechanobiological models have been used to investigate the role of various mechanical and biological factors in bone healing. However, the initial phase of healing, which includes the inflammatory response, granulation tissue formation and initial callus formation during the first few days post-fracture, is generally neglected in such studies. In this study, we developed a finite-element-based model to simulate different levels of the diffusion coefficient for mesenchymal stem cell (MSC) migration, the Young's modulus of granulation tissue, callus thickness and interfragmentary gap size, in order to understand the modulatory effects of these initial-phase parameters on bone healing. The results showed that faster MSC migration, stiffer granulation tissue, a thicker callus and a smaller interfragmentary gap enhanced healing to some extent. Beyond a certain threshold, a state of saturation was reached for MSC migration rate, granulation tissue stiffness and callus thickness. A parametric study verified that the callus formed in the initial phase, in agreement with experimental observations, has an ideal range of geometry and material properties to yield the most efficient healing time. Findings from this paper quantify the effects of the initial phase on healing outcome, supporting a better understanding of the underlying biological and mechanobiological mechanisms and their utilization in the design and optimization of treatment strategies. Simulation outcomes also demonstrated that for fractures where bone segments are in close proximity, callus development is not required. This finding is consistent with the concepts of primary and secondary bone healing.
1803.10044
Roland L. Knorr
Roland L. Knorr, Jan Steinkuehler and Rumiana Dimova
Micron-sized domains in quasi single-component giant vesicles
null
null
null
null
q-bio.BM cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Giant unilamellar vesicles (GUVs) are a convenient tool to study membrane-bound processes using optical microscopy. An increasing number of studies highlights the potential of these model membranes when addressing questions in membrane biophysics and cell biology. Among them, phase transitions and domain formation, dynamics and stability in raft-like mixtures are probably some of the most intensively investigated. In doing so, many research teams rely on standard protocols for GUV preparation and handling involving the use of sugar solutions. Here, we demonstrate that following such a standard approach can lead to abnormal formation of micron-sized domains in GUVs grown from only a single phospholipid. The membrane heterogeneity is visualized by means of a small fraction (0.1 mol%) of a fluorescent lipid dye. For dipalmitoylphosphatidylcholine GUVs, different types of membrane heterogeneities were detected. First, an unexpected formation of micron-sized dye-depleted domains was observed upon cooling. These domains nucleated about 10 K above the lipid main phase transition temperature, TM. In addition, upon further cooling of the GUVs down to the immediate vicinity of TM, stripe-like dye-enriched structures around the domains are detected. The micron-sized domains in quasi single-component GUVs were observed also when using two other lipids. Whereas the stripe structures are related to the phase transition of the lipid, the dye-excluding domains seem to be caused by traces of impurities present in the glucose. Supplementing glucose solutions with nm-sized liposomes at millimolar lipid concentration suppresses the formation of the micron-sized domains, presumably by providing competitive binding of the impurities to the liposome membrane in excess. It is likely that such traces of impurities can significantly alter lipid phase diagrams and cause differences among reported ones.
[ { "created": "Tue, 27 Mar 2018 12:38:46 GMT", "version": "v1" }, { "created": "Mon, 25 Jun 2018 19:17:10 GMT", "version": "v2" } ]
2018-06-27
[ [ "Knorr", "Roland L.", "" ], [ "Steinkuehler", "Jan", "" ], [ "Dimova", "Rumiana", "" ] ]
Giant unilamellar vesicles (GUVs) are a convenient tool to study membrane-bound processes using optical microscopy. An increasing number of studies highlights the potential of these model membranes when addressing questions in membrane biophysics and cell biology. Among them, phase transitions and domain formation, dynamics and stability in raft-like mixtures are probably some of the most intensively investigated. In doing so, many research teams rely on standard protocols for GUV preparation and handling involving the use of sugar solutions. Here, we demonstrate that following such a standard approach can lead to abnormal formation of micron-sized domains in GUVs grown from only a single phospholipid. The membrane heterogeneity is visualized by means of a small fraction (0.1 mol%) of a fluorescent lipid dye. For dipalmitoylphosphatidylcholine GUVs, different types of membrane heterogeneities were detected. First, an unexpected formation of micron-sized dye-depleted domains was observed upon cooling. These domains nucleated about 10 K above the lipid main phase transition temperature, TM. In addition, upon further cooling of the GUVs down to the immediate vicinity of TM, stripe-like dye-enriched structures around the domains are detected. The micron-sized domains in quasi single-component GUVs were observed also when using two other lipids. Whereas the stripe structures are related to the phase transition of the lipid, the dye-excluding domains seem to be caused by traces of impurities present in the glucose. Supplementing glucose solutions with nm-sized liposomes at millimolar lipid concentration suppresses the formation of the micron-sized domains, presumably by providing competitive binding of the impurities to the liposome membrane in excess. It is likely that such traces of impurities can significantly alter lipid phase diagrams and cause differences among reported ones.
1307.7861
Aaron Darling
Michal N\'an\'asi, Tom\'a\v{s} Vina\v{r}, and Bro\v{n}a Brejov\'a
Probabilistic Approaches to Alignment with Tandem Repeats
Peer-reviewed and presented as part of the 13th Workshop on Algorithms in Bioinformatics (WABI2013)
null
null
null
q-bio.QM q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a simple, tractable pair hidden Markov model for pairwise sequence alignment that accounts for the presence of short tandem repeats. Using the framework of gain functions, we design several optimization criteria for decoding this model and describe the resulting decoding algorithms, ranging from the traditional Viterbi and posterior decoding to block-based decoding algorithms specialized for our model. We compare the accuracy of individual decoding algorithms on simulated data and find our approach superior to the classical three-state pair HMM in simulations.
[ { "created": "Tue, 30 Jul 2013 08:02:34 GMT", "version": "v1" } ]
2013-07-31
[ [ "Nánási", "Michal", "" ], [ "Vinař", "Tomáš", "" ], [ "Brejová", "Broňa", "" ] ]
We propose a simple, tractable pair hidden Markov model for pairwise sequence alignment that accounts for the presence of short tandem repeats. Using the framework of gain functions, we design several optimization criteria for decoding this model and describe the resulting decoding algorithms, ranging from the traditional Viterbi and posterior decoding to block-based decoding algorithms specialized for our model. We compare the accuracy of individual decoding algorithms on simulated data and find our approach superior to the classical three-state pair HMM in simulations.
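The decoding algorithms named in the abstract above are all variants of standard pair-HMM dynamic programming. As a rough orientation, here is a minimal log-space Viterbi sketch for the classical three-state pair HMM (match, gap in one sequence, gap in the other) that the paper uses as its baseline; the gap and emission probabilities are illustrative placeholders, not the paper's parameters, and the repeat-aware states of the proposed model are omitted.

```python
import numpy as np

# Classical three-state pair HMM: M emits an aligned pair, X a symbol from x
# only, Y a symbol from y only. delta/eps and the emission values are toy
# placeholders, not fitted parameters.
delta, eps = 0.1, 0.3
trans = np.log(np.array([
    [1 - 2 * delta, delta, delta],   # from M
    [1 - eps, eps, 1e-12],           # from X (X -> Y effectively forbidden)
    [1 - eps, 1e-12, eps],           # from Y
]))
log_gap = np.log(0.25)

def log_match(a, b):
    return np.log(0.9 if a == b else 0.1 / 3)

def viterbi_score(x, y):
    """Log-probability of the best alignment path of x and y."""
    n, m = len(x), len(y)
    V = np.full((3, n + 1, m + 1), -np.inf)
    V[0, 0, 0] = 0.0   # conventionally start in the match state
    for i in range(n + 1):
        for j in range(m + 1):
            if i and j:   # M consumes one symbol from each sequence
                V[0, i, j] = log_match(x[i-1], y[j-1]) + (V[:, i-1, j-1] + trans[:, 0]).max()
            if i:         # X consumes a symbol from x only
                V[1, i, j] = log_gap + (V[:, i-1, j] + trans[:, 1]).max()
            if j:         # Y consumes a symbol from y only
                V[2, i, j] = log_gap + (V[:, i, j-1] + trans[:, 2]).max()
    return V[:, n, m].max()

print(viterbi_score("ACGACGACG", "ACGACG"))
```

Posterior and block-based decoding replace the max in this recursion with sums over paths, but the table structure stays the same.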
2212.07638
Muhammad Anwari Leksono
Muhammad Anwari Leksono and Ayu Purwarianti
Sequential Labelling and DNABERT For Splice Site Prediction in Homo Sapiens DNA
revision 1, 5 figures, 3 tables
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Genome sequencing technology has improved significantly in the last few years, resulting in an abundance of genetic data. Artificial intelligence has been employed to analyze genetic data in response to its sheer size and variability. Gene prediction on single DNA sequences has been conducted using various deep learning architectures to discover splice sites and thereby identify intron and exon regions. Recent predictions are carried out with models trained on sequences with fixed splice site locations, which eliminates the possibility of multiple splice sites existing in a single sequence. This paper proposes sequential labelling to predict splice sites regardless of their position in the sequence. Sequential labelling is carried out on DNA to determine intron and exon regions and thus discover splice sites. The sequential labelling models used are based on pretrained DNABERT-3, which has been trained on the human genome. Both fine-tuning and feature-based approaches are tested. The proposed model is benchmarked against the latest sequential labelling model designed for mutation type and location prediction. While achieving high F1 scores on validation data, both the baseline and the proposed model perform poorly on test data. Analysis of errors and test results reveals that the model overfits and is therefore deemed unsuitable for splice site prediction.
[ { "created": "Thu, 15 Dec 2022 07:18:36 GMT", "version": "v1" }, { "created": "Thu, 16 Mar 2023 11:41:59 GMT", "version": "v2" } ]
2023-03-17
[ [ "Leksono", "Muhammad Anwari", "" ], [ "Purwarianti", "Ayu", "" ] ]
Genome sequencing technology has improved significantly in the last few years, resulting in an abundance of genetic data. Artificial intelligence has been employed to analyze genetic data in response to its sheer size and variability. Gene prediction on single DNA sequences has been conducted using various deep learning architectures to discover splice sites and thereby identify intron and exon regions. Recent predictions are carried out with models trained on sequences with fixed splice site locations, which eliminates the possibility of multiple splice sites existing in a single sequence. This paper proposes sequential labelling to predict splice sites regardless of their position in the sequence. Sequential labelling is carried out on DNA to determine intron and exon regions and thus discover splice sites. The sequential labelling models used are based on pretrained DNABERT-3, which has been trained on the human genome. Both fine-tuning and feature-based approaches are tested. The proposed model is benchmarked against the latest sequential labelling model designed for mutation type and location prediction. While achieving high F1 scores on validation data, both the baseline and the proposed model perform poorly on test data. Analysis of errors and test results reveals that the model overfits and is therefore deemed unsuitable for splice site prediction.
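For readers wanting a concrete starting point, the sketch below shows how sequential labelling over DNA can be set up as token classification on top of a pretrained DNA language model with the Hugging Face transformers API. The checkpoint identifier and the three-label scheme are assumptions for illustration; the paper's exact heads, data, and training procedure differ.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Assumed Hugging Face checkpoint id for DNABERT-3 (3-mer vocabulary); swap in
# whichever model you actually use. num_labels=3 is a hypothetical
# intron/exon/boundary labelling scheme, not the paper's.
MODEL = "zhihan1996/DNA_bert_3"

def to_kmers(seq, k=3):
    # DNABERT expects overlapping k-mers separated by spaces.
    return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForTokenClassification.from_pretrained(MODEL, num_labels=3)

inputs = tokenizer(to_kmers("ACGTGGTAAGTACGTTAG"), return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits        # shape: (1, n_tokens, 3)
labels = logits.argmax(-1).squeeze(0).tolist()
print(labels)  # with an untrained head, labels are meaningless until fine-tuned
```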
1403.5686
Haris Vikalo
Xiaohu Shen, Manohar Shamaiah, and Haris Vikalo
Iterative Learning for Reference-Guided DNA Sequence Assembly from Short Reads: Algorithms and Limits of Performance
Submitted to IEEE Transactions on Signal Processing
null
10.1109/TSP.2014.2333564
null
q-bio.GN cs.CE cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The recent emergence of next-generation DNA sequencing technology has enabled acquisition of genetic information at unprecedented scales. In order to determine the genetic blueprint of an organism, sequencing platforms typically employ a so-called shotgun sequencing strategy to oversample the target genome with a library of relatively short overlapping reads. The order of nucleotides in the reads is determined by processing the acquired noisy signals generated by the sequencing instrument. Assembly of a genome from potentially erroneous short reads is a computationally daunting task even in the scenario where a reference genome exists. Errors and gaps in the reference, and perfect repeat regions in the target, further render the assembly challenging and cause inaccuracies. In this paper, we formulate the reference-guided sequence assembly problem as the inference of the genome sequence on a bipartite graph and solve it using a message-passing algorithm. The proposed algorithm can be interpreted as the well-known classical belief propagation scheme under a certain prior. Unlike existing state-of-the-art methods, the proposed algorithm combines the information provided by the reads without needing to know the reliability of the short reads (so-called quality scores). The relation of the message-passing algorithm to a provably convergent power iteration scheme is discussed. To evaluate and benchmark the performance of the proposed technique, we find an analytical expression for the probability of error of a genie-aided maximum a posteriori (MAP) decision scheme. Results on both simulated and experimental data demonstrate that the proposed message-passing algorithm outperforms commonly used state-of-the-art tools, and it nearly achieves the performance of the aforementioned MAP decision scheme.
[ { "created": "Sat, 22 Mar 2014 16:59:53 GMT", "version": "v1" } ]
2015-06-19
[ [ "Shen", "Xiaohu", "" ], [ "Shamaiah", "Manohar", "" ], [ "Vikalo", "Haris", "" ] ]
The recent emergence of next-generation DNA sequencing technology has enabled acquisition of genetic information at unprecedented scales. In order to determine the genetic blueprint of an organism, sequencing platforms typically employ a so-called shotgun sequencing strategy to oversample the target genome with a library of relatively short overlapping reads. The order of nucleotides in the reads is determined by processing the acquired noisy signals generated by the sequencing instrument. Assembly of a genome from potentially erroneous short reads is a computationally daunting task even in the scenario where a reference genome exists. Errors and gaps in the reference, and perfect repeat regions in the target, further render the assembly challenging and cause inaccuracies. In this paper, we formulate the reference-guided sequence assembly problem as the inference of the genome sequence on a bipartite graph and solve it using a message-passing algorithm. The proposed algorithm can be interpreted as the well-known classical belief propagation scheme under a certain prior. Unlike existing state-of-the-art methods, the proposed algorithm combines the information provided by the reads without needing to know the reliability of the short reads (so-called quality scores). The relation of the message-passing algorithm to a provably convergent power iteration scheme is discussed. To evaluate and benchmark the performance of the proposed technique, we find an analytical expression for the probability of error of a genie-aided maximum a posteriori (MAP) decision scheme. Results on both simulated and experimental data demonstrate that the proposed message-passing algorithm outperforms commonly used state-of-the-art tools, and it nearly achieves the performance of the aforementioned MAP decision scheme.
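The abstract relates the message-passing decoder to a provably convergent power iteration. As a reminder of that numerical core, here is a generic power-iteration sketch for the dominant eigenvector of a symmetric matrix; the matrix below is a toy stand-in, not the read-overlap operator of the paper.

```python
import numpy as np

def power_iteration(A, iters=200, tol=1e-12):
    """Return (approximately) the dominant eigenvector of A by repeated
    multiplication and renormalization."""
    v = np.random.default_rng(0).normal(size=A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = A @ v
        w /= np.linalg.norm(w)
        if np.linalg.norm(w - v) < tol:
            break
        v = w
    return v

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
v = power_iteration(A)
print(v, float(v @ A @ v))  # the Rayleigh quotient approximates the top eigenvalue
```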
2001.11972
Shawn Gu
Shawn Gu and Tijana Milenkovic
Data-driven biological network alignment that uses topological, sequence, and functional information
null
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many proteins remain functionally unannotated. Sequence alignment (SA) uncovers missing annotations by transferring functional knowledge between species' sequence-conserved regions. Because SA is imperfect, network alignment (NA) complements SA by transferring functional knowledge between conserved biological network, rather than just sequence, regions of different species. Existing NA assumes that it is topological similarity (isomorphic-like matching) between network regions that corresponds to the regions' functional relatedness. However, we recently found that functionally unrelated proteins are almost as topologically similar as functionally related proteins. So, we redefined NA as a data-driven framework, TARA, which learns from network and protein functional data what kind of topological relatedness (rather than similarity) between proteins corresponds to the proteins' functional relatedness. TARA used topological information (within each network) but not sequence information (between proteins across networks). Yet, its alignments yielded higher protein functional prediction accuracy than alignments of existing NA methods, even those that used both topological and sequence information. Here, we propose TARA++ that is also data-driven, like TARA and unlike other existing methods, but that uses across-network sequence information on top of within-network topological information, unlike TARA. To deal with the within-and-across-network analysis, we adapt social network embedding to the problem of biological NA. TARA++ outperforms protein functional prediction accuracy of existing methods.
[ { "created": "Fri, 31 Jan 2020 17:43:13 GMT", "version": "v1" }, { "created": "Fri, 12 Jun 2020 19:36:28 GMT", "version": "v2" } ]
2020-06-16
[ [ "Gu", "Shawn", "" ], [ "Milenkovic", "Tijana", "" ] ]
Many proteins remain functionally unannotated. Sequence alignment (SA) uncovers missing annotations by transferring functional knowledge between species' sequence-conserved regions. Because SA is imperfect, network alignment (NA) complements SA by transferring functional knowledge between conserved biological network, rather than just sequence, regions of different species. Existing NA assumes that it is topological similarity (isomorphic-like matching) between network regions that corresponds to the regions' functional relatedness. However, we recently found that functionally unrelated proteins are almost as topologically similar as functionally related proteins. So, we redefined NA as a data-driven framework, TARA, which learns from network and protein functional data what kind of topological relatedness (rather than similarity) between proteins corresponds to the proteins' functional relatedness. TARA used topological information (within each network) but not sequence information (between proteins across networks). Yet, its alignments yielded higher protein functional prediction accuracy than alignments of existing NA methods, even those that used both topological and sequence information. Here, we propose TARA++ that is also data-driven, like TARA and unlike other existing methods, but that uses across-network sequence information on top of within-network topological information, unlike TARA. To deal with the within-and-across-network analysis, we adapt social network embedding to the problem of biological NA. TARA++ outperforms protein functional prediction accuracy of existing methods.
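At its core, the data-driven formulation reduces to training a classifier that predicts the functional relatedness of a cross-network protein pair from within-network topological features plus across-network sequence features. The sketch below illustrates only that reduction, with random placeholder features and labels (real embeddings would come from the networks and labels from functional annotations); it is a schematic, not the TARA++ pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_pairs, d = 1000, 16
topo_1 = rng.normal(size=(n_pairs, d))   # topological embedding of the protein in network 1
topo_2 = rng.normal(size=(n_pairs, d))   # topological embedding of the protein in network 2
seq_sim = rng.random((n_pairs, 1))       # across-network sequence similarity of the pair
X = np.hstack([topo_1, topo_2, seq_sim])
y = rng.integers(0, 2, n_pairs)          # placeholder functional-relatedness labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")  # ~0.5 on random labels, as expected
```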
2311.17755
Franck Andre
Juan Gonz\'alez-Cuevas, Ricardo Arg\"uello, Marcos Florentin, Franck M. Andr\'e (METSY), Lluis Mir (IGR, METSY)
Experimental and Theoretical Brownian Dynamics Analysis of Ion Transport During Cellular Electroporation of E. coli Bacteria
Annals of Biomedical Engineering, 2023
null
10.1007/s10439-023-03353-4
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Escherichia coli bacterium is a rod-shaped organism composed of a complex double membrane structure. Knowledge of electric-field-driven ion transport through both membranes and the evolution of their induced permeabilization has important applications in biomedical engineering, delivery of genes and antibacterial agents. However, few studies have been conducted on Gram-negative bacteria in this regard considering the contribution of all ion types. To address this gap in knowledge, we have developed a deterministic and stochastic Brownian dynamics model to simulate in 3D space the motion of ions through pores formed in the plasma membranes of E. coli cells during electroporation. The diffusion coefficient, mobility, and translation time of Ca$^{2+}$, Mg$^{2+}$, Na$^+$, K$^+$, and Cl$^-$ ions within the pore region are estimated from the numerical model. Calculations of the pore's conductance have been validated with experiments conducted at Gustave Roussy. From the simulations, it was found that the main driving force of ionic uptake during the pulse is the externally applied electric field. The results from this work provide a better understanding of ion transport during electroporation, aiding in the design of electrical pulses for maximizing ion throughput, primarily for application in cancer treatment.
[ { "created": "Wed, 29 Nov 2023 15:57:32 GMT", "version": "v1" } ]
2023-11-30
[ [ "González-Cuevas", "Juan", "", "METSY" ], [ "Argüello", "Ricardo", "", "METSY" ], [ "Florentin", "Marcos", "", "METSY" ], [ "André", "Franck M.", "", "METSY" ], [ "Mir", "Lluis", "", "IGR, METSY" ] ]
The Escherichia coli bacterium is a rod-shaped organism composed of a complex double membrane structure. Knowledge of electric-field-driven ion transport through both membranes and the evolution of their induced permeabilization has important applications in biomedical engineering, delivery of genes and antibacterial agents. However, few studies have been conducted on Gram-negative bacteria in this regard considering the contribution of all ion types. To address this gap in knowledge, we have developed a deterministic and stochastic Brownian dynamics model to simulate in 3D space the motion of ions through pores formed in the plasma membranes of E. coli cells during electroporation. The diffusion coefficient, mobility, and translation time of Ca$^{2+}$, Mg$^{2+}$, Na$^+$, K$^+$, and Cl$^-$ ions within the pore region are estimated from the numerical model. Calculations of the pore's conductance have been validated with experiments conducted at Gustave Roussy. From the simulations, it was found that the main driving force of ionic uptake during the pulse is the externally applied electric field. The results from this work provide a better understanding of ion transport during electroporation, aiding in the design of electrical pulses for maximizing ion throughput, primarily for application in cancer treatment.
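The stochastic transport step at the heart of such a model is simple to state. Below is a minimal overdamped Brownian dynamics sketch of a single Na$^+$ ion drifting and diffusing in a uniform field of electroporation-scale strength; pore geometry, membrane interactions, and the multi-ion treatment of the paper are omitted, and the field value is illustrative.

```python
import numpy as np

# Overdamped Brownian dynamics of one Na+ ion in a uniform field: drift from
# the field via the Einstein mobility plus Gaussian diffusion.
kB, T = 1.380649e-23, 298.0       # Boltzmann constant (J/K), temperature (K)
q = 1.602176634e-19               # elementary charge (C)
D = 1.33e-9                       # Na+ diffusion coefficient in water (m^2/s)
mu = D * q / (kB * T)             # Einstein relation: electrophoretic mobility
E = np.array([0.0, 0.0, 1e7])     # applied field (V/m), illustrative magnitude

dt, steps = 1e-12, 10_000         # 1 ps time step, 10 ns total
rng = np.random.default_rng(1)
x = np.zeros(3)
for _ in range(steps):
    x = x + mu * E * dt + np.sqrt(2 * D * dt) * rng.normal(size=3)
print(f"displacement after {steps * dt * 1e9:.0f} ns: {x * 1e9} nm")
```

With these numbers the deterministic drift (a few nm over 10 ns) is comparable to the diffusive spread, consistent with the abstract's finding that the applied field dominates uptake only during the pulse.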
1407.5503
Ellen Baake
Corinna Ernst and Ellen Baake
Rare event simulation in immune biology: Models of negative selection in T-cell maturation
13 pages, 5 figures; accepted for 10th International Workshop on Rare Event Simulation, Amsterdam, Aug. 27-29, 2014
null
null
null
q-bio.CB math.PR q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a probabilistic T-cell model that includes negative selection and takes contrasting models of tissue-restricted antigen (TRA) expression in the thymus into account. We start from the basic model of van den Berg, Rand, and Burroughs (2001) and include negative selection via individual-based T-cell modelling, in which each T-cell is defined by its stimulation rates to all relevant self antigens. We present a simulation approach based on partial tilting of the stimulation rates recognized by a single T-cell. We investigate the effects of negative selection for diverging modes of thymic antigen presentation, namely arbitrary TRA presentation, and more or less strict emulation of tissue-specific cell lines. We observe that negative selection leads to truncation of the tail of the distribution of the stimulation rates mature T-cells receive from self antigens, i.e., the self background is reduced. This increases the activation probabilities of single T-cells in the presence of non-self antigens.
[ { "created": "Mon, 21 Jul 2014 14:16:06 GMT", "version": "v1" } ]
2014-07-22
[ [ "Ernst", "Corinna", "" ], [ "Baake", "Ellen", "" ] ]
We present a probabilistic T-cell model that includes negative selection and takes contrasting models of tissue-restricted antigen (TRA) expression in the thymus into account. We start from the basic model of van den Berg, Rand, and Burroughs (2001) and include negative selection via individual-based T-cell modelling, in which each T-cell is defined by its stimulation rates to all relevant self antigens. We present a simulation approach based on partial tilting of the stimulation rates recognized by a single T-cell. We investigate the effects of negative selection for diverging modes of thymic antigen presentation, namely arbitrary TRA presentation, and more or less strict emulation of tissue-specific cell lines. We observe that negative selection leads to truncation of the tail of the distribution of the stimulation rates mature T-cells receive from self antigens, i.e., the self background is reduced. This increases the activation probabilities of single T-cells in the presence of non-self antigens.
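Rare-event simulation by tilting is a general trick: sample from an exponentially tilted density and reweight by the likelihood ratio. The toy below estimates the tail probability of a sum of exponentials this way; it shows the generic mechanism only, not the paper's T-cell-specific partial tilting of stimulation rates.

```python
import numpy as np

# Importance sampling by exponential tilting: estimate P(sum of n Exp(1)
# variables > a). Tilting Exp(1) by theta gives Exp(1 - theta); the estimator
# reweights each draw by the density ratio f / f_tilted.
rng = np.random.default_rng(2)
n, a, n_runs = 10, 40.0, 100_000

theta = 1.0 - n / a                        # choose theta so the tilted mean equals a
x = rng.exponential(1.0 / (1.0 - theta), size=(n_runs, n))  # draws from Exp(1 - theta)
s = x.sum(axis=1)
lr = np.exp(-theta * s) / (1.0 - theta) ** n   # likelihood ratio for n iid draws
p_hat = ((s > a) * lr).mean()
print(f"tilted estimate of the tail probability: {p_hat:.3e}")
```

A crude Monte Carlo estimate of the same probability would need on the order of 1/p samples to see even one hit, which is exactly the regime where tilting pays off.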
1607.00483
Vaibhav Madhok
Vaibhav Madhok
Quasi-Species in High Dimensional Spaces
Ideas on high dimensionality geometry, concentration of measure and quasi-species evolution. Work in progress
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show that, under certain assumptions, the fitness of almost all quasi-species becomes independent of mutational probabilities and the initial frequency distributions of the sequences in high dimensional sequence spaces. This result is the consequence of the concentration of measure on a high dimensional hypersphere and its extension to Lipschitz functions known as Levy's Lemma. Therefore, evolutionary dynamics almost always yields the same value for the fitness of the quasi-species, independent of the mutational process and initial conditions, and is quite robust to mutational changes and fluctuations in initial conditions. Our results naturally extend to any Lipschitz function whose input parameters are the frequencies of individual constituents of the quasi-species. This suggests that the functional capabilities of high dimensional quasi-species are robust to fluctuations in the mutational probabilities and initial conditions.
[ { "created": "Sat, 2 Jul 2016 09:32:50 GMT", "version": "v1" } ]
2016-07-05
[ [ "Madhok", "Vaibhav", "" ] ]
We show that, under certain assumptions, the fitness of almost all quasi-species becomes independent of mutational probabilities and the initial frequency distributions of the sequences in high dimensional sequence spaces. This result is the consequence of the concentration of measure on a high dimensional hypersphere and its extension to Lipschitz functions known as Levy's Lemma. Therefore, evolutionary dynamics almost always yields the same value for the fitness of the quasi-species, independent of the mutational process and initial conditions, and is quite robust to mutational changes and fluctuations in initial conditions. Our results naturally extend to any Lipschitz function whose input parameters are the frequencies of individual constituents of the quasi-species. This suggests that the functional capabilities of high dimensional quasi-species are robust to fluctuations in the mutational probabilities and initial conditions.
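The geometric fact invoked here is easy to check numerically: a 1-Lipschitz function on the unit sphere concentrates around its mean as the dimension grows. A minimal demonstration with f(x) = x_1:

```python
import numpy as np

# Empirical check of the concentration behind Levy's Lemma: the 1-Lipschitz
# function f(x) = x_1 on the unit sphere S^{d-1} concentrates around its mean
# (zero) as the dimension d grows, with fluctuations shrinking like d**-0.5.
rng = np.random.default_rng(3)
for d in (10, 100, 1000, 10000):
    g = rng.normal(size=(20_000, d))
    x = g / np.linalg.norm(g, axis=1, keepdims=True)  # uniform points on S^{d-1}
    print(f"d = {d:5d}: std of f = {x[:, 0].std():.4f} (compare d**-0.5 = {d**-0.5:.4f})")
```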
1811.03177
Younhun Kim
Younhun Kim, Frederic Koehler, Ankur Moitra, Elchanan Mossel and Govind Ramnarayan
How Many Subpopulations is Too Many? Exponential Lower Bounds for Inferring Population Histories
38 pages, Appeared in RECOMB 2019
null
null
null
q-bio.PE math.ST q-bio.QM stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reconstruction of population histories is a central problem in population genetics. Existing coalescent-based methods, like the seminal work of Li and Durbin (Nature, 2011), attempt to solve this problem using sequence data but have no rigorous guarantees. Determining the amount of data needed to correctly reconstruct population histories is a major challenge. Using a variety of tools from information theory, the theory of extremal polynomials, and approximation theory, we prove new sharp information-theoretic lower bounds on the problem of reconstructing population structure -- the history of multiple subpopulations that merge, split and change sizes over time. Our lower bounds are exponential in the number of subpopulations, even when reconstructing recent histories. We demonstrate the sharpness of our lower bounds by providing algorithms for distinguishing and learning population histories with matching dependence on the number of subpopulations. Along the way and of independent interest, we essentially determine the optimal number of samples needed to learn an exponential mixture distribution information-theoretically, proving the upper bound by analyzing natural (and efficient) algorithms for this problem.
[ { "created": "Wed, 7 Nov 2018 23:00:15 GMT", "version": "v1" }, { "created": "Wed, 8 May 2019 15:24:19 GMT", "version": "v2" } ]
2020-05-11
[ [ "Kim", "Younhun", "" ], [ "Koehler", "Frederic", "" ], [ "Moitra", "Ankur", "" ], [ "Mossel", "Elchanan", "" ], [ "Ramnarayan", "Govind", "" ] ]
Reconstruction of population histories is a central problem in population genetics. Existing coalescent-based methods, like the seminal work of Li and Durbin (Nature, 2011), attempt to solve this problem using sequence data but have no rigorous guarantees. Determining the amount of data needed to correctly reconstruct population histories is a major challenge. Using a variety of tools from information theory, the theory of extremal polynomials, and approximation theory, we prove new sharp information-theoretic lower bounds on the problem of reconstructing population structure -- the history of multiple subpopulations that merge, split and change sizes over time. Our lower bounds are exponential in the number of subpopulations, even when reconstructing recent histories. We demonstrate the sharpness of our lower bounds by providing algorithms for distinguishing and learning population histories with matching dependence on the number of subpopulations. Along the way and of independent interest, we essentially determine the optimal number of samples needed to learn an exponential mixture distribution information-theoretically, proving the upper bound by analyzing natural (and efficient) algorithms for this problem.
1204.5999
Michael Deem
Dirk M. Lorenz, Alice Jeng, and Michael W. Deem
The Emergence of Modularity in Biological Systems
54 pages, 25 figures
Physics of Life Reviews, 8 (2011) 129-160
10.1016/j.plrev.2011.02.003
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this review, we discuss modularity and hierarchy in biological systems. We review examples from protein structure, genetics, and biological networks of modular partitioning of the geometry of biological space. We review theories to explain modular organization of biology, with a focus on explaining how biology may spontaneously organize to a structured form. That is, we seek to explain how biology nucleated from among the many possibilities in chemistry. The emergence of modular organization of biological structure will be described as a symmetry-breaking phase transition, with modularity as the order parameter. Experimental support for this description will be reviewed. Examples will be presented from pathogen structure, metabolic networks, gene networks, and protein-protein interaction networks. Additional examples will be presented from ecological food networks, developmental pathways, physiology, and social networks.
[ { "created": "Thu, 26 Apr 2012 18:45:00 GMT", "version": "v1" } ]
2015-06-04
[ [ "Lorenz", "Dirk M.", "" ], [ "Jeng", "Alice", "" ], [ "Deem", "Michael W.", "" ] ]
In this review, we discuss modularity and hierarchy in biological systems. We review examples from protein structure, genetics, and biological networks of modular partitioning of the geometry of biological space. We review theories to explain modular organization of biology, with a focus on explaining how biology may spontaneously organize to a structured form. That is, we seek to explain how biology nucleated from among the many possibilities in chemistry. The emergence of modular organization of biological structure will be described as a symmetry-breaking phase transition, with modularity as the order parameter. Experimental support for this description will be reviewed. Examples will be presented from pathogen structure, metabolic networks, gene networks, and protein-protein interaction networks. Additional examples will be presented from ecological food networks, developmental pathways, physiology, and social networks.
2401.10211
Zhengyi Li
Zhengyi Li, Menglu Li, Lida Zhu, Wen Zhang
Improving PTM Site Prediction by Coupling of Multi-Granularity Structure and Multi-Scale Sequence Representation
null
null
null
null
q-bio.QM cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Protein post-translational modification (PTM) site prediction is a fundamental task in bioinformatics. Several computational methods have been developed to predict PTM sites. However, existing methods ignore the structure information and merely utilize protein sequences. Furthermore, designing a more fine-grained structure representation learning method is urgently needed as PTM is a biological event that occurs at the atom granularity. In this paper, we propose a PTM site prediction method by Coupling of Multi-Granularity structure and Multi-Scale sequence representation, PTM-CMGMS for brevity. Specifically, multi-granularity structure-aware representation learning is designed to learn neighborhood structure representations at the amino acid, atom, and whole protein granularity from AlphaFold predicted structures, followed by utilizing contrastive learning to optimize the structure representations. Additionally, multi-scale sequence representation learning is used to extract context sequence information, and a motif generated by aligning all context sequences of PTM sites assists the prediction. Extensive experiments on three datasets show that PTM-CMGMS outperforms the state-of-the-art methods.
[ { "created": "Thu, 4 Jan 2024 20:49:32 GMT", "version": "v1" } ]
2024-01-19
[ [ "Li", "Zhengyi", "" ], [ "Li", "Menglu", "" ], [ "Zhu", "Lida", "" ], [ "Zhang", "Wen", "" ] ]
Protein post-translational modification (PTM) site prediction is a fundamental task in bioinformatics. Several computational methods have been developed to predict PTM sites. However, existing methods ignore the structure information and merely utilize protein sequences. Furthermore, designing a more fine-grained structure representation learning method is urgently needed as PTM is a biological event that occurs at the atom granularity. In this paper, we propose a PTM site prediction method by Coupling of Multi-Granularity structure and Multi-Scale sequence representation, PTM-CMGMS for brevity. Specifically, multi-granularity structure-aware representation learning is designed to learn neighborhood structure representations at the amino acid, atom, and whole protein granularity from AlphaFold predicted structures, followed by utilizing contrastive learning to optimize the structure representations. Additionally, multi-scale sequence representation learning is used to extract context sequence information, and a motif generated by aligning all context sequences of PTM sites assists the prediction. Extensive experiments on three datasets show that PTM-CMGMS outperforms the state-of-the-art methods.
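The contrastive-learning component can be illustrated with a standard InfoNCE objective between paired embeddings. The loss below is a generic stand-in (the paper's exact contrastive formulation and granularity pairing may differ), with random tensors as placeholders.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.1):
    """Standard InfoNCE contrastive loss between two batches of paired
    embeddings: matched rows are positives, all other rows are negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                 # (B, B) cosine-similarity logits
    targets = torch.arange(z1.size(0))         # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

# Random placeholder embeddings for two structural granularities of one batch of sites.
z_atom, z_residue = torch.randn(32, 128), torch.randn(32, 128)
print(info_nce(z_atom, z_residue))
```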
2109.00364
Florian Franke
Florian Franke, Sebastian Aland, Hans-Joachim B\"ohme, Anja Voss-B\"ohme, Steffen Lange
Is cell segregation like oil and water: asymptotic versus transitory regime
41 pages, 11+11 figures, 1+1 table
PLoS Computational Biology, September 2022
10.1371/journal.pcbi.1010460
null
q-bio.CB
http://creativecommons.org/licenses/by/4.0/
Segregation of different cell types is a crucial process for the pattern formation in tissues, in particular during embryogenesis. Since the involved cell interactions are complex and difficult to measure individually in experiments, mathematical modelling plays an increasingly important role to unravel the mechanisms governing segregation. The analysis of these theoretical models focuses mainly on the asymptotic behavior at large times, in a steady regime and for large numbers of cells. Most famously, cell-segregation models based on the minimization of the total surface energy, a mechanism also driving the demixing of immiscible fluids, are known to exhibit asymptotically a particular algebraic scaling behavior. However, it is not clear whether the asymptotic regime of the numerical models is relevant at the spatio-temporal scales of actual biological processes and in-vitro experiments. By developing a mapping between cell-based models and experimental settings, we are able to directly compare previous experimental data to numerical simulations of cell segregation quantitatively. We demonstrate that the experiments are reproduced by the transitory regime of the models rather than the asymptotic one. Our work puts a new perspective on previous model-driven conclusions on cell segregation mechanisms.
[ { "created": "Wed, 1 Sep 2021 12:59:52 GMT", "version": "v1" }, { "created": "Thu, 28 Oct 2021 06:04:19 GMT", "version": "v2" }, { "created": "Wed, 13 Apr 2022 09:31:25 GMT", "version": "v3" }, { "created": "Fri, 22 Jul 2022 08:25:09 GMT", "version": "v4" } ]
2022-09-21
[ [ "Franke", "Florian", "" ], [ "Aland", "Sebatian", "" ], [ "Böhme", "Hans-Joachim", "" ], [ "Voss-Böhme", "Anja", "" ], [ "Lange", "Steffen", "" ] ]
Segregation of different cell types is a crucial process for the pattern formation in tissues, in particular during embryogenesis. Since the involved cell interactions are complex and difficult to measure individually in experiments, mathematical modelling plays an increasingly important role to unravel the mechanisms governing segregation. The analysis of these theoretical models focuses mainly on the asymptotic behavior at large times, in a steady regime and for large numbers of cells. Most famously, cell-segregation models based on the minimization of the total surface energy, a mechanism also driving the demixing of immiscible fluids, are known to exhibit asymptotically a particular algebraic scaling behavior. However, it is not clear whether the asymptotic regime of the numerical models is relevant at the spatio-temporal scales of actual biological processes and in-vitro experiments. By developing a mapping between cell-based models and experimental settings, we are able to directly compare previous experimental data to numerical simulations of cell segregation quantitatively. We demonstrate that the experiments are reproduced by the transitory regime of the models rather than the asymptotic one. Our work puts a new perspective on previous model-driven conclusions on cell segregation mechanisms.
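One simple way to see the transitory-versus-asymptotic distinction in practice is to fit the algebraic coarsening exponent over different time windows. The sketch below does this on synthetic domain-size data with a built-in crossover; the exponent and crossover scale are illustrative choices, not values from the paper.

```python
import numpy as np

# Fit L(t) ~ t^alpha over early and late time windows of synthetic data: the
# early (transitory) window yields a different effective exponent than the
# late (asymptotic) one.
rng = np.random.default_rng(4)
t = np.logspace(0, 4, 60)
L = 2.0 * t ** (1 / 3) * (1 + 0.5 * np.exp(-t / 50))   # toy crossover to t^(1/3)
L *= np.exp(rng.normal(0.0, 0.02, t.size))             # multiplicative measurement noise

for name, mask in (("early window", t < 50), ("late window", t > 500)):
    alpha = np.polyfit(np.log(t[mask]), np.log(L[mask]), 1)[0]
    print(f"{name}: fitted exponent = {alpha:.3f}")
```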
2403.11517
Haibao Wang
Haibao Wang, Jun Kai Ho, Fan L. Cheng, Shuntaro C. Aoki, Yusuke Muraki, Misato Tanaka and Yukiyasu Kamitani
Inter-individual and inter-site neural code conversion without shared stimuli
null
null
null
null
q-bio.NC cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inter-individual variability in fine-grained functional brain organization poses challenges for scalable data analysis and modeling. Functional alignment techniques can help mitigate these individual differences but typically require paired brain data with the same stimuli between individuals, which is often unavailable. We present a neural code conversion method that overcomes this constraint by optimizing conversion parameters based on the discrepancy between the stimulus contents represented by original and converted brain activity patterns. This approach, combined with hierarchical features of deep neural networks (DNNs) as latent content representations, achieves conversion accuracy comparable to methods using shared stimuli. The converted brain activity from a source subject can be accurately decoded using the target's pre-trained decoders, producing high-quality visual image reconstructions that rival within-individual decoding, even with data across different sites and limited training samples. Our approach offers a promising framework for scalable neural data analysis and modeling and a foundation for brain-to-brain communication.
[ { "created": "Mon, 18 Mar 2024 07:10:52 GMT", "version": "v1" }, { "created": "Thu, 1 Aug 2024 11:16:16 GMT", "version": "v2" } ]
2024-08-02
[ [ "Wang", "Haibao", "" ], [ "Ho", "Jun Kai", "" ], [ "Cheng", "Fan L.", "" ], [ "Aoki", "Shuntaro C.", "" ], [ "Muraki", "Yusuke", "" ], [ "Tanaka", "Misato", "" ], [ "Kamitani", "Yukiyasu", "" ] ]
Inter-individual variability in fine-grained functional brain organization poses challenges for scalable data analysis and modeling. Functional alignment techniques can help mitigate these individual differences but typically require paired brain data with the same stimuli between individuals, which is often unavailable. We present a neural code conversion method that overcomes this constraint by optimizing conversion parameters based on the discrepancy between the stimulus contents represented by original and converted brain activity patterns. This approach, combined with hierarchical features of deep neural networks (DNNs) as latent content representations, achieves conversion accuracy comparable to methods using shared stimuli. The converted brain activity from a source subject can be accurately decoded using the target's pre-trained decoders, producing high-quality visual image reconstructions that rival within-individual decoding, even with data across different sites and limited training samples. Our approach offers a promising framework for scalable neural data analysis and modeling and a foundation for brain-to-brain communication.
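The key idea, optimizing a conversion so that the stimulus content decoded from converted activity matches the content decoded from the source, can be sketched with frozen linear feature decoders standing in for each subject's pre-trained DNN-feature decoders. All matrices below are random placeholders; this is a schematic of the objective, not the paper's method.

```python
import torch

# Content-loss neural code conversion, schematically: learn a linear map W
# from source voxels to target voxels so that features decoded from the
# converted activity (target decoder A_t) match features decoded from the
# source activity (source decoder A_s). No paired stimuli are required.
torch.manual_seed(0)
n, p_src, p_tgt, d_feat = 200, 50, 60, 32
X_s = torch.randn(n, p_src)        # source-subject activity patterns
A_s = torch.randn(p_src, d_feat)   # frozen source feature decoder (placeholder)
A_t = torch.randn(p_tgt, d_feat)   # frozen target feature decoder (placeholder)

W = torch.zeros(p_src, p_tgt, requires_grad=True)
opt = torch.optim.Adam([W], lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    loss = ((X_s @ W @ A_t - X_s @ A_s) ** 2).mean()   # content discrepancy
    loss.backward()
    opt.step()
print(f"final content loss: {loss.item():.4f}")
```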
1908.04875
Casey Fleeter
Casey M. Fleeter, Gianluca Geraci, Daniele E. Schiavazzi, Andrew M. Kahn, Alison L. Marsden
Multilevel and multifidelity uncertainty quantification for cardiovascular hemodynamics
null
null
10.1016/j.cma.2020.113030
null
q-bio.QM physics.comp-ph stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Standard approaches for uncertainty quantification in cardiovascular modeling pose challenges due to the large number of uncertain inputs and the significant computational cost of realistic three-dimensional simulations. We propose an efficient uncertainty quantification framework utilizing a multilevel multifidelity (MLMF) Monte Carlo estimator to improve the accuracy of hemodynamic quantities of interest while maintaining reasonable computational cost. This is achieved by leveraging three cardiovascular model fidelities, each with varying spatial resolution, to rigorously quantify the variability in hemodynamic outputs. We employ two low-fidelity models to construct several different estimators. Our goal is to investigate and compare the efficiency of estimators built from combinations of these low-fidelity and high-fidelity models. We demonstrate this framework on healthy and diseased models of aortic and coronary anatomy, including uncertainties in material property and boundary condition parameters. We seek to demonstrate that for this application it is possible to accelerate the convergence of the estimators by utilizing a MLMF paradigm. Therefore, we compare our approach to Monte Carlo and multilevel Monte Carlo estimators based only on three-dimensional simulations. We demonstrate a significant reduction in total computational cost with the MLMF estimators. We also examine the differing properties of the MLMF estimators in healthy versus diseased models, as well as for global versus local quantities of interest. As expected, global quantities and healthy models show larger reductions than local quantities and diseased models, as the latter rely more heavily on the highest-fidelity model evaluations. In all cases, our workflow coupling Dakota's MLMF estimators with the SimVascular cardiovascular modeling framework makes uncertainty quantification feasible for constrained computational budgets.
[ { "created": "Tue, 13 Aug 2019 22:10:47 GMT", "version": "v1" }, { "created": "Thu, 16 Apr 2020 23:44:26 GMT", "version": "v2" } ]
2020-04-20
[ [ "Fleeter", "Casey M.", "" ], [ "Geraci", "Gianluca", "" ], [ "Schiavazzi", "Daniele E.", "" ], [ "Kahn", "Andrew M.", "" ], [ "Marsden", "Alison L.", "" ] ]
Standard approaches for uncertainty quantification in cardiovascular modeling pose challenges due to the large number of uncertain inputs and the significant computational cost of realistic three-dimensional simulations. We propose an efficient uncertainty quantification framework utilizing a multilevel multifidelity (MLMF) Monte Carlo estimator to improve the accuracy of hemodynamic quantities of interest while maintaining reasonable computational cost. This is achieved by leveraging three cardiovascular model fidelities, each with varying spatial resolution, to rigorously quantify the variability in hemodynamic outputs. We employ two low-fidelity models to construct several different estimators. Our goal is to investigate and compare the efficiency of estimators built from combinations of these low-fidelity and high-fidelity models. We demonstrate this framework on healthy and diseased models of aortic and coronary anatomy, including uncertainties in material property and boundary condition parameters. We seek to demonstrate that for this application it is possible to accelerate the convergence of the estimators by utilizing a MLMF paradigm. Therefore, we compare our approach to Monte Carlo and multilevel Monte Carlo estimators based only on three-dimensional simulations. We demonstrate a significant reduction in total computational cost with the MLMF estimators. We also examine the differing properties of the MLMF estimators in healthy versus diseased models, as well as for global versus local quantities of interest. As expected, global quantities and healthy models show larger reductions than local quantities and diseased models, as the latter rely more heavily on the highest-fidelity model evaluations. In all cases, our workflow coupling Dakota's MLMF estimators with the SimVascular cardiovascular modeling framework makes uncertainty quantification feasible for constrained computational budgets.
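The estimator family used here builds on a simple telescoping identity, E[Q_hi] = E[Q_lo] + E[Q_hi - Q_lo], with each term estimated at the sample size its cost allows. A two-fidelity toy version (cheap surrogate plus expensive model, both stand-in functions rather than hemodynamic simulations):

```python
import numpy as np

# Two-fidelity Monte Carlo: many cheap low-fidelity samples estimate E[Q_lo],
# and a few expensive paired samples estimate the correction E[Q_hi - Q_lo].
rng = np.random.default_rng(5)
q_hi = lambda z: np.sin(z) + 0.05 * z ** 2   # "expensive" high-fidelity model
q_lo = lambda z: np.sin(z)                   # "cheap" low-fidelity surrogate

z_corr = rng.normal(size=100)        # small high-fidelity budget
z_lo = rng.normal(size=100_000)      # large low-fidelity budget
estimate = q_lo(z_lo).mean() + (q_hi(z_corr) - q_lo(z_corr)).mean()
print(f"two-level estimate: {estimate:.4f} (exact mean: 0.0500)")
```

The variance reduction comes from the correction term: when the fidelities are strongly correlated, Q_hi - Q_lo has small variance and needs far fewer expensive samples.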
q-bio/0502037
Jean-Pascal Pfister
Jean-Pascal Pfister, Taro Toyoizumi, David Barber, Wulfram Gerstner
Optimal Spike-Timing Dependent Plasticity for Precise Action Potential Firing
27 pages, 10 figures
null
null
null
q-bio.NC
null
In timing-based neural codes, neurons have to emit action potentials at precise moments in time. We use a supervised learning paradigm to derive a synaptic update rule that optimizes via gradient ascent the likelihood of postsynaptic firing at one or several desired firing times. We find that the optimal strategy of up- and downregulating synaptic efficacies can be described by a two-phase learning window similar to that of Spike-Timing Dependent Plasticity (STDP). If the presynaptic spike arrives before the desired postsynaptic spike timing, our optimal learning rule predicts that the synapse should become potentiated. The dependence of the potentiation on spike timing directly reflects the time course of an excitatory postsynaptic potential. The presence and amplitude of depression of synaptic efficacies for reversed spike timing depend on how constraints are implemented in the optimization problem. Two different constraints, i.e., control of postsynaptic rates or control of temporal locality, are discussed.
[ { "created": "Thu, 24 Feb 2005 16:28:38 GMT", "version": "v1" } ]
2007-05-23
[ [ "Pfister", "Jean-Pascal", "" ], [ "Toyoizumi", "Taro", "" ], [ "Barber", "David", "" ], [ "Gerstner", "Wulfram", "" ] ]
In timing-based neural codes, neurons have to emit action potentials at precise moments in time. We use a supervised learning paradigm to derive a synaptic update rule that optimizes via gradient ascent the likelihood of postsynaptic firing at one or several desired firing times. We find that the optimal strategy of up- and downregulating synaptic efficacies can be described by a two-phase learning window similar to that of Spike-Timing Dependent Plasticity (STDP). If the presynaptic spike arrives before the desired postsynaptic spike timing, our optimal learning rule predicts that the synapse should become potentiated. The dependence of the potentiation on spike timing directly reflects the time course of an excitatory postsynaptic potential. The presence and amplitude of depression of synaptic efficacies for reversed spike timing depend on how constraints are implemented in the optimization problem. Two different constraints, i.e., control of postsynaptic rates or control of temporal locality, are discussed.
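The predicted learning window can be written down compactly: the potentiation branch follows the EPSP time course, and the depression branch decays for reversed timing. The amplitudes and time constants below are illustrative choices, not the paper's derived values.

```python
import numpy as np

tau_m, tau_s = 10.0, 2.0     # membrane and synaptic time constants (ms), illustrative
A_plus, A_minus = 1.0, 0.4   # illustrative potentiation/depression amplitudes

def stdp_window(dt):
    """Weight change as a function of dt = t_post - t_pre (ms): the dt >= 0
    branch mirrors a double-exponential EPSP; the dt < 0 branch is a simple
    exponential depression."""
    if dt >= 0:
        return A_plus * (np.exp(-dt / tau_m) - np.exp(-dt / tau_s))
    return -A_minus * np.exp(dt / tau_m)

for dt in (-20, -5, 2, 5, 20):
    print(f"dt = {dt:+3d} ms -> dw = {stdp_window(dt):+.3f}")
```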
2208.00935
Jia Qi Yip
Jia Qi Yip, Dianwen Ng, Bin Ma, Konstantin Pervushin, Eng Siong Chng
Amino Acid Classification in 2D NMR Spectra via Acoustic Signal Embeddings
null
null
null
null
q-bio.QM eess.AS
http://creativecommons.org/licenses/by-nc-sa/4.0/
Nuclear Magnetic Resonance (NMR) is used in structural biology to experimentally determine the structure of proteins, which is used in many areas of biology and is an important part of drug development. Unfortunately, NMR data can cost thousands of dollars per sample to collect and it can take a specialist weeks to assign the observed resonances to specific chemical groups. There has thus been growing interest in the NMR community to use deep learning to automate NMR data annotation. Due to similarities between NMR and audio data, we propose that methods used in acoustic signal processing can be applied to NMR as well. Using a simulated amino acid dataset, we show that by swapping out filter banks with a trainable convolutional encoder, acoustic signal embeddings from speaker verification models can be used for amino acid classification in 2D NMR spectra by treating each amino acid as a unique speaker. On an NMR dataset comparable in size with 46 hours of audio, we achieve a classification performance of 97.7% on a 20-class problem. We also achieve a 23% relative improvement by using an acoustic embedding model compared to an existing NMR-based model.
[ { "created": "Mon, 1 Aug 2022 15:36:22 GMT", "version": "v1" } ]
2022-08-03
[ [ "Yip", "Jia Qi", "" ], [ "Ng", "Dianwen", "" ], [ "Ma", "Bin", "" ], [ "Pervushin", "Konstantin", "" ], [ "Chng", "Eng Siong", "" ] ]
Nuclear Magnetic Resonance (NMR) is used in structural biology to experimentally determine the structure of proteins, which is used in many areas of biology and is an important part of drug development. Unfortunately, NMR data can cost thousands of dollars per sample to collect and it can take a specialist weeks to assign the observed resonances to specific chemical groups. There has thus been growing interest in the NMR community to use deep learning to automate NMR data annotation. Due to similarities between NMR and audio data, we propose that methods used in acoustic signal processing can be applied to NMR as well. Using a simulated amino acid dataset, we show that by swapping out filter banks with a trainable convolutional encoder, acoustic signal embeddings from speaker verification models can be used for amino acid classification in 2D NMR spectra by treating each amino acid as a unique speaker. On an NMR dataset comparable in size with 46 hours of audio, we achieve a classification performance of 97.7% on a 20-class problem. We also achieve a 23% relative improvement by using an acoustic embedding model compared to an existing NMR-based model.
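A minimal version of the "trainable encoder plus embedding scoring" recipe looks like the sketch below: a 1-D convolutional encoder maps a raw trace (here standing in for an NMR signal) to a fixed-size embedding, which is scored by cosine similarity against one centroid per amino acid class. Architecture sizes are arbitrary and the model is untrained; this illustrates the speaker-verification pattern, not the paper's network.

```python
import torch
import torch.nn as nn

class ConvEncoder(nn.Module):
    """Tiny trainable 1-D conv encoder producing unit-norm embeddings."""
    def __init__(self, emb_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=9, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=9, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.proj = nn.Linear(64, emb_dim)

    def forward(self, x):                 # x: (batch, 1, time)
        h = self.net(x).squeeze(-1)       # (batch, 64) pooled features
        return nn.functional.normalize(self.proj(h), dim=1)

enc = ConvEncoder()
signal = torch.randn(4, 1, 4000)          # four toy 1-D traces
emb = enc(signal)                         # (4, 64) unit-norm embeddings
centroids = nn.functional.normalize(torch.randn(20, 64), dim=1)  # one per class
pred = (emb @ centroids.t()).argmax(dim=1)
print(pred)                               # nearest-centroid class per sample
```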
q-bio/0507041
Jim Bashford
J.D. Bashford and P.D. Jarvis
A base pairing model of duplex formation I: Watson-Crick pairing geometries
Latex file, 13 pages, no figures. Refereed draft of manuscript submitted to Biopolymers
Biopolymers 78: 287-297, 2005
10.1002/bip.20282
UTAS-PHYS-2004-05
q-bio.BM
null
We present a base-pairing model of oligonucleotide duplex formation and show in detail its equivalence to the Nearest-Neighbour dimer methods from fits to free energy of duplex formation data for short DNA-DNA and DNA-RNA hybrids containing only Watson-Crick pairs. In this approach the connection between rank-deficient polymer and rank-determinant oligonucleotide parameter sets for DNA duplexes is transparent. The method is generalised to include RNA/DNA hybrids, where the rank-deficient model with 11 dimer parameters in fact provides marginally improved predictions relative to the standard method with 16 independent dimer parameters ($\Delta G$ mean errors of 4.5 and 5.4 %, respectively).
[ { "created": "Thu, 28 Jul 2005 04:37:03 GMT", "version": "v1" } ]
2007-05-23
[ [ "Bashford", "J. D.", "" ], [ "Jarvis", "P. D.", "" ] ]
We present a base-pairing model of oligonucleotide duplex formation and show in detail its equivalence to the Nearest-Neighbour dimer methods from fits to free energy of duplex formation data for short DNA-DNA and DNA-RNA hybrids containing only Watson-Crick pairs. In this approach the connection between rank-deficient polymer and rank-determinant oligonucleotide parameter sets for DNA duplexes is transparent. The method is generalised to include RNA/DNA hybrids, where the rank-deficient model with 11 dimer parameters in fact provides marginally improved predictions relative to the standard method with 16 independent dimer parameters ($\Delta G$ mean errors of 4.5 and 5.4 %, respectively).
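The nearest-neighbour accounting that the model is shown to be equivalent to is a simple sum over dimer steps plus an initiation term. The sketch below computes a duplex $\Delta G$ this way; the dimer values and initiation penalty are illustrative placeholders, not either of the fitted parameter sets discussed above.

```python
# Nearest-neighbour free-energy prediction: the duplex Delta-G is the sum of
# dimer-step contributions plus a duplex-initiation term. All values are toy
# placeholders in kcal/mol, read along one strand 5'->3'.
NN = {
    "AA": -1.0, "AT": -0.9, "AC": -1.4, "AG": -1.3,
    "TA": -0.6, "TT": -1.0, "TC": -1.3, "TG": -1.4,
    "CA": -1.4, "CT": -1.3, "CC": -1.8, "CG": -2.1,
    "GA": -1.3, "GT": -1.4, "GC": -2.2, "GG": -1.8,
}
INIT = 2.0  # duplex initiation penalty (toy)

def duplex_dG(seq):
    """Delta-G of a Watson-Crick duplex specified by one strand, 5'->3'."""
    return INIT + sum(NN[seq[i:i + 2]] for i in range(len(seq) - 1))

print(f"dG(ATGCGC) = {duplex_dG('ATGCGC'):.1f} kcal/mol")
```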
2005.02388
Endre Cs\'oka
Endre Cs\'oka
Application-oriented mathematical algorithms for group testing
null
null
null
null
q-bio.QM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We have a large number of samples and we want to find the infected ones using as few tests as possible. We can use group testing, which tells us, for a small group of people, whether at least one of them is infected. Group testing is particularly efficient if the infection rate is low. The goal of this article is to summarize and extend the mathematical knowledge about the most efficient group testing algorithms, focusing on real-life applications instead of pure mathematical motivations and approaches.
[ { "created": "Tue, 5 May 2020 14:40:46 GMT", "version": "v1" } ]
2020-05-07
[ [ "Csóka", "Endre", "" ] ]
We have a large number of samples and we want to find the infected ones using as few tests as possible. We can use group testing, which tells us, for a small group of people, whether at least one of them is infected. Group testing is particularly efficient if the infection rate is low. The goal of this article is to summarize and extend the mathematical knowledge about the most efficient group testing algorithms, focusing on real-life applications instead of pure mathematical motivations and approaches.
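The classic adaptive scheme behind many of these algorithms is binary splitting: test a pool, and if it is positive, split it in half and recurse. For low infection rates this uses far fewer assays than testing every sample individually. A compact sketch:

```python
# Adaptive binary-splitting group testing: one pooled assay per pool; positive
# pools are halved and retested until single infected samples are isolated.
def pool_is_positive(pool, infected):
    return any(s in infected for s in pool)   # one pooled test

def find_infected(pool, infected, counter):
    counter[0] += 1                           # count every pooled test performed
    if not pool_is_positive(pool, infected):
        return []
    if len(pool) == 1:
        return list(pool)
    mid = len(pool) // 2
    return (find_infected(pool[:mid], infected, counter) +
            find_infected(pool[mid:], infected, counter))

samples = list(range(1024))
infected = {17, 512, 900}
counter = [0]
found = sorted(find_infected(samples, infected, counter))
print(found, f"-- {counter[0]} tests instead of {len(samples)}")
```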
2402.10308
Wouter-Jan Rappel
Timothy J Tyree, Patrick Murphy, Wouter-Jan Rappel
Annihilation dynamics during spiral defect chaos revealed by particle models
11 pages, 11 figures
null
null
null
q-bio.TO
http://creativecommons.org/licenses/by/4.0/
Pair-annihilation events are ubiquitous in a variety of spatially extended systems and are often studied using computationally expensive simulations. Here we develop an approach in which we simulate the pair-annihilation of spiral wave tips in cardiac models using a computationally efficient particle model. Spiral wave tips are represented as particles with dynamics governed by diffusive behavior and short-ranged attraction. The parameters for diffusion and attraction are obtained by comparing particle motion to the trajectories of spiral wave tips in cardiac models during spiral defect chaos. The particle model reproduces the annihilation rates of the cardiac models and can determine the statistics of spiral wave dynamics, including its mean termination time. We show that increasing the attraction coefficient sharply decreases the mean termination time, making it a possible target for pharmaceutical intervention.
[ { "created": "Thu, 15 Feb 2024 20:20:29 GMT", "version": "v1" } ]
2024-02-19
[ [ "Tyree", "Timothy J", "" ], [ "Murphy", "Patrick", "" ], [ "Rappel", "Wouter-Jan", "" ] ]
Pair-annihilation events are ubiquitous in a variety of spatially extended systems and are often studied using computationally expensive simulations. Here we develop an approach in which we simulate the pair-annihilation of spiral wave tips in cardiac models using a computationally efficient particle model. Spiral wave tips are represented as particles with dynamics governed by diffusive behavior and short-ranged attraction. The parameters for diffusion and attraction are obtained by comparing particle motion to the trajectories of spiral wave tips in cardiac models during spiral defect chaos. The particle model reproduces the annihilation rates of the cardiac models and can determine the statistics of spiral wave dynamics, including its mean termination time. We show that increasing the attraction coefficient sharply decreases the mean termination time, making it a possible target for pharmaceutical intervention.
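A minimal version of such a particle model fits in a few lines: tips diffuse, feel a short-ranged pairwise attraction, and annihilate in pairs on close contact. All parameter values below are illustrative, not those fitted to the cardiac models, and distances ignore the periodic wrap for brevity.

```python
import numpy as np

rng = np.random.default_rng(6)
D, a, r_cut, r_kill = 0.1, 0.5, 2.0, 0.2   # diffusion, attraction, ranges (toy units)
dt, Lbox, t_max = 0.01, 20.0, 200.0
pos = rng.uniform(0, Lbox, size=(40, 2))   # initial tip positions

t = 0.0
while len(pos) > 1 and t < t_max:
    disp = pos[None, :, :] - pos[:, None, :]              # disp[i, j] = r_j - r_i
    dist = np.linalg.norm(disp, axis=-1) + np.eye(len(pos))  # pad diagonal to avoid 0
    pull = np.where((dist < r_cut)[:, :, None],
                    a * disp / dist[:, :, None] ** 3, 0.0).sum(axis=1)
    pos = (pos + pull * dt + np.sqrt(2 * D * dt) * rng.normal(size=pos.shape)) % Lbox
    i, j = np.unravel_index(np.argmin(dist), dist.shape)
    if dist[i, j] < r_kill:                               # pair annihilation
        pos = np.delete(pos, [i, j], axis=0)
    t += dt
print(f"{len(pos)} tips remain at t = {t:.2f}")
```

Running many such realizations while sweeping the attraction coefficient a gives a termination-time statistic of the kind the abstract describes.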
1710.00718
Yannis Pantazis
Yannis Pantazis and Ioannis Tsamardinos
A Unified Approach for Sparse Dynamical System Inference from Temporal Measurements
13 pages, 3 figures
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Temporal variations in biological systems and more generally in natural sciences are typically modelled as a set of Ordinary, Partial, or Stochastic Differential or Difference Equations. Algorithms for learning the structure and the parameters of a dynamical system are distinguished based on whether time is discrete or continuous, observations are time-series or time-course, and whether the system is deterministic or stochastic; however, no existing approach is able to handle the various types of dynamical systems simultaneously. In this paper, we present a unified approach to infer both the structure and the parameters of nonlinear dynamical systems of any type under the restriction of being linear with respect to the unknown parameters. Our approach, which is named Unified Sparse Dynamics Learning (USDL), consists of two steps. First, an atemporal system of equations is derived through the application of the weak formulation. Then, assuming a sparse representation for the dynamical system, we show that the inference problem can be expressed as a sparse signal recovery problem, allowing the application of an extensive body of algorithms and theoretical results. Results on simulated data demonstrate the efficacy and superiority of the USDL algorithm as a function of the experimental setup (sample size, number of time measurements, number of interventions, noise level). Additionally, USDL's accuracy significantly correlates with theoretical metrics such as the exact recovery coefficient. On real single-cell data, the proposed approach is able to induce high-confidence subgraphs of the signaling pathway. The USDL algorithm has been integrated in SCENERY (\url{http://scenery.csd.uoc.gr/}), an online tool for single-cell mass cytometry analytics.
[ { "created": "Mon, 2 Oct 2017 15:16:55 GMT", "version": "v1" }, { "created": "Wed, 4 Oct 2017 18:48:45 GMT", "version": "v2" }, { "created": "Sat, 19 Jan 2019 21:25:01 GMT", "version": "v3" } ]
2019-01-23
[ [ "Pantazis", "Yannis", "" ], [ "Tsamardinos", "Ioannis", "" ] ]
Temporal variations in biological systems and more generally in natural sciences are typically modelled as a set of Ordinary, Partial, or Stochastic Differential or Difference Equations. Algorithms for learning the structure and the parameters of a dynamical system are distinguished based on whether time is discrete or continuous, observations are time-series or time-course, and whether the system is deterministic or stochastic; however, no existing approach is able to handle the various types of dynamical systems simultaneously. In this paper, we present a unified approach to infer both the structure and the parameters of nonlinear dynamical systems of any type under the restriction of being linear with respect to the unknown parameters. Our approach, which is named Unified Sparse Dynamics Learning (USDL), consists of two steps. First, an atemporal system of equations is derived through the application of the weak formulation. Then, assuming a sparse representation for the dynamical system, we show that the inference problem can be expressed as a sparse signal recovery problem, allowing the application of an extensive body of algorithms and theoretical results. Results on simulated data demonstrate the efficacy and superiority of the USDL algorithm as a function of the experimental setup (sample size, number of time measurements, number of interventions, noise level). Additionally, USDL's accuracy significantly correlates with theoretical metrics such as the exact recovery coefficient. On real single-cell data, the proposed approach is able to induce high-confidence subgraphs of the signaling pathway. The USDL algorithm has been integrated in SCENERY (\url{http://scenery.csd.uoc.gr/}), an online tool for single-cell mass cytometry analytics.
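The second step, casting inference as sparse signal recovery once the system is linear in its parameters, has a familiar minimal form: regress a derivative onto a library of candidate terms with an l1 penalty. The sketch below uses a finite-difference derivative where the paper's USDL uses weak-form integrals, so it shows only the generic core on a logistic toy system.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Sparse recovery of dynamics linear in unknown parameters: build a library of
# candidate terms and l1-regress the derivative onto it.
t = np.linspace(0, 10, 2001)
x = 1.0 / (1.0 + 9.0 * np.exp(-t))            # logistic trajectory: dx/dt = x - x^2
dxdt = np.gradient(x, t)                       # finite-difference derivative

library = np.column_stack([np.ones_like(x), x, x**2, x**3, np.sin(x)])
names = ["1", "x", "x^2", "x^3", "sin(x)"]
coef = Lasso(alpha=1e-4, fit_intercept=False, max_iter=50_000).fit(library, dxdt).coef_
for n, c in zip(names, coef):
    if abs(c) > 1e-3:
        print(f"{n}: {c:+.3f}")   # expect the weight to concentrate on x and x^2
```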
0711.0715
Swarnendu Tripathi
Swarnendu Tripathi and John J. Portman
Inherent flexibility and protein function: the open/closed conformational transition of the N-terminal domain of calmodulin
21 pages, 8 figures
J. Chem. Phys. 128, 205104 (2008)
10.1063/1.2928634
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The key to understanding a protein's function often lies in its conformational dynamics. We develop a coarse-grained variational model to investigate the interplay between structural transitions, conformational flexibility, and function of the N-terminal calmodulin (nCaM) domain. In this model, two energy basins corresponding to the ``closed'' apo conformation and the ``open'' holo conformation of the nCaM domain are connected by a uniform interpolation parameter. The resulting detailed transition route from our model is largely consistent with the recently proposed EF$\beta$-scaffold mechanism in EF-hand family proteins. We find that the N-terminal part in calcium binding loops I and II shows higher flexibility than the C-terminal part which forms this EF$\beta$-scaffold structure. The structural transitions of binding loops I and II are compared in detail. Our model predicts that binding loop II, with higher flexibility and earlier structural change than binding loop I, dominates the conformational transition in the nCaM domain.
[ { "created": "Mon, 5 Nov 2007 18:29:14 GMT", "version": "v1" }, { "created": "Thu, 10 Jul 2008 17:13:30 GMT", "version": "v2" } ]
2008-07-10
[ [ "Tripathi", "Swarnendu", "" ], [ "Portman", "John J.", "" ] ]
The key to understanding a protein's function often lies in its conformational dynamics. We develop a coarse-grained variational model to investigate the interplay between structural transitions, conformational flexibility, and function of the N-terminal calmodulin (nCaM) domain. In this model, two energy basins corresponding to the ``closed'' apo conformation and the ``open'' holo conformation of the nCaM domain are connected by a uniform interpolation parameter. The resulting detailed transition route from our model is largely consistent with the recently proposed EF$\beta$-scaffold mechanism in EF-hand family proteins. We find that the N-terminal part in calcium binding loops I and II shows higher flexibility than the C-terminal part which forms this EF$\beta$-scaffold structure. The structural transitions of binding loops I and II are compared in detail. Our model predicts that binding loop II, with higher flexibility and earlier structural change than binding loop I, dominates the conformational transition in the nCaM domain.
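The two-basin construction can be caricatured in one coordinate: interpolate between a "closed" and an "open" energy basin and track how the minimum shifts as the interpolation parameter sweeps from 0 to 1. This is purely schematic (the paper's variational model lives in full conformation space, and the quadratic basins and offset below are arbitrary).

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Two toy harmonic basins: "closed" at x = 0, "open" at x = 1 (with a small
# energy offset). E(x; lam) = (1 - lam) * E_closed + lam * E_open.
E_closed = lambda x: 0.5 * x ** 2
E_open = lambda x: 0.5 * (x - 1.0) ** 2 + 0.1

for lam in np.linspace(0.0, 1.0, 6):
    E = lambda x, l=lam: (1 - l) * E_closed(x) + l * E_open(x)
    res = minimize_scalar(E, bounds=(-1.0, 2.0), method="bounded")
    print(f"lam = {lam:.1f}: x* = {res.x:.3f}, E* = {res.fun:.3f}")
```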
1107.4104
Jiapu Zhang
Jiapu Zhang, David Y. Gao, and John Yearwood
A novel canonical dual computational approach for prion AGAAAAGA amyloid fibril molecular modeling
null
J Theor Biol 284 (1) 149-157 (2011); selected by Protein Crystallography Newsletter Volume 3, No. 9, September 2011, Crystallography Times; Prions Research Today Volume 7 Issue 7, July 2011, p.14; the 18th of the Top 25 Hottest Articles (picked up from papers of Jul 2011 to Sept 2011 of J Theor Biol)
10.1016/j.jtbi.2011.06.024
null
q-bio.BM cs.CE math-ph math.MP math.OC
http://creativecommons.org/licenses/by-nc-sa/3.0/
Many experimental studies have shown that the prion AGAAAAGA palindrome hydrophobic region (113-120) has amyloid fibril forming properties and plays an important role in prion diseases. However, due to the unstable, noncrystalline and insoluble nature of the amyloid fibril, structural information on the AGAAAAGA region (113-120) has to date been very limited. This region falls just within the N-terminal unstructured region PrP (1-123) of prion proteins, so traditional X-ray crystallography and nuclear magnetic resonance (NMR) spectroscopy cannot be used to obtain its structural information. Against this background, this paper introduces a novel approach based on canonical dual theory to address the 3D atomic-resolution structure of prion AGAAAAGA amyloid fibrils. The canonical dual computational approach is applied to the molecular modeling of prion AGAAAAGA amyloid fibrils, and the resulting optimal atomic-resolution structures are useful for the drive to find treatments for prion diseases in the field of medicinal chemistry. Overall, this paper presents an important method, provides useful information for treatments of prion diseases, and should be of interest to the general readership of Theoretical Biology.
[ { "created": "Mon, 18 Jul 2011 23:20:34 GMT", "version": "v1" } ]
2013-12-10
[ [ "Zhang", "Jiapu", "" ], [ "Gao", "David Y.", "" ], [ "Yearwood", "Johh", "" ] ]
Many experimental studies have shown that the prion AGAAAAGA palindrome hydrophobic region (113-120) has amyloid fibril forming properties and plays an important role in prion diseases. However, due to the unstable, noncrystalline and insoluble nature of the amyloid fibril, structural information on the AGAAAAGA region (113-120) has to date been very limited. This region falls just within the N-terminal unstructured region PrP (1-123) of prion proteins, so traditional X-ray crystallography and nuclear magnetic resonance (NMR) spectroscopy cannot be used to obtain its structural information. Against this background, this paper introduces a novel approach based on canonical dual theory to address the 3D atomic-resolution structure of prion AGAAAAGA amyloid fibrils. The canonical dual computational approach is applied to the molecular modeling of prion AGAAAAGA amyloid fibrils, and the resulting optimal atomic-resolution structures are useful for the drive to find treatments for prion diseases in the field of medicinal chemistry. Overall, this paper presents an important method, provides useful information for treatments of prion diseases, and should be of interest to the general readership of Theoretical Biology.
1912.00985
Joana Fradinho
Joana Fradinho, Adrian Oehmen and Maria Reis
Improving polyhydroxyalkanoates production in phototrophic mixed cultures by optimizing accumulator reactor operating conditions
29 pages, 4 figures, 4 tables
null
10.1016/j.ijbiomac.2018.12.270
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Polyhydroxyalkanoate (PHA) production with phototrophic mixed cultures (PMCs) has recently been proposed. These cultures can be selected under the permanent presence of carbon, and PHA production can be enhanced in subsequent accumulation steps. To optimize PHA production in accumulator reactors, this work evaluated the impact of (1) initial acetate concentration, (2) light intensity, and (3) removal of residual nitrogen on culture performance. Results indicate that a low acetate concentration (<30 CmM) and specific light intensities around 20 W/gX are optimal operating conditions, leading to high polyhydroxybutyrate (PHB) storage yields (0.83+-0.07 Cmol-PHB/Cmol-Acet) and specific PHB production rates of 2.21+-0.07 Cmol-PHB/(Cmol X d). This rate is three times higher than previously registered in non-optimized accumulation tests and enabled an increase in PHA content from 15 to 30% in less than 4 h. We also show, for the first time, the capability of a PMC to use a real waste, fermented cheese whey, to produce PHA with a hydroxyvalerate (HV) content of 12%. These results confirm that fermented wastes can be used as substrates for PHA production with PMCs, and that the energy levels in sunlight that lead to specific light intensities from 10 to 20 W/gX are sufficient to drive phototrophic PHA production processes.
[ { "created": "Mon, 2 Dec 2019 18:23:42 GMT", "version": "v1" } ]
2019-12-03
[ [ "Fradinho", "Joana", "" ], [ "Oehmen", "Adrian", "" ], [ "Reis", "Maria", "" ] ]
Polyhydroxyalkanoate (PHA) production with phototrophic mixed cultures (PMCs) has recently been proposed. These cultures can be selected under the permanent presence of carbon, and PHA production can be enhanced in subsequent accumulation steps. To optimize PHA production in accumulator reactors, this work evaluated the impact of (1) initial acetate concentration, (2) light intensity, and (3) removal of residual nitrogen on culture performance. Results indicate that a low acetate concentration (<30 CmM) and specific light intensities around 20 W/gX are optimal operating conditions, leading to high polyhydroxybutyrate (PHB) storage yields (0.83+-0.07 Cmol-PHB/Cmol-Acet) and specific PHB production rates of 2.21+-0.07 Cmol-PHB/(Cmol X d). This rate is three times higher than previously registered in non-optimized accumulation tests and enabled an increase in PHA content from 15 to 30% in less than 4 h. We also show, for the first time, the capability of a PMC to use a real waste, fermented cheese whey, to produce PHA with a hydroxyvalerate (HV) content of 12%. These results confirm that fermented wastes can be used as substrates for PHA production with PMCs, and that the energy levels in sunlight that lead to specific light intensities from 10 to 20 W/gX are sufficient to drive phototrophic PHA production processes.
2209.05688
M. Ali Al-Radhawi
M. Ali Al-Radhawi, Shubham Tripathi, Yun Zhang, Eduardo D. Sontag, and Herbert Levine
Epigenetic factor competition reshapes the EMT landscape
null
Proc Natl Acad Sci USA, 119:e2210844119, 2022
10.1073/pnas.2210844119
null
q-bio.MN q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
The emergence of and transitions between distinct phenotypes in isogenic cells can be attributed to the intricate interplay of epigenetic marks, external signals, and gene regulatory elements. These elements include chromatin remodelers, histone modifiers, transcription factors, and regulatory RNAs. Mathematical models known as Gene Regulatory Networks (GRNs) are an increasingly important tool to unravel the workings of such complex networks. In such models, epigenetic factors are usually proposed to act on the chromatin regions directly involved in the expression of relevant genes. However, it has been well-established that these factors operate globally and compete with each other for targets genome-wide. Therefore, a perturbation of the activity of a regulator can redistribute epigenetic marks across the genome and modulate the levels of competing regulators. In this paper, we propose a conceptual and mathematical modeling framework that incorporates both local and global competition effects between antagonistic epigenetic regulators in addition to local transcription factors, and show the counter-intuitive consequences of such interactions. We apply our approach to recent experimental findings on the Epithelial-Mesenchymal Transition (EMT). We show that it can explain the puzzling experimental data as well as provide new verifiable predictions.
[ { "created": "Tue, 13 Sep 2022 01:57:49 GMT", "version": "v1" } ]
2022-10-19
[ [ "Al-Radhawi", "M. Ali", "" ], [ "Tripathi", "Shubham", "" ], [ "Zhang", "Yun", "" ], [ "Sontag", "Eduardo D.", "" ], [ "Levine", "Herbert", "" ] ]
The emergence of and transitions between distinct phenotypes in isogenic cells can be attributed to the intricate interplay of epigenetic marks, external signals, and gene regulatory elements. These elements include chromatin remodelers, histone modifiers, transcription factors, and regulatory RNAs. Mathematical models known as Gene Regulatory Networks (GRNs) are an increasingly important tool to unravel the workings of such complex networks. In such models, epigenetic factors are usually proposed to act on the chromatin regions directly involved in the expression of relevant genes. However, it has been well-established that these factors operate globally and compete with each other for targets genome-wide. Therefore, a perturbation of the activity of a regulator can redistribute epigenetic marks across the genome and modulate the levels of competing regulators. In this paper, we propose a conceptual and mathematical modeling framework that incorporates both local and global competition effects between antagonistic epigenetic regulators in addition to local transcription factors, and show the counter-intuitive consequences of such interactions. We apply our approach to recent experimental findings on the Epithelial-Mesenchymal Transition (EMT). We show that it can explain the puzzling experimental data as well as provide new verifiable predictions.
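For intuition about the global-competition effect described above, here is a deliberately small toy model, not the paper's equations: two antagonistic regulators (a "writer" R1 and an "eraser" R2) are sequestered by a genome-wide pool of G competing targets via a simple hyperbolic depletion rule, and a single local mark level is tracked. All rate constants are invented for illustration.

import numpy as np
from scipy.integrate import solve_ivp

R1_TOT, R2_TOT = 1.0, 1.0     # total "writer" (R1) and "eraser" (R2)
K1, K2 = 0.2, 2.0             # genome-wide binding constants; R1 binds tighter

def free_amount(total, K, G):
    """Free regulator left after quasi-steady-state sequestration by a
    genome-wide pool of G competing targets (hyperbolic depletion)."""
    return total * K / (K + G)

def rhs(t, y, G):
    m = y[0]                                   # local mark level in [0, 1]
    r1 = free_amount(R1_TOT, K1, G)            # free writer deposits the mark
    r2 = free_amount(R2_TOT, K2, G)            # free eraser removes it
    return [r1 * (1.0 - m) - r2 * m]

for G in (0.0, 1.0, 5.0):
    sol = solve_ivp(rhs, (0.0, 100.0), [0.5], args=(G,), rtol=1e-8)
    print(f"global target pool G = {G}: steady local mark = {sol.y[0, -1]:.2f}")
# Enlarging the global pool sequesters the tight-binding writer preferentially,
# tipping the local balance toward the eraser: a purely global perturbation
# with a local, gene-level consequence.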
1210.5348
Erich Schmid
Erich W. Schmid and Wolfgang Fink
Operational Design Considerations for Retinal Prostheses
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Three critical improvements for present-day and future retinal vision implants are proposed and discussed: (1) a time profile for the stimulation current that leads predominantly to transverse stimulation of nerve cells; (2) auxiliary electric currents for electric field shaping, with a time profile chosen such that these currents have a small probability of causing stimulation; and (3) a local area scanning procedure that results in high pixel density for image/percept formation (except for losses at the boundary of an electrode array).
[ { "created": "Fri, 19 Oct 2012 09:07:33 GMT", "version": "v1" } ]
2012-10-22
[ [ "Schmid", "Erich W.", "" ], [ "Fink", "Wolfgang", "" ] ]
Three critical improvements for present-day and future retinal vision implants are proposed and discussed: (1) a time profile for the stimulation current that leads predominantly to transverse stimulation of nerve cells; (2) auxiliary electric currents for electric field shaping, with a time profile chosen such that these currents have a small probability of causing stimulation; and (3) a local area scanning procedure that results in high pixel density for image/percept formation (except for losses at the boundary of an electrode array).
2401.00102
Phuc Nguyen
Phuc Nguyen, Rohit Arora, Elliot D. Hill, Jasper Braun, Alexandra Morgan, Liza M. Quintana, Gabrielle Mazzoni, Ghee Rye Lee, Rima Arnaout, Ramy Arnaout
$\textit{greylock}$: A Python Package for Measuring The Composition of Complex Datasets
42 pages, many figures. Many thanks to Ralf Bundschuh for help with the submission process
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Machine-learning datasets are typically characterized by measuring their size and class balance. However, there exists a richer and potentially more useful set of measures, termed diversity measures, that incorporate elements' frequencies and between-element similarities. Although these have been available in the R and Julia programming languages for other applications, they have not been as readily available in Python, which is widely used for machine learning, and are not easily applied to machine-learning-sized datasets without special coding considerations. To address these issues, we developed $\textit{greylock}$, a Python package that calculates diversity measures and is tailored to large datasets. $\textit{greylock}$ can calculate any of the frequency-sensitive measures of Hill's D-number framework, and going beyond Hill, their similarity-sensitive counterparts (Greylock is a mountain). $\textit{greylock}$ also outputs measures that compare datasets (beta diversities). We first briefly review the D-number framework, illustrating how it incorporates elements' frequencies and between-element similarities. We then describe $\textit{greylock}$'s key features and usage. We end with several examples - immunomics, metagenomics, computational pathology, and medical imaging - illustrating $\textit{greylock}$'s applicability across a range of dataset types and fields.
[ { "created": "Fri, 29 Dec 2023 23:51:48 GMT", "version": "v1" } ]
2024-01-02
[ [ "Nguyen", "Phuc", "" ], [ "Arora", "Rohit", "" ], [ "Hill", "Elliot D.", "" ], [ "Braun", "Jasper", "" ], [ "Morgan", "Alexandra", "" ], [ "Quintana", "Liza M.", "" ], [ "Mazzoni", "Gabrielle", "" ], [ "Lee", "Ghee Rye", "" ], [ "Arnaout", "Rima", "" ], [ "Arnaout", "Ramy", "" ] ]
Machine-learning datasets are typically characterized by measuring their size and class balance. However, there exists a richer and potentially more useful set of measures, termed diversity measures, that incorporate elements' frequencies and between-element similarities. Although these have been available in the R and Julia programming languages for other applications, they have not been as readily available in Python, which is widely used for machine learning, and are not easily applied to machine-learning-sized datasets without special coding considerations. To address these issues, we developed $\textit{greylock}$, a Python package that calculates diversity measures and is tailored to large datasets. $\textit{greylock}$ can calculate any of the frequency-sensitive measures of Hill's D-number framework, and going beyond Hill, their similarity-sensitive counterparts (Greylock is a mountain). $\textit{greylock}$ also outputs measures that compare datasets (beta diversities). We first briefly review the D-number framework, illustrating how it incorporates elements' frequencies and between-element similarities. We then describe $\textit{greylock}$'s key features and usage. We end with several examples - immunomics, metagenomics, computational pathology, and medical imaging - illustrating $\textit{greylock}$'s applicability across a range of dataset types and fields.
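To make the D-number framework mentioned above concrete, here is a short sketch of the frequency-sensitive Hill number and its similarity-sensitive counterpart in the Leinster-Cobbold style; this illustrates the mathematics, not $\textit{greylock}$'s actual API, and the example frequencies and similarity matrix are invented.

import numpy as np

def hill_diversity(p, q):
    """Frequency-sensitive Hill number D_q of a frequency vector p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(q, 1.0):                     # q -> 1 limit: exp(Shannon entropy)
        return float(np.exp(-np.sum(p * np.log(p))))
    return float(np.sum(p ** q) ** (1.0 / (1.0 - q)))

def similarity_diversity(p, Z, q):
    """Similarity-sensitive counterpart with similarity matrix Z (Z_ii = 1),
    in the style of Leinster and Cobbold."""
    p = np.asarray(p, dtype=float)
    ordinariness = Z @ p                       # (Zp)_i: how "ordinary" element i is
    if np.isclose(q, 1.0):
        return float(np.exp(-np.sum(p * np.log(ordinariness))))
    return float(np.sum(p * ordinariness ** (q - 1.0)) ** (1.0 / (1.0 - q)))

p = np.array([0.7, 0.2, 0.1])                  # element frequencies
for q in (0, 1, 2):                            # richness, exp-Shannon, inverse Simpson
    print(f"D_{q} = {hill_diversity(p, q):.3f}")

# With the identity similarity matrix the two notions coincide...
assert np.isclose(similarity_diversity(p, np.eye(3), 2), hill_diversity(p, 2))
# ...while partial similarity between elements lowers the effective diversity.
Z = np.array([[1.0, 0.8, 0.0], [0.8, 1.0, 0.0], [0.0, 0.0, 1.0]])
print(f"similarity-sensitive D_2 = {similarity_diversity(p, Z, 2):.3f}")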
2103.09563
Werner M\"uller
Elham Yousefi and Werner G. M\"uller
Impact of the error structure on the design and analysis of enzyme kinetic models
null
null
null
null
q-bio.MN stat.ME
http://creativecommons.org/licenses/by/4.0/
The statistical analysis of enzyme kinetic reactions usually involves models of the response functions that are well defined on the basis of Michaelis-Menten type equations. The error structure, however, is often assumed, without good reason, to be additive Gaussian noise. This simple assumption may lead to undesired properties of the analysis, particularly when simulations are involved and, consequently, negative simulated reaction rates may occur. In this study we investigate the effect of assuming multiplicative lognormal errors instead. While there is typically little impact on the estimates, the experimental designs and their efficiencies are decisively affected, particularly when it comes to model discrimination problems.
[ { "created": "Wed, 17 Mar 2021 10:59:23 GMT", "version": "v1" } ]
2021-03-19
[ [ "Yousefi", "Elham", "" ], [ "Müller", "Werner G.", "" ] ]
The statistical analysis of enzyme kinetic reactions usually involves models of the response functions that are well defined on the basis of Michaelis-Menten type equations. The error structure, however, is often assumed, without good reason, to be additive Gaussian noise. This simple assumption may lead to undesired properties of the analysis, particularly when simulations are involved and, consequently, negative simulated reaction rates may occur. In this study we investigate the effect of assuming multiplicative lognormal errors instead. While there is typically little impact on the estimates, the experimental designs and their efficiencies are decisively affected, particularly when it comes to model discrimination problems.
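The core concern of this abstract, negative simulated rates under additive Gaussian errors, is easy to reproduce. A quick illustration (not the paper's computation) with made-up Michaelis-Menten parameters:

import numpy as np

rng = np.random.default_rng(1)
Vmax, Km, sigma = 1.0, 2.0, 0.15
S = np.linspace(0.05, 10.0, 200)               # substrate concentrations
v = Vmax * S / (Km + S)                        # Michaelis-Menten response

v_gauss = v + sigma * rng.standard_normal(S.shape)                  # additive
v_lognorm = v * rng.lognormal(mean=0.0, sigma=sigma, size=S.shape)  # multiplicative

print("negative rates, additive Gaussian       :", int((v_gauss < 0).sum()))
print("negative rates, multiplicative lognormal:", int((v_lognorm < 0).sum()))  # 0 by construction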
1712.00843
Marinka Zitnik
Monica Agrawal, Marinka Zitnik, Jure Leskovec
Large-scale analysis of disease pathways in the human interactome
null
Pacific Symposium on Biocomputing 23:111-122(2018)
null
null
q-bio.MN cs.LG cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Discovering disease pathways, which can be defined as sets of proteins associated with a given disease, is an important problem that has the potential to provide clinically actionable insights for disease diagnosis, prognosis, and treatment. Computational methods aid the discovery by relying on protein-protein interaction (PPI) networks. They start with a few known disease-associated proteins and aim to find the rest of the pathway by exploring the PPI network around the known disease proteins. However, the success of such methods has been limited, and failure cases have not been well understood. Here we study the PPI network structure of 519 disease pathways. We find that 90% of pathways do not correspond to single well-connected components in the PPI network. Instead, proteins associated with a single disease tend to form many separate connected components/regions in the network. We then evaluate state-of-the-art disease pathway discovery methods and show that their performance is especially poor on diseases with disconnected pathways. Thus, we conclude that network connectivity structure alone may not be sufficient for disease pathway discovery. However, we show that higher-order network structures, such as small subgraphs of the pathway, provide a promising direction for the development of new methods.
[ { "created": "Sun, 3 Dec 2017 21:51:07 GMT", "version": "v1" } ]
2017-12-05
[ [ "Agrawal", "Monica", "" ], [ "Zitnik", "Marinka", "" ], [ "Leskovec", "Jure", "" ] ]
Discovering disease pathways, which can be defined as sets of proteins associated with a given disease, is an important problem that has the potential to provide clinically actionable insights for disease diagnosis, prognosis, and treatment. Computational methods aid the discovery by relying on protein-protein interaction (PPI) networks. They start with a few known disease-associated proteins and aim to find the rest of the pathway by exploring the PPI network around the known disease proteins. However, the success of such methods has been limited, and failure cases have not been well understood. Here we study the PPI network structure of 519 disease pathways. We find that 90% of pathways do not correspond to single well-connected components in the PPI network. Instead, proteins associated with a single disease tend to form many separate connected components/regions in the network. We then evaluate state-of-the-art disease pathway discovery methods and show that their performance is especially poor on diseases with disconnected pathways. Thus, we conclude that network connectivity structure alone may not be sufficient for disease pathway discovery. However, we show that higher-order network structures, such as small subgraphs of the pathway, provide a promising direction for the development of new methods.
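The central measurement in the abstract above, counting the connected components that one disease's proteins form in the interactome, is straightforward to express with networkx. The following sketch uses a toy PPI graph and a hypothetical disease protein set; a real analysis would load a genome-scale interactome and curated disease-gene associations.

import networkx as nx

# Toy PPI network; edges are undirected protein-protein interactions.
ppi = nx.Graph([("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"),
                ("E", "F"), ("G", "H")])

disease_proteins = {"A", "B", "F", "G"}        # hypothetical pathway members

# Induced subgraph on the disease proteins, then its connected components.
sub = ppi.subgraph(disease_proteins)
components = list(nx.connected_components(sub))
print("number of components:", len(components))   # 3: {A, B}, {F}, {G}
# A fragmented pathway (many components) is exactly the regime in which the
# abstract reports connectivity-based discovery methods to underperform.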
1611.08929
Md Jahoor Alam
Md. Jahoor Alam
GnRH induced Phase Synchrony of Coupled Neurons
9 pages, 3 figures
null
null
null
q-bio.MN q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gonadotropin-releasing hormone (GnRH) is reported to control mammalian reproductive processes. GnRH is a neurohormone that is released in a pulsatile manner into the pituitary portal blood by hypothalamic GnRH neurons. In the present study, the phase synchronization among a population of identical neurons subjected to a pool of coupling molecules (GnRH) in the extracellular medium via a mean-field coupling mechanism is investigated. In the model of the neuronal population, GnRH is considered to be an autocrine signaling molecule and is taken to be common to all neurons, acting as a synchronizing agent. The rate of synchrony is estimated qualitatively and quantitatively by measuring phase-locking values, the time evolution of the phase differences, and recurrence plots. Our numerical results show a phase-transition-like behavior separating the synchronized and desynchronized regimes. We also investigate long-range communication, or relay information transfer, for a one-dimensional array of such neurons.
[ { "created": "Sun, 27 Nov 2016 22:43:24 GMT", "version": "v1" } ]
2016-12-06
[ [ "Alam", "Md. Jahoor", "" ] ]
Gonadotropin-releasing hormone (GnRH) is reported to control mammalian reproductive processes. GnRH is a neurohormone that is released in a pulsatile manner into the pituitary portal blood by hypothalamic GnRH neurons. In the present study, the phase synchronization among a population of identical neurons subjected to a pool of coupling molecules (GnRH) in the extracellular medium via a mean-field coupling mechanism is investigated. In the model of the neuronal population, GnRH is considered to be an autocrine signaling molecule and is taken to be common to all neurons, acting as a synchronizing agent. The rate of synchrony is estimated qualitatively and quantitatively by measuring phase-locking values, the time evolution of the phase differences, and recurrence plots. Our numerical results show a phase-transition-like behavior separating the synchronized and desynchronized regimes. We also investigate long-range communication, or relay information transfer, for a one-dimensional array of such neurons.
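The mean-field coupling idea above can be illustrated with the classic Kuramoto model rather than the paper's GnRH biochemistry; this is a stand-in sketch showing how increasing a mean-field coupling strength K drives a population from incoherence to phase locking, with all parameters chosen for illustration.

import numpy as np

rng = np.random.default_rng(6)
N, dt, steps = 200, 0.01, 3000
omega = 0.5 * rng.standard_normal(N)           # heterogeneous natural frequencies

def coherence(K):
    """Run the mean-field Kuramoto dynamics and return the final phase
    coherence r = |<exp(i*theta)>|."""
    theta = 2.0 * np.pi * rng.random(N)
    for _ in range(steps):
        z = np.exp(1j * theta).mean()          # mean field r * exp(i*psi)
        r, psi = np.abs(z), np.angle(z)
        theta += dt * (omega + K * r * np.sin(psi - theta))
    return np.abs(np.exp(1j * theta).mean())

for K in (0.2, 0.8, 1.6):
    print(f"coupling K = {K}: phase coherence r = {coherence(K):.2f}")
# Weak coupling leaves the population incoherent (r near 0); strong coupling
# synchronizes it (r near 1), a transition like the one the abstract describes.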
2108.05848
Ilan Gronau
Zehavit Leibovich and Ilan Gronau
Eliminating unwanted patterns with minimal interference
This research was done as part of Zehavit Leibovich's dissertation for an M.Sc degree in Computer Science. Relevant code available at https://github.com/zehavitc/EliminatingDNAPatterns.git
null
null
null
q-bio.BM cs.CE
http://creativecommons.org/licenses/by-nc-nd/4.0/
Artificial synthesis of DNA molecules is an essential part of the study of biological mechanisms. The design of a synthetic DNA molecule usually involves many objectives. One of the important objectives is to eliminate short sequence patterns that correspond to binding sites of restriction enzymes or transcription factors. While many design tools address this problem, no adequate formal solution exists for the pattern elimination problem. In this work, we present a formal description of the elimination problem and suggest efficient algorithms that eliminate unwanted patterns and allow optimization of other objectives with minimal interference to the desired DNA functionality. Our approach is flexible, efficient, and straightforward, and therefore can be easily incorporated in existing DNA design tools, making them considerably more powerful.
[ { "created": "Tue, 3 Aug 2021 19:51:43 GMT", "version": "v1" } ]
2021-08-13
[ [ "Leibovich", "Zehavit", "" ], [ "Gronau", "Ilan", "" ] ]
Artificial synthesis of DNA molecules is an essential part of the study of biological mechanisms. The design of a synthetic DNA molecule usually involves many objectives. One of the important objectives is to eliminate short sequence patterns that correspond to binding sites of restriction enzymes or transcription factors. While many design tools address this problem, no adequate formal solution exists for the pattern elimination problem. In this work, we present a formal description of the elimination problem and suggest efficient algorithms that eliminate unwanted patterns and allow optimization of other objectives with minimal interference to the desired DNA functionality. Our approach is flexible, efficient, and straightforward, and therefore can be easily incorporated in existing DNA design tools, making them considerably more powerful.
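For intuition about the elimination problem stated above, here is a naive greedy baseline, not the paper's minimal-interference algorithms: each occurrence of an unwanted pattern is broken by substituting one nucleotide at a position the caller has marked as freely editable. It only guards against re-creating the pattern currently being fixed; handling all patterns jointly while minimizing interference is exactly what the paper's algorithms address.

def eliminate(seq, patterns, editable):
    """Break every occurrence of each unwanted pattern by substituting one
    nucleotide at a position the caller marked as freely editable."""
    seq = list(seq)
    for pat in patterns:
        i = "".join(seq).find(pat)
        while i >= 0:
            fixed = False
            for j in range(i, i + len(pat)):   # try each editable position
                if j not in editable:
                    continue
                original = seq[j]
                for base in "ACGT":
                    if base == original:
                        continue
                    seq[j] = base
                    if pat not in "".join(seq):    # broken, none re-created
                        fixed = True
                        break
                if fixed:
                    break
                seq[j] = original              # restore if no base at j worked
            if not fixed:
                raise ValueError(f"cannot break {pat!r} at position {i}")
            i = "".join(seq).find(pat)
    return "".join(seq)

s = "GGAATTCC"                                 # contains the EcoRI site GAATTC
out = eliminate(s, ["GAATTC"], editable={3})
print(out, "GAATTC" in out)                    # e.g. GGACTTCC False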
2206.13345
Weifeng Li
Zechen Wang, Liangzhen Zheng, Sheng Wang, Mingzhi Lin, Zhihao Wang, Adams Wai-Kin Kong, Yuguang Mu, Yanjie Wei, Weifeng Li
A fully differentiable ligand pose optimization framework guided by deep learning and traditional scoring functions
null
Brief Bioinform . 2023 Jan 19;24(1):bbac520
10.1093/bib/bbac520
null
q-bio.QM q-bio.BM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Machine learning (ML) and deep learning (DL) techniques are widely recognized as powerful tools for virtual drug screening. Recently reported ML- or DL-based scoring functions have shown exciting performance in predicting protein-ligand binding affinities, with fruitful application prospects. However, differentiating between highly similar ligand conformations, including the native binding pose (the global energy minimum state), remains challenging; solving this problem could greatly enhance docking. In this work, we propose a fully differentiable framework for ligand pose optimization based on a hybrid scoring function (SF) that combines a multi-layer perceptron (DeepRMSD) with the traditional AutoDock Vina SF. The DeepRMSD+Vina SF, which combines (1) the root mean square deviation (RMSD) of the docking pose with respect to the native pose and (2) the AutoDock Vina score, is fully differentiable and is thus capable of optimizing the ligand binding pose toward the energy-lowest conformation. Evaluated on the CASF-2016 docking power dataset, DeepRMSD+Vina reaches a success rate of 95.4%, which is by far the best reported SF to date. Based on this SF, an end-to-end ligand pose optimization framework was implemented to improve docking pose quality. We demonstrate that this method significantly improves the docking success rate (by 15%) in redocking and crossdocking tasks, revealing the high potential of this framework in drug design and discovery.
[ { "created": "Mon, 27 Jun 2022 14:49:40 GMT", "version": "v1" } ]
2023-07-07
[ [ "Wang", "Zechen", "" ], [ "Zheng", "Liangzhen", "" ], [ "Wang", "Sheng", "" ], [ "Lin", "Mingzhi", "" ], [ "Wang", "Zhihao", "" ], [ "Kong", "Adams Wai-Kin", "" ], [ "Mu", "Yuguang", "" ], [ "Wei", "Yanjie", "" ], [ "Li", "Weifeng", "" ] ]
Machine learning (ML) and deep learning (DL) techniques are widely recognized as powerful tools for virtual drug screening. Recently reported ML- or DL-based scoring functions have shown exciting performance in predicting protein-ligand binding affinities, with fruitful application prospects. However, differentiating between highly similar ligand conformations, including the native binding pose (the global energy minimum state), remains challenging; solving this problem could greatly enhance docking. In this work, we propose a fully differentiable framework for ligand pose optimization based on a hybrid scoring function (SF) that combines a multi-layer perceptron (DeepRMSD) with the traditional AutoDock Vina SF. The DeepRMSD+Vina SF, which combines (1) the root mean square deviation (RMSD) of the docking pose with respect to the native pose and (2) the AutoDock Vina score, is fully differentiable and is thus capable of optimizing the ligand binding pose toward the energy-lowest conformation. Evaluated on the CASF-2016 docking power dataset, DeepRMSD+Vina reaches a success rate of 95.4%, which is by far the best reported SF to date. Based on this SF, an end-to-end ligand pose optimization framework was implemented to improve docking pose quality. We demonstrate that this method significantly improves the docking success rate (by 15%) in redocking and crossdocking tasks, revealing the high potential of this framework in drug design and discovery.
2311.16946
Jacob Durrant
Mayar Ahmed, Alex M. Maldonado, Jacob D. Durrant
From Byte to Bench to Bedside: Molecular Dynamics Simulations and Drug Discovery
15 pages including references, 0 figures
null
null
null
q-bio.QM q-bio.BM
http://creativecommons.org/licenses/by-sa/4.0/
Molecular dynamics (MD) simulations and computer-aided drug design (CADD) have advanced substantially over the past two decades, thanks to continuous computer hardware and software improvements. Given these advancements, MD simulations are poised to become even more powerful tools for investigating the dynamic interactions between potential small-molecule drugs and their target proteins, with significant implications for pharmacological research.
[ { "created": "Tue, 28 Nov 2023 16:49:04 GMT", "version": "v1" } ]
2023-11-29
[ [ "Ahmed", "Mayar", "" ], [ "Maldonado", "Alex M.", "" ], [ "Durrant", "Jacob D.", "" ] ]
Molecular dynamics (MD) simulations and computer-aided drug design (CADD) have advanced substantially over the past two decades, thanks to continuous computer hardware and software improvements. Given these advancements, MD simulations are poised to become even more powerful tools for investigating the dynamic interactions between potential small-molecule drugs and their target proteins, with significant implications for pharmacological research.
1806.08454
Dalit Engelhardt
Dalit Engelhardt and Eugene I. Shakhnovich
Mutation rate variability as a driving force in adaptive evolution
null
Phys. Rev. E 99, 022424 (2019)
10.1103/PhysRevE.99.022424
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mutation rate is a key determinant of the pace as well as outcome of evolution, and variability in this rate has been shown in different scenarios to play a key role in evolutionary adaptation and resistance evolution under stress caused by selective pressure. Here we investigate the dynamics of resistance fixation in a bacterial population with variable mutation rates and show that evolutionary outcomes are most sensitive to mutation rate variations when the population is subject to environmental and demographic conditions that suppress the evolutionary advantage of high-fitness subpopulations. By directly mapping a biophysical fitness function to the system-level dynamics of the population we show that both low and very high, but not intermediate, levels of stress in the form of an antibiotic result in a disproportionate effect of hypermutation on resistance fixation. We demonstrate how this behavior is directly tied to the extent of genetic hitchhiking in the system, the propagation of high-mutation rate cells through association with high-fitness mutations. Our results indicate a substantial role for mutation rate flexibility in the evolution of antibiotic resistance under conditions that present a weak advantage over wildtype to resistant cells.
[ { "created": "Thu, 21 Jun 2018 23:30:17 GMT", "version": "v1" }, { "created": "Sun, 3 Feb 2019 17:30:17 GMT", "version": "v2" } ]
2019-03-01
[ [ "Engelhardt", "Dalit", "" ], [ "Shakhnovich", "Eugene I.", "" ] ]
Mutation rate is a key determinant of the pace as well as outcome of evolution, and variability in this rate has been shown in different scenarios to play a key role in evolutionary adaptation and resistance evolution under stress caused by selective pressure. Here we investigate the dynamics of resistance fixation in a bacterial population with variable mutation rates and show that evolutionary outcomes are most sensitive to mutation rate variations when the population is subject to environmental and demographic conditions that suppress the evolutionary advantage of high-fitness subpopulations. By directly mapping a biophysical fitness function to the system-level dynamics of the population we show that both low and very high, but not intermediate, levels of stress in the form of an antibiotic result in a disproportionate effect of hypermutation on resistance fixation. We demonstrate how this behavior is directly tied to the extent of genetic hitchhiking in the system, the propagation of high-mutation rate cells through association with high-fitness mutations. Our results indicate a substantial role for mutation rate flexibility in the evolution of antibiotic resistance under conditions that present a weak advantage over wildtype to resistant cells.
2203.10867
Emilio N.M. Cirillo
Claudio Durastanti and Emilio N.M. Cirillo and Ilaria De Benedictis and Mario Ledda and Antonio Sciortino and Antonella Lisi and Annalisa Convertino and Valentina Mussi
Statistical classification for Raman spectra of tumoral genomic DNA
null
null
null
null
q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We exploit Surface-Enhanced Raman Scattering (SERS) to investigate aqueous droplets of genomic DNA deposited onto silver-coated silicon nanowires, and we show that it is possible to efficiently discriminate between spectra of tumoral and healthy cells. To assess the robustness of the proposed technique, we develop two different statistical approaches, one based on Principal Component Analysis of the spectral data and one based on the computation of the $\ell^2$ distance between spectra. Both methods prove to be highly efficient, and we test their accuracy via the so-called Cohen's $\kappa$ statistic. We show that the synergistic combination of SERS spectroscopy and these statistical analysis methods leads to efficient and fast cancer diagnostic applications, allowing a rapid and inexpensive discrimination between healthy and tumoral genomic DNA as an alternative to the more complex and expensive DNA sequencing.
[ { "created": "Mon, 21 Mar 2022 10:41:07 GMT", "version": "v1" } ]
2022-03-22
[ [ "Durastanti", "Claudio", "" ], [ "Cirillo", "Emilio N. M.", "" ], [ "De Benedictis", "Ilaria", "" ], [ "Ledda", "Mario", "" ], [ "Sciortino", "Antonio", "" ], [ "Lisi", "Antonella", "" ], [ "Convertino", "Annalisa", "" ], [ "Mussi", "Valentina", "" ] ]
We exploit Surface-Enhanced Raman Scattering (SERS) to investigate aqueous droplets of genomic DNA deposited onto silver-coated silicon nanowires, and we show that it is possible to efficiently discriminate between spectra of tumoral and healthy cells. To assess the robustness of the proposed technique, we develop two different statistical approaches, one based on Principal Component Analysis of the spectral data and one based on the computation of the $\ell^2$ distance between spectra. Both methods prove to be highly efficient, and we test their accuracy via the so-called Cohen's $\kappa$ statistic. We show that the synergistic combination of SERS spectroscopy and these statistical analysis methods leads to efficient and fast cancer diagnostic applications, allowing a rapid and inexpensive discrimination between healthy and tumoral genomic DNA as an alternative to the more complex and expensive DNA sequencing.
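Both statistical approaches named in the abstract can be sketched on synthetic spectra. This is an illustration of the pipeline shape, not the study's data or code, and the nearest-class-mean rule below is a simplified stand-in for the paper's $\ell^2$-distance analysis.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
wavenumbers = np.linspace(400, 1800, 300)

def spectrum(shift):
    """Synthetic spectrum: two Gaussian bands plus noise; `shift` mimics a
    class-dependent displacement of one band."""
    peaks = (np.exp(-((wavenumbers - 780 - shift) / 25) ** 2)
             + 0.6 * np.exp(-((wavenumbers - 1450) / 40) ** 2))
    return peaks + 0.05 * rng.standard_normal(wavenumbers.shape)

X = np.array([spectrum(0) for _ in range(40)] + [spectrum(12) for _ in range(40)])
y = np.array([0] * 40 + [1] * 40)              # 0 = healthy, 1 = tumoral

# Approach 1: PCA on the training spectra, classifier in component space.
pca = PCA(n_components=5).fit(X[::2])
clf = LogisticRegression().fit(pca.transform(X[::2]), y[::2])
acc_pca = clf.score(pca.transform(X[1::2]), y[1::2])

# Approach 2: assign each test spectrum to the class-mean training spectrum
# that is closest in l2 distance.
means = np.stack([X[::2][y[::2] == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(np.linalg.norm(X[1::2, None, :] - means[None], axis=2), axis=1)
acc_l2 = (pred == y[1::2]).mean()

print(f"PCA+classifier accuracy: {acc_pca:.2f}   l2-distance accuracy: {acc_l2:.2f}")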
2301.03408
Christine Ahrends
Christine Ahrends (1), Diego Vidaurre (1 and 2) ((1) Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University, Denmark, (2) Department of Psychiatry, University of Oxford, United Kingdom)
Dynamic Functional Connectivity
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Most generally, dynamic functional connectivity (FC) refers to the non-instantaneous couplings across timeseries from a set of brain areas, here as measured by fMRI. This is in contrast to static FC, which is defined as purely instantaneous relations. In this chapter, we provide a hands-on description of a non-exhaustive selection of different methods used to estimate dynamic FC (such as sliding windows, clustering approaches, Hidden Markov Models, and multivariate autoregressive models), and we explain, using practical examples, how data should be prepared for dynamic FC analyses and how models of dynamic FC can be evaluated. We also discuss current developments in the dynamic FC research field, including challenges of reliability and reproducibility, and perspectives of using dynamic FC for prediction.
[ { "created": "Mon, 9 Jan 2023 15:04:12 GMT", "version": "v1" } ]
2023-01-10
[ [ "Ahrends", "Christine", "", "1 and 2" ], [ "Vidaurre", "Diego", "", "1 and 2" ] ]
Most generally, dynamic functional connectivity (FC) refers to the non-instantaneous couplings across timeseries from a set of brain areas, here as measured by fMRI. This is in contrast to static FC, which is defined as purely instantaneous relations. In this chapter, we provide a hands-on description of a non-exhaustive selection of different methods used to estimate dynamic FC (such as sliding windows, clustering approaches, Hidden Markov Models, and multivariate autoregressive models), and we explain, using practical examples, how data should be prepared for dynamic FC analyses and how models of dynamic FC can be evaluated. We also discuss current developments in the dynamic FC research field, including challenges of reliability and reproducibility, and perspectives of using dynamic FC for prediction.
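Of the methods listed in this abstract, the sliding-window estimator is the simplest to show in code. A minimal sketch on synthetic timeseries (the window length, step, and induced coupling are all illustrative):

import numpy as np

rng = np.random.default_rng(3)
T, n_regions, win, step = 600, 5, 60, 10
ts = rng.standard_normal((T, n_regions))
# Induce a coupling between regions 0 and 1 only in the second half.
ts[T // 2:, 1] = 0.7 * ts[T // 2:, 0] + 0.3 * ts[T // 2:, 1]

starts = range(0, T - win + 1, step)
dfc = np.stack([np.corrcoef(ts[s:s + win].T) for s in starts])  # (windows, n, n)

r01 = dfc[:, 0, 1]                             # time-resolved coupling of 0 and 1
half = len(r01) // 2
print(f"{dfc.shape[0]} windows; r(0,1) first half ~ {r01[:half].mean():.2f}, "
      f"second half ~ {r01[half:].mean():.2f}")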
1806.06412
Yuri A. Dabaghian
Luca Perotti, Justin DeVito, Daniel Bessis, Yuri Dabaghian
Discrete structure of the brain rhythms
17 pages, 9 figures
null
null
null
q-bio.QM q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neuronal activity in the brain generates synchronous oscillations of the Local Field Potential (LFP). Traditional analyses of LFPs are based on decomposing the signal into simpler components, such as sinusoidal harmonics. However, a common drawback of such methods is that the decomposition primitives are usually presumed from the outset, which may bias our understanding of the signal's structure. Here, we introduce an alternative approach that allows an impartial, high-resolution, hands-off decomposition of the brain waves into a small number of discrete, frequency-modulated oscillatory processes, which we call oscillons. In particular, we demonstrate that mouse hippocampal LFPs contain a single oscillon that occupies the $\theta$-frequency band and a couple of $\gamma$-oscillons that correspond, respectively, to slow and fast $\gamma$-waves. Since the oscillons were identified empirically, they may represent the actual, physical structure of synchronous oscillations in neuronal ensembles, whereas Fourier-defined "brain waves" are nothing but poorly resolved oscillons.
[ { "created": "Sun, 17 Jun 2018 16:42:50 GMT", "version": "v1" } ]
2018-06-19
[ [ "Perotti", "Luca", "" ], [ "DeVito", "Justin", "" ], [ "Bessis", "Daniel", "" ], [ "Dabaghian", "Yuri", "" ] ]
Neuronal activity in the brain generates synchronous oscillations of the Local Field Potential (LFP). Traditional analyses of LFPs are based on decomposing the signal into simpler components, such as sinusoidal harmonics. However, a common drawback of such methods is that the decomposition primitives are usually presumed from the outset, which may bias our understanding of the signal's structure. Here, we introduce an alternative approach that allows an impartial, high-resolution, hands-off decomposition of the brain waves into a small number of discrete, frequency-modulated oscillatory processes, which we call oscillons. In particular, we demonstrate that mouse hippocampal LFPs contain a single oscillon that occupies the $\theta$-frequency band and a couple of $\gamma$-oscillons that correspond, respectively, to slow and fast $\gamma$-waves. Since the oscillons were identified empirically, they may represent the actual, physical structure of synchronous oscillations in neuronal ensembles, whereas Fourier-defined "brain waves" are nothing but poorly resolved oscillons.
1503.05440
Lionel Roques
L. Roques, E. Walker, P. Franck, S. Soubeyrand, E. K. Klein
Using genetic data to estimate diffusion rates in heterogeneous landscapes
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Having precise knowledge of the dispersal ability of a population in a heterogeneous environment is of critical importance in agroecology and conservation biology, as it can provide management tools to limit the effects of pests or to increase the survival of endangered species. In this paper, we propose a mechanistic-statistical method to estimate space-dependent diffusion parameters of spatially explicit models based on stochastic differential equations, using genetic data. Dividing the total population into subpopulations corresponding to different habitat patches with known allele frequencies, the expected proportions of individuals from each subpopulation at each position are computed by solving a system of reaction-diffusion equations. Modelling the capture and genotyping of the individuals with a statistical approach, we derive a numerically tractable formula for the likelihood function associated with the diffusion parameters. In a simulated environment made of three types of regions, each associated with a different diffusion coefficient, we successfully estimate the diffusion parameters with a maximum-likelihood approach. Although higher genetic differentiation among subpopulations leads to more accurate estimations, once a certain level of differentiation has been reached, the finite size of the genotyped population becomes the limiting factor for accurate estimation.
[ { "created": "Wed, 18 Mar 2015 15:03:16 GMT", "version": "v1" } ]
2015-03-19
[ [ "Roques", "L.", "" ], [ "Walker", "E.", "" ], [ "Franck", "P.", "" ], [ "Soubeyrand", "S.", "" ], [ "Klein", "E. K.", "" ] ]
Having precise knowledge of the dispersal ability of a population in a heterogeneous environment is of critical importance in agroecology and conservation biology, as it can provide management tools to limit the effects of pests or to increase the survival of endangered species. In this paper, we propose a mechanistic-statistical method to estimate space-dependent diffusion parameters of spatially explicit models based on stochastic differential equations, using genetic data. Dividing the total population into subpopulations corresponding to different habitat patches with known allele frequencies, the expected proportions of individuals from each subpopulation at each position are computed by solving a system of reaction-diffusion equations. Modelling the capture and genotyping of the individuals with a statistical approach, we derive a numerically tractable formula for the likelihood function associated with the diffusion parameters. In a simulated environment made of three types of regions, each associated with a different diffusion coefficient, we successfully estimate the diffusion parameters with a maximum-likelihood approach. Although higher genetic differentiation among subpopulations leads to more accurate estimations, once a certain level of differentiation has been reached, the finite size of the genotyped population becomes the limiting factor for accurate estimation.
1705.10854
Larissa Albantakis
Larissa Albantakis
A Tale of Two Animats: What does it take to have goals?
This article is a contribution to the FQXi 2016-2017 essay contest "Wandering Towards a Goal"
null
null
null
q-bio.NC cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
What does it take for a system, biological or not, to have goals? Here, this question is approached in the context of in silico artificial evolution. By examining the informational and causal properties of artificial organisms ('animats') controlled by small, adaptive neural networks (Markov Brains), this essay discusses necessary requirements for intrinsic information, autonomy, and meaning. The focus lies on comparing two types of Markov Brains that evolved in the same simple environment: one with purely feedforward connections between its elements, the other with an integrated set of elements that causally constrain each other. While both types of brains 'process' information about their environment and are equally fit, only the integrated one forms a causally autonomous entity above a background of external influences. This suggests that to assess whether goals are meaningful for a system itself, it is important to understand what the system is, rather than what it does.
[ { "created": "Tue, 30 May 2017 20:19:17 GMT", "version": "v1" } ]
2017-06-01
[ [ "Albantakis", "Larissa", "" ] ]
What does it take for a system, biological or not, to have goals? Here, this question is approached in the context of in silico artificial evolution. By examining the informational and causal properties of artificial organisms ('animats') controlled by small, adaptive neural networks (Markov Brains), this essay discusses necessary requirements for intrinsic information, autonomy, and meaning. The focus lies on comparing two types of Markov Brains that evolved in the same simple environment: one with purely feedforward connections between its elements, the other with an integrated set of elements that causally constrain each other. While both types of brains 'process' information about their environment and are equally fit, only the integrated one forms a causally autonomous entity above a background of external influences. This suggests that to assess whether goals are meaningful for a system itself, it is important to understand what the system is, rather than what it does.
1611.01037
Valery Kirzhner
Valery Kirzhner, Zeev Volkovich, Renata Avros and Katerina Korenblat
Analysis of Metagenome Composition by the Method of Random Primers
18 pages, 4 figures
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A metagenome, a mixture of different (as a rule, bacterial) genomes, represents a pattern, and the analysis of its composition is currently one of the challenging problems of bioinformatics. In the present study, the possibility of evaluating metagenome composition by DNA-marker methods is investigated. These methods are based on using primers, short nucleic acid fragments. Each primer picks out, from the tested genome, the fragment set specific to just this genome, which is called its spectrum (for the given primer) and is used for identifying the genome. A DNA-marker method, applied to a metagenome, also gives its spectrum, which obviously represents the union of the spectra of all genomes belonging to the metagenome. Thus, each primer provides a projection of the genomes and of the metagenome onto the corresponding set of spectra. Here we propose to apply the random projection (random primer) approach to analyzing metagenome composition and present some estimates of the method's effectiveness for the case of Random Amplified Polymorphic DNA (RAPD) technology.
[ { "created": "Wed, 2 Nov 2016 18:43:14 GMT", "version": "v1" } ]
2016-11-04
[ [ "Kirzhner", "Valery", "" ], [ "Volkovich", "Zeev", "" ], [ "Avros", "Renata", "" ], [ "Korenblat", "Katerina", "" ] ]
A metagenome, a mixture of different (as a rule, bacterial) genomes, represents a pattern, and the analysis of its composition is currently one of the challenging problems of bioinformatics. In the present study, the possibility of evaluating metagenome composition by DNA-marker methods is investigated. These methods are based on using primers, short nucleic acid fragments. Each primer picks out, from the tested genome, the fragment set specific to just this genome, which is called its spectrum (for the given primer) and is used for identifying the genome. A DNA-marker method, applied to a metagenome, also gives its spectrum, which obviously represents the union of the spectra of all genomes belonging to the metagenome. Thus, each primer provides a projection of the genomes and of the metagenome onto the corresponding set of spectra. Here we propose to apply the random projection (random primer) approach to analyzing metagenome composition and present some estimates of the method's effectiveness for the case of Random Amplified Polymorphic DNA (RAPD) technology.
1603.01789
Pu Tian
Shiyang Long and Pu Tian
Nonlinear backbone torsional pair correlations in proteins
25 pages, 8 figures
Scientific Report, 6:34481, 2016
10.1038/srep34481
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Protein allostery requires dynamical structural correlations, the physical origin of which, however, remains elusive despite intensive studies during the last two decades. Based on analysis of molecular dynamics (MD) simulation trajectories for ten proteins with different sizes and folds, we found that nonlinear backbone torsional pair (BTP) correlations, which are spatially more long-ranged and are mainly executed by loop residues, exist extensively in most analyzed proteins. Examination of the torsional motion of correlated BTPs suggested that anharmonic torsional state transitions are essential for such nonlinear correlations, which correspondingly occur on widely different and relatively longer time scales. In contrast, BTP correlations between backbone torsions in stable $\alpha$ helices and $\beta$ strands are mainly linear and spatially more short-ranged, and are more likely to be associated with intra-well torsional dynamics. Further analysis revealed that the direct cause of nonlinear contributions is heterogeneous, and in extreme cases canceling, linear correlations associated with different torsional states of the participating torsions. Therefore, torsional state transitions of the participating torsions of a correlated BTP are a necessary but not sufficient condition for significant nonlinear contributions. These findings implicate a general search strategy for novel allosteric modulation of protein activities. Meanwhile, they suggest that ensemble-averaged correlation calculations and static contact network analysis, while insightful, are not sufficient to elucidate the mechanisms underlying allosteric signal transmission in general; dynamical and time-scale-resolved analyses are essential.
[ { "created": "Sun, 6 Mar 2016 05:41:23 GMT", "version": "v1" }, { "created": "Fri, 6 May 2016 02:55:23 GMT", "version": "v2" } ]
2017-02-23
[ [ "Long", "Shiyang", "" ], [ "Tian", "Pu", "" ] ]
Protein allostery requires dynamical structural correlations, the physical origin of which, however, remains elusive despite intensive studies during the last two decades. Based on analysis of molecular dynamics (MD) simulation trajectories for ten proteins with different sizes and folds, we found that nonlinear backbone torsional pair (BTP) correlations, which are spatially more long-ranged and are mainly executed by loop residues, exist extensively in most analyzed proteins. Examination of the torsional motion of correlated BTPs suggested that anharmonic torsional state transitions are essential for such nonlinear correlations, which correspondingly occur on widely different and relatively longer time scales. In contrast, BTP correlations between backbone torsions in stable $\alpha$ helices and $\beta$ strands are mainly linear and spatially more short-ranged, and are more likely to be associated with intra-well torsional dynamics. Further analysis revealed that the direct cause of nonlinear contributions is heterogeneous, and in extreme cases canceling, linear correlations associated with different torsional states of the participating torsions. Therefore, torsional state transitions of the participating torsions of a correlated BTP are a necessary but not sufficient condition for significant nonlinear contributions. These findings implicate a general search strategy for novel allosteric modulation of protein activities. Meanwhile, they suggest that ensemble-averaged correlation calculations and static contact network analysis, while insightful, are not sufficient to elucidate the mechanisms underlying allosteric signal transmission in general; dynamical and time-scale-resolved analyses are essential.
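The canceling-linear-correlations mechanism described above has a compact numerical analogue. In the toy below (not the paper's analysis), a torsion phi hops between two wells and a partner psi responds evenly in both wells, so the within-well linear correlations have opposite signs and nearly cancel in the overall Pearson coefficient, while a nonlinear dependence measure such as binned mutual information remains large; all distributions are invented.

import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(5)
n = 20000
state = rng.integers(2, size=n)                        # two torsional wells
phi = np.where(state == 0, -60.0, 60.0) + 12.0 * rng.standard_normal(n)
psi = np.abs(phi) + 8.0 * rng.standard_normal(n)       # even response to phi

# psi ~ -phi in the left well and psi ~ +phi in the right well, so the
# state-wise linear correlations cancel while the dependence stays strong.
pearson = np.corrcoef(phi, psi)[0, 1]

def binned_mi(a, b, bins=30):
    """Mutual information of two continuous variables via a 2D histogram."""
    c = np.histogram2d(a, b, bins=bins)[0]
    return mutual_info_score(None, None, contingency=c)

print(f"Pearson r = {pearson:+.3f}   mutual information = {binned_mi(phi, psi):.3f} nats")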
1803.02136
Krzysztof Bartoszek
Krzysztof Bartoszek
Limit distribution of the quartet balance index for Aldous's b>=0-model
null
Applicationes Mathematicae 47:29-44, 2020
10.4064/am2385-6-2019
null
q-bio.PE math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper builds on T. Martinez-Coronado, A. Mir, F. Rossello and G. Valiente's work "A balance index for phylogenetic trees based on quartets", introducing a new balance index for trees. We show here that this balance index, in the case of Aldous's b>=0-model, converges weakly to a distribution that can be characterized as the fixed point of a contraction operator on a class of distributions.
[ { "created": "Tue, 6 Mar 2018 12:17:53 GMT", "version": "v1" }, { "created": "Wed, 21 Nov 2018 19:00:11 GMT", "version": "v2" }, { "created": "Fri, 30 Aug 2019 06:26:37 GMT", "version": "v3" } ]
2020-11-23
[ [ "Bartoszek", "Krzysztof", "" ] ]
This paper builds on T. Martinez-Coronado, A. Mir, F. Rossello and G. Valiente's work "A balance index for phylogenetic trees based on quartets", introducing a new balance index for trees. We show here that this balance index, in the case of Aldous's b>=0-model, converges weakly to a distribution that can be characterized as the fixed point of a contraction operator on a class of distributions.
2405.06851
Francesca Mignacco
Francesca Mignacco, Chi-Ning Chou, SueYeon Chung
Nonlinear classification of neural manifolds with contextual information
5 pages, 5 figures
null
null
null
q-bio.NC cond-mat.dis-nn cond-mat.stat-mech cs.NE stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding how neural systems efficiently process information through distributed representations is a fundamental challenge at the interface of neuroscience and machine learning. Recent approaches analyze the statistical and geometrical attributes of neural representations as population-level mechanistic descriptors of task implementation. In particular, manifold capacity has emerged as a promising framework linking population geometry to the separability of neural manifolds. However, this metric has been limited to linear readouts. Here, we propose a theoretical framework that overcomes this limitation by leveraging contextual input information. We derive an exact formula for the context-dependent capacity that depends on manifold geometry and context correlations, and validate it on synthetic and real data. Our framework's increased expressivity captures representation untanglement in deep networks at early stages of the layer hierarchy, previously inaccessible to analysis. As context-dependent nonlinearity is ubiquitous in neural systems, our data-driven and theoretically grounded approach promises to elucidate context-dependent computation across scales, datasets, and models.
[ { "created": "Fri, 10 May 2024 23:37:31 GMT", "version": "v1" } ]
2024-05-14
[ [ "Mignacco", "Francesca", "" ], [ "Chou", "Chi-Ning", "" ], [ "Chung", "SueYeon", "" ] ]
Understanding how neural systems efficiently process information through distributed representations is a fundamental challenge at the interface of neuroscience and machine learning. Recent approaches analyze the statistical and geometrical attributes of neural representations as population-level mechanistic descriptors of task implementation. In particular, manifold capacity has emerged as a promising framework linking population geometry to the separability of neural manifolds. However, this metric has been limited to linear readouts. Here, we propose a theoretical framework that overcomes this limitation by leveraging contextual input information. We derive an exact formula for the context-dependent capacity that depends on manifold geometry and context correlations, and validate it on synthetic and real data. Our framework's increased expressivity captures representation untanglement in deep networks at early stages of the layer hierarchy, previously inaccessible to analysis. As context-dependent nonlinearity is ubiquitous in neural systems, our data-driven and theoretically grounded approach promises to elucidate context-dependent computation across scales, datasets, and models.
1509.06863
Youdong Mao
Zhou Yu, Wei Li Wang, Luis R. Castillo-Menendez, Joseph Sodroski, Youdong Mao
On the parameters affecting dual-target-function evaluation of single-particle selection from cryo-electron micrographs
62 pages, 11 figures. arXiv admin note: text overlap with arXiv:1309.2618
BMC Bioinformatics 2019; 20:169
10.1186/s12859-019-2714-8
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the analysis of frozen hydrated biomolecules by single-particle cryo-electron microscopy, template-based particle picking by a target function called fast local correlation (FLC) allows a large number of particle images to be automatically picked from micrographs. A second, independent target function based on maximum likelihood (ML) can be used to align the images and verify the presence of signal in the picked particles. Although the paradigm of this dual-target-function (DTF) evaluation of single-particle selection has been practiced in recent years, it remains unclear how the performance of this DTF approach is affected by the signal-to-noise ratio of the images and by the choice of references for FLC and ML. Here we examine this problem through a systematic study of simulated data, followed by experimental substantiation. We quantitatively pinpoint the critical signal-to-noise ratio (SNR), at which the DTF approach starts losing its ability to select and verify particles from cryo-EM micrographs. A Gaussian model is shown to be as effective in picking particles as a single projection view of the imaged molecule in the tested cases. For both simulated micrographs and real cryo-EM data of the 173-kDa glucose isomerase complex, we found that the use of a Gaussian model to initialize the target functions suppressed the detrimental effect of reference bias in template-based particle selection. Given a sufficient signal-to-noise ratio in the images and the appropriate choice of references, the DTF approach can expedite the automated assembly of single-particle data sets.
[ { "created": "Wed, 23 Sep 2015 07:13:28 GMT", "version": "v1" } ]
2019-04-16
[ [ "Yu", "Zhou", "" ], [ "Wang", "Wei Li", "" ], [ "Castillo-Menendez", "Luis R.", "" ], [ "Sodroski", "Joseph", "" ], [ "Mao", "Youdong", "" ] ]
In the analysis of frozen hydrated biomolecules by single-particle cryo-electron microscopy, template-based particle picking by a target function called fast local correlation (FLC) allows a large number of particle images to be automatically picked from micrographs. A second, independent target function based on maximum likelihood (ML) can be used to align the images and verify the presence of signal in the picked particles. Although the paradigm of this dual-target-function (DTF) evaluation of single-particle selection has been practiced in recent years, it remains unclear how the performance of this DTF approach is affected by the signal-to-noise ratio (SNR) of the images and by the choice of references for FLC and ML. Here we examine this problem through a systematic study of simulated data, followed by experimental substantiation. We quantitatively pinpoint the critical SNR at which the DTF approach starts losing its ability to select and verify particles from cryo-EM micrographs. A Gaussian model is shown to be as effective in picking particles as a single projection view of the imaged molecule in the tested cases. For both simulated micrographs and real cryo-EM data of the 173-kDa glucose isomerase complex, we found that the use of a Gaussian model to initialize the target functions suppressed the detrimental effect of reference bias in template-based particle selection. Given a sufficient SNR in the images and the appropriate choice of references, the DTF approach can expedite the automated assembly of single-particle data sets.
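For readers unfamiliar with the first target function: a "fast local correlation" is, at heart, a normalized cross-correlation between a template and every local window of a micrograph, computed efficiently with FFTs. The sketch below illustrates that idea only; it is not the authors' exact FLC definition, and the flat window and example threshold are assumptions.

    import numpy as np
    from scipy.signal import fftconvolve

    def local_correlation_map(micrograph, template):
        """Correlation of a zero-mean, unit-norm template with every local
        window of the micrograph, normalized by the local image variance."""
        t = template - template.mean()
        t = t / np.linalg.norm(t)
        # cross-correlation is convolution with the flipped template
        num = fftconvolve(micrograph, t[::-1, ::-1], mode="same")
        window = np.ones_like(template)
        n = template.size
        local_mean = fftconvolve(micrograph, window, mode="same") / n
        local_sq = fftconvolve(micrograph ** 2, window, mode="same")
        local_var = np.maximum(local_sq - n * local_mean ** 2, 1e-12)
        return num / np.sqrt(local_var)

    # peaks of the map are candidate particle positions, e.g.
    # picks = np.argwhere(local_correlation_map(img, gauss_blob) > 0.3)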
1206.0973
Uwe C. T\"auber
Ulrich Dobramysl and Uwe C. Tauber (Virginia Tech)
Environmental vs. demographic variability in two-species predator-prey models
5 pages, 4 figures included; to appear in Phys. Rev. Lett. (2013)
Phys. Rev. Lett. 110 (2013) 048105
10.1103/PhysRevLett.110.048105
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the competing effects and relative importance of intrinsic demographic and environmental variability on the evolutionary dynamics of a stochastic two-species Lotka-Volterra model by means of Monte Carlo simulations on a two-dimensional lattice. Individuals are assigned inheritable predation efficiencies; quenched randomness in the spatially varying reaction rates serves as environmental noise. We find that environmental variability enhances the population densities of both predators and prey while demographic variability leads to essentially neutral optimization.
[ { "created": "Tue, 5 Jun 2012 15:59:50 GMT", "version": "v1" }, { "created": "Fri, 28 Dec 2012 17:43:28 GMT", "version": "v2" } ]
2013-01-28
[ [ "Dobramysl", "Ulrich", "", "Virginia Tech" ], [ "Tauber", "Uwe C.", "", "Virginia Tech" ] ]
We investigate the competing effects and relative importance of intrinsic demographic and environmental variability on the evolutionary dynamics of a stochastic two-species Lotka-Volterra model by means of Monte Carlo simulations on a two-dimensional lattice. Individuals are assigned inheritable predation efficiencies; quenched randomness in the spatially varying reaction rates serves as environmental noise. We find that environmental variability enhances the population densities of both predators and prey while demographic variability leads to essentially neutral optimization.
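For concreteness, here is a stripped-down Monte Carlo version of a stochastic lattice Lotka-Volterra model of the kind simulated above; the inheritable predation efficiencies and the quenched spatial disorder in the rates, which are the paper's central ingredients, are omitted, and all rates are illustrative.

    import numpy as np

    EMPTY, PREY, PRED = 0, 1, 2
    L, sweeps = 64, 200
    sigma, lam, mu = 0.5, 0.5, 0.2      # prey birth, predation, predator death rates
    NBRS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

    rng = np.random.default_rng(1)
    grid = rng.choice([EMPTY, PREY, PRED], size=(L, L), p=[0.5, 0.25, 0.25])

    for _ in range(sweeps):
        for _ in range(L * L):          # one Monte Carlo sweep = L*L update attempts
            i, j = rng.integers(L, size=2)
            di, dj = NBRS[rng.integers(4)]
            ni, nj = (i + di) % L, (j + dj) % L        # periodic boundaries
            if grid[i, j] == PREY and grid[ni, nj] == EMPTY and rng.random() < sigma:
                grid[ni, nj] = PREY                    # prey reproduction
            elif grid[i, j] == PRED and grid[ni, nj] == PREY and rng.random() < lam:
                grid[ni, nj] = PRED                    # predation + predator offspring
            elif grid[i, j] == PRED and rng.random() < mu:
                grid[i, j] = EMPTY                     # predator death

    print("prey density", (grid == PREY).mean(), "predator density", (grid == PRED).mean())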
1605.03553
Kieran Fox
Kieran C.R. Fox, Yoona Kang, Michael Lifshitz, Kalina Christoff
Increasing cognitive-emotional flexibility with meditation and hypnosis: The cognitive neuroscience of de-automatization
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Meditation and hypnosis both aim to facilitate cognitive-emotional flexibility, i.e., the "de-automatization" of thought and behavior. However, little research or theory has addressed how internal thought patterns might change after such interventions, even though alterations in the internal flow of consciousness may precede externally observable changes in behavior. This chapter outlines three mechanisms by which meditation or hypnosis might alter or reduce automatic associations and elaborations of spontaneous thought: by an overall reduction of the chaining of thoughts into an associative stream; by de-automatizing and diversifying the content of thought chains (i.e., increasing thought flexibility or variety); and, finally, by re-automatizing chains of thought along desired or valued paths (i.e., forming new, voluntarily chosen mental habits). The authors discuss behavioral and cognitive neuroscientific evidence demonstrating the influence of hypnosis and meditation on internal cognition and highlight the putative neurobiological basis, as well as potential benefits, of these forms of de-automatization.
[ { "created": "Wed, 11 May 2016 19:06:35 GMT", "version": "v1" } ]
2016-05-12
[ [ "Fox", "Kieran C. R.", "" ], [ "Kang", "Yoona", "" ], [ "Lifshitz", "Michael", "" ], [ "Christoff", "Kalina", "" ] ]
Meditation and hypnosis both aim to facilitate cognitive-emotional flexibility, i.e., the "de-automatization" of thought and behavior. However, little research or theory has addressed how internal thought patterns might change after such interventions, even though alterations in the internal flow of consciousness may precede externally observable changes in behavior. This chapter outlines three mechanisms by which meditation or hypnosis might alter or reduce automatic associations and elaborations of spontaneous thought: by an overall reduction of the chaining of thoughts into an associative stream; by de-automatizing and diversifying the content of thought chains (i.e., increasing thought flexibility or variety); and, finally, by re-automatizing chains of thought along desired or valued paths (i.e., forming new, voluntarily chosen mental habits). The authors discuss behavioral and cognitive neuroscientific evidence demonstrating the influence of hypnosis and meditation on internal cognition and highlight the putative neurobiological basis, as well as potential benefits, of these forms of de-automatization.
2210.02183
Fabiano L. Ribeiro
William Roberto Luiz S. Pereira and Fabiano L. Ribeiro
The metabolic origins of big size in aquatic mammals
null
null
null
null
q-bio.PE q-bio.QM
http://creativecommons.org/licenses/by-nc-sa/4.0/
The group of large aquatic mammals includes the largest living beings on Earth, surpassing dinosaurs in weight and size. In this paper, we present some empirical evidence and a mathematical model to argue that fat accumulation in marine mammals triggers a series of metabolic events that result in these animals' increased size. Our study starts by analysing 43 ontogenetic trajectories of species of different types and sizes, including organisms with asymptotic mass from 27g (Taiwan field mouse) to $2\times 10^{7}$g (grey whale). The available data allow us to determine the ontogenetic parameters of every species considered (catabolism and anabolism constants, scaling exponent and asymptotic mass). The analyses of these data show a minimisation of the catabolism constant and the scaling exponent in marine mammals compared to the other species analysed. We present a possible explanation for this, arguing that the large proportion of adipose tissue in these animals can cause this minimisation, because adipocytes have different scaling properties from non-adipose (typical) cells, expressed in a reduced energy demand and a lower metabolism. The conclusion is that in an animal with a relatively large amount of adipose tissue, as is the case for aquatic mammals, the cellular metabolic rate decreases compared to that of animals of the same mass but with proportionally less fat tissue. A final consequence of this cause-and-effect process is an increase in the asymptotic mass of these mammals.
[ { "created": "Tue, 4 Oct 2022 17:32:33 GMT", "version": "v1" } ]
2022-10-06
[ [ "Pereira", "William Roberto Luiz S.", "" ], [ "Ribeiro", "Fabiano L.", "" ] ]
The group of large aquatic mammals includes the largest living beings on Earth, surpassing dinosaurs in weight and size. In this paper, we present some empirical evidence and a mathematical model to argue that fat accumulation in marine mammals triggers a series of metabolic events that result in these animals' increased size. Our study starts by analysing 43 ontogenetic trajectories of species of different types and sizes, including organisms with asymptotic mass from 27g (Taiwan field mouse) to $2\times 10^{7}$g (grey whale). The available data allow us to determine the ontogenetic parameters of every species considered (catabolism and anabolism constants, scaling exponent and asymptotic mass). The analyses of these data show a minimisation of the catabolism constant and the scaling exponent in marine mammals compared to the other species analysed. We present a possible explanation for this, arguing that the large proportion of adipose tissue in these animals can cause this minimisation, because adipocytes have different scaling properties from non-adipose (typical) cells, expressed in a reduced energy demand and a lower metabolism. The conclusion is that in an animal with a relatively large amount of adipose tissue, as is the case for aquatic mammals, the cellular metabolic rate decreases compared to that of animals of the same mass but with proportionally less fat tissue. A final consequence of this cause-and-effect process is an increase in the asymptotic mass of these mammals.
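The ontogenetic parameters named in this abstract map naturally onto the standard West-type ontogenetic growth equation; the sketch below states that presumed form (the paper's exact parametrization may differ):

$$\frac{dm}{dt} = a\,m^{\alpha} - b\,m, \qquad M_\infty = \left(\frac{a}{b}\right)^{1/(1-\alpha)},$$

with $a$ the anabolism constant, $b$ the catabolism constant, $\alpha$ the scaling exponent, and $M_\infty$ the asymptotic mass obtained by setting $dm/dt = 0$. At fixed $a$ and $\alpha < 1$, lowering the catabolism constant $b$ raises $M_\infty$, which is the quantitative route to large size argued for above.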
0710.1622
Peter Csermely
Robin Palotai, Mate S. Szalay, Peter Csermely
Chaperones as integrators of cellular networks: Changes of cellular integrity in stress and diseases
13 pages, 3 figures, 1 glossary
IUBMB Life (2008) 60, 10-15
10.1002/iub.8
null
q-bio.MN
null
Cellular networks undergo rearrangements during stress and disease. In the unstressed state, the yeast protein-protein interaction network (interactome) is highly compact, and the centrally organized modules have a large overlap. During stress, several of the original modules become more separated, and a number of novel modules appear. A few basic functions, such as the proteasome, preserve their central position. However, several functions with a high energy demand, such as cell-cycle regulation, lose their original centrality during stress. A number of key stress-dependent protein complexes, such as the disaggregation-specific chaperone Hsp104, gain centrality in the stressed yeast interactome. Molecular chaperones, also known as heat shock or stress proteins, form complex interaction networks (the chaperome) with each other and their partners. Here we show that the human chaperome recapitulates the segregation of protein synthesis-coupled and stress-related chaperones recently observed in yeast. Examination of the yeast and human interactomes shows that (1) chaperones are inter-modular integrators of protein-protein interaction networks, which (2) often bridge hubs and (3) are favorite candidates for extensive phosphorylation. Moreover, chaperones (4) become more central in the organization of the isolated modules of the stressed yeast protein-protein interaction network, which highlights their importance in the de-coupling and re-coupling of network modules during and after stress. Chaperone-mediated evolvability of cellular networks may play a key role in cellular adaptation during stress and in various polygenic and chronic diseases, such as cancer, diabetes or neurodegeneration.
[ { "created": "Mon, 8 Oct 2007 19:32:35 GMT", "version": "v1" }, { "created": "Sat, 23 Feb 2008 20:16:18 GMT", "version": "v2" } ]
2008-02-23
[ [ "Palotai", "Robin", "" ], [ "Szalay", "Mate S.", "" ], [ "Csermely", "Peter", "" ] ]
Cellular networks undergo rearrangements during stress and disease. In the unstressed state, the yeast protein-protein interaction network (interactome) is highly compact, and the centrally organized modules have a large overlap. During stress, several of the original modules become more separated, and a number of novel modules appear. A few basic functions, such as the proteasome, preserve their central position. However, several functions with a high energy demand, such as cell-cycle regulation, lose their original centrality during stress. A number of key stress-dependent protein complexes, such as the disaggregation-specific chaperone Hsp104, gain centrality in the stressed yeast interactome. Molecular chaperones, also known as heat shock or stress proteins, form complex interaction networks (the chaperome) with each other and their partners. Here we show that the human chaperome recapitulates the segregation of protein synthesis-coupled and stress-related chaperones recently observed in yeast. Examination of the yeast and human interactomes shows that (1) chaperones are inter-modular integrators of protein-protein interaction networks, which (2) often bridge hubs and (3) are favorite candidates for extensive phosphorylation. Moreover, chaperones (4) become more central in the organization of the isolated modules of the stressed yeast protein-protein interaction network, which highlights their importance in the de-coupling and re-coupling of network modules during and after stress. Chaperone-mediated evolvability of cellular networks may play a key role in cellular adaptation during stress and in various polygenic and chronic diseases, such as cancer, diabetes or neurodegeneration.
1607.05398
Ross McVinish
R.J.G. Lester and R. McVinish
What causes the increase in aggregation as a parasite moves up a food chain?
This is a preprint. The definitive version has been published under the title "Does moving up a food chain increase aggregation in parasites?"
Journal of the Royal Society Interface, 13 (2016) 20160102
10.1098/rsif.2016.0102
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
General laws in ecological parasitology are scarce. Here we evaluate data published by over 100 authors to determine whether the number of hosts in a life cycle is associated with the degree of aggregation of fish parasites at different stages. Parasite species were grouped taxonomically to produce 20 or more data points per group as far as possible. Most parasites that remained at one trophic level were less aggregated than those that had passed up a food chain. We use a stochastic model to show that high parasite overdispersion in predators can be solely the result of the accumulation of parasites in their prey. The model is further developed to show that a change in the predator's feeding behaviour with age may further increase parasite aggregation.
[ { "created": "Tue, 19 Jul 2016 04:24:26 GMT", "version": "v1" } ]
2016-07-20
[ [ "Lester", "R. J. G.", "" ], [ "McVinish", "R.", "" ] ]
General laws in ecological parasitology are scarce. Here we evaluate data published by over 100 authors to determine whether the number of hosts in a life cycle is associated with the degree of aggregation of fish parasites at different stages. Parasite species were grouped taxonomically to produce 20 or more data points per group as far as possible. Most parasites that remained at one trophic level were less aggregated than those that had passed up a food chain. We use a stochastic model to show that high parasite overdispersion in predators can be solely the result of the accumulation of parasites in their prey. The model is further developed to show that a change in the predator's feeding behaviour with age may further increase parasite aggregation.
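The core mechanism, aggregation compounding as parasites pass up a food chain, can be illustrated with a toy stochastic model (not the paper's model): prey acquire Poisson numbers of parasites, predators eat a Poisson number of prey and inherit their loads, and the variance-to-mean ratio (VMR) of parasite counts grows at the higher trophic level.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # trophic level 1: prey acquire parasites independently (Poisson, VMR ~ 1)
    prey = rng.poisson(lam=2.0, size=n)

    # trophic level 2: each predator eats a Poisson number of prey and inherits
    # their loads; the sum of k iid Poisson(2) loads is Poisson(2k)
    meals = rng.poisson(lam=5.0, size=n)
    pred = rng.poisson(2.0 * meals)

    vmr = lambda x: x.var() / x.mean()
    print("prey VMR     ~", vmr(prey))   # ~1: not aggregated
    print("predator VMR ~", vmr(pred))   # ~3: overdispersed (aggregated)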
1007.4461
Tsvi Tlusty
Yonatan Savir, Elad Noor, Ron Milo and Tsvi Tlusty
Cross-species analysis traces adaptation of Rubisco towards optimality in a low dimensional landscape
http://www.pnas.org/content/107/8/3475.short http://www.ncbi.nlm.nih.gov/pubmed/20142476 http://www.weizmann.ac.il/complex/tlusty/papers/PNAS2010.pdf
PNAS February 23, 2010 vol. 107 no. 8 3475-3480
10.1073/pnas.0911663107
null
q-bio.BM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Rubisco, probably the most abundant protein in the biosphere, plays an essential part in the process of carbon fixation through photosynthesis, thus facilitating life on Earth. Despite the significant effect that Rubisco has on the fitness of plants and other photosynthetic organisms, this enzyme is known to have a remarkably low catalytic rate and a tendency to confuse its substrate, carbon dioxide, with oxygen. This apparent inefficiency is puzzling and raises questions regarding the roles of evolution versus biochemical constraints in shaping Rubisco. Here we examine these questions by analyzing the measured kinetic parameters of Rubisco from various organisms in various environments. The analysis presented here suggests that the evolution of Rubisco is confined to an effectively one-dimensional landscape, which is manifested in simple power law correlations between its kinetic parameters. Within this one-dimensional landscape, which may represent biochemical and structural constraints, Rubisco appears to be tuned to the intracellular environment in which it resides such that the net photosynthesis rate is nearly optimal. Our analysis indicates that the specificity of Rubisco is not the main determinant of its efficiency, but rather the tradeoff between the carboxylation velocity and CO2 affinity. As a result, the presence of oxygen has only a moderate effect on the optimal performance of Rubisco, which is determined mostly by the local CO2 concentration. Rubisco appears as an experimentally testable example of the evolution of proteins subject both to strong selection pressure and to biochemical constraints that strongly confine their evolutionary plasticity to a low-dimensional landscape.
[ { "created": "Mon, 26 Jul 2010 13:51:15 GMT", "version": "v1" } ]
2010-07-27
[ [ "Savir", "Yonatan", "" ], [ "Noor", "Elad", "" ], [ "Milo", "Ron", "" ], [ "Tlusty", "Tsvi", "" ] ]
Rubisco, probably the most abundant protein in the biosphere, plays an essential part in the process of carbon fixation through photosynthesis, thus facilitating life on Earth. Despite the significant effect that Rubisco has on the fitness of plants and other photosynthetic organisms, this enzyme is known to have a remarkably low catalytic rate and a tendency to confuse its substrate, carbon dioxide, with oxygen. This apparent inefficiency is puzzling and raises questions regarding the roles of evolution versus biochemical constraints in shaping Rubisco. Here we examine these questions by analyzing the measured kinetic parameters of Rubisco from various organisms in various environments. The analysis presented here suggests that the evolution of Rubisco is confined to an effectively one-dimensional landscape, which is manifested in simple power law correlations between its kinetic parameters. Within this one-dimensional landscape, which may represent biochemical and structural constraints, Rubisco appears to be tuned to the intracellular environment in which it resides such that the net photosynthesis rate is nearly optimal. Our analysis indicates that the specificity of Rubisco is not the main determinant of its efficiency, but rather the tradeoff between the carboxylation velocity and CO2 affinity. As a result, the presence of oxygen has only a moderate effect on the optimal performance of Rubisco, which is determined mostly by the local CO2 concentration. Rubisco appears as an experimentally testable example of the evolution of proteins subject both to strong selection pressure and to biochemical constraints that strongly confine their evolutionary plasticity to a low-dimensional landscape.
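The "simple power law correlations between its kinetic parameters" amount to straight lines on log-log axes, so the standard check on tabulated Rubisco kinetics is an ordinary least-squares fit in log space. A generic sketch, with placeholder values in lieu of the measured parameters:

    import numpy as np

    # hypothetical paired kinetic parameters, e.g. carboxylation rate kcat_C
    # versus Michaelis constant K_C, one entry per species (placeholder values)
    kcat = np.array([2.9, 3.4, 5.8, 7.1, 11.0, 13.5])
    K    = np.array([10.0, 14.0, 29.0, 41.0, 80.0, 108.0])

    # power law kcat = c * K**gamma  <=>  log kcat = gamma * log K + log c
    gamma, logc = np.polyfit(np.log(K), np.log(kcat), deg=1)
    print(f"fitted exponent gamma = {gamma:.2f}, prefactor c = {np.exp(logc):.2f}")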
1409.0675
Felix Polyakov
Felix Polyakov
Affine differential geometry and smoothness maximization as tools for identifying geometric movement primitives
The current version of the manuscript is the result of a significant revision. It contains novel solutions; some formulations and explanations have been corrected and improved in many parts of the text. The manuscript now contains a discussion of the performance of the compromised motor control system in the framework of the theory under consideration
null
null
null
q-bio.NC math.DG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neuroscientific studies of drawing-like movements usually analyze the neural representation of either geometric (e.g., direction, shape) or temporal (e.g., speed) features of trajectories rather than the trajectory's representation as a whole. This work is about empirically supported mathematical ideas behind splitting and merging the geometric and temporal features which characterize biological movements. Movement primitives supposedly facilitate the efficiency of movements' representation in the brain and comply with different criteria for biological movements, among them kinematic smoothness and geometric constraint. A criterion of maximal trajectory smoothness of arbitrary order $n$ is employed; $n = 3$ is the case of the minimum-jerk model. I derive a class of differential equations obeyed by movement paths for which $n$-th order maximally smooth trajectories have a constant rate of accumulating geometric measurement along the drawn path. A constant rate of accumulating equi-affine arc corresponds to compliance with the two-thirds power-law model. The geometric measurement is invariant under a class of geometric transformations and may be chosen to be an arc in a certain geometry. The equations' solutions presumably serve as candidates for geometric movement primitives. The derived class of differential equations consists of two parts. The first part is identical for all geometric parameterizations of the path. The second part enforces consistency with the desired (geometric) parametrization of curves on solutions of the first part. Equations in different geometries in the plane and in space, and their known solutions, are presented. The connection between geometric invariance, motion smoothness, compositionality and performance of the compromised motor control system is discussed. The derived class of differential equations is a novel tool for discovering candidates for geometric movement primitives.
[ { "created": "Tue, 2 Sep 2014 11:50:34 GMT", "version": "v1" }, { "created": "Thu, 16 Oct 2014 18:51:10 GMT", "version": "v2" }, { "created": "Mon, 29 Dec 2014 20:53:43 GMT", "version": "v3" }, { "created": "Wed, 27 Jan 2016 20:47:11 GMT", "version": "v4" } ]
2016-01-28
[ [ "Polyakov", "Felix", "" ] ]
Neuroscientific studies of drawing-like movements usually analyze the neural representation of either geometric (e.g., direction, shape) or temporal (e.g., speed) features of trajectories rather than the trajectory's representation as a whole. This work is about empirically supported mathematical ideas behind splitting and merging the geometric and temporal features which characterize biological movements. Movement primitives supposedly facilitate the efficiency of movements' representation in the brain and comply with different criteria for biological movements, among them kinematic smoothness and geometric constraint. A criterion of maximal trajectory smoothness of arbitrary order $n$ is employed; $n = 3$ is the case of the minimum-jerk model. I derive a class of differential equations obeyed by movement paths for which $n$-th order maximally smooth trajectories have a constant rate of accumulating geometric measurement along the drawn path. A constant rate of accumulating equi-affine arc corresponds to compliance with the two-thirds power-law model. The geometric measurement is invariant under a class of geometric transformations and may be chosen to be an arc in a certain geometry. The equations' solutions presumably serve as candidates for geometric movement primitives. The derived class of differential equations consists of two parts. The first part is identical for all geometric parameterizations of the path. The second part enforces consistency with the desired (geometric) parametrization of curves on solutions of the first part. Equations in different geometries in the plane and in space, and their known solutions, are presented. The connection between geometric invariance, motion smoothness, compositionality and performance of the compromised motor control system is discussed. The derived class of differential equations is a novel tool for discovering candidates for geometric movement primitives.
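Two ingredients named above can be written compactly; a sketch in standard notation (the paper's treatment is more general). The order-$n$ smoothness criterion minimizes the integrated squared $n$-th derivative of the position $\mathbf{r}(t)$, and the constant-rate equi-affine case yields the two-thirds power law in its speed-curvature form:

$$\min_{\mathbf{r}}\int_0^T \Big\lVert \frac{d^{n}\mathbf{r}}{dt^{n}} \Big\rVert^2 dt \quad (n = 3 \text{ is minimum jerk}), \qquad v(t) = \gamma\,\kappa(t)^{-1/3},$$

with $v$ the speed, $\kappa$ the Euclidean curvature and $\gamma$ a constant velocity gain.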
1109.6231
Jeremy Gunawardena
Jeremy Gunawardena
A linear elimination framework
27 pages, 8 figures
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Key insights in molecular biology, such as enzyme kinetics, protein allostery and gene regulation, emerged from quantitative analysis based on time-scale separation, allowing internal complexity to be eliminated and resulting in the well-known formulas of Michaelis-Menten, Monod-Wyman-Changeux and Ackers-Johnson-Shea. In systems biology, steady-state analysis has yielded eliminations that reveal emergent properties of multi-component networks. Here we show that these analyses of nonlinear biochemical systems are consequences of the same linear framework, consisting of a labelled, directed graph on which a Laplacian dynamics is defined, whose steady states can be algorithmically calculated. Analyses previously considered distinct are revealed as identical, while new methods of analysis become feasible.
[ { "created": "Wed, 28 Sep 2011 15:04:51 GMT", "version": "v1" } ]
2011-09-29
[ [ "Gunawardena", "Jeremy", "" ] ]
Key insights in molecular biology, such as enzyme kinetics, protein allostery and gene regulation, emerged from quantitative analysis based on time-scale separation, allowing internal complexity to be eliminated and resulting in the well-known formulas of Michaelis-Menten, Monod-Wyman-Changeux and Ackers-Johnson-Shea. In systems biology, steady-state analysis has yielded eliminations that reveal emergent properties of multi-component networks. Here we show that these analyses of nonlinear biochemical systems are consequences of the same linear framework, consisting of a labelled, directed graph on which a Laplacian dynamics is defined, whose steady states can be algorithmically calculated. Analyses previously considered distinct are revealed as identical, while new methods of analysis become feasible.
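Concretely, steady states in this linear framework are kernel vectors of the graph Laplacian: for $\dot{x} = \mathcal{L}(G)\,x$ on a strongly connected labelled graph, the kernel is one-dimensional and can be computed directly. A minimal numerical sketch, with illustrative rates standing in for the symbolic edge labels:

    import numpy as np
    from scipy.linalg import null_space

    # column-based Laplacian of a labelled directed graph on vertices {1,2,3}:
    # off-diagonal entry L[i, j] is the label (rate) on the edge j -> i,
    # and columns sum to zero so that the total "mass" sum_i x_i is conserved
    a, b, c, d = 2.0, 1.0, 3.0, 0.5        # illustrative edge labels
    L = np.array([[-(a + d), b,   0.0],
                  [a,       -b,   c  ],
                  [d,        0.0, -c ]])

    assert np.allclose(L.sum(axis=0), 0.0)
    x = null_space(L)[:, 0]                # basis vector of ker(L)
    x = x / x.sum()                        # normalize to a steady-state distribution
    print("steady state:", x)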
2406.17086
Yifan Yang
Yifan Yang, Yutong Mao, Xufu Liu, Xiao Liu
BrainMAE: A Region-aware Self-supervised Learning Framework for Brain Signals
27 pages, 16 figures
null
null
null
q-bio.QM cs.LG q-bio.NC
http://creativecommons.org/licenses/by/4.0/
The human brain is a complex, dynamic network, which is commonly studied using functional magnetic resonance imaging (fMRI) and modeled as a network of regions of interest (ROIs) for understanding various brain functions. Recent studies utilize deep learning approaches to learn the brain network representation based on the functional connectivity (FC) profile, broadly falling into two main categories. The Fixed-FC approaches, which utilize the FC profile representing the linear temporal relation within the brain network, are limited by their failure to capture informative temporal dynamics of brain activity. On the other hand, the Dynamic-FC approaches, which model the evolving FC profile over time, often exhibit less satisfactory performance due to challenges in handling the inherently noisy nature of fMRI data. To address these challenges, we propose the Brain Masked Auto-Encoder (BrainMAE) for learning representations directly from fMRI time-series data. Our approach incorporates two essential components: a region-aware graph attention mechanism designed to capture the relationships between different brain ROIs, and a novel self-supervised masked autoencoding framework for effective model pre-training. These components enable the model to capture rich temporal dynamics of brain activity while maintaining resilience to the inherent noise in fMRI data. Our experiments demonstrate that BrainMAE consistently outperforms established baseline methods by significant margins in four distinct downstream tasks. Finally, leveraging the model's inherent interpretability, our analysis of model-generated representations reveals findings that resonate with ongoing research in the field of neuroscience.
[ { "created": "Mon, 24 Jun 2024 19:16:24 GMT", "version": "v1" } ]
2024-06-26
[ [ "Yang", "Yifan", "" ], [ "Mao", "Yutong", "" ], [ "Liu", "Xufu", "" ], [ "Liu", "Xiao", "" ] ]
The human brain is a complex, dynamic network, which is commonly studied using functional magnetic resonance imaging (fMRI) and modeled as a network of regions of interest (ROIs) for understanding various brain functions. Recent studies utilize deep learning approaches to learn the brain network representation based on the functional connectivity (FC) profile, broadly falling into two main categories. The Fixed-FC approaches, which utilize the FC profile representing the linear temporal relation within the brain network, are limited by their failure to capture informative temporal dynamics of brain activity. On the other hand, the Dynamic-FC approaches, which model the evolving FC profile over time, often exhibit less satisfactory performance due to challenges in handling the inherently noisy nature of fMRI data. To address these challenges, we propose the Brain Masked Auto-Encoder (BrainMAE) for learning representations directly from fMRI time-series data. Our approach incorporates two essential components: a region-aware graph attention mechanism designed to capture the relationships between different brain ROIs, and a novel self-supervised masked autoencoding framework for effective model pre-training. These components enable the model to capture rich temporal dynamics of brain activity while maintaining resilience to the inherent noise in fMRI data. Our experiments demonstrate that BrainMAE consistently outperforms established baseline methods by significant margins in four distinct downstream tasks. Finally, leveraging the model's inherent interpretability, our analysis of model-generated representations reveals findings that resonate with ongoing research in the field of neuroscience.
2402.16854
Divahar Sivanesan
Divahar Sivanesan
Attention Based Molecule Generation via Hierarchical Variational Autoencoder
null
null
null
null
q-bio.BM cs.LG
http://creativecommons.org/licenses/by/4.0/
Molecule generation is a task made very difficult by the complex ways in which we represent molecules computationally. A common technique used in molecular generative modeling is to use SMILES strings with recurrent neural networks built into variational autoencoders - but these suffer from a myriad of issues: vanishing gradients, long-range forgetting, and invalid molecules. In this work, we show that by combining recurrent neural networks with convolutional networks in a hierarchical manner, we are able to extract autoregressive information from SMILES strings while maintaining signal and long-range dependencies. This allows for generations with very high validity rates on the order of 95% when reconstructing known molecules. We also observe an average Tanimoto similarity of 0.6 between test set and reconstructed molecules, which suggests our method is able to map between SMILES strings and their learned representations more effectively than prior works using similar methods.
[ { "created": "Thu, 18 Jan 2024 21:45:12 GMT", "version": "v1" } ]
2024-02-28
[ [ "Sivanesan", "Divahar", "" ] ]
Molecule generation is a task made very difficult by the complex ways in which we represent molecules computationally. A common technique used in molecular generative modeling is to use SMILES strings with recurrent neural networks built into variational autoencoders - but these suffer from a myriad of issues: vanishing gradients, long-range forgetting, and invalid molecules. In this work, we show that by combining recurrent neural networks with convolutional networks in a hierarchical manner, we are able to extract autoregressive information from SMILES strings while maintaining signal and long-range dependencies. This allows for generations with very high validity rates on the order of 95% when reconstructing known molecules. We also observe an average Tanimoto similarity of 0.6 between test set and reconstructed molecules, which suggests our method is able to map between SMILES strings and their learned representations more effectively than prior works using similar methods.
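The two reported metrics, validity rate and Tanimoto similarity between test-set and reconstructed molecules, can be computed with RDKit; a sketch with placeholder SMILES strings (the fingerprint choice, Morgan radius 2 over 2048 bits, is an assumption, since the entry does not say which fingerprints were used):

    from rdkit import Chem, DataStructs
    from rdkit.Chem import AllChem

    generated = ["CCO", "c1ccccc1O", "C1CC1N", "not_a_smiles"]  # placeholder outputs
    reference = ["CCO", "c1ccccc1", "C1CC1N", "CCN"]

    mols = [Chem.MolFromSmiles(s) for s in generated]           # None if invalid
    validity = sum(m is not None for m in mols) / len(mols)
    print(f"validity: {validity:.0%}")

    sims = []
    for gen, ref in zip(mols, (Chem.MolFromSmiles(s) for s in reference)):
        if gen is None or ref is None:
            continue
        fp_g = AllChem.GetMorganFingerprintAsBitVect(gen, 2, nBits=2048)
        fp_r = AllChem.GetMorganFingerprintAsBitVect(ref, 2, nBits=2048)
        sims.append(DataStructs.TanimotoSimilarity(fp_g, fp_r))
    print("mean Tanimoto:", sum(sims) / len(sims))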
1201.5211
Alexei Ryabov
Alexei B. Ryabov
Phytoplankton competition in deep biomass maximum
13 pages, 7 figures; Theoretical Ecology 2012
null
10.1007/s12080-012-0158-0
null
q-bio.PE nlin.AO nlin.PS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Resource competition in heterogeneous environments is still an unresolved problem of theoretical ecology. In this article I analyze competition between two phytoplankton species in a deep water column, where the distributions of the main resources (light and a limiting nutrient) have opposing gradients and co-limitation by both resources causes a deep biomass maximum. Assuming that the species have a trade-off in resource requirements and the water column is weakly mixed, I apply the invasion threshold analysis (Ryabov and Blasius 2011) to determine relations between environmental conditions and phytoplankton composition. Although the species deplete resources in the interior of the water column, the resource levels at the bottom and surface remain high. As a result, the slope of the resource gradients becomes a new crucial factor which, rather than the local resource values, determines the outcome of competition. The values of the resource gradients depend nonlinearly on the density of consumers. This leads to complex relationships between environmental parameters and species composition. In particular, it is shown that an increase in either the incident light intensity or the bottom nutrient concentration favors the best light competitors, while an increase in the turbulent mixing or background turbidity favors the best nutrient competitors. These results might be important for the prediction of species composition in the deep ocean.
[ { "created": "Wed, 25 Jan 2012 09:22:13 GMT", "version": "v1" } ]
2012-01-26
[ [ "Ryabov", "Alexei B.", "" ] ]
Resource competition in heterogeneous environments is still an unresolved problem of theoretical ecology. In this article I analyze competition between two phytoplankton species in a deep water column, where the distributions of the main resources (light and a limiting nutrient) have opposing gradients and co-limitation by both resources causes a deep biomass maximum. Assuming that the species have a trade-off in resource requirements and the water column is weakly mixed, I apply the invasion threshold analysis (Ryabov and Blasius 2011) to determine relations between environmental conditions and phytoplankton composition. Although the species deplete resources in the interior of the water column, the resource levels at the bottom and surface remain high. As a result, the slope of the resource gradients becomes a new crucial factor which, rather than the local resource values, determines the outcome of competition. The values of the resource gradients depend nonlinearly on the density of consumers. This leads to complex relationships between environmental parameters and species composition. In particular, it is shown that an increase in either the incident light intensity or the bottom nutrient concentration favors the best light competitors, while an increase in the turbulent mixing or background turbidity favors the best nutrient competitors. These results might be important for the prediction of species composition in the deep ocean.
1004.4387
Areejit Samal
Pierre-Yves Bourguignon, Areejit Samal, Fran\c{c}ois K\'ep\`es, J\"urgen Jost, Olivier C. Martin
Challenges in experimental data integration within genome-scale metabolic models
5 pages
Algorithms for Molecular Biology, 5:20 (2010) http://www.almob.org/content/5/1/20
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A report of the meeting "Challenges in experimental data integration within genome-scale metabolic models", Institut Henri Poincar\'e, Paris, October 10-11 2009, organized by the CNRS-MPG joint program in Systems Biology.
[ { "created": "Sun, 25 Apr 2010 22:41:35 GMT", "version": "v1" } ]
2010-04-27
[ [ "Bourguignon", "Pierre-Yves", "" ], [ "Samal", "Areejit", "" ], [ "Képès", "François", "" ], [ "Jost", "Jürgen", "" ], [ "Martin", "Olivier C.", "" ] ]
A report of the meeting "Challenges in experimental data integration within genome-scale metabolic models", Institut Henri Poincar\'e, Paris, October 10-11 2009, organized by the CNRS-MPG joint program in Systems Biology.
1304.1565
Muhammad Asim Mubeen
Asim M. Mubeen, Kevin H. Knuth
Bayesian Odds-Ratio Filters: A Template-Based Method for Online Detection of P300 Evoked Responses
9 pages, 3 figures
null
null
null
q-bio.NC physics.med-ph stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Template-based signal detection most often relies on computing a correlation, or a dot product, between an incoming data stream and a signal template. While such a correlation results in an ongoing estimate of the magnitude of the signal in the data stream, it does not directly indicate the presence or absence of a signal. Instead, the problem of signal detection is one of model selection. Here we explore the use of the Bayesian odds-ratio (OR), which is the ratio of posterior probabilities of a signal-plus-noise model over a noise-only model. We demonstrate this method by applying it to simulated electroencephalographic (EEG) signals based on the P300 response, which is widely used in both Brain Computer Interface (BCI) and Brain Machine Interface (BMI) systems. The efficacy of this algorithm is demonstrated by comparing the receiver operating characteristic (ROC) curves of the OR-based (logOR) filter to those of the usual correlation method, where we find a significant improvement in P300 detection. The logOR filter promises to improve the accuracy and speed of the detection of evoked brain responses in BCI/BMI applications, as well as the detection of template signals in general.
[ { "created": "Thu, 4 Apr 2013 21:27:40 GMT", "version": "v1" } ]
2013-04-08
[ [ "Mubeen", "Asim M.", "" ], [ "Knuth", "Kevin H.", "" ] ]
Template-based signal detection most often relies on computing a correlation, or a dot product, between an incoming data stream and a signal template. While such a correlation results in an ongoing estimate of the magnitude of the signal in the data stream, it does not directly indicate the presence or absence of a signal. Instead, the problem of signal detection is one of model selection. Here we explore the use of the Bayesian odds-ratio (OR), which is the ratio of posterior probabilities of a signal-plus-noise model over a noise-only model. We demonstrate this method by applying it to simulated electroencephalographic (EEG) signals based on the P300 response, which is widely used in both Brain Computer Interface (BCI) and Brain Machine Interface (BMI) systems. The efficacy of this algorithm is demonstrated by comparing the receiver operating characteristic (ROC) curves of the OR-based (logOR) filter to those of the usual correlation method, where we find a significant improvement in P300 detection. The logOR filter promises to improve the accuracy and speed of the detection of evoked brain responses in BCI/BMI applications, as well as the detection of template signals in general.
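In the simplest concrete case, a known template $s$ with fixed amplitude in i.i.d. Gaussian noise of known standard deviation $\sigma$, the log odds-ratio reduces to a shifted matched filter, $\log\mathrm{OR} = (d \cdot s - \tfrac{1}{2}\lVert s\rVert^2)/\sigma^2$ plus the log prior odds. The sketch below implements that special case only; the paper's filter marginalizes over more parameters.

    import numpy as np

    def log_odds_ratio(d, s, sigma, log_prior_odds=0.0):
        """log [P(signal+noise | d) / P(noise | d)] for a known template s,
        fixed unit amplitude, and iid Gaussian noise of std sigma."""
        return (d @ s - 0.5 * s @ s) / sigma**2 + log_prior_odds

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 200)
    s = np.exp(-0.5 * ((t - 0.3) / 0.05) ** 2)     # toy P300-like template
    sigma = 1.0

    noise_only = rng.normal(0, sigma, t.size)
    with_signal = s + rng.normal(0, sigma, t.size)
    print("logOR (noise):  ", log_odds_ratio(noise_only, s, sigma))
    print("logOR (signal): ", log_odds_ratio(with_signal, s, sigma))
    # a positive logOR favors 'signal present'; compare to a threshold to decide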
2006.00115
Daniel Moyer
Daniel Moyer, Greg Ver Steeg, Paul M. Thompson
Overview of Scanner Invariant Representations
Accepted as a short paper in MIDL 2020. In accordance with the MIDL 2020 Call for Papers, this short paper is an overview of an already published work arXiv:1904.05375, and was submitted to MIDL in order to allow presentation and discussion at the meeting
null
null
MIDL/2020/ExtendedAbstract/yqm9RD_XHT
q-bio.QM cs.CV cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pooled imaging data from multiple sources is subject to bias from each source. Studies that do not correct for these scanner/site biases at best lose statistical power, and at worst leave spurious correlations in their data. Estimation of the bias effects is non-trivial due to the paucity of data with correspondence across sites, so-called "traveling phantom" data, which is expensive to collect. Nevertheless, numerous solutions leveraging direct correspondence have been proposed. In contrast to this, Moyer et al. (2019) propose an unsupervised solution using invariant representations, one which does not require correspondence and thus does not require paired images. By leveraging the data processing inequality, an invariant representation can then be used to create an image reconstruction that is uninformative of its original source, yet still faithful to the underlying structure. In the present abstract we provide an overview of this method.
[ { "created": "Fri, 29 May 2020 22:56:47 GMT", "version": "v1" } ]
2020-06-02
[ [ "Moyer", "Daniel", "" ], [ "Steeg", "Greg Ver", "" ], [ "Thompson", "Paul M.", "" ] ]
Pooled imaging data from multiple sources is subject to bias from each source. Studies that do not correct for these scanner/site biases at best lose statistical power, and at worst leave spurious correlations in their data. Estimation of the bias effects is non-trivial due to the paucity of data with correspondence across sites, so-called "traveling phantom" data, which is expensive to collect. Nevertheless, numerous solutions leveraging direct correspondence have been proposed. In contrast to this, Moyer et al. (2019) propose an unsupervised solution using invariant representations, one which does not require correspondence and thus does not require paired images. By leveraging the data processing inequality, an invariant representation can then be used to create an image reconstruction that is uninformative of its original source, yet still faithful to the underlying structure. In the present abstract we provide an overview of this method.
1612.07106
Xerxes D. Arsiwalla
Xerxes D. Arsiwalla and Paul Verschure
The Global Dynamical Complexity of the Human Brain Network
16 pages, 6 figures
null
10.1007/s41109-016-0018-8
null
q-bio.NC cs.IT math.DS math.IT physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
How much information do large brain networks integrate as a whole over the sum of their parts? Can the dynamical complexity of such networks be globally quantified in an information-theoretic way and be meaningfully coupled to brain function? Recently, measures of dynamical complexity such as integrated information have been proposed. However, problems related to the normalization and to the Bell number of partitions associated with these measures make these approaches computationally infeasible for large-scale brain networks. Our goal in this work is to address this problem. Our formulation of network integrated information is based on the Kullback-Leibler divergence between the multivariate distribution on the set of network states and the corresponding factorized distribution over its parts. We find that implementing the maximum information partition optimizes the computations. These methods are well-suited for large networks with linear stochastic dynamics. We compute the integrated information both for the system's attractor states and for non-stationary dynamical states of the network. We then apply this formalism to brain networks and compute the integrated information for the human brain's connectome. Compared to a randomly re-wired network, we find that the specific topology of the brain generates greater information complexity.
[ { "created": "Wed, 21 Dec 2016 13:44:31 GMT", "version": "v1" } ]
2016-12-22
[ [ "Arsiwalla", "Xerxes D.", "" ], [ "Verschure", "Paul", "" ] ]
How much information do large brain networks integrate as a whole over the sum of their parts? Can the dynamical complexity of such networks be globally quantified in an information-theoretic way and be meaningfully coupled to brain function? Recently, measures of dynamical complexity such as integrated information have been proposed. However, problems related to the normalization and to the Bell number of partitions associated with these measures make these approaches computationally infeasible for large-scale brain networks. Our goal in this work is to address this problem. Our formulation of network integrated information is based on the Kullback-Leibler divergence between the multivariate distribution on the set of network states and the corresponding factorized distribution over its parts. We find that implementing the maximum information partition optimizes the computations. These methods are well-suited for large networks with linear stochastic dynamics. We compute the integrated information both for the system's attractor states and for non-stationary dynamical states of the network. We then apply this formalism to brain networks and compute the integrated information for the human brain's connectome. Compared to a randomly re-wired network, we find that the specific topology of the brain generates greater information complexity.
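For the linear stochastic (Gaussian) dynamics mentioned above, the Kullback-Leibler divergence between the joint distribution over network states and the fully factorized product of its marginals has a closed form in the stationary covariance. The sketch below computes only this all-singletons ("total correlation") case, not the optimization over partitions that the full measure requires.

    import numpy as np

    def total_correlation(cov):
        """KL( N(0, cov) || prod_i N(0, cov_ii) ) in nats: the integrated
        information of a Gaussian state w.r.t. the all-singletons partition."""
        sign, logdet = np.linalg.slogdet(cov)
        assert sign > 0, "covariance must be positive definite"
        return 0.5 * (np.log(np.diag(cov)).sum() - logdet)

    # toy stationary covariance of a linear stochastic network
    cov = np.array([[1.0, 0.6, 0.2],
                    [0.6, 1.0, 0.4],
                    [0.2, 0.4, 1.0]])
    print(total_correlation(cov), "nats")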
1701.07061
Diego Mateos
D. M. Mateos, R. Guevara Erra, R. Wennberg, J.L. Perez Velazquez
Measures of Entropy and Complexity in altered states of consciousness
2 figures
null
null
null
q-bio.NC cond-mat.stat-mech
http://creativecommons.org/publicdomain/zero/1.0/
Quantification of complexity in neurophysiological signals has been studied using different methods, especially those from information and dynamical systems theory. These studies revealed a dependence on the state of consciousness; in particular, wakefulness is characterized by a larger complexity of brain signals, perhaps due to the necessity for the brain to handle varied sensorimotor information. Thus these frameworks are very useful in attempts at quantifying cognitive states. We set out to analyze different types of signals, including scalp and intracerebral electroencephalography (EEG) and magnetoencephalography (MEG), in subjects during different states of consciousness: awake, sleep stages and epileptic seizures. The signals were analyzed using a statistical (Permutation Entropy) and a deterministic (Permutation Lempel-Ziv Complexity) analytical method. The results are presented in a complexity vs. entropy graph, showing that the values of entropy and complexity of the signals tend to be greatest when the subjects are in fully alert states and to fall in states with loss of awareness or consciousness. These results are robust for all three types of recordings. We propose that the investigation of the structure of cognition using the frameworks of complexity will reveal mechanistic aspects of brain dynamics associated not only with altered states of consciousness but also with normal and pathological conditions.
[ { "created": "Mon, 9 Jan 2017 20:10:15 GMT", "version": "v1" } ]
2017-01-26
[ [ "Mateos", "D. M.", "" ], [ "Erra", "R. Guevara", "" ], [ "Wennberg", "R.", "" ], [ "Velazquez", "J. L. Perez", "" ] ]
Quantification of complexity in neurophysiological signals has been studied using different methods, especially those from information and dynamical systems theory. These studies revealed a dependence on the state of consciousness; in particular, wakefulness is characterized by a larger complexity of brain signals, perhaps due to the necessity for the brain to handle varied sensorimotor information. Thus these frameworks are very useful in attempts at quantifying cognitive states. We set out to analyze different types of signals, including scalp and intracerebral electroencephalography (EEG) and magnetoencephalography (MEG), in subjects during different states of consciousness: awake, sleep stages and epileptic seizures. The signals were analyzed using a statistical (Permutation Entropy) and a deterministic (Permutation Lempel-Ziv Complexity) analytical method. The results are presented in a complexity vs. entropy graph, showing that the values of entropy and complexity of the signals tend to be greatest when the subjects are in fully alert states and to fall in states with loss of awareness or consciousness. These results are robust for all three types of recordings. We propose that the investigation of the structure of cognition using the frameworks of complexity will reveal mechanistic aspects of brain dynamics associated not only with altered states of consciousness but also with normal and pathological conditions.
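Permutation Entropy, the statistical measure used above, has a compact implementation: slide a window of length $m$ (with delay $\tau$) over the signal, record each window's rank-order pattern, and take the Shannon entropy of the pattern frequencies. A minimal sketch:

    import numpy as np
    from collections import Counter
    from math import factorial

    def permutation_entropy(x, m=4, tau=1, normalize=True):
        """Shannon entropy of the ordinal (rank-order) patterns of length m
        occurring in the signal x, sampled with time delay tau."""
        patterns = Counter(
            tuple(np.argsort(x[i : i + m * tau : tau]))
            for i in range(len(x) - (m - 1) * tau)
        )
        p = np.array(list(patterns.values()), dtype=float)
        p /= p.sum()
        H = -(p * np.log2(p)).sum()
        return H / np.log2(factorial(m)) if normalize else H

    rng = np.random.default_rng(0)
    print(permutation_entropy(rng.standard_normal(5000)))        # ~1: white noise
    print(permutation_entropy(np.sin(np.arange(5000) * 0.05)))   # << 1: regular signal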
1707.00180
Melanie Weber
Melanie Weber, Johannes Stelzer, Emil Saucan, Alexander Naitsat, Gabriele Lohmann and J\"urgen Jost
Curvature-based Methods for Brain Network Analysis
Under Review
null
null
null
q-bio.NC cs.DM cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The human brain forms functional networks on all spatial scales. Modern fMRI scanners can resolve functional brain data at high resolution, making it possible to study large-scale networks that relate to cognitive processes. The analysis of such networks forms a cornerstone of experimental neuroscience. Due to the immense size and complexity of the underlying data sets, efficient evaluation and visualization remain a challenge for data analysis. In this study, we combine recent advances in experimental neuroscience and applied mathematics to perform a mathematical characterization of complex networks constructed from fMRI data. We use task-related edge densities [Lohmann et al., 2016] for constructing networks of task-related changes in synchronization. This construction captures the dynamic formation of patterns of neuronal activity and therefore efficiently represents the connectivity structure between brain regions. Using geometric methods that utilize Forman-Ricci curvature as an edge-based network characteristic [Weber et al., 2017], we perform a mathematical analysis of the resulting complex networks. We motivate the use of edge-based characteristics to evaluate the network structure with geometric methods. The geometric features could aid in understanding the connectivity and interplay of brain regions in cognitive processes.
[ { "created": "Sat, 1 Jul 2017 17:55:28 GMT", "version": "v1" }, { "created": "Mon, 13 May 2019 16:03:12 GMT", "version": "v2" } ]
2019-05-14
[ [ "Weber", "Melanie", "" ], [ "Stelzer", "Johannes", "" ], [ "Saucan", "Emil", "" ], [ "Naitsat", "Alexander", "" ], [ "Lohmann", "Gabriele", "" ], [ "Jost", "Jürgen", "" ] ]
The human brain forms functional networks on all spatial scales. Modern fMRI scanners can resolve functional brain data at high resolution, making it possible to study large-scale networks that relate to cognitive processes. The analysis of such networks forms a cornerstone of experimental neuroscience. Due to the immense size and complexity of the underlying data sets, efficient evaluation and visualization remain a challenge for data analysis. In this study, we combine recent advances in experimental neuroscience and applied mathematics to perform a mathematical characterization of complex networks constructed from fMRI data. We use task-related edge densities [Lohmann et al., 2016] for constructing networks of task-related changes in synchronization. This construction captures the dynamic formation of patterns of neuronal activity and therefore efficiently represents the connectivity structure between brain regions. Using geometric methods that utilize Forman-Ricci curvature as an edge-based network characteristic [Weber et al., 2017], we perform a mathematical analysis of the resulting complex networks. We motivate the use of edge-based characteristics to evaluate the network structure with geometric methods. The geometric features could aid in understanding the connectivity and interplay of brain regions in cognitive processes.
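For an unweighted graph, the Forman-Ricci curvature referenced above reduces to a degree-based expression per edge, $F(u,v) = 4 - \deg(u) - \deg(v)$; the weighted form used in the cited works adds node and edge weights. A minimal unweighted sketch (the example graph is a stand-in, not fMRI data):

    import networkx as nx

    def forman_curvature(G):
        """Unweighted combinatorial Forman-Ricci curvature per edge:
        F(u, v) = 4 - deg(u) - deg(v). Strongly negative edges behave
        like 'bridges' between densely connected regions."""
        return {(u, v): 4 - G.degree(u) - G.degree(v) for u, v in G.edges()}

    G = nx.karate_club_graph()            # stand-in for a brain-derived network
    F = forman_curvature(G)
    most_negative = sorted(F, key=F.get)[:5]
    print("most curved (bridge-like) edges:", most_negative)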
2004.14767
Helmut Hlavacs
Helmut Hlavacs
How Often Should People be Tested for Corona to Avoid a Shutdown?
Please comment
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Based on the well-known SIR model, this paper develops a model for predicting the number of tests of asymptomatic persons that are necessary to push Reff below 1, thus suppressing an outbreak. The model considers R0, the time needed to obtain a test result, and the effect of population discipline. The outcome is a set of closed-form expressions for the number of daily tests.
[ { "created": "Mon, 27 Apr 2020 17:59:47 GMT", "version": "v1" } ]
2020-05-01
[ [ "Hlavacs", "Helmut", "" ] ]
Based on the well-known SIR model, this paper develops a model for predicting the number of tests of asymptomatic persons that are necessary to push Reff below 1, thus suppressing an outbreak. The model considers R0, the time needed to obtain a test result, and the effect of population discipline. The outcome is a set of closed-form expressions for the number of daily tests.
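The closed-form expressions themselves are not quoted in this entry, but the mechanism can be sketched under explicit assumptions (a back-of-envelope reading, not necessarily the paper's exact result). Suppose infectious individuals recover at rate $\gamma$ and are additionally found and isolated through testing at per-capita rate $\tau$; their mean infectious period then shrinks from $1/\gamma$ to $1/(\gamma+\tau)$, so

$$R_{\mathrm{eff}} = \frac{\beta}{\gamma + \tau} = R_0\,\frac{\gamma}{\gamma + \tau} < 1 \iff \tau > \gamma\,(R_0 - 1).$$

If asymptomatic persons are tested uniformly at random in a population of size $N$, each infectious person is tested at rate $\tau$ only when roughly $T \approx \tau N$ tests are administered per day; delays in obtaining results and imperfect compliance (the "population discipline" above) push the required $T$ higher.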
1504.06574
Heng Li
Heng Li
FermiKit: assembly-based variant calling for Illumina resequencing data
null
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Summary: FermiKit is a variant calling pipeline for Illumina data. It de novo assembles short reads and then maps the assembly against a reference genome to call SNPs, short insertions/deletions (INDELs) and structural variations (SVs). FermiKit takes about one day to assemble 30-fold human whole-genome data on a modern 16-core server with 85GB RAM at the peak, and calls variants in half an hour to an accuracy comparable to the current practice. FermiKit assembly is a reduced representation of raw data while retaining most of the original information. Availability and implementation: https://github.com/lh3/fermikit Contact: hengli@broadinstitute.org
[ { "created": "Fri, 24 Apr 2015 17:27:42 GMT", "version": "v1" } ]
2015-04-27
[ [ "Li", "Heng", "" ] ]
Summary: FermiKit is a variant calling pipeline for Illumina data. It de novo assembles short reads and then maps the assembly against a reference genome to call SNPs, short insertions/deletions (INDELs) and structural variations (SVs). FermiKit takes about one day to assemble 30-fold human whole-genome data on a modern 16-core server with 85GB RAM at the peak, and calls variants in half an hour to an accuracy comparable to the current practice. FermiKit assembly is a reduced representation of raw data while retaining most of the original information. Availability and implementation: https://github.com/lh3/fermikit Contact: hengli@broadinstitute.org