Dataset schema (field: type, observed range):

    id              string  (length 9 to 13)
    submitter       string  (length 4 to 48)
    authors         string  (length 4 to 9.62k)
    title           string  (length 4 to 343)
    comments        string  (length 2 to 480)
    journal-ref     string  (length 9 to 309)
    doi             string  (length 12 to 138)
    report-no       string  (277 distinct values)
    categories      string  (length 8 to 87)
    license         string  (9 distinct values)
    orig_abstract   string  (length 27 to 3.76k)
    versions        list    (length 1 to 15)
    update_date     string  (length 10, fixed)
    authors_parsed  list    (length 1 to 147)
    abstract        string  (length 24 to 3.75k)
1709.04314
Christina Roggatz
Christina C. Roggatz, Mark Lorch and David M. Benoit
The influence of solvent representation on nuclear shielding calculations of protonation states of small biological molecules
null
null
null
null
q-bio.BM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this study, we assess the influence of solvation on the accuracy and reliability of nuclear shielding calculations for amino acids in comparison to experimental data. We focus particularly on the performance of solvation methods for different protonation states, as biological molecules occur almost exclusively in aqueous solution and their protonation states change with pH. We identify significant shortcomings of current implicit solvent models and present a hybrid solvation approach that improves agreement with experimental data by taking into account direct interactions between amino acid protonation states and water molecules.
[ { "created": "Wed, 13 Sep 2017 13:21:46 GMT", "version": "v1" } ]
2017-09-14
[ [ "Roggatz", "Christina C.", "" ], [ "Lorch", "Mark", "" ], [ "Benoit", "David M.", "" ] ]
In this study, we assess the influence of solvation on the accuracy and reliability of nuclear shielding calculations for amino acids in comparison to experimental data. We focus particularly on the performance of solvation methods for different protonation states, as biological molecules occur almost exclusively in aqueous solution and their protonation states change with pH. We identify significant shortcomings of current implicit solvent models and present a hybrid solvation approach that improves agreement with experimental data by taking into account direct interactions between amino acid protonation states and water molecules.
2101.11444
Pedro L. de Andres
Pedro L. de Andres, Lucia de Andres-Bragado, Linard D. Hoessly
Monitoring and Forecasting COVID-19: Statistical Heuristic Regression, Susceptible-Infected-Removed model and, Spatial Stochastics
33 pages, 14 figures, 3 tables
Front. Appl. Math. Stat. 21 May 2021
10.3389/fams.2021.650716
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-sa/4.0/
The COVID-19 pandemic has had devastating effects on human lives worldwide, highlighting the need for tools to predict its development. The dynamics of such public-health threats can often be efficiently analysed through simple models that help to make timely, quantitative policy decisions. We benchmark a minimal version of the Susceptible-Infected-Removed (SIR) model for infectious diseases, coupled with a simple least-squares Statistical Heuristic Regression (SHR) based on a lognormal distribution. We derive the three free parameters of each model in several cases and test them against the amount of data needed to achieve accurate predictions. The SHR model is accurate to approximately +/- 2% about 20 days past the second inflexion point in the daily curve of cases, while the SIR model reaches similar accuracy a fortnight earlier. All the analyzed cases demonstrate the utility of the SHR and SIR approximants as tools to forecast the evolution of the disease. Finally, we have studied simulated stochastic individual-based SIR dynamics, which yield a detailed spatial and temporal view of the disease that cannot be given by SIR or SHR methods.
[ { "created": "Tue, 26 Jan 2021 16:51:34 GMT", "version": "v1" }, { "created": "Thu, 3 Jun 2021 05:38:28 GMT", "version": "v2" } ]
2021-06-04
[ [ "de Andres", "Pedro L.", "" ], [ "de Andres-Bragado", "Lucia", "" ], [ "Hoessly", "Linard D.", "" ] ]
The COVID-19 pandemic has had devastating effects on human lives worldwide, highlighting the need for tools to predict its development. The dynamics of such public-health threats can often be efficiently analysed through simple models that help to make timely, quantitative policy decisions. We benchmark a minimal version of the Susceptible-Infected-Removed (SIR) model for infectious diseases, coupled with a simple least-squares Statistical Heuristic Regression (SHR) based on a lognormal distribution. We derive the three free parameters of each model in several cases and test them against the amount of data needed to achieve accurate predictions. The SHR model is accurate to approximately +/- 2% about 20 days past the second inflexion point in the daily curve of cases, while the SIR model reaches similar accuracy a fortnight earlier. All the analyzed cases demonstrate the utility of the SHR and SIR approximants as tools to forecast the evolution of the disease. Finally, we have studied simulated stochastic individual-based SIR dynamics, which yield a detailed spatial and temporal view of the disease that cannot be given by SIR or SHR methods.
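A minimal sketch of the two 3-parameter fits described above, on synthetic data; this is an editor's illustration, not the authors' code, and the population size and all parameter values are assumptions:

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit
from scipy.stats import lognorm

t = np.arange(1.0, 121.0)                     # days since outbreak start
rng = np.random.default_rng(0)
daily = rng.poisson(5000 * lognorm.pdf(t, s=0.5, scale=45))  # synthetic cases

# SHR: a scaled lognormal with three free parameters (A, s, scale)
def shr(t, A, s, scale):
    return A * lognorm.pdf(t, s=s, scale=scale)

p_shr, _ = curve_fit(shr, t, daily, p0=(1000.0, 1.0, 30.0))

# SIR: three free parameters (beta, gamma, i0); daily cases ~ incidence
def sir(t, beta, gamma, i0, N=1e6):
    def rhs(y, tt):
        S, I, R = y
        return [-beta*S*I/N, beta*S*I/N - gamma*I, gamma*I]
    sol = odeint(rhs, [N*(1 - i0), N*i0, 0.0], t)
    return beta * sol[:, 0] * sol[:, 1] / N

p_sir, _ = curve_fit(sir, t, daily, p0=(0.3, 0.1, 1e-5),
                     bounds=([0, 0, 0], [2, 1, 1e-2]))
print("SHR (A, s, scale):", p_shr)
print("SIR (beta, gamma, i0):", p_sir)
```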
2201.07926
Evan Johnson
Evan Johnson and Alan Hastings
Resolving conceptual issues in Modern Coexistence Theory
48 pages, 9 figures, 6 tables
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
In this paper, we discuss the conceptual underpinnings of Modern Coexistence Theory (MCT), a quantitative framework for understanding ecological coexistence. In order to use MCT to infer how species are coexisting, one must relate a complex model (which simulates coexistence in the real world) to simple models in which previously proposed explanations for coexistence have been codified. This can be accomplished in three steps: 1) relating the construct of coexistence to invasion growth rates, 2) mathematically partitioning the invasion growth rates into coexistence mechanisms (i.e., classes of explanations for coexistence), and 3) relating coexistence mechanisms to simple explanations for coexistence. Previous research has primarily focused on step 2. Here, we discuss the other crucial steps and their implications for inferring the mechanisms of coexistence in real communities. Our discussion of step 3 -- relating coexistence mechanisms to simple explanations for coexistence -- serves as a heuristic guide for hypothesizing about the causes of coexistence in new models, but it also addresses misconceptions about coexistence mechanisms. For example, the storage effect has little to do with bet-hedging or "storage" via a robust life-history stage; relative nonlinearity is more likely to promote coexistence than originally thought; and fitness-density covariance is an amalgam of a large number of previously proposed explanations for coexistence (e.g., the competition-colonization trade-off, heteromyopia, spatially-varying resource supply ratios). Additionally, we review a number of topics in MCT, including the role of "scaling factors"; whether coexistence mechanisms are approximations; whether the magnitude or sign of invasion growth rates matters more; whether Hutchinson solved the paradox of the plankton; the scale-dependence of coexistence mechanisms; and much more.
[ { "created": "Thu, 20 Jan 2022 00:26:23 GMT", "version": "v1" } ]
2022-01-21
[ [ "Johnson", "Evan", "" ], [ "Hastings", "Alan", "" ] ]
In this paper, we discuss the conceptual underpinnings of Modern Coexistence Theory (MCT), a quantitative framework for understanding ecological coexistence. In order to use MCT to infer how species are coexisting, one must relate a complex model (which simulates coexistence in the real world) to simple models in which previously proposed explanations for coexistence have been codified. This can be accomplished in three steps: 1) relating the construct of coexistence to invasion growth rates, 2) mathematically partitioning the invasion growth rates into coexistence mechanisms (i.e., classes of explanations for coexistence), and 3) relating coexistence mechanisms to simple explanations for coexistence. Previous research has primarily focused on step 2. Here, we discuss the other crucial steps and their implications for inferring the mechanisms of coexistence in real communities. Our discussion of step 3 -- relating coexistence mechanisms to simple explanations for coexistence -- serves as a heuristic guide for hypothesizing about the causes of coexistence in new models, but it also addresses misconceptions about coexistence mechanisms. For example, the storage effect has little to do with bet-hedging or "storage" via a robust life-history stage; relative nonlinearity is more likely to promote coexistence than originally thought; and fitness-density covariance is an amalgam of a large number of previously proposed explanations for coexistence (e.g., the competition-colonization trade-off, heteromyopia, spatially-varying resource supply ratios). Additionally, we review a number of topics in MCT, including the role of "scaling factors"; whether coexistence mechanisms are approximations; whether the magnitude or sign of invasion growth rates matters more; whether Hutchinson solved the paradox of the plankton; the scale-dependence of coexistence mechanisms; and much more.
1504.05641
Sarthok Sircar
Sarthok Sircar and Anthony J. Roberts
Surface deformation and shear flow in ligand mediated cell adhesion
23 pages, 11 figures
null
null
null
q-bio.CB cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a single, unified, multi-scale model to study the attachment/detachment dynamics of two deforming, nearly spherical cells coated with binding ligands and subject to a slow, homogeneous shear flow in a viscous fluid medium. The binding ligands on the surface of the cells experience attractive and repulsive forces in an ionic medium and exhibit finite resistance to rotation via bond tilting. The macroscale drag forces and couples describing the fluid flow inside the small separation gap between the cells are calculated using a combination of lubrication theory and previously published numerical results. For a select range of material and fluid parameters, a hysteretic transition of the sticking probability curves between the adhesion and fragmentation domains is attributed to a nonlinear relation between the total microscale binding forces and the separation gap between the cells. We show that adhesion is favored by highly ionic fluids, increased deformability of the cells, elastic binders and a higher fluid shear rate (up to a critical value). Continuation of the limit points predicts a bistable region, indicating an abrupt switching between the adhesion and fragmentation regimes at critical shear rates and suggesting that adhesion of two deformable surfaces in shearing fluids may play a significant dynamical role in some cell adhesion applications.
[ { "created": "Wed, 22 Apr 2015 03:21:27 GMT", "version": "v1" } ]
2015-04-23
[ [ "Sircar", "Sarthok", "" ], [ "Roberts", "Anthony J.", "" ] ]
We present a single, unified, multi-scale model to study the attachment/detachment dynamics of two deforming, nearly spherical cells coated with binding ligands and subject to a slow, homogeneous shear flow in a viscous fluid medium. The binding ligands on the surface of the cells experience attractive and repulsive forces in an ionic medium and exhibit finite resistance to rotation via bond tilting. The macroscale drag forces and couples describing the fluid flow inside the small separation gap between the cells are calculated using a combination of lubrication theory and previously published numerical results. For a select range of material and fluid parameters, a hysteretic transition of the sticking probability curves between the adhesion and fragmentation domains is attributed to a nonlinear relation between the total microscale binding forces and the separation gap between the cells. We show that adhesion is favored by highly ionic fluids, increased deformability of the cells, elastic binders and a higher fluid shear rate (up to a critical value). Continuation of the limit points predicts a bistable region, indicating an abrupt switching between the adhesion and fragmentation regimes at critical shear rates and suggesting that adhesion of two deformable surfaces in shearing fluids may play a significant dynamical role in some cell adhesion applications.
2006.12926
Radu Dumitru Stochi\c{t}oiu
Radu D. Stochi\c{t}oiu, Marian Petrica, Traian Rebedea, Ionel Popescu, Marius Leordeanu
A self-supervised neural-analytic method to predict the evolution of COVID-19 in Romania
update author list
null
null
null
q-bio.PE cs.LG
http://creativecommons.org/licenses/by/4.0/
Analysing and understanding the transmission and evolution of the COVID-19 pandemic is essential to design the best social and medical policies, foresee their outcomes and deal with all the subsequent socio-economic effects. We address this important problem from a computational and machine learning perspective. More specifically, we want to statistically estimate all the relevant parameters for the new coronavirus COVID-19, such as the reproduction number, fatality rate or length of infectiousness period, based on Romanian patients, as well as be able to predict future outcomes. This endeavor is important, since it is well known that these factors vary across the globe, and might be dependent on many causes, including social, medical, age and genetic factors. We use a recently published improved version of SEIR, which is the classic, established model for infectious diseases. We want to infer all the parameters of the model, which govern the evolution of the pandemic in Romania, based on the only reliable, true measurement, which is the number of deaths. Once the model parameters are estimated, we are able to predict all the other relevant measures, such as the number of exposed and infectious people. To this end, we propose a self-supervised approach to train a deep convolutional network to guess the correct set of Modified-SEIR model parameters, given the observed number of daily fatalities. Then, we refine the solution with a stochastic coordinate descent approach. We compare our deep learning optimization scheme with the classic grid search approach and show great improvement in both computational time and prediction accuracy. We find an optimistic result for the case fatality rate in Romania, which may be around 0.3%, and we also demonstrate that our model is able to correctly predict the number of daily fatalities for up to three weeks in the future.
[ { "created": "Tue, 23 Jun 2020 12:00:04 GMT", "version": "v1" }, { "created": "Sat, 5 Sep 2020 12:10:27 GMT", "version": "v2" } ]
2020-09-08
[ [ "Stochiţoiu", "Radu D.", "" ], [ "Petrica", "Marian", "" ], [ "Rebedea", "Traian", "" ], [ "Popescu", "Ionel", "" ], [ "Leordeanu", "Marius", "" ] ]
Analysing and understanding the transmission and evolution of the COVID-19 pandemic is essential to design the best social and medical policies, foresee their outcomes and deal with all the subsequent socio-economic effects. We address this important problem from a computational and machine learning perspective. More specifically, we want to statistically estimate all the relevant parameters for the new coronavirus COVID-19, such as the reproduction number, fatality rate or length of infectiousness period, based on Romanian patients, as well as be able to predict future outcomes. This endeavor is important, since it is well known that these factors vary across the globe, and might be dependent on many causes, including social, medical, age and genetic factors. We use a recently published improved version of SEIR, which is the classic, established model for infectious diseases. We want to infer all the parameters of the model, which govern the evolution of the pandemic in Romania, based on the only reliable, true measurement, which is the number of deaths. Once the model parameters are estimated, we are able to predict all the other relevant measures, such as the number of exposed and infectious people. To this end, we propose a self-supervised approach to train a deep convolutional network to guess the correct set of Modified-SEIR model parameters, given the observed number of daily fatalities. Then, we refine the solution with a stochastic coordinate descent approach. We compare our deep learning optimization scheme with the classic grid search approach and show great improvement in both computational time and prediction accuracy. We find an optimistic result for the case fatality rate in Romania, which may be around 0.3%, and we also demonstrate that our model is able to correctly predict the number of daily fatalities for up to three weeks in the future.
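For contrast with the paper's deep-learning scheme, here is a hedged sketch of the classic grid-search baseline it is compared against, using a plain SEIR model rather than the authors' Modified-SEIR; all rates, the population size and the death model are illustrative assumptions:

```python
import numpy as np
from scipy.integrate import odeint

def seir_daily_deaths(beta, ifr, sigma=1/5.2, gamma=1/10, N=2e7, days=120):
    """Plain SEIR; daily deaths modelled as ifr times the removal flux."""
    def rhs(y, t):
        S, E, I, R = y
        return [-beta*S*I/N, beta*S*I/N - sigma*E, sigma*E - gamma*I, gamma*I]
    sol = odeint(rhs, [N - 100, 100, 0, 0], np.arange(days))
    return ifr * gamma * sol[:, 2]

observed = seir_daily_deaths(0.35, 0.003)     # synthetic "ground truth"

# Coarse grid search over (beta, ifr), scored by squared error on fatalities
grid = [(b, f) for b in np.linspace(0.2, 0.5, 16)
               for f in np.linspace(0.001, 0.01, 16)]
best = min(grid, key=lambda p: np.sum((seir_daily_deaths(*p) - observed)**2))
print("recovered (beta, ifr):", best)
```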
1807.00192
Keisuke Ishihara
Keisuke Ishihara and Elly M. Tanaka
Spontaneous symmetry breaking and pattern formation of organoids
11 pages, 1 figure, review article
Current Opinion in Systems Biology, 2018
10.1016/j.coisb.2018.06.002
11:123--128
q-bio.TO physics.bio-ph
http://creativecommons.org/licenses/by-nc-sa/4.0/
Recent 3D organ reconstitution studies show that a group of stem cells can establish a body axis and acquire different fates in a spatially organized manner. How such symmetry breaking happens in the absence of external spatial cues, and how developmental circuits are built to permit this is largely unknown. Here, we review spontaneous symmetry breaking phenomena in organoids, and hypothesize underlying patterning mechanisms that involve interacting diffusible species. Recent theoretical advances offer new directions beyond the prototypical Turing model. Experiments guided by theory will allow us to predict and control organoid self-organization.
[ { "created": "Sat, 30 Jun 2018 15:25:29 GMT", "version": "v1" }, { "created": "Tue, 28 Aug 2018 19:56:58 GMT", "version": "v2" } ]
2018-10-29
[ [ "Ishihara", "Keisuke", "" ], [ "Tanaka", "Elly M.", "" ] ]
Recent 3D organ reconstitution studies show that a group of stem cells can establish a body axis and acquire different fates in a spatially organized manner. How such symmetry breaking happens in the absence of external spatial cues, and how developmental circuits are built to permit this is largely unknown. Here, we review spontaneous symmetry breaking phenomena in organoids, and hypothesize underlying patterning mechanisms that involve interacting diffusible species. Recent theoretical advances offer new directions beyond the prototypical Turing model. Experiments guided by theory will allow us to predict and control organoid self-organization.
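As a hedged illustration of the Turing-type mechanisms the review discusses, here is a 1D Gray-Scott reaction-diffusion system breaking symmetry from a near-uniform state; the Gray-Scott system and all parameter values are stand-ins chosen by the editor, not taken from the paper:

```python
import numpy as np

n, du, dv, F, k, dt = 256, 0.16, 0.08, 0.035, 0.065, 1.0
rng = np.random.default_rng(1)
u = np.ones(n)
v = 0.01 * rng.random(n)            # near-uniform state with tiny noise
u[n//2-10:n//2+10] = 0.5            # small localized kick triggers the
v[n//2-10:n//2+10] += 0.25          # instability of the uniform state

def lap(x):                         # periodic 1D Laplacian, dx = 1
    return np.roll(x, 1) + np.roll(x, -1) - 2 * x

for _ in range(20000):
    uvv = u * v * v
    u += dt * (du * lap(u) - uvv + F * (1 - u))
    v += dt * (dv * lap(v) + uvv - (F + k) * v)

print("pattern amplitude (std of v):", v.std())  # > 0: the uniform state broke
```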
1702.03183
Diego Fasoli
Diego Fasoli, Anna Cattani, Stefano Panzeri
Pattern Storage, Bifurcations and Higher-Order Correlation Structure of an Exactly Solvable Asymmetric Neural Network Model
Main Text: 25 pages, 10 figures. Supplementary Materials: 25 pages, 3 figures. In version 1, we plotted Fig. 7 with an inhibitory current that was different from that stated in the figure caption. We fixed this in version 2, by replotting Fig. 7 with the correct value of the current. Results unchanged
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Exactly solvable neural network models with asymmetric weights are rare, and exact solutions are available only in some mean-field approaches. In this article we find exact analytical solutions of an asymmetric spin-glass-like model of arbitrary size and we perform a complete study of its dynamical and statistical properties. The network has discrete-time evolution equations, binary firing rates and can be driven by noise with any distribution. We find analytical expressions of the conditional and stationary joint probability distributions of the membrane potentials and the firing rates. The conditional probability distribution of the firing rates allows us to introduce a new learning rule to store safely, under the presence of noise, point and cyclic attractors, with important applications in the field of content-addressable memories. Furthermore, we study the neuronal dynamics in terms of the bifurcation structure of the network. We derive analytically examples of the codimension one and codimension two bifurcation diagrams of the network, which describe how the neuronal dynamics changes with the external stimuli. In particular, we find that the network may undergo transitions among multistable regimes, oscillatory behavior elicited by asymmetric synaptic connections, and various forms of spontaneous symmetry-breaking. On the other hand, the joint probability distributions allow us to calculate analytically the higher-order correlation structure of the network, which reveals neuronal regimes where, statistically, the membrane potentials and the firing rates are either synchronous or asynchronous. Our results are valid for networks composed of an arbitrary number of neurons, but for completeness we also derive the network equations in the mean-field limit and we study analytically their local bifurcations. All the analytical results are extensively validated by numerical simulations.
[ { "created": "Fri, 10 Feb 2017 14:33:50 GMT", "version": "v1" }, { "created": "Wed, 15 Feb 2017 18:56:49 GMT", "version": "v2" } ]
2017-02-16
[ [ "Fasoli", "Diego", "" ], [ "Cattani", "Anna", "" ], [ "Panzeri", "Stefano", "" ] ]
Exactly solvable neural network models with asymmetric weights are rare, and exact solutions are available only in some mean-field approaches. In this article we find exact analytical solutions of an asymmetric spin-glass-like model of arbitrary size and we perform a complete study of its dynamical and statistical properties. The network has discrete-time evolution equations, binary firing rates and can be driven by noise with any distribution. We find analytical expressions of the conditional and stationary joint probability distributions of the membrane potentials and the firing rates. The conditional probability distribution of the firing rates allows us to introduce a new learning rule to store safely, under the presence of noise, point and cyclic attractors, with important applications in the field of content-addressable memories. Furthermore, we study the neuronal dynamics in terms of the bifurcation structure of the network. We derive analytically examples of the codimension one and codimension two bifurcation diagrams of the network, which describe how the neuronal dynamics changes with the external stimuli. In particular, we find that the network may undergo transitions among multistable regimes, oscillatory behavior elicited by asymmetric synaptic connections, and various forms of spontaneous symmetry-breaking. On the other hand, the joint probability distributions allow us to calculate analytically the higher-order correlation structure of the network, which reveals neuronal regimes where, statistically, the membrane potentials and the firing rates are either synchronous or asynchronous. Our results are valid for networks composed of an arbitrary number of neurons, but for completeness we also derive the network equations in the mean-field limit and we study analytically their local bifurcations. All the analytical results are extensively validated by numerical simulations.
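A minimal simulation sketch of the model class described above (discrete-time evolution, binary firing rates, asymmetric weights, additive noise), estimating stationary firing statistics empirically where the paper derives them analytically; the weights, noise level and threshold are the editor's illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 8, 100_000
J = rng.normal(0, 1/np.sqrt(N), (N, N))      # asymmetric weights: J != J.T
theta, I_ext, noise = 0.0, 0.1, 0.3

nu = rng.integers(0, 2, N).astype(float)     # binary firing rates in {0, 1}
counts = np.zeros(N)
for t in range(T):
    V = J @ nu + I_ext + rng.normal(0, noise, N)  # membrane potentials
    nu = (V > theta).astype(float)                # Heaviside activation
    counts += nu

print("empirical stationary mean firing rates:", counts / T)
```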
2109.09357
Daniel Cooney
Daniel B. Cooney, Fernando W. Rossine, Dylan H. Morris, and Simon A. Levin
A PDE Model for Protocell Evolution and the Origin of Chromosomes via Multilevel Selection
75 pages, 22 figures
null
null
null
q-bio.PE math.AP math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The evolution of complex cellular life involved two major transitions: the encapsulation of self-replicating genetic entities into cellular units and the aggregation of individual genes into a collectively replicating genome. In this paper, we formulate a minimal model of the evolution of proto-chromosomes within protocells. We model a simple protocell composed of two types of genes: a "fast gene" with an advantage for gene-level self-replication and a "slow gene" that replicates more slowly at the gene level, but which confers an advantage for protocell-level reproduction. Protocell-level replication capacity depends on cellular composition of fast and slow genes. We use a partial differential equation to describe how the composition of genes within protocells evolves over time under within-cell and between-cell competition. We find that the gene-level advantage of fast replicators casts a long shadow on the multilevel dynamics of protocell evolution: no level of between-protocell competition can produce coexistence of the fast and slow replicators when the two genes are equally needed for protocell-level reproduction. By introducing a "dimer replicator" consisting of a linked pair of the slow and fast genes, we show analytically that coexistence between the two genes can be promoted in pairwise multilevel competition between fast and dimer replicators, and provide numerical evidence for coexistence in trimorphic competition between fast, slow, and dimer replicators. Our results suggest that dimerization, or the formation of a simple chromosome-like dimer replicator, can help to overcome the shadow of lower-level selection and work in concert with deterministic multilevel selection to allow for the coexistence of two genes that are complementary at the protocell-level but compete at the level of individual gene-level replication.
[ { "created": "Mon, 20 Sep 2021 08:14:28 GMT", "version": "v1" } ]
2021-09-21
[ [ "Cooney", "Daniel B.", "" ], [ "Rossine", "Fernando W.", "" ], [ "Morris", "Dylan H.", "" ], [ "Levin", "Simon A.", "" ] ]
The evolution of complex cellular life involved two major transitions: the encapsulation of self-replicating genetic entities into cellular units and the aggregation of individual genes into a collectively replicating genome. In this paper, we formulate a minimal model of the evolution of proto-chromosomes within protocells. We model a simple protocell composed of two types of genes: a "fast gene" with an advantage for gene-level self-replication and a "slow gene" that replicates more slowly at the gene level, but which confers an advantage for protocell-level reproduction. Protocell-level replication capacity depends on cellular composition of fast and slow genes. We use a partial differential equation to describe how the composition of genes within protocells evolves over time under within-cell and between-cell competition. We find that the gene-level advantage of fast replicators casts a long shadow on the multilevel dynamics of protocell evolution: no level of between-protocell competition can produce coexistence of the fast and slow replicators when the two genes are equally needed for protocell-level reproduction. By introducing a "dimer replicator" consisting of a linked pair of the slow and fast genes, we show analytically that coexistence between the two genes can be promoted in pairwise multilevel competition between fast and dimer replicators, and provide numerical evidence for coexistence in trimorphic competition between fast, slow, and dimer replicators. Our results suggest that dimerization, or the formation of a simple chromosome-like dimer replicator, can help to overcome the shadow of lower-level selection and work in concert with deterministic multilevel selection to allow for the coexistence of two genes that are complementary at the protocell-level but compete at the level of individual gene-level replication.
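A hedged numerical sketch of a multilevel-selection PDE of the general form discussed above: f(x,t) is the density of protocells with slow-gene fraction x, gene-level selection advects mass toward x = 0, and protocell-level selection rewards compositions with a high reproduction rate G(x). The choice G(x) = x(1-x) ("both genes needed") and all numerical parameters are the editor's assumptions, not the paper's exact model:

```python
import numpy as np

n = 200
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx                # slow-gene fraction in (0, 1)
f = np.ones(n)                               # initial density of protocells
G = x * (1 - x)                              # assumed cell-level birth rate
lam, dt, steps = 10.0, 4e-4, 100_000         # lam: cell-level selection strength

for _ in range(steps):
    flux = x * (1 - x) * f                   # gene-level drift toward x = 0
    adv = (np.append(flux[1:], 0.0) - flux) / dx   # upwind, leftward drift
    mean_G = np.sum(G * f) * dx
    f = np.maximum(f + dt * (adv + lam * f * (G - mean_G)), 0.0)
    f /= f.sum() * dx                        # renormalise to a density

print("mean slow-gene fraction: %.3f" % (np.sum(x * f) * dx))
```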
1711.07205
Erik Rybakken
Erik Rybakken, Nils Baas, Benjamin Dunn
Decoding of neural data using cohomological feature extraction
17 pages. This is the author's final version, and the article has been accepted for publication in Neural Computation
null
null
null
q-bio.NC math.AT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a novel data-driven approach to discover and decode features in the neural code coming from large population neural recordings with minimal assumptions, using cohomological feature extraction. We apply our approach to neural recordings of mice moving freely in a box, where we find a circular feature. We then observe that the decoded value corresponds well to the head direction of the mouse. Thus we capture head direction cells and decode the head direction from the neural population activity without having to process the behaviour of the mouse. Interestingly, the decoded values convey more information about the neural activity than the tracked head direction does, with differences that have some spatial organization. Finally, we note that the residual population activity, after the head direction has been accounted for, retains some low-dimensional structure which is correlated with the speed of the mouse.
[ { "created": "Mon, 20 Nov 2017 08:56:10 GMT", "version": "v1" }, { "created": "Tue, 21 Nov 2017 10:39:12 GMT", "version": "v2" }, { "created": "Wed, 22 Nov 2017 09:59:06 GMT", "version": "v3" }, { "created": "Thu, 8 Mar 2018 21:11:42 GMT", "version": "v4" }, { "cr...
2018-09-11
[ [ "Rybakken", "Erik", "" ], [ "Baas", "Nils", "" ], [ "Dunn", "Benjamin", "" ] ]
We introduce a novel data-driven approach to discover and decode features in the neural code coming from large population neural recordings with minimal assumptions, using cohomological feature extraction. We apply our approach to neural recordings of mice moving freely in a box, where we find a circular feature. We then observe that the decoded value corresponds well to the head direction of the mouse. Thus we capture head direction cells and decode the head direction from the neural population activity without having to process the behaviour of the mouse. Interestingly, the decoded values convey more information about the neural activity than the tracked head direction does, with differences that have some spatial organization. Finally, we note that the residual population activity, after the head direction has been accounted for, retains some low-dimensional structure which is correlated with the speed of the mouse.
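A hedged sketch of the cohomological step on toy data, assuming the `ripser` Python package; the authors' full pipeline additionally turns the leading cocycle into a circular coordinate:

```python
import numpy as np
from ripser import ripser

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2*np.pi, 300)         # stand-in for a circular variable
X = np.c_[np.cos(theta), np.sin(theta)] + 0.1 * rng.normal(size=(300, 2))

# Persistent cohomology over a prime field, keeping representative cocycles
res = ripser(X, maxdim=1, coeff=47, do_cocycles=True)
h1 = res["dgms"][1]
i = np.argmax(h1[:, 1] - h1[:, 0])           # most persistent H^1 class
print("circular feature: birth=%.2f, death=%.2f" % (h1[i, 0], h1[i, 1]))
# res["cocycles"][1][i] is the representative cocycle that the paper's method
# would smooth into a circular coordinate for decoding head direction.
```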
0709.1750
Steven N. Evans
Aubrey Clayton and Steven N. Evans
Mutation-selection balance with recombination: convergence to equilibrium for polynomial selection costs
21 pages
null
null
null
q-bio.PE math.DS math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study a continuous-time dynamical system that models the evolving distribution of genotypes in an infinite population where genomes may have infinitely many or even a continuum of loci, mutations accumulate along lineages without back-mutation, added mutations reduce fitness, and recombination occurs on a faster time scale than mutation and selection. Some features of the model, such as existence and uniqueness of solutions and convergence to the dynamical system of an approximating sequence of discrete time models, were presented in earlier work by Evans, Steinsaltz, and Wachter for quite general selective costs. Here we study a special case where the selective cost of a genotype with a given accumulation of ancestral mutations from a wild type ancestor is a sum of costs attributable to each individual mutation plus successive interaction contributions from each $k$-tuple of mutations for $k$ up to some finite ``degree''. Using ideas from complex chemical reaction networks and a novel Lyapunov function, we establish that the phenomenon of mutation-selection balance occurs for such selection costs under mild conditions. That is, we show that the dynamical system has a unique equilibrium and that it converges to this equilibrium from all initial conditions.
[ { "created": "Wed, 12 Sep 2007 03:25:36 GMT", "version": "v1" }, { "created": "Tue, 3 Feb 2009 19:07:10 GMT", "version": "v2" } ]
2009-02-03
[ [ "Clayton", "Aubrey", "" ], [ "Evans", "Steven N.", "" ] ]
We study a continuous-time dynamical system that models the evolving distribution of genotypes in an infinite population where genomes may have infinitely many or even a continuum of loci, mutations accumulate along lineages without back-mutation, added mutations reduce fitness, and recombination occurs on a faster time scale than mutation and selection. Some features of the model, such as existence and uniqueness of solutions and convergence to the dynamical system of an approximating sequence of discrete time models, were presented in earlier work by Evans, Steinsaltz, and Wachter for quite general selective costs. Here we study a special case where the selective cost of a genotype with a given accumulation of ancestral mutations from a wild type ancestor is a sum of costs attributable to each individual mutation plus successive interaction contributions from each $k$-tuple of mutations for $k$ up to some finite ``degree''. Using ideas from complex chemical reaction networks and a novel Lyapunov function, we establish that the phenomenon of mutation-selection balance occurs for such selection costs under mild conditions. That is, we show that the dynamical system has a unique equilibrium and that it converges to this equilibrium from all initial conditions.
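A toy, discrete-time caricature of mutation-selection balance in this spirit: mutations accumulate without back-mutation, each costs fitness multiplicatively (no interaction terms), and the genotype distribution converges to a unique equilibrium; the truncation K, cost s and rate mu are illustrative:

```python
import numpy as np
from scipy.stats import poisson

K, s, mu = 200, 0.02, 0.5          # truncation, cost per mutation, mutation rate
fitness = (1 - s) ** np.arange(K)  # multiplicative cost, no interaction terms
M = np.array([[poisson.pmf(j - i, mu) for i in range(K)] for j in range(K)])

p = np.zeros(K); p[0] = 1.0        # start as the unmutated wild type
for _ in range(2000):
    p = fitness * p; p /= p.sum()  # selection
    p = M @ p;       p /= p.sum()  # mutation accumulation (no back-mutation)

print("equilibrium mean mutation load:", (np.arange(K) * p).sum())  # ~ mu/s
```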
2206.06663
Lambert Leong
Lambert T. Leong, Michael C. Wong, Yannik Glaser, Thomas Wolfgruber, Steven B. Heymsfield, Peter Sadowski, John A. Shepherd
Quantitative Imaging Principles Improves Medical Image Learning
null
null
null
null
q-bio.QM cs.CV eess.IV
http://creativecommons.org/licenses/by-sa/4.0/
Fundamental differences between natural and medical images have recently favored the use of self-supervised learning (SSL) over ImageNet transfer learning for medical image applications. These differences are primarily due to the imaging modality: medical images utilize a wide range of physics-based techniques, while natural images are captured using only visible light. While many have demonstrated that SSL on medical images results in better downstream task performance, our work suggests that more performance can be gained. The scientific principles used to acquire medical images are not often considered when constructing learning problems. For this reason, we propose incorporating quantitative imaging principles during generative SSL to improve image quality and quantitative biological accuracy. We show that this training schema results in better starting states for downstream supervised training on limited data. Our model also generates images that validate on clinical quantitative analysis software.
[ { "created": "Tue, 14 Jun 2022 07:51:49 GMT", "version": "v1" }, { "created": "Fri, 1 Jul 2022 22:10:11 GMT", "version": "v2" }, { "created": "Mon, 11 Jul 2022 22:12:34 GMT", "version": "v3" } ]
2022-07-13
[ [ "Leong", "Lambert T.", "" ], [ "Wong", "Michael C.", "" ], [ "Glaser", "Yannik", "" ], [ "Wolfgruber", "Thomas", "" ], [ "Heymsfield", "Steven B.", "" ], [ "Sadowski", "Peter", "" ], [ "Shepherd", "John A.", ...
Fundamental differences between natural and medical images have recently favored the use of self-supervised learning (SSL) over ImageNet transfer learning for medical image applications. These differences are primarily due to the imaging modality: medical images utilize a wide range of physics-based techniques, while natural images are captured using only visible light. While many have demonstrated that SSL on medical images results in better downstream task performance, our work suggests that more performance can be gained. The scientific principles used to acquire medical images are not often considered when constructing learning problems. For this reason, we propose incorporating quantitative imaging principles during generative SSL to improve image quality and quantitative biological accuracy. We show that this training schema results in better starting states for downstream supervised training on limited data. Our model also generates images that validate on clinical quantitative analysis software.
1803.01496
Kurnia Susvitasari
Kurnia Susvitasari
Stochastic Model of SIR Epidemic Modelling
9 pages, 14 figures
null
null
null
q-bio.PE stat.AP
http://creativecommons.org/licenses/by-nc-sa/4.0/
The threshold theorem is probably the most important development in mathematical epidemic modelling. Unfortunately, some models may not behave according to the threshold. In this paper, we focus on the final outcome of the SIR model with demography. The behaviour of the model, as approached by deterministic and stochastic formulations, is investigated mainly using simulations. Furthermore, we investigate the dynamics of susceptibles in the population in the absence of infectives. We show that the deterministic and stochastic models produce similar results when $R_0 \leq 1$, that is, in the disease-free stage of the epidemic, but that when $R_0 > 1$ the two approaches lead to different interpretations.
[ { "created": "Mon, 5 Mar 2018 04:59:15 GMT", "version": "v1" } ]
2018-03-06
[ [ "Susvitasari", "Kurnia", "" ] ]
The threshold theorem is probably the most important development in mathematical epidemic modelling. Unfortunately, some models may not behave according to the threshold. In this paper, we focus on the final outcome of the SIR model with demography. The behaviour of the model, as approached by deterministic and stochastic formulations, is investigated mainly using simulations. Furthermore, we investigate the dynamics of susceptibles in the population in the absence of infectives. We show that the deterministic and stochastic models produce similar results when $R_0 \leq 1$, that is, in the disease-free stage of the epidemic, but that when $R_0 > 1$ the two approaches lead to different interpretations.
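A minimal sketch contrasting the stochastic and deterministic pictures: a Gillespie-style simulation of the SIR model (demography omitted for brevity; all rates are illustrative), showing small final sizes below threshold and large outbreaks above it:

```python
import numpy as np

def gillespie_sir_final_size(beta, gamma, N=1000, I0=5, seed=0):
    rng = np.random.default_rng(seed)
    S, I, R = N - I0, I0, 0
    while I > 0:
        infect, remove = beta * S * I / N, gamma * I
        if rng.random() < infect / (infect + remove):
            S, I = S - 1, I + 1            # infection event
        else:
            I, R = I - 1, R + 1            # removal event
    return R                               # final epidemic size

gamma = 0.1
for beta in (0.08, 0.3):                   # R0 = beta / gamma
    finals = [gillespie_sir_final_size(beta, gamma, seed=k) for k in range(200)]
    print(f"R0 = {beta/gamma:.1f}: mean final size {np.mean(finals):.1f}")
```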
2010.12660
Siavash Golkar
Siavash Golkar, David Lipshutz, Yanis Bahroun, Anirvan M. Sengupta, Dmitri B. Chklovskii
A simple normative network approximates local non-Hebbian learning in the cortex
Body and supplementary materials of NeurIPS 2020 paper. 19 pages, 7 figures
null
null
null
q-bio.NC cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To guide behavior, the brain extracts relevant features from high-dimensional data streamed by sensory organs. Neuroscience experiments demonstrate that the processing of sensory inputs by cortical neurons is modulated by instructive signals which provide context and task-relevant information. Here, adopting a normative approach, we model these instructive signals as supervisory inputs guiding the projection of the feedforward data. Mathematically, we start with a family of Reduced-Rank Regression (RRR) objective functions which include Reduced Rank (minimum) Mean Square Error (RRMSE) and Canonical Correlation Analysis (CCA), and derive novel offline and online optimization algorithms, which we call Bio-RRR. The online algorithms can be implemented by neural networks whose synaptic learning rules resemble calcium plateau potential dependent plasticity observed in the cortex. We detail how, in our model, the calcium plateau potential can be interpreted as a backpropagating error signal. We demonstrate that, despite relying exclusively on biologically plausible local learning rules, our algorithms perform competitively with existing implementations of RRMSE and CCA.
[ { "created": "Fri, 23 Oct 2020 20:49:44 GMT", "version": "v1" } ]
2020-10-27
[ [ "Golkar", "Siavash", "" ], [ "Lipshutz", "David", "" ], [ "Bahroun", "Yanis", "" ], [ "Sengupta", "Anirvan M.", "" ], [ "Chklovskii", "Dmitri B.", "" ] ]
To guide behavior, the brain extracts relevant features from high-dimensional data streamed by sensory organs. Neuroscience experiments demonstrate that the processing of sensory inputs by cortical neurons is modulated by instructive signals which provide context and task-relevant information. Here, adopting a normative approach, we model these instructive signals as supervisory inputs guiding the projection of the feedforward data. Mathematically, we start with a family of Reduced-Rank Regression (RRR) objective functions which include Reduced Rank (minimum) Mean Square Error (RRMSE) and Canonical Correlation Analysis (CCA), and derive novel offline and online optimization algorithms, which we call Bio-RRR. The online algorithms can be implemented by neural networks whose synaptic learning rules resemble calcium plateau potential dependent plasticity observed in the cortex. We detail how, in our model, the calcium plateau potential can be interpreted as a backpropagating error signal. We demonstrate that, despite relying exclusively on biologically plausible local learning rules, our algorithms perform competitively with existing implementations of RRMSE and CCA.
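A hedged sketch of the offline reduced-rank regression problem that the paper's online, biologically plausible algorithms solve: project the ordinary-least-squares map onto the top principal subspace of its fitted values; dimensions and noise level are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_x, n_y, r, T = 20, 10, 3, 5000
X = rng.normal(size=(n_x, T))
W_true = rng.normal(size=(n_y, r)) @ rng.normal(size=(r, n_x))   # rank-r map
Y = W_true @ X + 0.1 * rng.normal(size=(n_y, T))

W_ols = Y @ np.linalg.pinv(X)                # full-rank least squares
U, _, _ = np.linalg.svd(W_ols @ X)           # principal subspace of fitted values
W_rrr = U[:, :r] @ U[:, :r].T @ W_ols        # project onto the top-r subspace

print("OLS residual:", np.linalg.norm(Y - W_ols @ X))
print("rank-%d RRR residual:" % r, np.linalg.norm(Y - W_rrr @ X))
```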
q-bio/0611052
V. Breton
V. Breton (LPC-Clermont), N. Jacq (LPC-Clermont, CS-Si), M. Hofmann (SCAI)
Grid Added Value to Address Malaria
7 pages, 1 figure, 6th IEEE International Symposium on Cluster Computing and the Grid, Singapore, 16-19 may 2006, to appear in the proceedings
Dans IEEE Transactions on Information Technology in Biomedicine (12) - 6th IEEE International Symposium on Cluster Computing and the Grid, CCGrid06, Singapore (2006)
10.1109/TITB.2007.895930
null
q-bio.QM cs.DC
null
In this paper, we call for a distributed, internet-based collaboration to address one of the worst plagues of our present world: malaria. The spirit is a non-proprietary peer production of information-embedding goods, and we propose to use grid technology to enable such a worldwide, "open source"-like collaboration. The first step towards this vision was achieved during the summer on the EGEE grid infrastructure, where 46 million ligands were docked, for a total of 80 CPU years in 6 weeks, in the quest for new drugs.
[ { "created": "Fri, 17 Nov 2006 10:21:02 GMT", "version": "v1" } ]
2008-03-06
[ [ "Breton", "V.", "", "LPC-Clermont" ], [ "Jacq", "N.", "", "LPC-Clermont, CS-Si" ], [ "Hofmann", "M.", "", "SCAI" ] ]
In this paper, we call for a distributed, internet-based collaboration to address one of the worst plagues of our present world: malaria. The spirit is a non-proprietary peer production of information-embedding goods, and we propose to use grid technology to enable such a worldwide, "open source"-like collaboration. The first step towards this vision was achieved during the summer on the EGEE grid infrastructure, where 46 million ligands were docked, for a total of 80 CPU years in 6 weeks, in the quest for new drugs.
q-bio/0410025
Dmitri Parkhomchuk
Dmitri Parkhomchuk, Sergei Rodin
Two Major Paths of Gene-Duplicates Evolution
9 pages, 1 table, 7 figures
null
null
null
q-bio.PE q-bio.GN
null
Evolving genomes increase the number of their genes by gene duplication. To escape degradation into a functionless pseudogene, a gene duplicate needs to be guarded by negative (purifying) selection from the otherwise inevitable fixation of degenerative mutations. In the present study we focus on the evolutionary stage at which new duplicates come under such surveillance. Our analyses of several genomes indicate that in about 10% of gene pairs, selection begins to guard a new gene copy very soon after the duplication event, whereas the vast majority (90%) of extra genes remain redundant and unrecognised by selection. Such duplicates accumulate all mutations (including degenerative ones) in a neutral fashion and are actually destined to become pseudogenes. We revealed this "two-stream" evolutionary pattern by analyzing mutations in 2nd versus 3rd codon positions rather than by the routinely used ratio of amino acid replacements (R) versus silent substitutions (S); the '2nd vs. 3rd' metric proved more resolving than the traditional 'R vs. S' one for distinguishing neutrally evolving future pseudogenes from their functional counterparts controlled by negative selection. In gene databases for large genomes, hundreds of future pseudogenes are annotated as functional genes because they look intact and valuable by standard criteria, including even active transcription and translation. Apparently, these "pseudogenes-to-be" overshadow and mimic those very infrequent gene duplicates whose increased sequence evolution rates are driven by positive selection.
[ { "created": "Wed, 20 Oct 2004 22:35:18 GMT", "version": "v1" } ]
2007-05-23
[ [ "Parkhomchuk", "Dmitri", "" ], [ "Rodin", "Sergei", "" ] ]
Evolving genomes increase the number of their genes by gene duplication. To escape degradation into a functionless pseudogene, a gene duplicate needs to be guarded by negative (purifying) selection from the otherwise inevitable fixation of degenerative mutations. In the present study we focus on the evolutionary stage at which new duplicates come under such surveillance. Our analyses of several genomes indicate that in about 10% of gene pairs, selection begins to guard a new gene copy very soon after the duplication event, whereas the vast majority (90%) of extra genes remain redundant and unrecognised by selection. Such duplicates accumulate all mutations (including degenerative ones) in a neutral fashion and are actually destined to become pseudogenes. We revealed this "two-stream" evolutionary pattern by analyzing mutations in 2nd versus 3rd codon positions rather than by the routinely used ratio of amino acid replacements (R) versus silent substitutions (S); the '2nd vs. 3rd' metric proved more resolving than the traditional 'R vs. S' one for distinguishing neutrally evolving future pseudogenes from their functional counterparts controlled by negative selection. In gene databases for large genomes, hundreds of future pseudogenes are annotated as functional genes because they look intact and valuable by standard criteria, including even active transcription and translation. Apparently, these "pseudogenes-to-be" overshadow and mimic those very infrequent gene duplicates whose increased sequence evolution rates are driven by positive selection.
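A minimal sketch of the '2nd vs. 3rd' metric on toy aligned sequences; the sequences and the helper name are the editor's illustration:

```python
def codon_position_counts(seq1, seq2):
    """Count substitutions by codon position between two aligned CDSs."""
    assert len(seq1) == len(seq2) and len(seq1) % 3 == 0
    counts = {1: 0, 2: 0, 3: 0}
    for i, (a, b) in enumerate(zip(seq1.upper(), seq2.upper())):
        if a != b:
            counts[i % 3 + 1] += 1
    return counts

# Under purifying selection counts[2] << counts[3]; for a neutrally evolving
# future pseudogene counts[2] approaches counts[3].
print(codon_position_counts("ATGGCTGCTAAA", "ATGGCAGCGAAG"))  # {1: 0, 2: 0, 3: 3}
```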
1810.06853
Mareike Fischer
Mareike Fischer and Michelle Galla and Lina Herbst and Yangjing Long and Kristina Wicke
Unrooted non-binary tree-based phylogenetic networks
null
null
null
null
q-bio.PE math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Phylogenetic networks are a generalization of phylogenetic trees allowing for the representation of non-treelike evolutionary events such as hybridization. Typically, such networks have been analyzed based on their `level', i.e. based on the complexity of their 2-edge-connected components. However, recently the question of how `treelike' a phylogenetic network is has become the center of attention in various studies. This led to the introduction of \emph{tree-based networks}, i.e. networks that can be constructed from a phylogenetic tree, called the \emph{base tree}, by adding additional edges. While the concept of tree-basedness was originally introduced for rooted phylogenetic networks, it has recently also been considered for unrooted networks. In the present study, we compare and contrast findings obtained for unrooted \emph{binary} tree-based networks to unrooted \emph{non-binary} networks. In particular, while it is known that up to level 4 all unrooted binary networks are tree-based, we show that in the case of non-binary networks, this result only holds up to level 3.
[ { "created": "Tue, 16 Oct 2018 07:47:42 GMT", "version": "v1" }, { "created": "Wed, 17 Oct 2018 10:09:02 GMT", "version": "v2" }, { "created": "Thu, 9 May 2019 14:50:03 GMT", "version": "v3" }, { "created": "Thu, 7 May 2020 19:05:20 GMT", "version": "v4" } ]
2020-05-11
[ [ "Fischer", "Mareike", "" ], [ "Galla", "Michelle", "" ], [ "Herbst", "Lina", "" ], [ "Long", "Yangjing", "" ], [ "Wicke", "Kristina", "" ] ]
Phylogenetic networks are a generalization of phylogenetic trees allowing for the representation of non-treelike evolutionary events such as hybridization. Typically, such networks have been analyzed based on their `level', i.e. based on the complexity of their 2-edge-connected components. However, recently the question of how `treelike' a phylogenetic network is has become the center of attention in various studies. This led to the introduction of \emph{tree-based networks}, i.e. networks that can be constructed from a phylogenetic tree, called the \emph{base tree}, by adding additional edges. While the concept of tree-basedness was originally introduced for rooted phylogenetic networks, it has recently also been considered for unrooted networks. In the present study, we compare and contrast findings obtained for unrooted \emph{binary} tree-based networks to unrooted \emph{non-binary} networks. In particular, while it is known that up to level 4 all unrooted binary networks are tree-based, we show that in the case of non-binary networks, this result only holds up to level 3.
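A hedged sketch of the `level' computation on which the result above rests, using networkx rather than any code from the paper: the level is the maximum, over biconnected components, of |E| - |V| + 1:

```python
import networkx as nx

def level(G: nx.Graph) -> int:
    """Max over biconnected components of |E| - |V| + 1."""
    lvl = 0
    for comp in nx.biconnected_component_edges(G):
        edges = list(comp)
        verts = {v for e in edges for v in e}
        lvl = max(lvl, len(edges) - len(verts) + 1)
    return lvl

# One blob with 5 edges on 4 vertices (level 2) plus a cut edge (level 0)
G = nx.Graph([(1, 2), (2, 3), (3, 4), (4, 1), (1, 3), (4, 5)])
print(level(G))  # 2
```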
q-bio/0605041
Jesse Bloom
Jesse D Bloom, Alpan Raval, and Claus O Wilke
Thermodynamics of Neutral Protein Evolution
null
Genetics 175:1-12 (January 2007)
10.1534/genetics.106.061754
null
q-bio.PE q-bio.BM
null
Naturally evolving proteins gradually accumulate mutations while continuing to fold to thermodynamically stable native structures. This process of neutral protein evolution is an important mode of genetic change, and forms the basis for the molecular clock. Here we present a mathematical theory that predicts the number of accumulated mutations, the index of dispersion, and the distribution of stabilities in an evolving protein population from knowledge of the stability effects (ddG values) for single mutations. Our theory quantitatively describes how neutral evolution leads to marginally stable proteins, and provides formulae for calculating how fluctuations in stability cause an overdispersion of the molecular clock. It also shows that the structural influences on the rate of sequence evolution that have been observed in earlier simulations can be calculated using only the single-mutation ddG values. We consider both the case when the product of the population size and mutation rate is small and the case when this product is large, and show that in the latter case proteins evolve excess mutational robustness that is manifested by extra stability and increases the rate of sequence evolution. Our basic method is to treat protein evolution as a Markov process constrained by a minimal requirement for stable folding, enabling an evolutionary description of the proteins solely in terms of the experimentally measurable ddG values. All of our theoretical predictions are confirmed by simulations with model lattice proteins. Our work provides a mathematical foundation for understanding how protein biophysics helps shape the process of evolution.
[ { "created": "Wed, 24 May 2006 16:39:39 GMT", "version": "v1" }, { "created": "Thu, 11 Jan 2007 18:54:22 GMT", "version": "v2" } ]
2007-05-23
[ [ "Bloom", "Jesse D", "" ], [ "Raval", "Alpan", "" ], [ "Wilke", "Claus O", "" ] ]
Naturally evolving proteins gradually accumulate mutations while continuing to fold to thermodynamically stable native structures. This process of neutral protein evolution is an important mode of genetic change, and forms the basis for the molecular clock. Here we present a mathematical theory that predicts the number of accumulated mutations, the index of dispersion, and the distribution of stabilities in an evolving protein population from knowledge of the stability effects (ddG values) for single mutations. Our theory quantitatively describes how neutral evolution leads to marginally stable proteins, and provides formulae for calculating how fluctuations in stability cause an overdispersion of the molecular clock. It also shows that the structural influences on the rate of sequence evolution that have been observed in earlier simulations can be calculated using only the single-mutation ddG values. We consider both the case when the product of the population size and mutation rate is small and the case when this product is large, and show that in the latter case proteins evolve excess mutational robustness that is manifested by extra stability and increases the rate of sequence evolution. Our basic method is to treat protein evolution as a Markov process constrained by a minimal requirement for stable folding, enabling an evolutionary description of the proteins solely in terms of the experimentally measurable ddG values. All of our theoretical predictions are confirmed by simulations with model lattice proteins. Our work provides a mathematical foundation for understanding how protein biophysics helps shape the process of evolution.
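A minimal sketch of the paper's basic setup: a neutral walk in stability space where each mutation's ddG is drawn from a fixed distribution and a mutant is accepted only if the protein still folds stably, so the population settles at marginal stability near the threshold; the threshold, starting stability and ddG distribution are illustrative numbers, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
dG_min, dG = -5.0, -10.0                # stability threshold and starting dG
steps, accepted, trace = 100_000, 0, []
for _ in range(steps):
    trial = dG + rng.normal(1.0, 1.7)   # ddG draw: destabilizing on average
    if trial < dG_min:                  # still folds stably -> neutral accept
        dG, accepted = trial, accepted + 1
    trace.append(dG)

print("fraction of mutations accepted: %.3f" % (accepted / steps))
print("mean stability at balance: %.2f" % np.mean(trace[steps//2:]))  # near dG_min
```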
1205.2851
Ovidiu Radulescu
Ovidiu Radulescu, Alexander N. Gorban, Andrei Zinovyev, Vincent Noel
Reduction of dynamical biochemical reaction networks in computational biology
null
Frontiers in Genetics, 2012, Volume3, Article 131
10.3389/fgene.2012.00131
null
q-bio.MN physics.chem-ph
http://creativecommons.org/licenses/publicdomain/
Biochemical networks are used in computational biology to model the static and dynamical details of systems involved in cell signaling, metabolism, and regulation of gene expression. Parametric and structural uncertainty, as well as combinatorial explosion, are strong obstacles to analyzing the dynamics of large models of this type. Multi-scaleness is another property of these networks that can be used to get past some of these obstacles. Networks with many well-separated time scales can be reduced to simpler networks, in a way that depends only on the orders of magnitude and not on the exact values of the kinetic parameters. The main idea used for such robust simplifications of networks is the concept of dominance among model elements, allowing hierarchical organization of these elements according to their effects on the network dynamics. This concept finds a natural formulation in tropical geometry. We revisit, in the light of these new ideas, the main approaches to model reduction of reaction networks, such as quasi-steady state and quasi-equilibrium approximations, and provide practical recipes for model reduction of linear and nonlinear networks. We also discuss the application of model reduction to backward pruning machine learning techniques.
[ { "created": "Sun, 13 May 2012 10:09:39 GMT", "version": "v1" } ]
2014-10-15
[ [ "Radulescu", "Ovidiu", "" ], [ "Gorban", "Alexander N.", "" ], [ "Zinovyev", "Andrei", "" ], [ "Noel", "Vincent", "" ] ]
Biochemical networks are used in computational biology to model the static and dynamical details of systems involved in cell signaling, metabolism, and regulation of gene expression. Parametric and structural uncertainty, as well as combinatorial explosion, are strong obstacles to analyzing the dynamics of large models of this type. Multi-scaleness is another property of these networks that can be used to get past some of these obstacles. Networks with many well-separated time scales can be reduced to simpler networks, in a way that depends only on the orders of magnitude and not on the exact values of the kinetic parameters. The main idea used for such robust simplifications of networks is the concept of dominance among model elements, allowing hierarchical organization of these elements according to their effects on the network dynamics. This concept finds a natural formulation in tropical geometry. We revisit, in the light of these new ideas, the main approaches to model reduction of reaction networks, such as quasi-steady state and quasi-equilibrium approximations, and provide practical recipes for model reduction of linear and nonlinear networks. We also discuss the application of model reduction to backward pruning machine learning techniques.
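A hedged sketch of the quasi-steady-state approximation mentioned above, reducing the full mass-action Michaelis-Menten network to a single ODE and checking the reduction numerically; the rate constants are illustrative:

```python
import numpy as np
from scipy.integrate import odeint

k1, km1, k2, E0, S0 = 10.0, 1.0, 1.0, 0.1, 5.0    # E0 << S0: QSSA regime

def full(y, t):                     # full network; E = E0 - C by conservation
    S, C = y
    E = E0 - C
    return [-k1*E*S + km1*C, k1*E*S - (km1 + k2)*C]

def reduced(S, t):                  # QSSA: dS/dt = -Vmax S / (Km + S)
    Km, Vmax = (km1 + k2) / k1, k2 * E0
    return -Vmax * S / (Km + S)

t = np.linspace(0, 200, 400)
S_full = odeint(full, [S0, 0.0], t)[:, 0]
S_qssa = odeint(reduced, S0, t).ravel()
print("max |S_full - S_QSSA| =", np.abs(S_full - S_qssa).max())
```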
2105.05334
Jem Corcoran
J. N. Mueller, J. N. Corcoran
Coupling from the Past for the Stochastic Simulation of Chemical Reaction Networks
27 pages, 30 figures
null
null
null
q-bio.MN stat.CO
http://creativecommons.org/licenses/by/4.0/
Chemical reaction networks (CRNs) are fundamental computational models used to study the behavior of chemical reactions in well-mixed solutions. They have been used extensively to model a broad range of biological systems, and are primarily used when the more traditional model of deterministic continuous mass action kinetics is invalid due to small molecular counts. We present a perfect sampling algorithm to draw error-free samples from the stationary distributions of stochastic models for coupled, linear chemical reaction networks. The state spaces of such networks are given by all permissible combinations of molecular counts for each chemical species, and thereby grow exponentially with the number of species in the network. To avoid simulations involving large numbers of states, we propose a subset of states such that coupling of paths started from these states guarantees coupling of paths started from all states in the state space, and we show for the well-known Reversible Michaelis-Menten model that the subset does in fact guarantee perfect draws from the stationary distribution of interest. We compare solutions computed in two ways with this algorithm to those found analytically using the chemical master equation, and we compare the distribution of coupling times for the two simulation approaches.
[ { "created": "Tue, 11 May 2021 20:20:02 GMT", "version": "v1" } ]
2021-05-13
[ [ "Mueller", "J. N.", "" ], [ "Corcoran", "J. N.", "" ] ]
Chemical reaction networks (CRNs) are fundamental computational models used to study the behavior of chemical reactions in well-mixed solutions. They have been used extensively to model a broad range of biological systems, and are primarily used when the more traditional model of deterministic continuous mass action kinetics is invalid due to small molecular counts. We present a perfect sampling algorithm to draw error-free samples from the stationary distributions of stochastic models for coupled, linear chemical reaction networks. The state spaces of such networks are given by all permissible combinations of molecular counts for each chemical species, and thereby grow exponentially with the number of species in the network. To avoid simulations involving large numbers of states, we propose a subset of chemical species such that coupling of paths started from these states guarantees coupling of paths started from all states in the state space, and we show for the well-known Reversible Michaelis-Menten model that the subset does in fact guarantee perfect draws from the stationary distribution of interest. We compare solutions computed in two ways with this algorithm to those found analytically using the chemical master equation, and we compare the distribution of coupling times for the two simulation approaches.
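A minimal sketch of the coupling-from-the-past idea for a monotone birth-death chain follows; this is a toy stand-in for a single-species network rather than the authors' algorithm, and the event probabilities are hypothetical. Monotonicity is what lets one track only the extreme states, mirroring the paper's strategy of coupling from a small subset of states.

```python
# Minimal coupling-from-the-past (CFTP) sketch for a monotone birth-death
# chain on {0, ..., N}. When the paths started from the two extreme states
# coalesce, paths from all intermediate states have coalesced too, and the
# common value is an exact draw from the stationary distribution.
import random

N = 20
p_birth, p_death = 0.4, 0.5       # hypothetical per-step event probabilities

def step(x, u):
    # One monotone update driven by a shared uniform random number u.
    if u < p_birth and x < N:
        return x + 1
    if u < p_birth + p_death and x > 0:
        return x - 1
    return x

def cftp(seed=0):
    rng = random.Random(seed)
    randomness = []               # fixed randomness, extended into the past
    T = 1
    while True:
        while len(randomness) < T:
            randomness.insert(0, rng.random())   # prepend for earlier times
        lo, hi = 0, N             # start the two extreme paths at time -T
        for u in randomness[-T:]:
            lo, hi = step(lo, u), step(hi, u)
        if lo == hi:
            return lo             # exact stationary sample
        T *= 2

samples = [cftp(seed=s) for s in range(1000)]
print(sum(samples) / len(samples))
```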
2011.05881
Salvador Chuli\'an
Salvador Chuli\'an, \'Alvaro-Mart\'inez Rubio, Mar\'ia Rosa, V\'ictor M. P\'erez-Garc\'ia
Mathematical models of Leukaemia and its treatment: A review
null
null
10.1007/s40324-022-00296-z
null
q-bio.TO math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Leukaemia accounts for around 3% of all cancer types diagnosed in adults, and is the most common type of cancer in children of paediatric age. There is increasing interest in the use of mathematical models in oncology to draw inferences and make predictions, providing a complementary picture to experimental biomedical models. In this paper we recapitulate the state of the art of mathematical modelling of leukaemia growth dynamics in time and in response to treatment. We intend to describe the mathematical methodologies, the biological aspects taken into account in the modelling, and the conclusions of each study. This review is intended to provide researchers in the field with solid background material, in order to achieve further breakthroughs in the promising field of mathematical biology.
[ { "created": "Wed, 4 Nov 2020 09:35:28 GMT", "version": "v1" } ]
2022-05-13
[ [ "Chulián", "Salvador", "" ], [ "Rubio", "Álvaro-Martínez", "" ], [ "Rosa", "María", "" ], [ "Pérez-García", "Víctor M.", "" ] ]
Leukaemia accounts for around 3% of all cancer types diagnosed in adults, and is the most common type of cancer in children of paediatric age. There is increasing interest in the use of mathematical models in oncology to draw inferences and make predictions, providing a complementary picture to experimental biomedical models. In this paper we recapitulate the state of the art of mathematical modelling of leukaemia growth dynamics in time and in response to treatment. We intend to describe the mathematical methodologies, the biological aspects taken into account in the modelling, and the conclusions of each study. This review is intended to provide researchers in the field with solid background material, in order to achieve further breakthroughs in the promising field of mathematical biology.
2403.19047
Yann Sakref
Yann Sakref, Olivier Rivoire
Design principles, growth laws, and competition of minimal autocatalysts
null
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
The apparent difficulty of designing simple autocatalysts that grow exponentially in the absence of enzymes, external drives or ingenious internal mechanisms severely constrains scenarios for the emergence of evolution by natural selection in chemical and physical systems. Here, we systematically analyze these difficulties in the context of one of the simplest and most generic autocatalysts: a dimeric molecule that duplicates by templated ligation. We show that despite its simplicity, such an autocatalyst can achieve exponential growth autonomously. This only requires that the rate of the spontaneous dimerization, the interactions between molecules, and the concentrations of substrates and products are in appropriate ranges. We also show, however, that it is possible to design as simple sub-exponential autocatalysts that have an advantage over exponential autocatalysts when competing for a common resource. We reach these conclusions by developing a general theoretical framework based on kinetic barrier diagrams. Besides challenging commonly accepted assumptions in the field of the origin of life, our results provide a blueprint for the experimental realization of elementary autocatalysts exhibiting a form of natural selection, whether on a molecular or colloidal scale.
[ { "created": "Wed, 27 Mar 2024 22:54:31 GMT", "version": "v1" } ]
2024-03-29
[ [ "Sakref", "Yann", "" ], [ "Rivoire", "Olivier", "" ] ]
The apparent difficulty of designing simple autocatalysts that grow exponentially in the absence of enzymes, external drives or ingenious internal mechanisms severely constrains scenarios for the emergence of evolution by natural selection in chemical and physical systems. Here, we systematically analyze these difficulties in the context of one of the simplest and most generic autocatalysts: a dimeric molecule that duplicates by templated ligation. We show that despite its simplicity, such an autocatalyst can achieve exponential growth autonomously. This only requires that the rate of the spontaneous dimerization, the interactions between molecules, and the concentrations of substrates and products are in appropriate ranges. We also show, however, that it is possible to design as simple sub-exponential autocatalysts that have an advantage over exponential autocatalysts when competing for a common resource. We reach these conclusions by developing a general theoretical framework based on kinetic barrier diagrams. Besides challenging commonly accepted assumptions in the field of the origin of life, our results provide a blueprint for the experimental realization of elementary autocatalysts exhibiting a form of natural selection, whether on a molecular or colloidal scale.
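The distinction between exponential and sub-exponential growth laws competing for one resource can be illustrated numerically. The sketch below (hypothetical rate constants and initial conditions, not the paper's kinetic-barrier framework) integrates the two growth laws with a shared substrate.

```python
# Toy growth-law comparison: an exponential autocatalyst (dx/dt = k*x*r)
# versus a sub-exponential, parabolic one (dy/dt = k*sqrt(y)*r), both
# consuming a shared resource r. All numbers are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

k_exp, k_sub = 1.0, 1.0

def competition(t, z):
    x, y, r = z                       # two replicators and a shared resource
    r = max(r, 0.0)
    dx = k_exp * x * r                # exponential growth law
    dy = k_sub * np.sqrt(max(y, 0.0)) * r   # sub-exponential (parabolic) law
    return [dx, dy, -(dx + dy)]       # resource is consumed by both

sol = solve_ivp(competition, (0, 10), [1e-3, 1e-3, 1.0], max_step=0.01)
x_end, y_end, _ = sol.y[:, -1]
# The sub-exponential grower keeps a finite share of the final biomass:
# coexistence rather than winner-takes-all.
print(f"exponential: {x_end:.4f}, sub-exponential: {y_end:.4f}")
```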
2101.00004
Julia Siekiera
Julia Siekiera and Stefan Kramer
Deep Unsupervised Identification of Selected SNPs between Adapted Populations on Pool-seq Data
12 pages, 5 figures
null
null
null
q-bio.GN cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
The exploration of selected single nucleotide polymorphisms (SNPs) to identify genetic diversity between different sequencing population pools (Pool-seq) is a fundamental task in genetic research. As the underlying sequence reads and their alignment are error-prone and univariate statistical solutions only take individual positions of the genome into account, the identification of selected SNPs remains a challenging process. Deep learning models like convolutional neural networks (CNNs) are able to consider large input areas in their decisions. We suggest an unsupervised pipeline so as to be independent of a ground truth, which is rarely known. We train a supervised discriminator CNN to distinguish alignments from different populations and utilize the model for unsupervised SNP calling by applying explainable artificial intelligence methods. Our proposed multivariate method is based on two main assumptions: we assume (i) that instances having a high predictive certainty of being distinguishable are likely to contain genetic variants, and (ii) that selected SNPs are located at regions with input features having the highest influence on the model's decision process. We directly compare our method with statistical results on two different Pool-seq datasets and show that our solution is able to extend statistical results.
[ { "created": "Mon, 28 Dec 2020 22:28:44 GMT", "version": "v1" } ]
2021-01-05
[ [ "Siekiera", "Julia", "" ], [ "Kramer", "Stefan", "" ] ]
The exploration of selected single nucleotide polymorphisms (SNPs) to identify genetic diversity between different sequencing population pools (Pool-seq) is a fundamental task in genetic research. As the underlying sequence reads and their alignment are error-prone and univariate statistical solutions only take individual positions of the genome into account, the identification of selected SNPs remains a challenging process. Deep learning models like convolutional neural networks (CNNs) are able to consider large input areas in their decisions. We suggest an unsupervised pipeline so as to be independent of a ground truth, which is rarely known. We train a supervised discriminator CNN to distinguish alignments from different populations and utilize the model for unsupervised SNP calling by applying explainable artificial intelligence methods. Our proposed multivariate method is based on two main assumptions: we assume (i) that instances having a high predictive certainty of being distinguishable are likely to contain genetic variants, and (ii) that selected SNPs are located at regions with input features having the highest influence on the model's decision process. We directly compare our method with statistical results on two different Pool-seq datasets and show that our solution is able to extend statistical results.
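The two assumptions can be illustrated schematically: train a small discriminator on one-hot alignment windows, then rank positions by input-gradient attribution. Everything below (shapes, data, architecture) is hypothetical and synthetic; it shows the mechanics, not the authors' pipeline.

```python
# Schematic sketch: a 1D CNN discriminates two populations from one-hot
# encoded alignment windows; input gradients (a basic explainable-AI
# attribution) then rank candidate positions for selected SNPs.
import torch
import torch.nn as nn

torch.manual_seed(0)
L = 64                                   # window length (hypothetical)
x = torch.randn(32, 4, L)                # 4 channels ~ A, C, G, T frequencies
labels = torch.randint(0, 2, (32,))      # population A vs population B

model = nn.Sequential(
    nn.Conv1d(4, 8, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(8, 2),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(50):                      # discriminator training
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), labels)
    loss.backward()
    opt.step()

# Attribution: gradient of the predicted class score w.r.t. the input.
x.requires_grad_(True)
score = model(x).max(dim=1).values.sum()
score.backward()
saliency = x.grad.abs().sum(dim=(0, 1))  # aggregate over samples and channels
print(saliency.topk(5).indices)          # candidate selected-SNP positions
```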
1002.0068
Mauro Bologna
M. Bologna and J. C. Flores
Mathematical Model of Easter Island Society Collapse
9 pages, 1 figure, final version published on EuroPhysics Letters
EPL, 81 (2008) 48006
10.1209/0295-5075/81/48006
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we consider a mathematical model for the evolution and collapse of the Easter Island society, starting from the fifth century until the last period of the society's collapse (fifteenth century). Based on historical reports, the available primary resources consisted almost exclusively of trees. We describe the inhabitants and the resources as an isolated system, with both considered as dynamic variables. A mathematical analysis of why the Easter Island community collapsed is performed. In particular, we analyze the critical values of the fundamental parameters driving the human-environment interaction and consequently leading to the collapse. The technological parameter, quantifying the exploitation of the resources, is calculated and applied to the case of another extinct civilization (the Cop\'an Maya), confirming, with a sufficiently precise estimation, the consistency of the adopted model.
[ { "created": "Sat, 30 Jan 2010 14:15:49 GMT", "version": "v1" } ]
2015-05-18
[ [ "Bologna", "M.", "" ], [ "Flores", "J. C.", "" ] ]
In this paper we consider a mathematical model for the evolution and collapse of the Easter Island society, starting from the fifth century until the last period of the society's collapse (fifteenth century). Based on historical reports, the available primary resources consisted almost exclusively of trees. We describe the inhabitants and the resources as an isolated system, with both considered as dynamic variables. A mathematical analysis of why the Easter Island community collapsed is performed. In particular, we analyze the critical values of the fundamental parameters driving the human-environment interaction and consequently leading to the collapse. The technological parameter, quantifying the exploitation of the resources, is calculated and applied to the case of another extinct civilization (the Cop\'an Maya), confirming, with a sufficiently precise estimation, the consistency of the adopted model.
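A generic human-resource model in the spirit described, with logistic resource regrowth and harvest-fueled population growth, reproduces the overshoot-and-collapse behavior. The equations and parameter values below are illustrative choices, not the paper's.

```python
# Hedged sketch: population n harvests a slowly regrowing resource s.
# The harvesting pressure h plays the role of a technology parameter;
# above a critical value the resource is exhausted and n collapses.
import numpy as np
from scipy.integrate import solve_ivp

r, K = 0.01, 1.0      # slow logistic regrowth of the resource (e.g., trees)
h = 0.25              # per-capita harvesting pressure (hypothetical)
b, d = 0.5, 0.03      # harvest-to-growth conversion and death rate

def society(t, z):
    n, s = z                          # population and resource stock
    dn = b * h * n * s - d * n        # growth fueled by harvest, minus deaths
    ds = r * s * (1.0 - s / K) - h * n * s
    return [dn, ds]

sol = solve_ivp(society, (0, 1000), [0.001, 1.0], max_step=1.0)
print(f"peak population {sol.y[0].max():.3f}, "
      f"final population {sol.y[0][-1]:.3f}")   # overshoot, then collapse
```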
1406.0413
Brant Faircloth
Brant C. Faircloth, Michael G. Branstetter, Noor D. White, Se\'an G. Brady
Target enrichment of ultraconserved elements from arthropods provides a genomic perspective on relationships among Hymenoptera
null
null
10.1111/1755-0998.12328
null
q-bio.GN q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gaining a genomic perspective on phylogeny requires the collection of data from many putatively independent loci collected across the genome. Among insects, an increasingly common approach to collecting this class of data involves transcriptome sequencing, because few insects have high-quality genome sequences available; assembling new genomes remains a limiting factor; the transcribed portion of the genome is a reasonable, reduced subset of the genome to target; and the data collected from transcribed portions of the genome are similar in composition to the types of data with which biologists have traditionally worked (e.g., exons). However, molecular techniques requiring RNA as a template are limited to using very high quality source materials, which are often unavailable from a large proportion of biologically important insect samples. Recent research suggests that DNA-based target enrichment of conserved genomic elements offers another path to collecting phylogenomic data across insect taxa, provided that conserved elements are present in and can be collected from insect genomes. Here, we identify a large set (n$=$1510) of ultraconserved elements (UCE) shared among the insect order Hymenoptera. We use in silico analyses to show that these loci accurately reconstruct relationships among genome-enabled Hymenoptera, and we design a set of baits for enriching these loci that researchers can use with DNA templates extracted from a variety of sources. We use our UCE bait set to enrich an average of 721 UCE loci from 30 hymenopteran taxa, and we use these UCE loci to reconstruct phylogenetic relationships spanning very old ($\geq$220 MYA) to very young ($\leq$1 MYA) divergences among hymenopteran lineages. In contrast to a recent study addressing hymenopteran phylogeny using transcriptome data, we found ants to be sister to all remaining aculeate lineages with complete support.
[ { "created": "Mon, 2 Jun 2014 15:30:30 GMT", "version": "v1" }, { "created": "Thu, 11 Sep 2014 14:28:33 GMT", "version": "v2" } ]
2014-09-12
[ [ "Faircloth", "Brant C.", "" ], [ "Branstetter", "Michael G.", "" ], [ "White", "Noor D.", "" ], [ "Brady", "Seán G.", "" ] ]
Gaining a genomic perspective on phylogeny requires the collection of data from many putatively independent loci collected across the genome. Among insects, an increasingly common approach to collecting this class of data involves transcriptome sequencing, because few insects have high-quality genome sequences available; assembling new genomes remains a limiting factor; the transcribed portion of the genome is a reasonable, reduced subset of the genome to target; and the data collected from transcribed portions of the genome are similar in composition to the types of data with which biologists have traditionally worked (e.g., exons). However, molecular techniques requiring RNA as a template are limited to using very high quality source materials, which are often unavailable from a large proportion of biologically important insect samples. Recent research suggests that DNA-based target enrichment of conserved genomic elements offers another path to collecting phylogenomic data across insect taxa, provided that conserved elements are present in and can be collected from insect genomes. Here, we identify a large set (n$=$1510) of ultraconserved elements (UCE) shared among the insect order Hymenoptera. We use in silico analyses to show that these loci accurately reconstruct relationships among genome-enabled Hymenoptera, and we design a set of baits for enriching these loci that researchers can use with DNA templates extracted from a variety of sources. We use our UCE bait set to enrich an average of 721 UCE loci from 30 hymenopteran taxa, and we use these UCE loci to reconstruct phylogenetic relationships spanning very old ($\geq$220 MYA) to very young ($\leq$1 MYA) divergences among hymenopteran lineages. In contrast to a recent study addressing hymenopteran phylogeny using transcriptome data, we found ants to be sister to all remaining aculeate lineages with complete support.
2010.02759
Antonio Paiva
Antonio R. Paiva and Giovanni Pilloni
Inferring Microbial Biomass Yield and Cell Weight using Probabilistic Macrochemical Modeling
Main article (13 pages, 7 figures, 2 tables); supplementary material (3 pages, 1 figures, 1 code listing)
null
10.1109/TCBB.2021.3139290
null
q-bio.QM stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Growth rates and biomass yields are key descriptors used in microbiology studies to understand how microbial species respond to changes in the environment. Of these, biomass yield estimates are typically obtained using cell counts and measurements of the feed substrate. However, these quantities are perturbed by measurement noise. Perhaps most crucially, estimating biomass from cell counts, as needed to assess yields, relies on an assumed cell weight. Noise and discrepancies in these assumptions can lead to significant changes in conclusions regarding the microbes' response. This article proposes a methodology to address these challenges using probabilistic macrochemical models of microbial growth. It is shown that a model can be developed to fully use the experimental data, relax assumptions, greatly improve robustness to a priori estimates of the cell weight, and provide uncertainty estimates of key parameters. This methodology is demonstrated in the context of a specific case study and the estimation characteristics are validated in several scenarios using synthetically generated microbial growth data.
[ { "created": "Tue, 6 Oct 2020 14:23:21 GMT", "version": "v1" }, { "created": "Thu, 4 Feb 2021 18:17:12 GMT", "version": "v2" }, { "created": "Mon, 26 Jul 2021 19:42:13 GMT", "version": "v3" }, { "created": "Thu, 18 Nov 2021 15:39:31 GMT", "version": "v4" } ]
2022-05-02
[ [ "Paiva", "Antonio R.", "" ], [ "Pilloni", "Giovanni", "" ] ]
Growth rates and biomass yields are key descriptors used in microbiology studies to understand how microbial species respond to changes in the environment. Of these, biomass yield estimates are typically obtained using cell counts and measurements of the feed substrate. However, these quantities are perturbed by measurement noise. Perhaps most crucially, estimating biomass from cell counts, as needed to assess yields, relies on an assumed cell weight. Noise and discrepancies in these assumptions can lead to significant changes in conclusions regarding the microbes' response. This article proposes a methodology to address these challenges using probabilistic macrochemical models of microbial growth. It is shown that a model can be developed to fully use the experimental data, relax assumptions, greatly improve robustness to a priori estimates of the cell weight, and provide uncertainty estimates of key parameters. This methodology is demonstrated in the context of a specific case study and the estimation characteristics are validated in several scenarios using synthetically generated microbial growth data.
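The joint-inference idea can be sketched with a grid posterior over yield and cell weight under a lognormal noise model. The macrochemical relation used here (biomass = yield x consumed substrate, counts = biomass / cell weight) is a simplification, and all numbers are synthetic, not the authors' model.

```python
# Minimal sketch: grid posterior over biomass yield Y and cell weight w
# from noisy cell counts, assuming counts = Y * substrate_consumed / w
# with lognormal observation noise and flat priors.
import numpy as np

rng = np.random.default_rng(1)
Y_true, w_true = 0.5, 1e-12          # g biomass per g substrate; g per cell
substrate_consumed = np.linspace(0.1, 1.0, 10)            # g/L over time
counts = Y_true * substrate_consumed / w_true
obs = counts * rng.lognormal(0.0, 0.1, size=counts.size)  # noisy counts

Y_grid = np.linspace(0.1, 1.0, 200)
w_grid = np.logspace(-12.5, -11.5, 200)
logpost = np.zeros((Y_grid.size, w_grid.size))
for i, Y in enumerate(Y_grid):
    for j, w in enumerate(w_grid):
        pred = Y * substrate_consumed / w
        resid = np.log(obs) - np.log(pred)                # lognormal noise
        logpost[i, j] = -0.5 * np.sum(resid**2) / 0.1**2

i, j = np.unravel_index(np.argmax(logpost), logpost.shape)
print(f"MAP yield ~ {Y_grid[i]:.2f}, MAP cell weight ~ {w_grid[j]:.2e}")
# Note the posterior ridge: counts alone identify only the ratio Y/w,
# which is why extra measurements or informative priors matter.
```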
1706.00935
Ricardo Guerrero
R. Guerrero, C. Qin, O. Oktay, C. Bowles, L. Chen, R. Joules, R. Wolz, M.C. Valdes-Hernandez, D.A. Dickie, J. Wardlaw and D. Rueckert
White matter hyperintensity and stroke lesion segmentation and differentiation using convolutional neural networks
null
null
null
null
q-bio.TO physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The accurate assessment of white matter hyperintensity (WMH) burden is of crucial importance for epidemiological studies to determine associations between WMHs and cognitive and clinical data. The manual delineation of WMHs is tedious, costly and time consuming. This is further complicated by the fact that other pathological features (i.e. stroke lesions) often also appear as hyperintense. Several automated methods aiming to tackle the challenges of WMH segmentation have been proposed; however, they cannot differentiate between WMHs and stroke lesions. Other methods, capable of distinguishing between different pathologies in brain MRI, are not designed with simultaneous WMH and stroke segmentation in mind. In this work we propose to use a convolutional neural network (CNN) that is able to segment hyperintensities and differentiate between WMHs and stroke lesions. Specifically, we aim to distinguish WMH pathologies from those caused by stroke lesions due to either cortical, large or small subcortical infarcts. As far as we know, this is the first time such a differentiation task has explicitly been proposed. The proposed fully convolutional CNN architecture comprises an analysis path, which gradually learns low- and high-level features, followed by a synthesis path, which gradually combines and up-samples the low- and high-level features into a class-likelihood semantic segmentation. Quantitatively, the proposed CNN architecture is shown to outperform other well-established and state-of-the-art algorithms in terms of overlap with manual expert annotations. Clinically, the extracted WMH volumes were found to correlate better with the Fazekas visual rating score. Additionally, the associations found between clinical risk factors and the WMH volumes generated by the proposed method were in line with the associations found with the expert-annotated volumes.
[ { "created": "Sat, 3 Jun 2017 11:58:18 GMT", "version": "v1" }, { "created": "Tue, 2 Jan 2018 11:33:33 GMT", "version": "v2" } ]
2018-01-03
[ [ "Guerrero", "R.", "" ], [ "Qin", "C.", "" ], [ "Oktay", "O.", "" ], [ "Bowles", "C.", "" ], [ "Chen", "L.", "" ], [ "Joules", "R.", "" ], [ "Wolz", "R.", "" ], [ "Valdes-Hernandez", "M. C.", ...
The accurate assessment of white matter hyperintensity (WMH) burden is of crucial importance for epidemiological studies to determine associations between WMHs and cognitive and clinical data. The manual delineation of WMHs is tedious, costly and time consuming. This is further complicated by the fact that other pathological features (i.e. stroke lesions) often also appear as hyperintense. Several automated methods aiming to tackle the challenges of WMH segmentation have been proposed; however, they cannot differentiate between WMHs and stroke lesions. Other methods, capable of distinguishing between different pathologies in brain MRI, are not designed with simultaneous WMH and stroke segmentation in mind. In this work we propose to use a convolutional neural network (CNN) that is able to segment hyperintensities and differentiate between WMHs and stroke lesions. Specifically, we aim to distinguish WMH pathologies from those caused by stroke lesions due to either cortical, large or small subcortical infarcts. As far as we know, this is the first time such a differentiation task has explicitly been proposed. The proposed fully convolutional CNN architecture comprises an analysis path, which gradually learns low- and high-level features, followed by a synthesis path, which gradually combines and up-samples the low- and high-level features into a class-likelihood semantic segmentation. Quantitatively, the proposed CNN architecture is shown to outperform other well-established and state-of-the-art algorithms in terms of overlap with manual expert annotations. Clinically, the extracted WMH volumes were found to correlate better with the Fazekas visual rating score. Additionally, the associations found between clinical risk factors and the WMH volumes generated by the proposed method were in line with the associations found with the expert-annotated volumes.
q-bio/0403040
Debashish Chowdhury
Dietrich Stauffer, Ambarish Kunwar and Debashish Chowdhury
Evolutionary ecology in-silico:evolving foodwebs, migrating population and speciation
12 pages of PS file, including LATEX text and 9 EPS figures
null
10.1016/j.physa.2004.12.036
null
q-bio.PE q-bio.QM
null
We have generalized our ``unified'' model of evolutionary ecology by taking into account the possible movements of the organisms from one ``patch'' to another within the same eco-system. We model the spatial extension of the eco-system (i.e., the geography) by a square lattice where each site corresponds to a distinct ``patch''. A self-organizing hierarchical food web describes the prey-predator relations in the eco-system. The same species at different patches have identical food habits but differ from each other in their reproductive characteristic features. By carrying out computer simulations up to $10^9$ time steps, we found that, depending on the values of the set of parameters, the distribution of the lifetimes of the species can be either exponential or a combination of power laws. Some of the other features of our ``unified'' model turn out to be robust against migration of the organisms.
[ { "created": "Sat, 27 Mar 2004 07:40:43 GMT", "version": "v1" } ]
2009-11-10
[ [ "Stauffer", "Dietrich", "" ], [ "Kunwar", "Ambarish", "" ], [ "Chowdhury", "Debashish", "" ] ]
We have generalized our ``unified'' model of evolutionary ecology by taking into account the possible movements of the organisms from one ``patch'' to another within the same eco-system. We model the spatial extension of the eco-system (i.e., the geography) by a square lattice where each site corresponds to a distinct ``patch''. A self-organizing hierarchical food web describes the prey-predator relations in the eco-system. The same species at different patches have identical food habits but differ from each other in their reproductive characteristic features. By carrying out computer simulations up to $10^9$ time steps, we found that, depending on the values of the set of parameters, the distribution of the lifetimes of the species can be either exponential or a combination of power laws. Some of the other features of our ``unified'' model turn out to be robust against migration of the organisms.
1811.08483
Jennifer Ross
Mengqi Xu, Lyanne Valdez, Aysuman Sen, and Jennifer L. Ross
Direct Single Molecule Imaging of Enhanced Enzyme Diffusion
null
Phys. Rev. Lett. 123, 128101 (2019)
10.1103/PhysRevLett.123.128101
null
q-bio.SC cond-mat.soft q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent experimental results have shown that active enzymes can diffuse faster when they are in the presence of their substrates. Fluorescence correlation spectroscopy (FCS), which relies on analyzing the fluctuations in fluorescence intensity signal to measure the diffusion coefficient of particles, has typically been employed in most of the prior studies. However, flaws in the FCS method, due to its high sensitivity to the environment, have recently been evaluated, calling the prior diffusion results into question. It behooves us to adopt complementary and direct methods to measure the mobility of enzymes in solution. Herein, we use a novel technique of direct single-molecule imaging to observe the diffusion of single enzymes. This technique is less sensitive to intensity fluctuations and gives the diffusion coefficient directly based on the trajectory of the enzymes. Our measurements recapitulate that enzyme diffusion is enhanced in the presence of its substrate and find that the relative increase in diffusion of a single enzyme is even higher than those previously reported using FCS. We also use this complementary method to test if the total enzyme concentration affects the relative increase in diffusion and if the enzyme oligomerization state changes during catalytic turnover. We find that the diffusion increase is independent of the total background concentration of enzyme and that the catalysis of substrate does not change the oligomerization state of enzymes.
[ { "created": "Tue, 20 Nov 2018 20:54:30 GMT", "version": "v1" } ]
2019-09-25
[ [ "Xu", "Mengqi", "" ], [ "Valdez", "Lyanne", "" ], [ "Sen", "Aysuman", "" ], [ "Ross", "Jennifer L.", "" ] ]
Recent experimental results have shown that active enzymes can diffuse faster when they are in the presence of their substrates. Fluorescence correlation spectroscopy (FCS), which relies on analyzing the fluctuations in fluorescence intensity signal to measure the diffusion coefficient of particles, has typically been employed in most of the prior studies. However, flaws in the FCS method, due to its high sensitivity to the environment, have recently been evaluated, calling the prior diffusion results into question. It behooves us to adopt complementary and direct methods to measure the mobility of enzymes in solution. Herein, we use a novel technique of direct single-molecule imaging to observe the diffusion of single enzymes. This technique is less sensitive to intensity fluctuations and gives the diffusion coefficient directly based on the trajectory of the enzymes. Our measurements recapitulate that enzyme diffusion is enhanced in the presence of its substrate and find that the relative increase in diffusion of a single enzyme is even higher than those previously reported using FCS. We also use this complementary method to test if the total enzyme concentration affects the relative increase in diffusion and if the enzyme oligomerization state changes during catalytic turnover. We find that the diffusion increase is independent of the total background concentration of enzyme and that the catalysis of substrate does not change the oligomerization state of enzymes.
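Estimating a diffusion coefficient directly from trajectories is the core of this kind of analysis; a minimal mean-squared-displacement sketch on synthetic Brownian tracks (not the experimental pipeline) looks as follows.

```python
# Sketch: simulate 2D Brownian trajectories, compute the mean-squared
# displacement (MSD), and read off D from MSD = 4*D*t. All parameters
# are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
D_true, dt, n_steps, n_tracks = 5.0, 0.01, 200, 100   # um^2/s, s
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), size=(n_tracks, n_steps, 2))
tracks = np.cumsum(steps, axis=1)                     # positions over time

def msd(tracks, lag):
    disp = tracks[:, lag:, :] - tracks[:, :-lag, :]
    return np.mean(np.sum(disp**2, axis=2))

lags = np.arange(1, 21)
msds = np.array([msd(tracks, m) for m in lags])
D_est = np.polyfit(lags * dt, msds, 1)[0] / 4.0       # slope = 4*D in 2D
print(f"estimated D = {D_est:.2f} (true {D_true})")
# Comparing D estimated with and without substrate would quantify the
# enhancement directly from trajectories, free of intensity fluctuations.
```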
1208.2100
Anca Radulescu
Anca Radulescu
Input Statistics and Hebbian Crosstalk Effects
11 pages text, 5 figures, 1 page references, 2 appendices
null
null
null
q-bio.NC math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As an extension of prior work, we study inspecific Hebbian learning using the classical Oja model. We use a combination of analytical tools and numerical simulations to investigate how the effects of inspecificity (or synaptic "cross-talk") depend on the input statistics. We investigated a variety of patterns that appear in dimensions higher than 2 (and classified them based on covariance type and input bias). The effects of inspecificity on the learning outcome were found to depend very strongly on the nature of the input, and in some cases were very dramatic, making unlikely the existence of a generic neural algorithm to correct learning inaccuracy due to cross-talk. We discuss the possibility that sophisticated learning, such as presumably occurs in the neocortex, is enabled as much by special proofreading machinery for enhancing specificity, as by special algorithms.
[ { "created": "Fri, 10 Aug 2012 06:54:09 GMT", "version": "v1" } ]
2012-08-13
[ [ "Radulescu", "Anca", "" ] ]
As an extension of prior work, we study inspecific Hebbian learning using the classical Oja model. We use a combination of analytical tools and numerical simulations to investigate how the effects of inspecificity (or synaptic "cross-talk") depend on the input statistics. We investigated a variety of patterns that appear in dimensions higher than 2 (and classified them based on covariance type and input bias). The effects of inspecificity on the learning outcome were found to depend very strongly on the nature of the input, and in some cases were very dramatic, making unlikely the existence of a generic neural algorithm to correct learning inaccuracy due to cross-talk. We discuss the possibility that sophisticated learning, such as presumably occurs in the neocortex, is enabled as much by special proofreading machinery for enhancing specificity, as by special algorithms.
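A minimal numerical sketch of Oja learning with synaptic crosstalk: updates intended for one synapse leak onto others through an error matrix. The input covariance and error level below are hypothetical choices, meant only to show the mechanics.

```python
# Oja's rule with crosstalk: the Hebbian update is filtered through an
# error matrix E that mixes a fraction b of each update across synapses.
import numpy as np

rng = np.random.default_rng(0)
n, eta, b = 4, 0.01, 0.1
E = (1 - b) * np.eye(n) + (b / (n - 1)) * (np.ones((n, n)) - np.eye(n))

C = np.array([[3.0, 1.0, 0.0, 0.0],     # hypothetical input covariance
              [1.0, 2.0, 0.5, 0.0],
              [0.0, 0.5, 1.5, 0.2],
              [0.0, 0.0, 0.2, 1.0]])
Lch = np.linalg.cholesky(C)

w = rng.normal(size=n)
w /= np.linalg.norm(w)
for _ in range(20000):
    x = Lch @ rng.normal(size=n)        # zero-mean input with covariance C
    y = w @ x
    w += eta * (E @ (y * x - y**2 * w)) # Oja update filtered by crosstalk E

vals, vecs = np.linalg.eigh(C)
print(np.abs(w @ vecs[:, -1]))          # alignment with principal eigenvector
# With b = 0 (E = I) the weight converges to the principal component of C;
# crosstalk (b > 0) biases the outcome in an input-statistics-dependent way.
```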
1911.09182
Andrey Sarantsev Mr
Olga Rumyantseva, Andrey Sarantsev, Nikolay Strigul
Autoregressive Modeling of Forest Dynamics
17 pages, 14 figures. Keywords: Forest biomass dynamics; random walk model; AR(1) process; Bayesian analysis; patch dynamics. Published in MDPI Forests, open access
null
null
null
q-bio.QM q-bio.PE stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we employ autoregressive models developed in financial engineering for the modeling of forest dynamics. Autoregressive models have some theoretical advantages over currently employed forest modeling approaches such as Markov chains and individual-based models, as autoregressive models are both analytically tractable and operate with a continuous state space. We perform a time series statistical analysis of forest biomass and basal area recorded in Quebec provincial forest inventories in 1970-2007. The geometric random walk model adequately describes the yearly average dynamics. For individual patches, we fit an AR(1) process capable of modeling negative feedback (mean-reversion). Overall, the best fit also turns out to be a geometric random walk; however, the normality tests for residuals fail. In contrast, yearly means are adequately described by normal fluctuations, with annual growth of 2.3% on average, but with a standard deviation of order 40%. We use Bayesian analysis to account for the uneven number of observations per year. This work demonstrates that autoregressive models represent a valuable tool for the modeling of forest dynamics. In particular, they quantify stochastic effects of environmental disturbances and yield predictive empirical models on short and intermediate temporal scales.
[ { "created": "Wed, 20 Nov 2019 21:32:40 GMT", "version": "v1" } ]
2019-11-22
[ [ "Rumyantseva", "Olga", "" ], [ "Sarantsev", "Andrey", "" ], [ "Strigul", "Nikolay", "" ] ]
In this work, we employ autoregressive models developed in financial engineering for the modeling of forest dynamics. Autoregressive models have some theoretical advantages over currently employed forest modeling approaches such as Markov chains and individual-based models, as autoregressive models are both analytically tractable and operate with a continuous state space. We perform a time series statistical analysis of forest biomass and basal area recorded in Quebec provincial forest inventories in 1970-2007. The geometric random walk model adequately describes the yearly average dynamics. For individual patches, we fit an AR(1) process capable of modeling negative feedback (mean-reversion). Overall, the best fit also turns out to be a geometric random walk; however, the normality tests for residuals fail. In contrast, yearly means are adequately described by normal fluctuations, with annual growth of 2.3% on average, but with a standard deviation of order 40%. We use Bayesian analysis to account for the uneven number of observations per year. This work demonstrates that autoregressive models represent a valuable tool for the modeling of forest dynamics. In particular, they quantify stochastic effects of environmental disturbances and yield predictive empirical models on short and intermediate temporal scales.
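Fitting an AR(1) process on log-biomass and checking whether the autoregressive coefficient is distinguishable from 1 is the core computation; a sketch on a synthetic series (using the 2.3% mean growth and ~40% standard deviation cited above as placeholders) follows.

```python
# Fit AR(1), X_t = c + phi * X_{t-1} + eps_t, on log-biomass by least
# squares; phi ~ 1 recovers the geometric random walk as a special case.
import numpy as np

rng = np.random.default_rng(0)
T, mu, sigma = 200, 0.023, 0.40          # mean growth and sd, as cited above
x = np.cumsum(rng.normal(mu, sigma, T))  # log-biomass under a random walk

X, y = x[:-1], x[1:]
A = np.column_stack([np.ones(T - 1), X])
(c, phi), *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ np.array([c, phi])
print(f"phi = {phi:.3f}, residual sd = {resid.std():.3f}")
# phi close to 1 means no detectable mean reversion (geometric random walk
# is adequate); phi noticeably below 1 indicates negative feedback.
```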
2209.05002
Chunhe Li
Leijun Ye, Chunhe Li
Quantifying the attractor landscape and transition path of distributed working memory from large-scale brain network
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Many cognitive processes, including working memory, recruit multiple distributed interacting brain regions to encode information. Understanding the mechanism underlying the cognitive function of working memory is a challenging problem that involves neural circuit configuration across multiple brain regions as well as stochastic transition dynamics between brain states. The energy landscape idea provides a tool to study the global stability and stochastic transition dynamics in such a distributed cognitive system. However, how to quantify the energy landscape in a realistic large-scale brain network remains unclear. Here, based on an anatomically constrained computational model of the large-scale macaque cortex, we quantified the underlying multistable attractor landscape of distributed working memory. In the absence of external stimulation, the landscape exhibits three stable attractors: a spontaneous state and two memory states. In the attractor landscape framework, the working memory function is governed by the change of landscape topography and the switch of the system state according to the task requirement. The barrier height inferred from the landscape topography quantifies the global stability of a memory state and its robustness to non-selective random fluctuations and distractor stimuli. The kinetic transition path identified by the minimum action path approach reveals that the spontaneous state serves as an intermediate state during the switch between the two memory states, that the memory stored in a cortical area of higher hierarchy is more stable, and that information flow follows the direction of the hierarchical structure. These results provide new insights into the underlying mechanism of distributed working memory function, and the landscape and kinetic path approach can be applied to other cognitive-function-related problems in brain networks.
[ { "created": "Mon, 12 Sep 2022 03:24:09 GMT", "version": "v1" } ]
2022-09-13
[ [ "Ye", "Leijun", "" ], [ "Li", "Chunhe", "" ] ]
Many cognitive processes, including working memory, recruit multiple distributed interacting brain regions to encode information. Understanding the mechanism underlying the cognitive function of working memory is a challenging problem that involves neural circuit configuration across multiple brain regions as well as stochastic transition dynamics between brain states. The energy landscape idea provides a tool to study the global stability and stochastic transition dynamics in such a distributed cognitive system. However, how to quantify the energy landscape in a realistic large-scale brain network remains unclear. Here, based on an anatomically constrained computational model of the large-scale macaque cortex, we quantified the underlying multistable attractor landscape of distributed working memory. In the absence of external stimulation, the landscape exhibits three stable attractors: a spontaneous state and two memory states. In the attractor landscape framework, the working memory function is governed by the change of landscape topography and the switch of the system state according to the task requirement. The barrier height inferred from the landscape topography quantifies the global stability of a memory state and its robustness to non-selective random fluctuations and distractor stimuli. The kinetic transition path identified by the minimum action path approach reveals that the spontaneous state serves as an intermediate state during the switch between the two memory states, that the memory stored in a cortical area of higher hierarchy is more stable, and that information flow follows the direction of the hierarchical structure. These results provide new insights into the underlying mechanism of distributed working memory function, and the landscape and kinetic path approach can be applied to other cognitive-function-related problems in brain networks.
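On a one-dimensional caricature, the landscape-quantification step can be sketched directly: simulate a bistable stochastic system and estimate U(x) = -ln P_ss(x) from the empirical state histogram. The dynamics and noise level below are illustrative stand-ins, not the cortical model.

```python
# Illustrative 1D caricature: Euler-Maruyama simulation of a bistable SDE,
# with the landscape estimated as U(x) = -ln P_ss(x) from the histogram.
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps, noise, burn = 1e-3, 500_000, 0.5, 50_000
xi = noise * np.sqrt(dt) * rng.standard_normal(n_steps)

x, xs = 0.0, np.empty(n_steps)
for i in range(n_steps):
    x += (x - x**3) * dt + xi[i]      # double-well drift: wells at x = +/-1
    xs[i] = x

hist, edges = np.histogram(xs[burn:], bins=80, density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
U = -np.log(hist + 1e-12)             # effective landscape
barrier = U[np.argmin(np.abs(centers))] - U.min()
print(f"barrier height ~ {barrier:.2f}")  # deeper wells = more robust states
```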
2004.07342
Tahmineh Azizi
Tahmineh Azizi, Robert Mugabi
Mathematical modeling of blood flow through arterial bifurcation
9 pages, 9 figures
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The blood vascular system consists of blood vessels such as arteries, arterioles, capillaries and veins that convey blood throughout the body. The pressure difference that exists between the ends of the vessels provides the driving force required for the flow of blood. In this study, we have used a model that is appropriate for blood flow simulation applications. We study the effect of the decomposition of plaques and/or tumour cells on the velocity profile and pressure of the incompressible, Newtonian fluid. We consider two different cases, vessels with and without bifurcations, and we display all the results using the commercial software COMSOL Multiphysics 5.2.
[ { "created": "Tue, 31 Mar 2020 08:09:51 GMT", "version": "v1" }, { "created": "Mon, 20 Apr 2020 01:22:09 GMT", "version": "v2" } ]
2020-04-21
[ [ "Azizi", "Tahmineh", "" ], [ "Mugabi", "Robert", "" ] ]
The blood vascular system consists of blood vessels such as arteries, arterioles, capillaries and veins that convey blood throughout the body. The pressure difference that exists between the ends of the vessels provides the driving force required for the flow of blood. In this study, we have used a model that is appropriate for blood flow simulation applications. We study the effect of the decomposition of plaques and/or tumour cells on the velocity profile and pressure of the incompressible, Newtonian fluid. We consider two different cases, vessels with and without bifurcations, and we display all the results using the commercial software COMSOL Multiphysics 5.2.
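The Newtonian baseline behind such simulations is steady Poiseuille flow; a quick sketch with illustrative values (not the COMSOL model) shows the parabolic velocity profile and the strong sensitivity of flow rate to vessel radius.

```python
# Steady Poiseuille flow of a Newtonian fluid in a rigid vessel:
# u(r) = dP/(4*mu*L) * (R^2 - r^2); a plaque is modeled crudely as a
# reduced radius R. All numbers are illustrative.
import numpy as np

mu = 3.5e-3            # blood viscosity, Pa*s (Newtonian approximation)
L, dP = 0.05, 200.0    # segment length (m) and pressure drop (Pa)

def profile(R, n=5):
    r = np.linspace(0.0, R, n)
    return r, dP / (4.0 * mu * L) * (R**2 - r**2)

for R in (2.0e-3, 1.5e-3):            # healthy vs. plaque-narrowed radius
    r, u = profile(R)
    Q = np.pi * dP * R**4 / (8.0 * mu * L)   # Hagen-Poiseuille flow rate
    print(f"R = {R*1e3:.1f} mm: centerline u = {u[0]:.3f} m/s, Q = {Q:.2e} m^3/s")
# The R^4 dependence of Q shows why modest narrowing sharply reduces flow
# (or raises the pressure needed) in a stenosed or bifurcating branch.
```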
2403.00043
Tin Vlasic
Rafael Josip Peni\'c, Tin Vla\v{s}i\'c, Roland G. Huber, Yue Wan, Mile \v{S}iki\'c
RiNALMo: General-Purpose RNA Language Models Can Generalize Well on Structure Prediction Tasks
18 pages, 7 figures
null
null
null
q-bio.BM cs.LG
http://creativecommons.org/licenses/by/4.0/
Ribonucleic acid (RNA) plays a variety of crucial roles in fundamental biological processes. Recently, RNA has become an interesting drug target, emphasizing the need to improve our understanding of its structures and functions. Over the years, sequencing technologies have produced an enormous amount of unlabeled RNA data, which hides important knowledge and potential. Motivated by the successes of protein language models, we introduce RiboNucleic Acid Language Model (RiNALMo) to help unveil the hidden code of RNA. RiNALMo is the largest RNA language model to date with $650$ million parameters pre-trained on $36$ million non-coding RNA sequences from several available databases. RiNALMo is able to extract hidden knowledge and capture the underlying structure information implicitly embedded within the RNA sequences. RiNALMo achieves state-of-the-art results on several downstream tasks. Notably, we show that its generalization capabilities can overcome the inability of other deep learning methods for secondary structure prediction to generalize on unseen RNA families. The code has been made publicly available on https://github.com/lbcb-sci/RiNALMo.
[ { "created": "Thu, 29 Feb 2024 14:50:58 GMT", "version": "v1" } ]
2024-03-04
[ [ "Penić", "Rafael Josip", "" ], [ "Vlašić", "Tin", "" ], [ "Huber", "Roland G.", "" ], [ "Wan", "Yue", "" ], [ "Šikić", "Mile", "" ] ]
Ribonucleic acid (RNA) plays a variety of crucial roles in fundamental biological processes. Recently, RNA has become an interesting drug target, emphasizing the need to improve our understanding of its structures and functions. Over the years, sequencing technologies have produced an enormous amount of unlabeled RNA data, which hides important knowledge and potential. Motivated by the successes of protein language models, we introduce RiboNucleic Acid Language Model (RiNALMo) to help unveil the hidden code of RNA. RiNALMo is the largest RNA language model to date with $650$ million parameters pre-trained on $36$ million non-coding RNA sequences from several available databases. RiNALMo is able to extract hidden knowledge and capture the underlying structure information implicitly embedded within the RNA sequences. RiNALMo achieves state-of-the-art results on several downstream tasks. Notably, we show that its generalization capabilities can overcome the inability of other deep learning methods for secondary structure prediction to generalize on unseen RNA families. The code has been made publicly available on https://github.com/lbcb-sci/RiNALMo.
1904.06517
Marta Stepniewska-Dziubinska
Marta M. Stepniewska-Dziubinska, Piotr Zielenkiewicz and Pawel Siedlecki
Improving detection of protein-ligand binding sites with 3D segmentation
null
Sci Rep 10, 5035 (2020)
10.1038/s41598-020-61860-z
null
q-bio.BM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years machine learning (ML) has taken the bio- and cheminformatics fields by storm, providing new solutions for a vast repertoire of problems related to protein sequence, structure, and interaction analysis. ML techniques, deep neural networks especially, have proven more effective than classical models for tasks like predicting the binding affinity of a molecular complex. In this work we investigated an earlier stage of the drug discovery process - finding druggable pockets on the protein surface that can later be used to design active molecules. For this purpose we developed a 3D fully convolutional neural network capable of binding site segmentation. Our solution has high prediction accuracy and provides intuitive representations of the results, which makes it easy to incorporate into drug discovery projects. The model's source code, together with scripts for the most common use cases, is freely available at http://gitlab.com/cheminfIBB/kalasanty
[ { "created": "Sat, 13 Apr 2019 10:10:55 GMT", "version": "v1" }, { "created": "Sat, 28 Mar 2020 13:26:25 GMT", "version": "v2" } ]
2020-03-31
[ [ "Stepniewska-Dziubinska", "Marta M.", "" ], [ "Zielenkiewicz", "Piotr", "" ], [ "Siedlecki", "Pawel", "" ] ]
In recent years machine learning (ML) has taken the bio- and cheminformatics fields by storm, providing new solutions for a vast repertoire of problems related to protein sequence, structure, and interaction analysis. ML techniques, deep neural networks especially, have proven more effective than classical models for tasks like predicting the binding affinity of a molecular complex. In this work we investigated an earlier stage of the drug discovery process - finding druggable pockets on the protein surface that can later be used to design active molecules. For this purpose we developed a 3D fully convolutional neural network capable of binding site segmentation. Our solution has high prediction accuracy and provides intuitive representations of the results, which makes it easy to incorporate into drug discovery projects. The model's source code, together with scripts for the most common use cases, is freely available at http://gitlab.com/cheminfIBB/kalasanty
1308.0510
Stefan M\"uller
Stefan M\"uller, Georg Regensburger, Ralf Steuer
Enzyme allocation problems in kinetic metabolic networks: Optimal solutions are elementary flux modes
20 pages, 3 figures
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The survival and proliferation of cells and organisms require a highly coordinated allocation of cellular resources to ensure the efficient synthesis of cellular components. In particular, the total enzymatic capacity for cellular metabolism is limited by finite resources that are shared between all enzymes, such as cytosolic space, energy expenditure for amino-acid synthesis, or micro-nutrients. While extensive work has been done to study constrained optimization problems based only on stoichiometric information, mathematical results that characterize the optimal flux in kinetic metabolic networks are still scarce. Here, we study constrained enzyme allocation problems with general kinetics, using the theory of oriented matroids. We give a rigorous proof for the fact that optimal solutions of the non-linear optimization problem are elementary flux modes. This finding has significant consequences for our understanding of optimality in metabolic networks as well as for the identification of metabolic switches and the computation of optimal flux distributions in kinetic metabolic networks.
[ { "created": "Fri, 2 Aug 2013 14:29:34 GMT", "version": "v1" }, { "created": "Mon, 9 Dec 2013 21:45:11 GMT", "version": "v2" } ]
2013-12-11
[ [ "Müller", "Stefan", "" ], [ "Regensburger", "Georg", "" ], [ "Steuer", "Ralf", "" ] ]
The survival and proliferation of cells and organisms require a highly coordinated allocation of cellular resources to ensure the efficient synthesis of cellular components. In particular, the total enzymatic capacity for cellular metabolism is limited by finite resources that are shared between all enzymes, such as cytosolic space, energy expenditure for amino-acid synthesis, or micro-nutrients. While extensive work has been done to study constrained optimization problems based only on stoichiometric information, mathematical results that characterize the optimal flux in kinetic metabolic networks are still scarce. Here, we study constrained enzyme allocation problems with general kinetics, using the theory of oriented matroids. We give a rigorous proof for the fact that optimal solutions of the non-linear optimization problem are elementary flux modes. This finding has significant consequences for our understanding of optimality in metabolic networks as well as for the identification of metabolic switches and the computation of optimal flux distributions in kinetic metabolic networks.
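The geometric fact that the optimum sits on a single elementary flux mode can be seen in a two-pathway toy instance: a linear program with a shared enzyme budget puts all flux on one pathway. The costs below are hypothetical.

```python
# Toy instance: maximize output flux subject to a shared enzyme budget.
# The LP optimum sits at a vertex, i.e. flux concentrates on a single
# elementary mode rather than on a mixture of pathways.
import numpy as np
from scipy.optimize import linprog

# Two parallel pathways v1, v2 converting S to P; enzyme costs per unit
# flux differ (hypothetical numbers), total budget is 1.
costs = np.array([1.0, 0.4])          # pathway 2 is cheaper per unit flux
res = linprog(c=[-1.0, -1.0],         # maximize v1 + v2 (production of P)
              A_ub=[costs], b_ub=[1.0],
              bounds=[(0, None), (0, None)])
print(res.x)                          # -> [0, 2.5]: all flux on one pathway
# A strict interior mixture can never beat the best single mode here,
# which is the content of the optimality result in miniature.
```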
1007.4120
Tsvi Tlusty
Tsvi Tlusty
Casting Polymer Nets to Optimize Noisy Molecular Codes
PNAS 2008
PNAS June 17, 2008 vol. 105 no. 24 8238-8243
10.1073/pnas.0710274105
null
q-bio.QM cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Life relies on the efficient performance of molecular codes, which relate symbols and meanings via error-prone molecular recognition. We describe how optimizing a code to withstand the impact of molecular recognition noise may be approximated by the statistics of a two-dimensional network made of polymers. The noisy code is defined by partitioning the space of symbols into regions according to their meanings. The "polymers" are the boundaries between these regions and their statistics defines the cost and the quality of the noisy code. When the parameters that control the cost-quality balance are varied, the polymer network undergoes a first-order transition, where the number of encoded meanings rises discontinuously. Effects of population dynamics on the evolution of molecular codes are discussed.
[ { "created": "Fri, 23 Jul 2010 13:04:53 GMT", "version": "v1" } ]
2010-07-26
[ [ "Tlusty", "Tsvi", "" ] ]
Life relies on the efficient performance of molecular codes, which relate symbols and meanings via error-prone molecular recognition. We describe how optimizing a code to withstand the impact of molecular recognition noise may be approximated by the statistics of a two-dimensional network made of polymers. The noisy code is defined by partitioning the space of symbols into regions according to their meanings. The "polymers" are the boundaries between these regions and their statistics defines the cost and the quality of the noisy code. When the parameters that control the cost-quality balance are varied, the polymer network undergoes a first-order transition, where the number of encoded meanings rises discontinuously. Effects of population dynamics on the evolution of molecular codes are discussed.
1910.00190
Fenix Huang
Fenix W. Huang and Christopher L. Barrett and Christian M. Reidys
The energy-spectrum of bicompatible sequences
20 pages, 10 Figures
null
null
null
q-bio.BM math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: Genotype-phenotype maps provide a meaningful filtration of sequence space and RNA secondary structures are particular such phenotypes. Compatible sequences, i.e.~sequences that satisfy the base pairing constraints of a given RNA structure, play an important role in the context of neutral networks and inverse folding. Sequences satisfying the constraints of two structures simultaneously are called bicompatible, and phenotypic change, induced by erroneously replicating populations of RNA sequences, is closely connected to bicompatibility. Furthermore, bicompatible sequences are relevant for riboswitch sequences, beacons of evolution, realizing two distinct phenotypes. Results: We present a full loop energy model Boltzmann sampler of bicompatible sequences for pairs of structures. The novel dynamic programming algorithm is based on a topological framework encapsulating the relations between loops. We utilize our sequence sampler to study the energy spectra and density of bicompatible sequences, the rankings of the structures and key properties for evolutionary transitions. Conclusion: Our analysis of riboswitch sequences shows that key properties of bicompatible sequences depend on the particular pair of structures. While there always exist bicompatible sequences for random structure pairs, they are less suited to facilitate transitions. We show that native riboswitch sequences exhibit a distinct signature with regards to the ranking of their two phenotypes relative to the minimum free energy, suggesting a new criterion for identifying native sequences and sequences subjected to evolutionary pressure.
[ { "created": "Tue, 1 Oct 2019 03:57:27 GMT", "version": "v1" } ]
2019-10-02
[ [ "Huang", "Fenix W.", "" ], [ "Barrett", "Christopher L.", "" ], [ "Reidys", "Christian M.", "" ] ]
Background: Genotype-phenotype maps provide a meaningful filtration of sequence space and RNA secondary structures are particular such phenotypes. Compatible sequences, i.e.~sequences that satisfy the base pairing constraints of a given RNA structure, play an important role in the context of neutral networks and inverse folding. Sequences satisfying the constraints of two structures simultaneously are called bicompatible, and phenotypic change, induced by erroneously replicating populations of RNA sequences, is closely connected to bicompatibility. Furthermore, bicompatible sequences are relevant for riboswitch sequences, beacons of evolution, realizing two distinct phenotypes. Results: We present a full loop energy model Boltzmann sampler of bicompatible sequences for pairs of structures. The novel dynamic programming algorithm is based on a topological framework encapsulating the relations between loops. We utilize our sequence sampler to study the energy spectra and density of bicompatible sequences, the rankings of the structures and key properties for evolutionary transitions. Conclusion: Our analysis of riboswitch sequences shows that key properties of bicompatible sequences depend on the particular pair of structures. While there always exist bicompatible sequences for random structure pairs, they are less suited to facilitate transitions. We show that native riboswitch sequences exhibit a distinct signature with regards to the ranking of their two phenotypes relative to the minimum free energy, suggesting a new criterion for identifying native sequences and sequences subjected to evolutionary pressure.
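A drastically simplified bicompatible sampler illustrates the constraint structure: merge the base pairs of both structures into one graph and propagate Watson-Crick complementarity through each connected component. Unlike the paper's full loop-energy Boltzmann sampler, this draws uniformly, and it ignores wobble pairs and energies entirely.

```python
# Uniform sampler of bicompatible sequences under Watson-Crick pairing
# only. Each connected component of the merged pairing graph is fixed by
# one free nucleotide choice; odd cycles make a component unsatisfiable.
import random

COMP = {"A": "U", "U": "A", "G": "C", "C": "G"}

def pairs(db):
    stack, out = [], []
    for i, ch in enumerate(db):          # parse dot-bracket notation
        if ch == "(":
            stack.append(i)
        elif ch == ")":
            out.append((stack.pop(), i))
    return out

def sample_bicompatible(s1, s2, rng=random):
    n = len(s1)
    adj = [[] for _ in range(n)]
    for i, j in pairs(s1) + pairs(s2):   # merge constraints of both structures
        adj[i].append(j)
        adj[j].append(i)
    seq = [None] * n
    for start in range(n):
        if seq[start] is None:
            seq[start] = rng.choice("AUGC")
            stack = [start]
            while stack:                 # propagate complementarity
                i = stack.pop()
                for j in adj[i]:
                    if seq[j] is None:
                        seq[j] = COMP[seq[i]]
                        stack.append(j)
                    elif seq[j] != COMP[seq[i]]:
                        return None      # odd cycle: no WC-compatible fill
    return "".join(seq)

print(sample_bicompatible("((..))..", "..((..))"))
```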
2401.02061
XiaoPeng Shi
Xiaopeng Shi, Chuanhou Gao and Denis Dochain
Controlling the occurrence sequence of reaction modules through biochemical relaxation oscillators
null
null
null
null
q-bio.MN math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Embedding sequential computations in biochemical environments is challenging because the computations are carried out by chemical reactions, which are inherently disordered. In this paper we apply modular design to specific calculations through chemical reactions and provide a design scheme of biochemical oscillator models in order to generate periodical species for the order regulation of these reaction modules. We take the case of arbitrary multi-module regulation into consideration, analyze the main errors in the regulation process under \textit{mass-action kinetics} and demonstrate our design scheme under existing synthetic biochemical oscillator models.
[ { "created": "Thu, 4 Jan 2024 04:54:50 GMT", "version": "v1" } ]
2024-01-05
[ [ "Shi", "Xiaopeng", "" ], [ "Gao", "Chuanhou", "" ], [ "Dochain", "Denis", "" ] ]
Embedding sequential computations in biochemical environments is challenging because the computations are carried out by chemical reactions, which are inherently disordered. In this paper we apply modular design to specific calculations through chemical reactions and provide a design scheme of biochemical oscillator models in order to generate periodical species for the order regulation of these reaction modules. We take the case of arbitrary multi-module regulation into consideration, analyze the main errors in the regulation process under \textit{mass-action kinetics} and demonstrate our design scheme under existing synthetic biochemical oscillator models.
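The gating idea can be illustrated with any mass-action oscillator; below, a Brusselator's periodic species alternately enables two notional modules via a threshold. The oscillator and threshold are generic choices, not the paper's constructions.

```python
# A mass-action oscillator (the Brusselator) produces a periodic species X
# whose high/low phases alternately enable two reaction modules, imposing
# an order on otherwise concurrent chemistry.
import numpy as np
from scipy.integrate import solve_ivp

a, b = 1.0, 3.0                      # oscillatory regime (b > 1 + a^2)

def brusselator(t, z):
    x, y = z
    return [a + x * x * y - (b + 1.0) * x, b * x - x * x * y]

sol = solve_ivp(brusselator, (0, 60), [1.0, 1.0], max_step=0.01)
x = sol.y[0]
module = np.where(x > 1.5, "module-1", "module-2")   # phase-dependent gate
switches = np.sum(module[1:] != module[:-1])
print(f"module switches over the run: {switches}")
# Each oscillation period executes module-1 then module-2 exactly once.
```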
1112.2741
Steven Frank
Steven A. Frank
Natural selection. III. Selection versus transmission and the levels of selection
null
Journal of Evolutionary Biology 25:227-243 (2012)
10.1111/j.1420-9101.2011.02431.x
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
George Williams defined an evolutionary unit as hereditary information for which the selection bias between competing units dominates the informational decay caused by imperfect transmission. In this article, I extend Williams' approach to show that the ratio of selection bias to transmission bias provides a unifying framework for diverse biological problems. Specific examples include Haldane and Lande's mutation-selection balance, Eigen's error threshold and quasispecies, Van Valen's clade selection, Price's multilevel formulation of group selection, Szathmary and Demeter's evolutionary origin of primitive cells, Levin and Bull's short-sighted evolution of HIV virulence, Frank's timescale analysis of microbial metabolism, and Maynard Smith and Szathmary's major transitions in evolution. The insights from these diverse applications lead to a deeper understanding of kin selection, group selection, multilevel evolutionary analysis, and the philosophical problems of evolutionary units and individuality.
[ { "created": "Mon, 12 Dec 2011 23:06:51 GMT", "version": "v1" } ]
2012-01-17
[ [ "Frank", "Steven A.", "" ] ]
George Williams defined an evolutionary unit as hereditary information for which the selection bias between competing units dominates the informational decay caused by imperfect transmission. In this article, I extend Williams' approach to show that the ratio of selection bias to transmission bias provides a unifying framework for diverse biological problems. Specific examples include Haldane and Lande's mutation-selection balance, Eigen's error threshold and quasispecies, Van Valen's clade selection, Price's multilevel formulation of group selection, Szathmary and Demeter's evolutionary origin of primitive cells, Levin and Bull's short-sighted evolution of HIV virulence, Frank's timescale analysis of microbial metabolism, and Maynard Smith and Szathmary's major transitions in evolution. The insights from these diverse applications lead to a deeper understanding of kin selection, group selection, multilevel evolutionary analysis, and the philosophical problems of evolutionary units and individuality.
1012.5547
Domenico Napoletani
Domenico Napoletani, Michele Signore, Timothy Sauer, Lance Liotta, Emanuel Petricoin
Homologous Control of Protein Signaling Networks
33 pages, 6 figures
Journal of Theoretical Biology 279 (2011) 29-43
10.1016/j.jtbi.2011.03.020
null
q-bio.MN physics.bio-ph physics.data-an physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a previous paper we introduced a method called augmented sparse reconstruction (ASR) that identifies links among nodes of ordinary differential equation networks, given a small set of observed trajectories with various initial conditions. The main purpose of that technique was to reconstruct intracellular protein signaling networks. In this paper we show that a recursive augmented sparse reconstruction generates artificial networks that are homologous to a large reference network, in the sense that kinase inhibition of several reactions in the network alters the trajectories of a sizable number of proteins in comparable ways for reference and reconstructed networks. We show this result using a large in silico model of the epidermal growth factor receptor (EGF-R) driven signaling cascade to generate the data used in the reconstruction algorithm. The most significant consequence of this observed homology is that a nearly optimal combinatorial dosage of kinase inhibitors can be inferred, for many nodes, from the reconstructed network, a result potentially useful for a variety of applications in personalized medicine.
[ { "created": "Sun, 26 Dec 2010 23:01:30 GMT", "version": "v1" } ]
2011-04-18
[ [ "Napoletani", "Domenico", "" ], [ "Signore", "Michele", "" ], [ "Sauer", "Timothy", "" ], [ "Liotta", "Lance", "" ], [ "Petricoin", "Emanuel", "" ] ]
In a previous paper we introduced a method called augmented sparse reconstruction (ASR) that identifies links among nodes of ordinary differential equation networks, given a small set of observed trajectories with various initial conditions. The main purpose of that technique was to reconstruct intracellular protein signaling networks. In this paper we show that a recursive augmented sparse reconstruction generates artificial networks that are homologous to a large reference network, in the sense that kinase inhibition of several reactions in the network alters the trajectories of a sizable number of proteins in comparable ways for reference and reconstructed networks. We show this result using a large in silico model of the epidermal growth factor receptor (EGF-R) driven signaling cascade to generate the data used in the reconstruction algorithm. The most significant consequence of this observed homology is that a nearly optimal combinatorial dosage of kinase inhibitors can be inferred, for many nodes, from the reconstructed network, a result potentially useful for a variety of applications in personalized medicine.
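A hedged sketch of the basic idea underlying this family of methods (sparse regression of estimated derivatives on observed states; the paper's ASR adds an augmentation step on top of this) on synthetic linear ODE data, using scikit-learn's Lasso:

```python
# Recover a sparse coupling matrix of a linear ODE x' = A x from one
# simulated trajectory, by L1-penalized regression of finite-difference
# derivatives on the states. Synthetic data; not the paper's ASR itself.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, dt, steps = 10, 0.01, 400
A_true = np.where(rng.random((n, n)) < 0.15, rng.normal(size=(n, n)), 0.0)
np.fill_diagonal(A_true, -1.0)  # decay terms keep trajectories bounded

# Simulate one trajectory with forward Euler.
X = np.empty((steps, n))
X[0] = rng.normal(size=n)
for k in range(steps - 1):
    X[k + 1] = X[k] + dt * A_true @ X[k]

dXdt = np.gradient(X, dt, axis=0)          # estimated derivatives
A_hat = np.vstack([Lasso(alpha=1e-3, max_iter=10000).fit(X, dXdt[:, i]).coef_
                   for i in range(n)])

print("recovered true links:", np.sum((np.abs(A_hat) > 0.05) & (A_true != 0)))
```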
q-bio/0605025
Stuart Borrett
S. R. Borrett, W. Bridewell, P. Langely, K. R. Arrigo
A method for representing and developing process models
submitted to Ecological Complexity; 28 pages, 9 tables, 1 figure
null
null
null
q-bio.QM q-bio.PE
null
Scientists investigate the dynamics of complex systems with quantitative models, employing them to synthesize knowledge, to explain observations, and to forecast future system behavior. Complete specification of systems is impossible, so models must be simplified abstractions. Thus, the art of modeling involves deciding which system elements to include and determining how they should be represented. We view modeling as search through a space of candidate models that is guided by model objectives, theoretical knowledge, and empirical data. In this contribution, we introduce a method for representing process-based models that facilitates the discovery of models that explain observed behavior. This representation casts dynamic systems as interacting sets of processes that act on entities. Using this approach, a modeler first encodes relevant ecological knowledge into a library of generic entities and processes, then instantiates these theoretical components, and finally assembles candidate models from these elements. We illustrate this methodology with a model of the Ross Sea ecosystem.
[ { "created": "Tue, 16 May 2006 23:00:37 GMT", "version": "v1" } ]
2007-05-23
[ [ "Borrett", "S. R.", "" ], [ "Bridewell", "W.", "" ], [ "Langely", "P.", "" ], [ "Arrigo", "K. R.", "" ] ]
Scientists investigate the dynamics of complex systems with quantitative models, employing them to synthesize knowledge, to explain observations, and to forecast future system behavior. Complete specification of systems is impossible, so models must be simplified abstractions. Thus, the art of modeling involves deciding which system elements to include and determining how they should be represented. We view modeling as search through a space of candidate models that is guided by model objectives, theoretical knowledge, and empirical data. In this contribution, we introduce a method for representing process-based models that facilitates the discovery of models that explain observed behavior. This representation casts dynamic systems as interacting sets of processes that act on entities. Using this approach, a modeler first encodes relevant ecological knowledge into a library of generic entities and processes, then instantiates these theoretical components, and finally assembles candidate models from these elements. We illustrate this methodology with a model of the Ross Sea ecosystem.
1305.2622
Marcelo Sobottka
Eduardo Garibaldi and Marcelo Sobottka
A nonsmooth two-sex population model
18 pages, 6 figures. Section 2, in which the model is presented, was rewritten to better explain the elements of the proposed model. The description of parameter "r" was corrected
Mathematical Biosciences (2014), 253, 1-10
10.1016/j.mbs.2014.03.015
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper considers a two-dimensional logistic model to study populations with two genders. The growth behavior of a population is guided by two coupled ordinary differential equations given by a non-differentiable vector field whose parameters are the secondary sex ratio (the ratio of males to females at time of birth), inter-, intra- and outer-gender competitions, fertility and mortality rates and a mating function. For the case where there is no inter-gender competition and the mortality rates are negligible with respect to the density-dependent mortality, using geometrical techniques, we analyze the singularities and the basin of attraction of the system, determining the relationships between the parameters for which the system presents an equilibrium point. In particular, we describe conditions on the secondary sex ratio and discuss the role of the average number of female sexual partners of each male for the conservation of a two-sex species.
[ { "created": "Sun, 12 May 2013 19:16:23 GMT", "version": "v1" }, { "created": "Sun, 7 Jul 2013 16:43:24 GMT", "version": "v2" }, { "created": "Sun, 18 Aug 2013 19:55:53 GMT", "version": "v3" }, { "created": "Wed, 4 Jun 2014 01:21:44 GMT", "version": "v4" } ]
2014-06-05
[ [ "Garibaldi", "Eduardo", "" ], [ "Sobottka", "Marcelo", "" ] ]
This paper considers a two-dimensional logistic model to study populations with two genders. The growth behavior of a population is guided by two coupled ordinary differential equations given by a non-differentiable vector field whose parameters are the secondary sex ratio (the ratio of males to females at time of birth), inter-, intra- and outer-gender competitions, fertility and mortality rates and a mating function. For the case where there is no inter-gender competition and the mortality rates are negligible with respect to the density-dependent mortality, using geometrical techniques, we analyze the singularities and the basin of attraction of the system, determining the relationships between the parameters for which the system presents an equilibrium point. In particular, we describe conditions on the secondary sex ratio and discuss the role of the average number of female sexual partners of each male for the conservation of a two-sex species.
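For concreteness, the following sketch integrates a generic two-sex logistic model with a standard harmonic-mean mating function; the paper's nonsmooth vector field and parameter choices differ, and all coefficients here are placeholders.

```python
# Illustrative two-sex logistic model, not the paper's exact system.
from scipy.integrate import solve_ivp

r = 0.5                      # secondary sex ratio: fraction of newborn females
b, mu, c = 2.0, 0.1, 0.01    # hypothetical birth, mortality, competition rates

def mating(f, m):
    """Harmonic-mean mating function; zero when either sex is absent."""
    return 2.0 * f * m / (f + m) if f + m > 0 else 0.0

def rhs(t, z):
    f, m = z
    births = b * mating(f, m)
    total = f + m            # density-dependent mortality acts on everyone
    return [r * births - mu * f - c * total * f,
            (1.0 - r) * births - mu * m - c * total * m]

sol = solve_ivp(rhs, (0.0, 100.0), [1.0, 1.0])
print("final (females, males):", sol.y[:, -1])
```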
1307.4291
Olivier Rivoire
Ivan Junier and Olivier Rivoire
Synteny in Bacterial Genomes: Inference, Organization and Evolution
null
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Genes are not located randomly along genomes. Synteny, the conservation of their relative positions in genomes of different species, reflects fundamental constraints on natural evolution. We present approaches to infer pairs of co-localized genes from multiple genomes, describe their organization, and study their evolutionary history. In bacterial genomes, we thus identify synteny units, or "syntons", which are clusters of proximal genes that encompass and extend operons. The size distribution of these syntons divides them into large syntons, which correspond to fundamental macro-molecular complexes of bacteria, and smaller ones, which display a remarkable exponential distribution of sizes. This distribution is "universal" in two respects: it holds for vastly different genomes, and for functionally distinct genes. Similar statistical laws have been reported previously in studies of bacterial genomes, and generally attributed to purifying selection or neutral processes. Here, we perform a new analysis based on the concept of parsimony, and find that the prevailing evolutionary mechanism behind the formation of small syntons is a selective process of gene aggregation. Altogether, our results imply a common evolutionary process that selectively shapes the organization and diversity of bacterial genomes.
[ { "created": "Tue, 16 Jul 2013 14:41:59 GMT", "version": "v1" } ]
2013-07-17
[ [ "Junier", "Ivan", "" ], [ "Rivoire", "Olivier", "" ] ]
Genes are not located randomly along genomes. Synteny, the conservation of their relative positions in genomes of different species, reflects fundamental constraints on natural evolution. We present approaches to infer pairs of co-localized genes from multiple genomes, describe their organization, and study their evolutionary history. In bacterial genomes, we thus identify synteny units, or "syntons", which are clusters of proximal genes that encompass and extend operons. The size distribution of these syntons divides them into large syntons, which correspond to fundamental macro-molecular complexes of bacteria, and smaller ones, which display a remarkable exponential distribution of sizes. This distribution is "universal" in two respects: it holds for vastly different genomes, and for functionally distinct genes. Similar statistical laws have been reported previously in studies of bacterial genomes, and generally attributed to purifying selection or neutral processes. Here, we perform a new analysis based on the concept of parsimony, and find that the prevailing evolutionary mechanism behind the formation of small syntons is a selective process of gene aggregation. Altogether, our results imply a common evolutionary process that selectively shapes the organization and diversity of bacterial genomes.
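To make the distributional claim concrete, here is a small sketch that fits an exponential law to synton sizes by maximum likelihood; the minimum size, scale, and sample are invented for illustration, not the paper's data.

```python
# MLE fit of a (shifted) exponential law to synthetic synton sizes:
# for sizes >= min_size, the rate estimate is 1 / mean(size - min_size).
import numpy as np

rng = np.random.default_rng(1)
min_size = 2
sizes = min_size + rng.exponential(scale=3.0, size=5000)  # synthetic syntons

rate_hat = 1.0 / np.mean(sizes - min_size)
print(f"fitted decay rate: {rate_hat:.3f} (true 1/3 = 0.333)")

# A straight line on a log-scale survival plot indicates exponentiality.
s = np.sort(sizes)
survival = 1.0 - np.arange(1, len(s) + 1) / len(s)
print(np.log(survival[:-1])[:5])  # roughly linear in s for exponential data
```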
0901.4211
Pradeep Kumar Mohanty
Sushmita Mookherjee, Mithun Sinha, Saikat Mukhopadhyay, Nitai P. Bhattacharyya, and P. K. Mohanty
MicroRNA Interaction network in human: implications of clustered microRNA in biological pathways and genetic diseases
11 pages, 7 eps figures (Supplementary material is available on request)
Online Journal of Bioinformatics 10 (2), 280 (2009)
null
null
q-bio.GN q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A novel group of small non-coding RNAs, known as microRNAs (miRNAs), is predicted to regulate as much as 90% of the coding genes in humans. The diversity and abundance of miRNA targets offer an enormous level of combinatorial possibilities and suggest that miRNAs and their targets form a complex regulatory network. In the present study, we analyzed 711 miRNAs and their 34,525 predicted targets in the miRBase database, which generate a complex bipartite network in which a large number of genes form hubs. Hub genes (9877 in total) are significantly over-represented in specific molecular functions, biological processes and biological pathways, as revealed by analysis using PANTHER. We further construct a miRNA co-target network by linking every pair of miRNAs that co-target at least one gene. The weights of the links, taken to be the number of co-targets of each pair of miRNAs, vary widely, and we could erase several links while keeping the relevant features of the network intact. The largest connected subgraph thus obtained contains 479 miRNAs. More than 75% of the miRNAs deregulated in 15 different diseases, collected from published data, are found in this largest subgraph. We further analyze this subgraph to obtain 70 small clusters containing 330 of the 479 miRNAs. We identified the biological pathways in which the co-targeted genes in the clusters are significantly over-represented, in comparison to genes that are not co-targeted by the miRNAs in the cluster. Using published data, we identified that specific clusters of miRNAs are associated with specific diseases by altering particular pathways. We propose that clusters of miRNAs that co-target genes, rather than single miRNAs, are important for miRNA regulation in diseases.
[ { "created": "Tue, 27 Jan 2009 10:02:50 GMT", "version": "v1" } ]
2010-09-03
[ [ "Mookherjee", "Sushmita", "" ], [ "Sinha", "Mithun", "" ], [ "Mukhopadhyay", "Saikat", "" ], [ "Bhattacharyya", "Nitai P.", "" ], [ "Mohanty", "P. K.", "" ] ]
A novel group of small non-coding RNAs, known as microRNAs (miRNAs), is predicted to regulate as much as 90% of the coding genes in humans. The diversity and abundance of miRNA targets offer an enormous level of combinatorial possibilities and suggest that miRNAs and their targets form a complex regulatory network. In the present study, we analyzed 711 miRNAs and their 34,525 predicted targets in the miRBase database, which generate a complex bipartite network in which a large number of genes form hubs. Hub genes (9877 in total) are significantly over-represented in specific molecular functions, biological processes and biological pathways, as revealed by analysis using PANTHER. We further construct a miRNA co-target network by linking every pair of miRNAs that co-target at least one gene. The weights of the links, taken to be the number of co-targets of each pair of miRNAs, vary widely, and we could erase several links while keeping the relevant features of the network intact. The largest connected subgraph thus obtained contains 479 miRNAs. More than 75% of the miRNAs deregulated in 15 different diseases, collected from published data, are found in this largest subgraph. We further analyze this subgraph to obtain 70 small clusters containing 330 of the 479 miRNAs. We identified the biological pathways in which the co-targeted genes in the clusters are significantly over-represented, in comparison to genes that are not co-targeted by the miRNAs in the cluster. Using published data, we identified that specific clusters of miRNAs are associated with specific diseases by altering particular pathways. We propose that clusters of miRNAs that co-target genes, rather than single miRNAs, are important for miRNA regulation in diseases.
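A minimal sketch of the co-target construction on a toy miRNA-to-target mapping (the study itself uses miRBase predictions; the names below are hypothetical), using networkx:

```python
# Build a weighted miRNA co-target network, prune weak links, and take the
# largest connected component, mirroring the steps described in the text.
import itertools
import networkx as nx

targets = {                       # hypothetical predicted targets
    "miR-1":   {"geneA", "geneB", "geneC"},
    "miR-21":  {"geneB", "geneC", "geneD"},
    "miR-155": {"geneE"},
}

G = nx.Graph()
G.add_nodes_from(targets)
for m1, m2 in itertools.combinations(targets, 2):
    w = len(targets[m1] & targets[m2])    # number of co-targeted genes
    if w > 0:
        G.add_edge(m1, m2, weight=w)

G.remove_edges_from([(u, v) for u, v, d in G.edges(data=True)
                     if d["weight"] < 2])  # erase weak links
largest = max(nx.connected_components(G), key=len)
print(largest)                             # {'miR-1', 'miR-21'}
```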
2012.09658
Eugene Shakhnovich
Sanchari Bhattacharyyaa, Shimon Bershtein, Bharat V. Adkara, Jaie Woodarda and Eugene I. Shakhnovich
Metabolic response to point mutations reveals principles of modulation of in vivo enzyme activity and phenotype
null
null
null
null
q-bio.BM q-bio.CB
http://creativecommons.org/licenses/by-sa/4.0/
The relationship between sequence variation and phenotype is poorly understood. Here we use metabolomic analysis to elucidate the molecular mechanism underlying the filamentous phenotype of E. coli strains that carry destabilizing mutations in Dihydrofolate Reductase (DHFR). We find that partial loss of DHFR activity causes an SOS response indicative of DNA damage and cell filamentation. This phenotype is triggered by an imbalance in deoxynucleotide levels, most prominently a disproportionate drop in intracellular dTTP. We show that the highly cooperative (Hill coefficient 2.5) in vivo activity of Thymidylate Kinase (Tmk), a downstream enzyme that phosphorylates dTMP to dTDP, is the cause of suboptimal dTTP levels. dTMP supplementation in the media rescues filamentation and restores in vivo Tmk kinetics to almost perfect Michaelis-Menten, like its kinetics in vitro. Overall, this study highlights the important role of the cellular environment in sculpting enzymatic kinetics, with system-level implications for bacterial phenotype.
[ { "created": "Thu, 17 Dec 2020 15:11:23 GMT", "version": "v1" } ]
2020-12-18
[ [ "Bhattacharyyaa", "Sanchari", "" ], [ "Bershtein", "Shimon", "" ], [ "Adkara", "Bharat V.", "" ], [ "Woodarda", "Jaie", "" ], [ "Shakhnovich", "Eugene I.", "" ] ]
The relationship between sequence variation and phenotype is poorly understood. Here we use metabolomic analysis to elucidate the molecular mechanism underlying the filamentous phenotype of E. coli strains that carry destabilizing mutations in Dihydrofolate Reductase (DHFR). We find that partial loss of DHFR activity causes an SOS response indicative of DNA damage and cell filamentation. This phenotype is triggered by an imbalance in deoxynucleotide levels, most prominently a disproportionate drop in intracellular dTTP. We show that the highly cooperative (Hill coefficient 2.5) in vivo activity of Thymidylate Kinase (Tmk), a downstream enzyme that phosphorylates dTMP to dTDP, is the cause of suboptimal dTTP levels. dTMP supplementation in the media rescues filamentation and restores in vivo Tmk kinetics to almost perfect Michaelis-Menten, like its kinetics in vitro. Overall, this study highlights the important role of the cellular environment in sculpting enzymatic kinetics, with system-level implications for bacterial phenotype.
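As a worked illustration of what a Hill coefficient of 2.5 means kinetically, the following compares a Hill curve with Michaelis-Menten kinetics (Hill coefficient 1); Vmax and K here are arbitrary placeholders, not fitted values from the paper.

```python
# Hill kinetics: v = Vmax * S^n / (K^n + S^n); n = 1 is Michaelis-Menten.
import numpy as np

def hill(s, vmax=1.0, k=1.0, n=1.0):
    return vmax * s**n / (k**n + s**n)

s = np.array([0.25, 0.5, 1.0, 2.0, 4.0])
print("Michaelis-Menten:", np.round(hill(s, n=1.0), 3))
print("Hill n = 2.5:    ", np.round(hill(s, n=2.5), 3))
# The steeper, switch-like response at n = 2.5 is what makes dTTP levels
# disproportionately sensitive to the upstream dTMP supply.
```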
2007.08910
Francois Fages
Mathieu Hemery (Lifeware), Fran\c{c}ois Fages (Lifeware), Sylvain Soliman (Lifeware)
On the Complexity of Quadratization for Polynomial Differential Equations
null
CMSB 2020: The 18th International Conference on Computational Methods in Systems Biology, Sep 2020, Konstanz, Germany
null
null
q-bio.QM cs.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Chemical reaction networks (CRNs) are a standard formalism used in chemistry and biology to reason about the dynamics of molecular interaction networks. In their interpretation by ordinary differential equations, CRNs provide a Turing-complete model of analog computation, in the sense that any computable function over the reals can be computed by a finite number of molecular species with a continuous CRN which approximates the result of that function in one of its components to arbitrary precision. The proof of that result is based on a previous result of Bournez et al. on the Turing-completeness of polynomial ordinary differential equations with polynomial initial conditions (PIVP). It uses an encoding of real variables by two non-negative variables for concentrations, and a transformation to an equivalent quadratic PIVP (i.e. with degrees at most 2) for restricting ourselves to at most bimolecular reactions. In this paper, we study the theoretical and practical complexities of the quadratic transformation. We show that both problems of minimizing either the number of variables (i.e., molecular species) or the number of monomials (i.e., elementary reactions) in a quadratic transformation of a PIVP are NP-hard. We present an encoding of those problems in MAX-SAT and show the practical complexity of this algorithm on a benchmark of quadratization problems inspired by CRN design problems.
[ { "created": "Fri, 17 Jul 2020 11:39:09 GMT", "version": "v1" }, { "created": "Mon, 27 Jul 2020 13:46:35 GMT", "version": "v2" } ]
2020-07-28
[ [ "Hemery", "Mathieu", "", "Lifeware" ], [ "Fages", "François", "", "Lifeware" ], [ "Soliman", "Sylvain", "", "Lifeware" ] ]
Chemical reaction networks (CRNs) are a standard formalism used in chemistry and biology to reason about the dynamics of molecular interaction networks. In their interpretation by ordinary differential equations, CRNs provide a Turing-complete model of analog computation, in the sense that any computable function over the reals can be computed by a finite number of molecular species with a continuous CRN which approximates the result of that function in one of its components to arbitrary precision. The proof of that result is based on a previous result of Bournez et al. on the Turing-completeness of polynomial ordinary differential equations with polynomial initial conditions (PIVP). It uses an encoding of real variables by two non-negative variables for concentrations, and a transformation to an equivalent quadratic PIVP (i.e. with degrees at most 2) for restricting ourselves to at most bimolecular reactions. In this paper, we study the theoretical and practical complexities of the quadratic transformation. We show that both problems of minimizing either the number of variables (i.e., molecular species) or the number of monomials (i.e., elementary reactions) in a quadratic transformation of a PIVP are NP-hard. We present an encoding of those problems in MAX-SAT and show the practical complexity of this algorithm on a benchmark of quadratization problems inspired by CRN design problems.
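A standard worked example of quadratization (not taken from the paper's benchmark): the cubic system x' = x^3 becomes quadratic by introducing y = x^2, since then x' = x*y and y' = 2*x*x' = 2*x^4 = 2*y^2. The sketch below verifies the equivalence numerically.

```python
# Quadratization demo: integrate the original cubic PIVP and its quadratic
# counterpart and check that the x components agree.
from scipy.integrate import solve_ivp

def cubic(t, z):          # original system: x' = x^3
    (x,) = z
    return [x**3]

def quadratic(t, z):      # quadratized system with y = x^2, degree <= 2
    x, y = z
    return [x * y, 2.0 * y**2]

x0 = 0.5
s1 = solve_ivp(cubic, (0.0, 1.0), [x0], rtol=1e-9, atol=1e-12)
s2 = solve_ivp(quadratic, (0.0, 1.0), [x0, x0**2], rtol=1e-9, atol=1e-12)
print(s1.y[0, -1], s2.y[0, -1])   # both close to 0.5 / sqrt(0.5) = 0.7071
```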
2303.12227
Christine Heitsch
Christine Heitsch and Chi N. Y. Huynh and Greg Johnston
On a barrier height problem for RNA branching
15 pages, 2 figures
null
null
null
q-bio.BM math.CO
http://creativecommons.org/licenses/by-nc-nd/4.0/
The branching of an RNA molecule is an important structural characteristic yet difficult to predict correctly, especially for longer sequences. Using plane trees as a combinatorial model for RNA folding, we consider the thermodynamic cost, known as the barrier height, of transitioning between branching configurations. Using branching skew as a coarse energy approximation, we characterize various types of paths in the discrete configuration landscape. In particular, we give sufficient conditions for a path to have both minimal length and minimal branching skew. The proofs offer some biological insights, notably the potential importance of both hairpin stability and domain architecture to higher resolution RNA barrier height analyses.
[ { "created": "Tue, 21 Mar 2023 23:03:34 GMT", "version": "v1" } ]
2023-03-23
[ [ "Heitsch", "Christine", "" ], [ "Huynh", "Chi N. Y.", "" ], [ "Johnston", "Greg", "" ] ]
The branching of an RNA molecule is an important structural characteristic yet difficult to predict correctly, especially for longer sequences. Using plane trees as a combinatorial model for RNA folding, we consider the thermodynamic cost, known as the barrier height, of transitioning between branching configurations. Using branching skew as a coarse energy approximation, we characterize various types of paths in the discrete configuration landscape. In particular, we give sufficient conditions for a path to have both minimal length and minimal branching skew. The proofs offer some biological insights, notably the potential importance of both hairpin stability and domain architecture to higher resolution RNA barrier height analyses.
1912.00108
Raouf Dridi Dr
Raouf Dridi, Hedayat Alghassi, Maen Obeidat, Sridhar Tayur
The Topology of Mutated Driver Pathways
Key words: topological data analysis, cancer genomics, mutation data, acute myeloid leukemia, glioblastoma multiforme, persistent homology, simplicial complex, Betti numbers, algebraic topology
null
null
null
q-bio.QM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Much progress has been made, and continues to be made, towards identifying candidate mutated driver pathways in cancer. However, no systematic approach to understanding how candidate pathways relate to each other for a given cancer (such as Acute myeloid leukemia), and how one type of cancer may be similar or different from another with regard to their respective pathways (Acute myeloid leukemia vs. Glioblastoma multiforme, for instance), has emerged thus far. Our work attempts to contribute to the understanding of the {\em space of pathways} through a novel topological framework. We illustrate our approach using mutation data (obtained from TCGA) of two types of tumors: Acute myeloid leukemia (AML) and Glioblastoma multiforme (GBM). We find that the space of pathways for AML is homotopy equivalent to a sphere, while that of GBM is equivalent to a genus-2 surface. We hope to trigger new types of questions (i.e., allow for novel kinds of hypotheses) towards a more comprehensive grasp of cancer.
[ { "created": "Sat, 30 Nov 2019 01:27:29 GMT", "version": "v1" } ]
2019-12-03
[ [ "Dridi", "Raouf", "" ], [ "Alghassi", "Hedayat", "" ], [ "Obeidat", "Maen", "" ], [ "Tayur", "Sridhar", "" ] ]
Much progress has been made, and continues to be made, towards identifying candidate mutated driver pathways in cancer. However, no systematic approach to understanding how candidate pathways relate to each other for a given cancer (such as Acute myeloid leukemia), and how one type of cancer may be similar or different from another with regard to their respective pathways (Acute myeloid leukemia vs. Glioblastoma multiforme, for instance), has emerged thus far. Our work attempts to contribute to the understanding of the {\em space of pathways} through a novel topological framework. We illustrate our approach using mutation data (obtained from TCGA) of two types of tumors: Acute myeloid leukemia (AML) and Glioblastoma multiforme (GBM). We find that the space of pathways for AML is homotopy equivalent to a sphere, while that of GBM is equivalent to a genus-2 surface. We hope to trigger new types of questions (i.e., allow for novel kinds of hypotheses) towards a more comprehensive grasp of cancer.
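For readers new to the machinery, here is a toy sketch of how Betti numbers (the invariants that distinguish a sphere from a genus-2 surface) are computed from boundary-matrix ranks; real analyses use persistent homology on mutation data, and this tiny complex is purely illustrative.

```python
# Betti numbers from boundary matrices: beta_k = dim C_k - rank d_k - rank d_{k+1}.
import numpy as np

# Hollow triangle (a circle): 3 vertices, 3 edges, no filled 2-simplex.
# Columns of d1 are the edges (0,1), (0,2), (1,2); rows are the vertices.
d1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]], dtype=float)

rank_d1 = np.linalg.matrix_rank(d1)   # = 2
beta0 = 3 - rank_d1                   # connected components (rank d0 = 0)
beta1 = 3 - rank_d1 - 0               # loops (no 2-simplices, so rank d2 = 0)
print(beta0, beta1)                   # 1 1 -> one component, one loop
```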
1905.11712
Aleksandra Arda\v{s}eva
Aleksandra Arda\v{s}eva, Robert A. Gatenby, Alexander R. A. Anderson, Helen M. Byrne, Philip K. Maini, Tommaso Lorenzi
Evolutionary dynamics of competing phenotype-structured populations in periodically fluctuating environments
33 pages, 8 figures
Journal of Mathematical Biology, 80, 775-807, 2020
10.1007/s00285-019-01441-5
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Living species, ranging from bacteria to animals, exist in environmental conditions that exhibit spatial and temporal heterogeneity which requires them to adapt. Risk-spreading through spontaneous phenotypic variations is a known concept in ecology, which is used to explain how species may survive when faced with the evolutionary risks associated with temporally varying environments. In order to support a deeper understanding of the adaptive role of spontaneous phenotypic variations in fluctuating environments, we consider a system of non-local partial differential equations modelling the evolutionary dynamics of two competing phenotype-structured populations in the presence of periodically oscillating nutrient levels. The two populations undergo spontaneous phenotypic variations at different rates. The phenotypic state of each individual is represented by a continuous variable, and the phenotypic landscape of the populations evolves in time due to variations in the nutrient level. Exploiting the analytical tractability of our model, we study the long-time behaviour of the solutions to obtain a detailed mathematical depiction of evolutionary dynamics. The results suggest that when nutrient levels undergo small and slow oscillations, it is evolutionarily more convenient to rarely undergo spontaneous phenotypic variations. Conversely, under relatively large and fast periodic oscillations in the nutrient levels, which bring about alternating cycles of starvation and nutrient abundance, higher rates of spontaneous phenotypic variations confer a competitive advantage. We discuss the implications of our results in the context of cancer metabolism.
[ { "created": "Tue, 28 May 2019 09:55:22 GMT", "version": "v1" }, { "created": "Fri, 23 Aug 2019 09:09:32 GMT", "version": "v2" } ]
2020-04-03
[ [ "Ardaševa", "Aleksandra", "" ], [ "Gatenby", "Robert A.", "" ], [ "Anderson", "Alexander R. A.", "" ], [ "Byrne", "Helen M.", "" ], [ "Maini", "Philip K.", "" ], [ "Lorenzi", "Tommaso", "" ] ]
Living species, ranging from bacteria to animals, exist in environmental conditions that exhibit spatial and temporal heterogeneity which requires them to adapt. Risk-spreading through spontaneous phenotypic variations is a known concept in ecology, which is used to explain how species may survive when faced with the evolutionary risks associated with temporally varying environments. In order to support a deeper understanding of the adaptive role of spontaneous phenotypic variations in fluctuating environments, we consider a system of non-local partial differential equations modelling the evolutionary dynamics of two competing phenotype-structured populations in the presence of periodically oscillating nutrient levels. The two populations undergo spontaneous phenotypic variations at different rates. The phenotypic state of each individual is represented by a continuous variable, and the phenotypic landscape of the populations evolves in time due to variations in the nutrient level. Exploiting the analytical tractability of our model, we study the long-time behaviour of the solutions to obtain a detailed mathematical depiction of evolutionary dynamics. The results suggest that when nutrient levels undergo small and slow oscillations, it is evolutionarily more convenient to rarely undergo spontaneous phenotypic variations. Conversely, under relatively large and fast periodic oscillations in the nutrient levels, which bring about alternating cycles of starvation and nutrient abundance, higher rates of spontaneous phenotypic variations confer a competitive advantage. We discuss the implications of our results in the context of cancer metabolism.
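A heavily simplified numerical sketch in the spirit of this model class, with one phenotype-structured population, an explicit diffusion term in phenotype space standing in for spontaneous phenotypic variation, and a sinusoidally shifting fitness peak standing in for the nutrient oscillation; all coefficients are invented for illustration.

```python
# Explicit-Euler discretization of dn/dt = (fitness(x,t) - rho(t)) n + theta * n_xx.
import numpy as np

x = np.linspace(0.0, 1.0, 101); dx = x[1] - x[0]
dt, T, theta = 1e-4, 2.0, 1e-3          # theta: rate of phenotypic variation
n = np.exp(-((x - 0.5) / 0.1) ** 2)     # initial phenotype distribution

for step in range(int(T / dt)):
    t = step * dt
    peak = 0.5 + 0.3 * np.sin(2 * np.pi * t)      # oscillating optimum
    fitness = 1.0 - 5.0 * (x - peak) ** 2
    rho = np.trapz(n, x)                          # total population size
    lap = np.zeros_like(n)                        # Laplacian, Neumann boundaries
    lap[1:-1] = (n[2:] - 2 * n[1:-1] + n[:-2]) / dx**2
    lap[0], lap[-1] = lap[1], lap[-2]
    n += dt * ((fitness - rho) * n + theta * lap)

print("mean phenotype:", np.trapz(x * n, x) / np.trapz(n, x))
```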
1404.7182
Andrea Perna
Andrea Perna, Guillaume Gregoire and Richard P. Mann
A note on the duality between interaction responses and mutual positions in flocking and schooling
null
Movement Ecology 2014 2:22
10.1186/s40462-014-0022-5
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: Recent research in animal behaviour has helped to determine how alignment, turning responses, and changes of speed mediate flocking and schooling interactions in different animal species. Here, we address specifically the problem of which interaction responses support different nearest neighbour configurations in terms of mutual position and distance. Results: We find that the different interaction rules observed in different animal species may be a simple consequence of the relative positions that individuals assume when they move together, and of the noise inherent in the movement of animals or associated with tracking inaccuracy. Conclusions: The anisotropic positioning of individuals with respect to their neighbours, in combination with noise, can explain several aspects of the movement responses observed in real animal groups, and should be considered explicitly in future models of flocking and schooling. By making a distinction between interaction responses involved in maintaining a preferred flock configuration, and interaction responses directed at changing it, we provide a framework for discriminating movement interactions that signal directional conflict from those underlying consensual group motion.
[ { "created": "Mon, 28 Apr 2014 21:58:47 GMT", "version": "v1" }, { "created": "Fri, 4 Jul 2014 08:37:33 GMT", "version": "v2" } ]
2018-09-05
[ [ "Perna", "Andrea", "" ], [ "Gregoire", "Guillaume", "" ], [ "Mann", "Richard P.", "" ] ]
Background: Recent research in animal behaviour has helped to determine how alignment, turning responses, and changes of speed mediate flocking and schooling interactions in different animal species. Here, we address specifically the problem of which interaction responses support different nearest neighbour configurations in terms of mutual position and distance. Results: We find that the different interaction rules observed in different animal species may be a simple consequence of the relative positions that individuals assume when they move together, and of the noise inherent in the movement of animals or associated with tracking inaccuracy. Conclusions: The anisotropic positioning of individuals with respect to their neighbours, in combination with noise, can explain several aspects of the movement responses observed in real animal groups, and should be considered explicitly in future models of flocking and schooling. By making a distinction between interaction responses involved in maintaining a preferred flock configuration, and interaction responses directed at changing it, we provide a framework for discriminating movement interactions that signal directional conflict from those underlying consensual group motion.
2006.00579
Tom Britton
Tom Britton and Frank Ball
Summer vacation and COVID-19: effects of metropolitan people going to summer provinces
null
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many countries are now investigating what the effects of summer vacation might be on the COVID-19 pandemic. Here one particular such question is addressed: what will happen if large numbers of metropolitan people visit a less populated province during the summer vacation? By means of a simple epidemic model, allowing for both short- and long-term visitors to the province, it is studied which features are most influential in determining whether such summer movements will result in a large number of infections among the province population. The method is applied to the island of Gotland off the southeast coast of Sweden. It is shown that the amount of mixing between the metropolitan and province groups and the fraction of metropolitan people being infectious upon arrival are most influential. Consequently, minimizing events gathering both the province and metropolitan groups and/or reducing the number of short-term visitors could substantially decrease spreading, as could measures to lower the fraction initially infectious upon arrival.
[ { "created": "Sun, 31 May 2020 18:11:53 GMT", "version": "v1" } ]
2020-06-02
[ [ "Britton", "Tom", "" ], [ "Ball", "Frank", "" ] ]
Many countries are now investigating what the effects of summer vacation might be on the COVID-19 pandemic. Here one particular such question is addressed: what will happen if large numbers of metropolitan people visit a less populated province during the summer vacation? By means of a simple epidemic model, allowing for both short- and long-term visitors to the province, it is studied which features are most influential in determining whether such summer movements will result in a large number of infections among the province population. The method is applied to the island of Gotland off the southeast coast of Sweden. It is shown that the amount of mixing between the metropolitan and province groups and the fraction of metropolitan people being infectious upon arrival are most influential. Consequently, minimizing events gathering both the province and metropolitan groups and/or reducing the number of short-term visitors could substantially decrease spreading, as could measures to lower the fraction initially infectious upon arrival.
2404.14821
Valerio Piomponi Mr.
Valerio Piomponi, Miroslav Krepl, Jiri Sponer and Giovanni Bussi
Molecular simulations to investigate the impact of N6-methylation in RNA recognition: Improving accuracy and precision of binding free energy prediction
null
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
N6-methyladenosine (m6A) is a prevalent RNA post-transcriptional modification that plays crucial roles in RNA stability, structural dynamics, and interactions with proteins. The YT521-B (YTH) family of proteins, which are notable m6A readers, function through their highly conserved YTH domain. Recent structural investigations and molecular dynamics (MD) simulations have shed light on the recognition mechanism of m6A by the YTHDC1 protein. Despite advancements, using MD to predict the stabilization induced by m6A on the free energy of binding between RNA and YTH proteins remains challenging, due to the inaccuracy of the employed force field and limited sampling. For instance, simulations often fail to sufficiently capture the hydration dynamics of the binding pocket. This study addresses these challenges through an innovative methodology that integrates metadynamics, alchemical simulations, and force-field refinement. Importantly, our research identifies hydration of the binding pocket as giving only a minor contribution to the binding free energy and emphasizes the critical importance of precisely tuning force-field parameters to experimental data. By employing a fitting strategy built on alchemical calculations, we refine the m6A partial-charge parameters, thereby enabling the simultaneous reproduction of the effects of N6-methylation on both the protein binding free energy and the thermodynamic stability of nine RNA duplexes. Our findings underscore the sensitivity of binding free energies to partial charges, highlighting the necessity for thorough parameterization and validation against experimental observations across a range of structural contexts.
[ { "created": "Tue, 23 Apr 2024 08:18:26 GMT", "version": "v1" } ]
2024-04-24
[ [ "Piomponi", "Valerio", "" ], [ "Krepl", "Miroslav", "" ], [ "Sponer", "Jiri", "" ], [ "Bussi", "Giovanni", "" ] ]
N6-methyladenosine (m6A) is a prevalent RNA post-transcriptional modification that plays crucial roles in RNA stability, structural dynamics, and interactions with proteins. The YT521-B (YTH) family of proteins, which are notable m6A readers, function through their highly conserved YTH domain. Recent structural investigations and molecular dynamics (MD) simulations have shed light on the recognition mechanism of m6A by the YTHDC1 protein. Despite advancements, using MD to predict the stabilization induced by m6A on the free energy of binding between RNA and YTH proteins remains challenging, due to the inaccuracy of the employed force field and limited sampling. For instance, simulations often fail to sufficiently capture the hydration dynamics of the binding pocket. This study addresses these challenges through an innovative methodology that integrates metadynamics, alchemical simulations, and force-field refinement. Importantly, our research identifies hydration of the binding pocket as giving only a minor contribution to the binding free energy and emphasizes the critical importance of precisely tuning force-field parameters to experimental data. By employing a fitting strategy built on alchemical calculations, we refine the m6A partial-charge parameters, thereby enabling the simultaneous reproduction of the effects of N6-methylation on both the protein binding free energy and the thermodynamic stability of nine RNA duplexes. Our findings underscore the sensitivity of binding free energies to partial charges, highlighting the necessity for thorough parameterization and validation against experimental observations across a range of structural contexts.
1603.06879
Enrico Chiovetto
Enrico Chiovetto, Andrea d'Avella and Martin Giese
A Unifying Framework for the Identification of Motor Primitives
33 pages, 8 figures, 1 table
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A long-standing hypothesis in neuroscience is that the central nervous system accomplishes complex motor behaviors through the combination of a small number of motor primitives. Many studies in the last couple of decades have identified motor primitives at the kinematic, kinetic, and electromyographic levels, thus supporting modularity at different levels of organization in the motor system. However, these studies relied on heterogeneous definitions of motor primitives and on different algorithms for their identification. Standard unsupervised learning algorithms such as principal component analysis, independent component analysis, and non-negative matrix factorization, or more advanced techniques involving the estimation of temporal delays of the relevant mixture components, have been applied. This plurality of algorithms has made it difficult to compare and interpret results obtained across different studies. Moreover, how the different definitions of motor primitives relate to each other has never been examined systematically. Here we propose a comprehensive framework for the definition of different types of motor primitives and a single algorithm for their identification. By embedding smoothness priors and specific constraints in the underlying generative model, the algorithm can identify many different types of motor primitives. We assessed the identification performance of the algorithm both on simulated data sets, for which the properties of the primitives and of the corresponding combination parameters were known, and on experimental electromyographic and kinematic data sets, collected from human subjects accomplishing goal-oriented and rhythmic motor tasks. The identification accuracy of the new algorithm was typically equal to or better than the accuracy of other unsupervised learning algorithms used previously for the identification of the same types of primitives.
[ { "created": "Tue, 22 Mar 2016 17:23:15 GMT", "version": "v1" } ]
2016-03-23
[ [ "Chiovetto", "Enrico", "" ], [ "d'Avella", "Andrea", "" ], [ "Giese", "Martin", "" ] ]
A long-standing hypothesis in neuroscience is that the central nervous system accomplishes complex motor behaviors through the combination of a small number of motor primitives. Many studies in the last couple of decades have identified motor primitives at the kinematic, kinetic, and electromyographic levels, thus supporting modularity at different levels of organization in the motor system. However, these studies relied on heterogeneous definitions of motor primitives and on different algorithms for their identification. Standard unsupervised learning algorithms such as principal component analysis, independent component analysis, and non-negative matrix factorization, or more advanced techniques involving the estimation of temporal delays of the relevant mixture components, have been applied. This plurality of algorithms has made it difficult to compare and interpret results obtained across different studies. Moreover, how the different definitions of motor primitives relate to each other has never been examined systematically. Here we propose a comprehensive framework for the definition of different types of motor primitives and a single algorithm for their identification. By embedding smoothness priors and specific constraints in the underlying generative model, the algorithm can identify many different types of motor primitives. We assessed the identification performance of the algorithm both on simulated data sets, for which the properties of the primitives and of the corresponding combination parameters were known, and on experimental electromyographic and kinematic data sets, collected from human subjects accomplishing goal-oriented and rhythmic motor tasks. The identification accuracy of the new algorithm was typically equal to or better than the accuracy of other unsupervised learning algorithms used previously for the identification of the same types of primitives.
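As a point of reference, the most common special case in this literature, time-invariant muscle synergies extracted by non-negative matrix factorization, can be sketched in a few lines with scikit-learn on synthetic EMG; the paper's unified algorithm generalizes well beyond this.

```python
# NMF extraction of time-invariant synergies from a synthetic EMG matrix.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_muscles, n_samples, n_synergies = 12, 500, 3

W_true = rng.random((n_muscles, n_synergies))        # synergy vectors
H_true = rng.random((n_synergies, n_samples))        # activation coefficients
emg = W_true @ H_true + 0.01 * rng.random((n_muscles, n_samples))

model = NMF(n_components=n_synergies, init="nndsvda", max_iter=1000)
W = model.fit_transform(emg)     # estimated synergies (muscles x synergies)
H = model.components_            # estimated activations (synergies x time)

r2 = 1 - np.sum((emg - W @ H) ** 2) / np.sum((emg - emg.mean()) ** 2)
print(f"variance accounted for: {r2:.3f}")
```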
2211.08208
Jonathan Hidalgo
Nana Cabo Bizet, Jonanthan Hidalgo N\'u\~nez, Gil Estefano Rodr\'igez Rivera
Variations of the SIR model for COVID-19 evolution
10 pages, 28 figures
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
In this work, we discuss the SIR epidemiological model and different variations of it applied to the propagation of the COVID-19 pandemic; we employ data from the state of Guanajuato and from Mexico as a whole. We present some considerations that can improve the predictions made by those models. We consider a time-dependent infection rate, which we adjust to the data. Starting from a linear regime, where the affected populations are much smaller than the country or state population and the susceptible population (S) can be approximated in convenient units by S approximately 1, we fit the parameters. We also consider the case when the susceptible population starts departing from 1; for this case we adjust an effective contagion rate. We also explore the ratio of detected populations to the real ones, obtaining that, for the analyzed case, it is approximately 10%. We estimate the number of deaths by making a fit versus the recovered cases; this fit is, to first approximation, linear, but other powers can also give a good agreement. By testing predictions against past data, we conclude that adaptations of the SIR model can be of great use in describing the pandemic's propagation, especially over limited time periods.
[ { "created": "Tue, 15 Nov 2022 15:16:34 GMT", "version": "v1" } ]
2022-11-16
[ [ "Bizet", "Nana Cabo", "" ], [ "Núñez", "Jonanthan Hidalgo", "" ], [ "Rivera", "Gil Estefano Rodrígez", "" ] ]
In this work, we discuss the SIR epidemiological model and different variations of it applied to the propagation of the COVID-19 pandemic; we employ data from the state of Guanajuato and from Mexico as a whole. We present some considerations that can improve the predictions made by those models. We consider a time-dependent infection rate, which we adjust to the data. Starting from a linear regime, where the affected populations are much smaller than the country or state population and the susceptible population (S) can be approximated in convenient units by S approximately 1, we fit the parameters. We also consider the case when the susceptible population starts departing from 1; for this case we adjust an effective contagion rate. We also explore the ratio of detected populations to the real ones, obtaining that, for the analyzed case, it is approximately 10%. We estimate the number of deaths by making a fit versus the recovered cases; this fit is, to first approximation, linear, but other powers can also give a good agreement. By testing predictions against past data, we conclude that adaptations of the SIR model can be of great use in describing the pandemic's propagation, especially over limited time periods.
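A minimal sketch of the first ingredient described above, a SIR system with a time-dependent infection rate started in the linear regime (S close to 1); the functional form of beta(t) and all parameter values are placeholders, not the fitted ones.

```python
# SIR with time-dependent beta(t), fractions of the total population.
import numpy as np
from scipy.integrate import solve_ivp

gamma = 0.1                                   # recovery rate (1/days)

def beta(t):
    # Hypothetical decaying contact rate, e.g. due to interventions.
    return 0.4 * np.exp(-t / 60.0) + 0.1

def sir(t, z):
    s, i, r = z
    return [-beta(t) * s * i, beta(t) * s * i - gamma * i, gamma * i]

# Linear regime: S stays approximately 1 while I and R are small fractions.
sol = solve_ivp(sir, (0.0, 180.0), [0.999, 0.001, 0.0],
                t_eval=np.linspace(0, 180, 181))
print("peak infected fraction:", sol.y[1].max())
```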
2210.02593
Griffin Chure
Griffin Chure
Be Prospective, Not Retrospective: A Philosophy for Advancing Reproducibility in Modern Biological Research
null
null
null
null
q-bio.OT cs.DL
http://creativecommons.org/licenses/by/4.0/
The ubiquity of computation in modern scientific research poses new challenges for reproducibility. While most journals now require that code and data be made available, the standards for organization, annotation, and validation remain lax, often making the data and code difficult to decipher or use in practice. I believe that this is because the documentation, collation, and validation of code and data are only done in retrospect. In this essay, I reflect on my experience contending with these challenges and present a philosophy for prioritizing reproducibility in modern biological research, where balancing computational analysis and wet-lab experiments is commonplace. Modern tools used in scientific workflows (such as GitHub repositories) lend themselves well to this philosophy, where reproducibility begins at project inception, not completion. To that end, I present and provide a programming-language-agnostic template architecture that can be immediately copied and made bespoke to your next paper, whether your lab work is wet, dry, or somewhere in between.
[ { "created": "Wed, 5 Oct 2022 22:46:27 GMT", "version": "v1" } ]
2022-10-07
[ [ "Chure", "Griffin", "" ] ]
The ubiquity of computation in modern scientific research poses new challenges for reproducibility. While most journals now require that code and data be made available, the standards for organization, annotation, and validation remain lax, often making the data and code difficult to decipher or use in practice. I believe that this is because the documentation, collation, and validation of code and data are only done in retrospect. In this essay, I reflect on my experience contending with these challenges and present a philosophy for prioritizing reproducibility in modern biological research, where balancing computational analysis and wet-lab experiments is commonplace. Modern tools used in scientific workflows (such as GitHub repositories) lend themselves well to this philosophy, where reproducibility begins at project inception, not completion. To that end, I present and provide a programming-language-agnostic template architecture that can be immediately copied and made bespoke to your next paper, whether your lab work is wet, dry, or somewhere in between.
0704.0331
C. Soule
J.-L. Jestin, C. Soule (IHES)
Symmetries by base substitutions in the genetic code predict 2' or 3' aminoacylation of tRNAs
Accepted for publication in the Journal of Theoretical Biology
null
null
null
q-bio.OT
null
This letter reports complete sets of two-fold symmetries between partitions of the universal genetic code. By substituting bases at each position of the codons according to a fixed rule, it happens that properties of the degeneracy pattern or of tRNA aminoacylation specificity are exchanged.
[ { "created": "Tue, 3 Apr 2007 07:15:56 GMT", "version": "v1" } ]
2007-05-23
[ [ "Jestin", "J. -L.", "", "IHES" ], [ "Soule", "C.", "", "IHES" ] ]
This letter reports complete sets of two-fold symmetries between partitions of the universal genetic code. By substituting bases at each position of the codons according to a fixed rule, it happens that properties of the degeneracy pattern or of tRNA aminoacylation specificity are exchanged.
1707.06539
Ido Kanter
Shira Sardi, Amir Goldental, Hamutal Amir, Roni Vardi and Ido Kanter
Vitality of Neural Networks under Reoccurring Catastrophic Failures
22 pages, 7 figures
Scientific Reports 6, Article number: 31674 (2016)
10.1038/srep31674
null
q-bio.NC nlin.AO physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Catastrophic failures are complete and sudden collapses in the activity of large networks such as economies, electrical power grids and computer networks, which typically require a manual recovery process. Here we experimentally show that excitatory neural networks are governed by a non-Poissonian reoccurrence of catastrophic failures, where their repetition time follows a multimodal distribution characterized by timescales of a few tenths of a second and of tens of seconds. The mechanism underlying the termination and reappearance of network activity is quantitatively shown here to be associated with nodal time-dependent features, neuronal plasticity, where hyperactive nodes damage the response capability of their neighbors. It presents a complementary mechanism for the emergence of Poissonian catastrophic failures from damage conductivity. The effect whereby hyperactive nodes degrade their neighbors represents a type of local competition, which is a common feature in the dynamics of real-world complex networks, whereas their spontaneous recoveries represent a vitality that enhances reliable functionality.
[ { "created": "Thu, 20 Jul 2017 14:27:25 GMT", "version": "v1" } ]
2017-07-21
[ [ "Sardi", "Shira", "" ], [ "Goldental", "Amir", "" ], [ "Amir", "Hamutal", "" ], [ "Vardi", "Roni", "" ], [ "Kanter", "Ido", "" ] ]
Catastrophic failures are complete and sudden collapses in the activity of large networks such as economies, electrical power grids and computer networks, which typically require a manual recovery process. Here we experimentally show that excitatory neural networks are governed by a non-Poissonian reoccurrence of catastrophic failures, where their repetition time follows a multimodal distribution characterized by timescales of a few tenths of a second and of tens of seconds. The mechanism underlying the termination and reappearance of network activity is quantitatively shown here to be associated with nodal time-dependent features, neuronal plasticity, where hyperactive nodes damage the response capability of their neighbors. It presents a complementary mechanism for the emergence of Poissonian catastrophic failures from damage conductivity. The effect whereby hyperactive nodes degrade their neighbors represents a type of local competition, which is a common feature in the dynamics of real-world complex networks, whereas their spontaneous recoveries represent a vitality that enhances reliable functionality.
1112.2072
Hiroaki Inomata
Hiroaki Inomata, Harima Hirohiko, Masanari Itokawa
Long Brief Pulse Method for Pulse-wave modified Electroconvulsive Therapy
5 pages, 3 figures
null
10.5348/ijcri-2012-07-147-CR-8
null
q-bio.NC physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modified Electroconvulsive Therapy (m-ECT) is administered for the treatment of various psychiatric disorders. The Seizure Generalization Hypothesis holds that propagation of the induced seizure throughout the whole brain is essential for effective ECT intervention. However, we encounter many clinical cases where, due to high thresholds, seizure is not induced even by the maximum dose of electrical charge. Some studies have indicated that the ultrabrief pulse method, in which the pulse width is less than 0.5 millisecond (ms), is more effective at inducing seizure than the conventional brief pulse (0.5-2.0 ms). Contrary to those studies, we experienced a case of schizophrenia in which m-ECT with pulse widths of 1.0 and 1.5 ms (referred to as 'long' brief pulse, since a 0.5 ms pulse width is the default in Japan) succeeded in inducing seizure, whereas the ultrabrief pulse failed to induce seizure. This case is described in detail. Moreover, we discuss the underlying mechanism of this phenomenon.
[ { "created": "Fri, 9 Dec 2011 11:09:30 GMT", "version": "v1" } ]
2012-09-14
[ [ "Inomata", "Hiroaki", "" ], [ "Hirohiko", "Harima", "" ], [ "Itokawa", "Masanari", "" ] ]
Modified Electroconvulsive Therapy (m-ECT) is administered for the treatment of various psychiatric disorders. The Seizure Generalization Hypothesis holds that propagation of the induced seizure throughout the whole brain is essential for effective ECT intervention. However, we encounter many clinical cases where, due to high thresholds, seizure is not induced even by the maximum dose of electrical charge. Some studies have indicated that the ultrabrief pulse method, in which the pulse width is less than 0.5 millisecond (ms), is more effective at inducing seizure than the conventional brief pulse (0.5-2.0 ms). Contrary to those studies, we experienced a case of schizophrenia in which m-ECT with pulse widths of 1.0 and 1.5 ms (referred to as 'long' brief pulse, since a 0.5 ms pulse width is the default in Japan) succeeded in inducing seizure, whereas the ultrabrief pulse failed to induce seizure. This case is described in detail. Moreover, we discuss the underlying mechanism of this phenomenon.
0811.1627
Thomas Butler
Thomas Butler, David Reynolds
Predator-Prey Quasi-cycles from a Path Integral Formalism
4 pages
null
10.1103/PhysRevE.79.032901
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The existence of beyond-mean-field quasi-cycle oscillations in a simple spatial model of predator-prey interactions is derived from a path integral formalism. The results agree substantially with those obtained from analysis of similar models using system size expansions of the master equation. In all of these analyses, the discrete nature of predator-prey populations and finite size effects lead to persistent oscillations in time, but spatial patterns fail to form. The path integral formalism goes beyond mean field theory and provides a focus on individual realizations of the stochastic time evolution of populations not captured in the standard master equation approach.
[ { "created": "Tue, 11 Nov 2008 05:04:40 GMT", "version": "v1" }, { "created": "Thu, 5 Feb 2009 21:42:59 GMT", "version": "v2" } ]
2009-11-13
[ [ "Butler", "Thomas", "" ], [ "Reynolds", "David", "" ] ]
The existence of beyond-mean-field quasi-cycle oscillations in a simple spatial model of predator-prey interactions is derived from a path integral formalism. The results agree substantially with those obtained from analysis of similar models using system size expansions of the master equation. In all of these analyses, the discrete nature of predator-prey populations and finite size effects lead to persistent oscillations in time, but spatial patterns fail to form. The path integral formalism goes beyond mean field theory and provides a focus on individual realizations of the stochastic time evolution of populations that is not captured in the standard master equation approach.
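A quasi-cycle of the kind analyzed above is easy to reproduce numerically. The following sketch runs a Gillespie simulation of an individual-based predator-prey model; the rate constants and system size V are hypothetical (the paper works analytically through the path integral, not via this simulation). Demographic noise sustains oscillations that the deterministic mean-field limit damps away.

# Minimal Gillespie simulation of a stochastic predator-prey model.
# Illustrative only: rates b, p, d and system size V are assumptions,
# not parameters taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
b, p, d, V = 1.0, 0.1, 1.0, 200   # prey birth, predation, predator death, volume

def gillespie(n_prey, n_pred, t_end=200.0):
    t, traj = 0.0, [(0.0, n_prey, n_pred)]
    while t < t_end and (n_prey + n_pred) > 0:
        rates = np.array([b * n_prey,                # prey birth
                          p * n_prey * n_pred / V,   # predation: prey -> predator
                          d * n_pred])               # predator death
        total = rates.sum()
        if total == 0:
            break
        t += rng.exponential(1.0 / total)
        event = rng.choice(3, p=rates / total)
        if event == 0:
            n_prey += 1
        elif event == 1:
            n_prey -= 1
            n_pred += 1
        else:
            n_pred -= 1
        traj.append((t, n_prey, n_pred))
    return np.array(traj)

traj = gillespie(100, 50)
# The population counts oscillate persistently (quasi-cycles), whereas the
# corresponding deterministic model spirals into a stable fixed point.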
1801.01539
Fatema Zohora
Fatema Tuz Zohora, Ngoc Hieu Tran, Xianglilan Zhang, Lei Xin, Baozhen Shan, and Ming Li
DeepIso: A Deep Learning Model for Peptide Feature Detection
null
null
null
null
q-bio.QM cs.NE physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Liquid chromatography with tandem mass spectrometry (LC-MS/MS) based proteomics is a well-established research field with major applications such as identification of disease biomarkers, drug discovery, and drug design and development. In proteomics, protein identification and quantification is a fundamental task, which is done by first enzymatically digesting the protein into peptides, and then analyzing the peptides by LC-MS/MS instruments. Peptide feature detection and quantification from an LC-MS map is the first step in typical analysis workflows. In this paper we propose a novel deep learning based model, DeepIso, that uses Convolutional Neural Networks (CNNs) to scan an LC-MS map to detect peptide features and estimate their abundance. Existing tools are often designed with limited engineered features based on domain knowledge, and depend on pretrained parameters that are hardly ever updated despite the huge amount of newly generated proteomic data. Our proposed model, on the other hand, is capable of learning multiple levels of representation of high dimensional data through its many layers of neurons and of continuously evolving with newly acquired data. To evaluate our proposed model, we use an antibody dataset including a heavy and a light chain, each digested by Asp-N, Chymotrypsin, and Trypsin, thus giving six LC-MS maps for the experiment. Our model achieves 93.21% sensitivity with a specificity of 99.44% on this dataset. Our results demonstrate that novel deep learning tools are desirable to advance the state-of-the-art in protein identification and quantification.
[ { "created": "Sat, 9 Dec 2017 04:55:39 GMT", "version": "v1" } ]
2018-01-08
[ [ "Zohora", "Fatema Tuz", "" ], [ "Tran", "Ngoc Hieu", "" ], [ "Zhang", "Xianglilan", "" ], [ "Xin", "Lei", "" ], [ "Shan", "Baozhen", "" ], [ "Li", "Ming", "" ] ]
Liquid chromatography with tandem mass spectrometry (LC-MS/MS) based proteomics is a well-established research field with major applications such as identification of disease biomarkers, drug discovery, and drug design and development. In proteomics, protein identification and quantification is a fundamental task, which is done by first enzymatically digesting the protein into peptides, and then analyzing the peptides by LC-MS/MS instruments. Peptide feature detection and quantification from an LC-MS map is the first step in typical analysis workflows. In this paper we propose a novel deep learning based model, DeepIso, that uses Convolutional Neural Networks (CNNs) to scan an LC-MS map to detect peptide features and estimate their abundance. Existing tools are often designed with limited engineered features based on domain knowledge, and depend on pretrained parameters that are hardly ever updated despite the huge amount of newly generated proteomic data. Our proposed model, on the other hand, is capable of learning multiple levels of representation of high dimensional data through its many layers of neurons and of continuously evolving with newly acquired data. To evaluate our proposed model, we use an antibody dataset including a heavy and a light chain, each digested by Asp-N, Chymotrypsin, and Trypsin, thus giving six LC-MS maps for the experiment. Our model achieves 93.21% sensitivity with a specificity of 99.44% on this dataset. Our results demonstrate that novel deep learning tools are desirable to advance the state-of-the-art in protein identification and quantification.
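The DeepIso architecture itself is described in the paper; as a rough illustration of the general approach, the sketch below builds a small PyTorch CNN that scores 16x16 intensity patches of an LC-MS map as feature versus background. The layer sizes and the patch shape are assumptions made purely for illustration, not the network reported in the paper.

# Illustrative CNN for scoring small LC-MS map patches as feature/background.
import torch
import torch.nn as nn

class PatchScorer(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                   # 16x16 -> 8x8
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                   # 8x8 -> 4x4
            nn.Flatten(),
            nn.Linear(16 * 4 * 4, 2),          # feature vs. background logits
        )

    def forward(self, x):                      # x: (batch, 1, 16, 16) patches
        return self.net(x)

scores = PatchScorer()(torch.randn(4, 1, 16, 16))   # -> (4, 2) logits

Sliding such a scorer across the whole LC-MS map, then aggregating detections into isotope patterns, is one plausible way a scanning CNN detector of this kind could be deployed.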
2011.12567
Jacques van Helden
Erwan Sallard (ENS Paris), Jos\'e Halloy (LIED), Didier Casane (EGCE), Etienne Decroly (AFMB), Jacques van Helden (TAGC, IFB-CORE)
Tracing the origins of SARS-CoV-2 in coronavirus phylogenies
English translation of a French manuscript to be published in the August-Sept 2020 issue of M{\'e}decine/Sciences, EDP Sciences
null
10.1051/medsci/2020123
null
q-bio.PE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
SARS-CoV-2 is a new human coronavirus (CoV), which emerged in China in late 2019 and is responsible for the global COVID-19 pandemic that caused more than 59 million infections and 1.4 million deaths in 11 months. Understanding the origin of this virus is an important issue, and it is necessary to determine the mechanisms of its dissemination in order to contain future epidemics. Based on phylogenetic inferences, sequence analysis and structure-function relationships of coronavirus proteins, informed by the knowledge currently available on the virus, we discuss the different scenarios evoked to account for the origin - natural or synthetic - of the virus. The data currently available are not sufficient to firmly assert whether SARS-CoV-2 results from a zoonotic emergence or from an accidental escape of a laboratory strain. This question needs to be solved because it has important consequences for the evaluation of the risk/benefit balance of our interactions with ecosystems, the intensive breeding of wild and domestic animals, certain laboratory practices, as well as for scientific policy and biosafety regulations. Regardless of its origin, studying the evolution of the molecular mechanisms involved in the emergence of pandemic viruses is essential to develop therapeutic and vaccine strategies and to prevent future zoonoses. This article is a translation and update of a French article published in M{\'e}decine/Sciences, Aug/Sept 2020 (http://doi.org/10.1051/medsci/2020123).
[ { "created": "Wed, 25 Nov 2020 08:10:47 GMT", "version": "v1" } ]
2020-11-26
[ [ "Sallard", "Erwan", "", "ENS Paris" ], [ "Halloy", "José", "", "LIED" ], [ "Casane", "Didier", "", "EGCE" ], [ "Decroly", "Etienne", "", "AFMB" ], [ "van Helden", "Jacques", "", "TAGC, IFB-CORE" ] ]
SARS-CoV-2 is a new human coronavirus (CoV), which emerged in China in late 2019 and is responsible for the global COVID-19 pandemic that caused more than 59 million infections and 1.4 million deaths in 11 months. Understanding the origin of this virus is an important issue, and it is necessary to determine the mechanisms of its dissemination in order to contain future epidemics. Based on phylogenetic inferences, sequence analysis and structure-function relationships of coronavirus proteins, informed by the knowledge currently available on the virus, we discuss the different scenarios evoked to account for the origin - natural or synthetic - of the virus. The data currently available are not sufficient to firmly assert whether SARS-CoV-2 results from a zoonotic emergence or from an accidental escape of a laboratory strain. This question needs to be solved because it has important consequences for the evaluation of the risk/benefit balance of our interactions with ecosystems, the intensive breeding of wild and domestic animals, certain laboratory practices, as well as for scientific policy and biosafety regulations. Regardless of its origin, studying the evolution of the molecular mechanisms involved in the emergence of pandemic viruses is essential to develop therapeutic and vaccine strategies and to prevent future zoonoses. This article is a translation and update of a French article published in M{\'e}decine/Sciences, Aug/Sept 2020 (http://doi.org/10.1051/medsci/2020123).
2109.03727
Carlo Cenedese
Carlo Cenedese, Lorenzo Zino, Michele Cucuzzella, Ming Cao
Optimal policy design to mitigate epidemics on networks using an SIS model
null
null
null
null
q-bio.PE math.DS math.OC physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding how to effectively control an epidemic spreading on a network is a problem of paramount importance for the scientific community. The ongoing COVID-19 pandemic has highlighted the need for policies that mitigate the spread without relying on pharmaceutical interventions, that is, without the medical assurance of the recovery process. These policies typically entail lockdowns and mobility restrictions, and thus have nonnegligible socio-economic consequences for the population. In this paper, we focus on the problem of finding optimal policies that "flatten the epidemic curve" while limiting the negative consequences for society, and formulate it as a nonlinear control problem over a finite prediction horizon. We utilize model predictive control theory to design a strategy that effectively controls the disease, balancing safety and normalcy. An explicit formalization of the control scheme is provided for the susceptible--infected--susceptible epidemic model over a network. Its performance and flexibility are demonstrated by means of numerical simulations.
[ { "created": "Wed, 8 Sep 2021 15:38:31 GMT", "version": "v1" }, { "created": "Sun, 12 Sep 2021 16:15:38 GMT", "version": "v2" } ]
2021-09-14
[ [ "Cenedese", "Carlo", "" ], [ "Zino", "Lorenzo", "" ], [ "Cucuzzella", "Michele", "" ], [ "Cao", "Ming", "" ] ]
Understanding how to effectively control an epidemic spreading on a network is a problem of paramount importance for the scientific community. The ongoing COVID-19 pandemic has highlighted the need for policies that mitigate the spread without relying on pharmaceutical interventions, that is, without the medical assurance of the recovery process. These policies typically entail lockdowns and mobility restrictions, and thus have nonnegligible socio-economic consequences for the population. In this paper, we focus on the problem of finding optimal policies that "flatten the epidemic curve" while limiting the negative consequences for society, and formulate it as a nonlinear control problem over a finite prediction horizon. We utilize model predictive control theory to design a strategy that effectively controls the disease, balancing safety and normalcy. An explicit formalization of the control scheme is provided for the susceptible--infected--susceptible epidemic model over a network. Its performance and flexibility are demonstrated by means of numerical simulations.
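The receding-horizon idea can be illustrated with a deliberately crude sketch: networked SIS dynamics, a scalar "lockdown" control u that scales all transmission rates, and a grid search over constant controls at each planning step. The cost weights, the control parametrization, and the brute-force search are illustrative assumptions; the paper develops an explicit MPC formalization rather than this simplistic version.

# Toy receding-horizon control of networked SIS dynamics (illustrative only).
import numpy as np

def sis_step(x, A, beta, delta, u, dt=0.1):
    # x: per-node infection probabilities; u in [0, 1) scales contact rates.
    return np.clip(x + dt * ((1 - x) * (1 - u) * beta * (A @ x) - delta * x), 0, 1)

def mpc_choose_u(x, A, beta, delta, horizon=20,
                 grid=np.linspace(0, 0.9, 10), w_inf=1.0, w_econ=0.5):
    best_u, best_cost = 0.0, np.inf
    for u in grid:                       # exhaustive search over constant controls
        z, cost = x.copy(), 0.0
        for _ in range(horizon):
            z = sis_step(z, A, beta, delta, u)
            cost += w_inf * z.sum() + w_econ * u   # infections vs. restriction cost
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

rng = np.random.default_rng(1)
A = (rng.random((50, 50)) < 0.1).astype(float)   # random contact network
x = np.full(50, 0.05)
for t in range(100):                     # apply the first move, then re-plan
    u = mpc_choose_u(x, A, beta=0.05, delta=0.1)
    x = sis_step(x, A, 0.05, 0.1, u)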
0712.0367
Zhihui Wang
Zhihui Wang, Christina M. Birch, Thomas S. Deisboeck
Cross-Scale Sensitivity Analysis of a Non-Small Cell Lung Cancer Model: Linking Molecular Signaling Properties to Cellular Behavior
30 pages, 6 figures, 2 tables
null
null
null
q-bio.QM
null
Sensitivity analysis is an effective tool for systematically identifying specific perturbations in parameters that have significant effects on the behavior of a given biosystem at the scale investigated. In this work, using a two-dimensional, multiscale non-small cell lung cancer (NSCLC) model, we examine the effects of perturbations in system parameters which span both molecular and cellular levels, i.e., across the scales of interest. This is achieved by first linking molecular and cellular activities and then assessing the influence of parameters at the molecular level on the tumor's spatio-temporal expansion rate, which serves as the output behavior at the cellular level. Overall, the algorithm operated reliably over relatively large variations of most parameters, hence confirming the robustness of the model. However, three pathway components (the proteins PKC, MEK, and ERK) and eleven reaction steps were determined to be of critical importance by employing a sensitivity coefficient as an evaluation index. Each of these sensitive parameters exhibited a similar changing pattern in that a relatively larger increase or decrease in its value resulted in a lesser influence on the system's cellular performance. This study provides a novel cross-scale approach to analyzing the sensitivities of computational model parameters and proposes its application to interdisciplinary biomarker studies.
[ { "created": "Mon, 3 Dec 2007 19:57:27 GMT", "version": "v1" } ]
2007-12-04
[ [ "Wang", "Zhihui", "" ], [ "Birch", "Christina M.", "" ], [ "Deisboeck", "Thomas S.", "" ] ]
Sensitivity analysis is an effective tool for systematically identifying specific perturbations in parameters that have significant effects on the behavior of a given biosystem at the scale investigated. In this work, using a two-dimensional, multiscale non-small cell lung cancer (NSCLC) model, we examine the effects of perturbations in system parameters which span both molecular and cellular levels, i.e., across the scales of interest. This is achieved by first linking molecular and cellular activities and then assessing the influence of parameters at the molecular level on the tumor's spatio-temporal expansion rate, which serves as the output behavior at the cellular level. Overall, the algorithm operated reliably over relatively large variations of most parameters, hence confirming the robustness of the model. However, three pathway components (the proteins PKC, MEK, and ERK) and eleven reaction steps were determined to be of critical importance by employing a sensitivity coefficient as an evaluation index. Each of these sensitive parameters exhibited a similar changing pattern in that a relatively larger increase or decrease in its value resulted in a lesser influence on the system's cellular performance. This study provides a novel cross-scale approach to analyzing the sensitivities of computational model parameters and proposes its application to interdisciplinary biomarker studies.
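A sensitivity coefficient of the kind used as an evaluation index above can be sketched with a finite difference: the relative change in the output divided by the relative change in the parameter. Here expansion_rate is a placeholder standing in for the multiscale NSCLC simulation, and the 10% step and the toy response function are assumptions for illustration.

# Finite-difference normalized sensitivity coefficient S = (dY/Y) / (dp/p).
def sensitivity_coefficient(expansion_rate, params, name, rel_step=0.1):
    base = expansion_rate(params)
    perturbed = dict(params, **{name: params[name] * (1 + rel_step)})
    return ((expansion_rate(perturbed) - base) / base) / rel_step

# Toy response in place of the real cross-scale simulator:
toy = lambda p: p["PKC"] ** 2 + 0.1 * p["MEK"]
print(sensitivity_coefficient(toy, {"PKC": 1.0, "MEK": 2.0}, "PKC"))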
2203.12615
Amadeus M. Gebauer
Amadeus M. Gebauer, Martin R. Pfaller, Fabian A. Braeu, Christian J. Cyron, Wolfgang A. Wall
A homogenized constrained mixture model of cardiac growth and remodeling: Analyzing mechanobiological stability and reversal
null
null
null
null
q-bio.TO physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cardiac growth and remodeling (G&R) patterns change ventricular size, shape, and function both globally and locally. Biomechanical, neurohormonal, and genetic stimuli drive these patterns through changes in myocyte dimension and fibrosis. We propose a novel microstructure-motivated model that predicts organ-scale G&R in the heart based on the homogenized constrained mixture theory. Previous models, based on the kinematic growth theory, reproduced consequences of G&R in bulk myocardial tissue by prescribing the direction and extent of growth but neglected the underlying cellular mechanisms. In our model, the direction and extent of G&R emerge naturally from intra- and extracellular turnover processes in myocardial tissue constituents and their preferred homeostatic stretch state. We additionally propose a method to obtain a mechanobiologically equilibrated reference configuration. We test our model on an idealized 3D left ventricular geometry and demonstrate that our model strives to maintain tensional homeostasis under hypertensive conditions. In a stability map, we identify regions of stable and unstable G&R from an identical parameter set with varying systolic pressures and growth factors. Furthermore, we show the extent of G&R reversal after returning the systolic pressure to baseline following stage 1 and stage 2 hypertension. A realistic model of organ-scale cardiac G&R has the potential to identify patients at risk of heart failure, enable personalized cardiac therapies, and facilitate the optimal design of medical devices.
[ { "created": "Wed, 23 Mar 2022 17:59:16 GMT", "version": "v1" }, { "created": "Fri, 20 Jan 2023 09:04:20 GMT", "version": "v2" }, { "created": "Wed, 3 May 2023 10:41:59 GMT", "version": "v3" } ]
2023-05-04
[ [ "Gebauer", "Amadeus M.", "" ], [ "Pfaller", "Martin R.", "" ], [ "Braeu", "Fabian A.", "" ], [ "Cyron", "Christian J.", "" ], [ "Wall", "Wolfgang A.", "" ] ]
Cardiac growth and remodeling (G&R) patterns change ventricular size, shape, and function both globally and locally. Biomechanical, neurohormonal, and genetic stimuli drive these patterns through changes in myocyte dimension and fibrosis. We propose a novel microstructure-motivated model that predicts organ-scale G&R in the heart based on the homogenized constrained mixture theory. Previous models, based on the kinematic growth theory, reproduced consequences of G&R in bulk myocardial tissue by prescribing the direction and extent of growth but neglected the underlying cellular mechanisms. In our model, the direction and extent of G&R emerge naturally from intra- and extracellular turnover processes in myocardial tissue constituents and their preferred homeostatic stretch state. We additionally propose a method to obtain a mechanobiologically equilibrated reference configuration. We test our model on an idealized 3D left ventricular geometry and demonstrate that our model strives to maintain tensional homeostasis under hypertensive conditions. In a stability map, we identify regions of stable and unstable G&R from an identical parameter set with varying systolic pressures and growth factors. Furthermore, we show the extent of G&R reversal after returning the systolic pressure to baseline following stage 1 and stage 2 hypertension. A realistic model of organ-scale cardiac G&R has the potential to identify patients at risk of heart failure, enable personalized cardiac therapies, and facilitate the optimal design of medical devices.
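As a schematic of the homogenized constrained mixture idea (not the paper's full formulation), each tissue constituent $i$ can be assigned a stress-mediated mass turnover and a multiplicative split of its deformation gradient into elastic and inelastic growth-and-remodeling parts:

\[
\dot{\varrho}^i \;=\; \varrho^i\, k_\sigma^i\, \frac{\sigma^i - \sigma_h^i}{\sigma_h^i},
\qquad
\mathbf{F}^i \;=\; \mathbf{F}_{\mathrm{e}}^i\, \mathbf{F}_{\mathrm{gr}}^i,
\]

where $\sigma_h^i$ is the homeostatic stress of constituent $i$: deviations from it drive growth until tensional homeostasis is restored, which is the mechanism behind the stability map described above. The specific linear form of the gain term $k_\sigma^i$ here is an illustrative assumption.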
1803.04236
Amirhossein Jabalameli
Amirhossein Jabalameli, Aman Behal
System Identification of a Multi-timescale Adaptive Threshold Neuronal Model
null
null
null
null
q-bio.NC cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, the parameter estimation problem for a multi-timescale adaptive threshold (MAT) neuronal model is investigated. By manipulating the system dynamics, which comprise a non-resetting leaky integrator coupled with an adaptive threshold, the threshold voltage can be obtained as a realizable model that is linear in the unknown parameters. This linearly parametrized realizable model is then utilized inside a prediction error based framework to identify the threshold parameters with the purpose of predicting the precise firing times of single neurons. The iterative linear least squares estimation scheme is evaluated using both synthetic data obtained from an exact model as well as experimental data obtained from in vitro rat somatosensory cortical neurons. Results show the ability of this approach to fit the MAT model to different types of fluctuating reference data. The performance of the proposed approach is seen to be superior when compared with existing identification approaches used by the neuronal community.
[ { "created": "Fri, 23 Feb 2018 23:06:31 GMT", "version": "v1" } ]
2018-03-13
[ [ "Jabalameli", "Amirhossein", "" ], [ "Behal", "Aman", "" ] ]
In this paper, the parameter estimation problem for a multi-timescale adaptive threshold (MAT) neuronal model is investigated. By manipulating the system dynamics, which comprise a non-resetting leaky integrator coupled with an adaptive threshold, the threshold voltage can be obtained as a realizable model that is linear in the unknown parameters. This linearly parametrized realizable model is then utilized inside a prediction error based framework to identify the threshold parameters with the purpose of predicting the precise firing times of single neurons. The iterative linear least squares estimation scheme is evaluated using both synthetic data obtained from an exact model as well as experimental data obtained from in vitro rat somatosensory cortical neurons. Results show the ability of this approach to fit the MAT model to different types of fluctuating reference data. The performance of the proposed approach is seen to be superior when compared with existing identification approaches used by the neuronal community.
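A minimal sketch of the MAT dynamics helps show where the linearity in unknown parameters comes from: given the spike times, the threshold is an affine function of $(\omega, \alpha_1, \alpha_2)$, which is what makes linear least squares applicable. All numerical values below are illustrative, not fitted parameters from the paper.

# Sketch of a multi-timescale adaptive threshold (MAT) neuron: a non-resetting
# leaky integrator for the voltage plus a spike-triggered threshold with two
# decay time constants.
import numpy as np

def mat_spikes(I, dt=0.1, tau_m=5.0, R=50.0, omega=15.0,
               alphas=(10.0, 1.0), taus=(10.0, 200.0), refrac=2.0):
    V, spikes, last = 0.0, [], -np.inf
    for i, t in enumerate(np.arange(len(I)) * dt):
        V += dt * (-V + R * I[i]) / tau_m           # voltage is never reset
        # Threshold: theta(t) = omega + sum_k sum_j alpha_j * exp(-(t-t_k)/tau_j),
        # affine in (omega, alpha_1, alpha_2) once the spike times t_k are known.
        theta = omega + sum(a * sum(np.exp(-(t - tk) / tau) for tk in spikes)
                            for a, tau in zip(alphas, taus))
        if V >= theta and t - last > refrac:        # spike when V crosses theta
            spikes.append(t)
            last = t
    return spikes

spikes = mat_spikes(np.full(5000, 0.5))   # constant input current, 500 ms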
1707.09268
Andrew J. Rominger
Andrew J. Rominger and Miguel A. Fuentes and Pablo Marquet
Non-equilibrium evolution of volatility in origination and extinction explains fat-tailed fluctuations in Phanerozoic biodiversity
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fluctuations in biodiversity, large and small, are pervasive in the fossil record, yet we do not understand the processes generating them. Here we extend theory from non-equilibrium statistical physics to describe the previously unaccounted-for fat-tailed form of fluctuations in marine invertebrate richness through the Phanerozoic. Using this theory, known as superstatistics, we show that the simple fact of heterogeneous origination and extinction rates between clades and conserved rates within clades is sufficient to account for this fat-tailed form. We identify orders, and the families they subsume, as the taxonomic level at which clades experience inter-clade heterogeneity and within-clade homogeneity of rates. Following superstatistics, we posit that orders and families are subsystems in local statistical equilibrium while the entire system is not in equilibrium. The separation of timescales between background origination and extinction within clades, compared to the origin of major ecological and evolutionary innovations leading to new clades, allows within-clade dynamics to reach equilibrium, while between-clade diversification is non-equilibrial. This between-clade non-equilibrium accounts for the fat-tailed nature of the system as a whole. The distribution of shifts in diversification dynamics across orders and families is consistent with niche conservatism and pulsed exploration of adaptive landscapes by higher taxa. Compared to other approaches that have used simple birth-death processes, simple equilibrial dynamics, or non-linear theories from complexity science, superstatistics is superior in its ability to account for both small and extreme fluctuations in the richness of fossil taxa. Its success opens up new research directions to better understand the evolutionary processes leading to stasis in an adaptive landscape interrupted by innovations that lead to novel forms.
[ { "created": "Fri, 28 Jul 2017 14:54:45 GMT", "version": "v1" }, { "created": "Mon, 15 Jan 2018 19:33:32 GMT", "version": "v2" }, { "created": "Fri, 10 May 2019 21:11:19 GMT", "version": "v3" } ]
2019-05-14
[ [ "Rominger", "Andrew J.", "" ], [ "Fuentes", "Miguel A.", "" ], [ "Marquet", "Pablo", "" ] ]
Fluctuations in biodiversity, large and small, are pervasive in the fossil record, yet we do not understand the processes generating them. Here we extend theory from non-equilibrium statistical physics to describe the previously unaccounted-for fat-tailed form of fluctuations in marine invertebrate richness through the Phanerozoic. Using this theory, known as superstatistics, we show that the simple fact of heterogeneous origination and extinction rates between clades and conserved rates within clades is sufficient to account for this fat-tailed form. We identify orders, and the families they subsume, as the taxonomic level at which clades experience inter-clade heterogeneity and within-clade homogeneity of rates. Following superstatistics, we posit that orders and families are subsystems in local statistical equilibrium while the entire system is not in equilibrium. The separation of timescales between background origination and extinction within clades, compared to the origin of major ecological and evolutionary innovations leading to new clades, allows within-clade dynamics to reach equilibrium, while between-clade diversification is non-equilibrial. This between-clade non-equilibrium accounts for the fat-tailed nature of the system as a whole. The distribution of shifts in diversification dynamics across orders and families is consistent with niche conservatism and pulsed exploration of adaptive landscapes by higher taxa. Compared to other approaches that have used simple birth-death processes, simple equilibrial dynamics, or non-linear theories from complexity science, superstatistics is superior in its ability to account for both small and extreme fluctuations in the richness of fossil taxa. Its success opens up new research directions to better understand the evolutionary processes leading to stasis in an adaptive landscape interrupted by innovations that lead to novel forms.
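The superstatistical construction referred to above composes a locally equilibrial fluctuation law with a distribution of volatilities across subsystems. Schematically, for Gaussian local fluctuations,

\[
P(x) \;=\; \int_0^{\infty} f(\beta)\,\sqrt{\frac{\beta}{2\pi}}\;
e^{-\beta x^{2}/2}\, d\beta ,
\]

so a suitable spread of the inverse variance $\beta$ across clades (for instance a Gamma-distributed $\beta$, which yields Student-$t$ like tails) produces fat-tailed fluctuations for the system as a whole even though each subsystem is locally Gaussian. The particular $f(\beta)$ fitted in the paper is not reproduced here.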
q-bio/0503002
Gerardo Chowell
Gerardo Chowell, Paul W. Fenimore, Melissa A. Castillo-Garsow, Carlos Castillo-Chavez
SARS outbreaks in Ontario, Hong Kong and Singapore: the role of diagnosis and isolation as a control mechanism
16 pages, 3 tables, 4 figures
Journal of Theoretical Biology 224, 1-8 (2003)
null
LA-UR-03-2653
q-bio.OT
null
In this article we use global and regional data from the SARS epidemic in conjunction with a model of susceptible, exposed, infective, diagnosed, and recovered classes of people (``SEIJR'') to extract average properties and rate constants for those populations. The model is fitted to data from the outbreaks in Ontario (Toronto) in Canada, Hong Kong in China, and Singapore, and predictions are made based on various assumptions and observations, including the current effect of isolating individuals diagnosed with SARS. The epidemic dynamics for Hong Kong and Singapore appear to be different from the dynamics in Toronto, Ontario. Toronto shows a very rapid increase in the number of cases between March 31st and April 6th, followed by a {\it significant} slowing in the number of new cases. We explain this as the result of an increase in the diagnostic rate and in the effectiveness of patient isolation after March 26th. Our best estimates are consistent with SARS eventually being contained in Toronto, although the time of containment is sensitive to the parameters in our model. It is shown that despite the empirically modeled heterogeneity in transmission, SARS' average reproductive number is 1.2, a value quite similar to that computed for some strains of influenza \cite{CC2}. Although it would not be surprising to see levels of SARS infection higher than ten per cent in some regions of the world (if unchecked), lack of data and the observed heterogeneity and sensitivity of parameters prevent us from predicting the long-term impact of SARS.
[ { "created": "Tue, 1 Mar 2005 04:30:51 GMT", "version": "v1" } ]
2007-05-23
[ [ "Chowell", "Gerardo", "" ], [ "Fenimore", "Paul W.", "" ], [ "Castillo-Garsow", "Melissa A.", "" ], [ "Castillo-Chavez", "Carlos", "" ] ]
In this article we use global and regional data from the SARS epidemic in conjunction with a model of susceptible, exposed, infective, diagnosed, and recovered classes of people (``SEIJR'') to extract average properties and rate constants for those populations. The model is fitted to data from the outbreaks in Ontario (Toronto) in Canada, Hong Kong in China, and Singapore, and predictions are made based on various assumptions and observations, including the current effect of isolating individuals diagnosed with SARS. The epidemic dynamics for Hong Kong and Singapore appear to be different from the dynamics in Toronto, Ontario. Toronto shows a very rapid increase in the number of cases between March 31st and April 6th, followed by a {\it significant} slowing in the number of new cases. We explain this as the result of an increase in the diagnostic rate and in the effectiveness of patient isolation after March 26th. Our best estimates are consistent with SARS eventually being contained in Toronto, although the time of containment is sensitive to the parameters in our model. It is shown that despite the empirically modeled heterogeneity in transmission, SARS' average reproductive number is 1.2, a value quite similar to that computed for some strains of influenza \cite{CC2}. Although it would not be surprising to see levels of SARS infection higher than ten per cent in some regions of the world (if unchecked), lack of data and the observed heterogeneity and sensitivity of parameters prevent us from predicting the long-term impact of SARS.
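A minimal skeleton consistent with the compartment structure named in the abstract is sketched below; the paper's actual model includes further heterogeneity (for instance in susceptibility and transmission) not shown here. Diagnosis moves infectives $I$ into the isolated class $J$ at rate $\alpha$, and isolation effectiveness enters through a reduced relative transmissibility $\ell < 1$:

\[
\dot S = -\beta S\,\frac{I + qE + \ell J}{N}, \qquad
\dot E = \beta S\,\frac{I + qE + \ell J}{N} - kE,
\]
\[
\dot I = kE - (\alpha + \gamma_1)\,I, \qquad
\dot J = \alpha I - \gamma_2 J, \qquad
\dot R = \gamma_1 I + \gamma_2 J .
\]

Increasing the diagnostic rate $\alpha$ or decreasing $\ell$ after a control date (here, March 26th) is the mechanism invoked above to explain the slowing of the Toronto outbreak.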
1307.7313
Jeffrey Ross-Ibarra
Justin P. Gerke, Jode W. Edwards, Katherine E. Guill, Jeffrey Ross-Ibarra, Michael D. McMullen
The genomic impacts of drift and selection for hybrid performance in maize
null
null
null
null
q-bio.PE
http://creativecommons.org/publicdomain/zero/1.0/
Modern maize breeding relies upon selection in inbreeding populations to improve the performance of cross-population hybrids. The United States Department of Agriculture - Agricultural Research Service reciprocal recurrent selection experiment between the Iowa Stiff Stalk Synthetic (BSSS) and the Iowa Corn Borer Synthetic No. 1 (BSCB1) populations represents one of the longest-standing models of selection for hybrid performance. To investigate the genomic impact of this selection program, we used the Illumina MaizeSNP50 high-density SNP array to determine genotypes of progenitor lines and over 600 individuals across multiple cycles of selection. Consistent with previous research (Messmer et al., 1991; Labate et al., 1997; Hagdorn et al., 2003; Hinze et al., 2005), we found that genetic diversity within each population steadily decreases, with a corresponding increase in population structure. High marker density also enabled the first view of haplotype ancestry, fixation and recombination within this historic maize experiment. Extensive regions of haplotype fixation within each population are visible in the pericentromeric regions, where large blocks trace back to single founder inbreds. Simulations attribute most of the observed reduction in genetic diversity to genetic drift. Signatures of selection were difficult to observe in the background of this strong genetic drift, but heterozygosity in each population has fallen more than expected. Regions of haplotype fixation represent the most likely targets of selection, but as observed in other germplasm selected for hybrid performance (Feng et al., 2006), there is no overlap between the most likely targets of selection in the two populations. We discuss how this pattern is likely to occur during selection for hybrid performance, and how it poses challenges for dissecting the impacts of modern breeding and selection on the maize genome.
[ { "created": "Sat, 27 Jul 2013 21:41:17 GMT", "version": "v1" } ]
2015-06-29
[ [ "Gerke", "Justin P.", "" ], [ "Edwards", "Jode W.", "" ], [ "Guill", "Katherine E.", "" ], [ "Ross-Ibarra", "Jeffrey", "" ], [ "McMullen", "Michael D.", "" ] ]
Modern maize breeding relies upon selection in inbreeding populations to improve the performance of cross-population hybrids. The United States Department of Agriculture - Agricultural Research Service reciprocal recurrent selection experiment between the Iowa Stiff Stalk Synthetic (BSSS) and the Iowa Corn Borer Synthetic No. 1 (BSCB1) populations represents one of the longest-standing models of selection for hybrid performance. To investigate the genomic impact of this selection program, we used the Illumina MaizeSNP50 high-density SNP array to determine genotypes of progenitor lines and over 600 individuals across multiple cycles of selection. Consistent with previous research (Messmer et al., 1991; Labate et al., 1997; Hagdorn et al., 2003; Hinze et al., 2005), we found that genetic diversity within each population steadily decreases, with a corresponding increase in population structure. High marker density also enabled the first view of haplotype ancestry, fixation and recombination within this historic maize experiment. Extensive regions of haplotype fixation within each population are visible in the pericentromeric regions, where large blocks trace back to single founder inbreds. Simulations attribute most of the observed reduction in genetic diversity to genetic drift. Signatures of selection were difficult to observe in the background of this strong genetic drift, but heterozygosity in each population has fallen more than expected. Regions of haplotype fixation represent the most likely targets of selection, but as observed in other germplasm selected for hybrid performance (Feng et al., 2006), there is no overlap between the most likely targets of selection in the two populations. We discuss how this pattern is likely to occur during selection for hybrid performance, and how it poses challenges for dissecting the impacts of modern breeding and selection on the maize genome.
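The neutral-drift baseline invoked above is easy to simulate. The sketch below runs a Wright-Fisher model in which expected heterozygosity decays by a factor of (1 - 1/(2N)) per generation; the population size, locus count, and generation count are illustrative, not the census sizes of the BSSS and BSCB1 programs.

# Neutral Wright-Fisher sketch of heterozygosity loss under pure drift.
import numpy as np

rng = np.random.default_rng(2)
N, n_loci, n_gen = 20, 500, 30          # diploid individuals, loci, generations
freq = np.full(n_loci, 0.5)             # starting allele frequencies
het = []
for _ in range(n_gen):
    freq = rng.binomial(2 * N, freq) / (2 * N)   # binomial resampling = drift
    het.append(np.mean(2 * freq * (1 - freq)))
# het declines roughly as H0 * (1 - 1/(2N))**t; observed drops in excess of
# this neutral expectation are what point to selection.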
2204.05415
Nathaniel Linden
Nathaniel J. Linden, Boris Kramer and Padmini Rangamani
Bayesian Parameter Estimation for Dynamical Models in Systems Biology
58 pages, 24 figures
PLOS Computational Biology 18(10): e1010651 (2022)
10.1371/journal.pcbi.1010651
null
q-bio.QM q-bio.MN
http://creativecommons.org/licenses/by-nc-nd/4.0/
Dynamical systems modeling, particularly via systems of ordinary differential equations, has been used to effectively capture the temporal behavior of different biochemical components in signal transduction networks. Despite the recent advances in experimental measurements, including sensor development and '-omics' studies that have helped populate protein-protein interaction networks in great detail, modeling in systems biology lacks systematic methods to estimate kinetic parameters and quantify associated uncertainties. This is due to several factors, including sparse and noisy experimental measurements, a lack of detailed molecular mechanisms underlying the reactions, and missing biochemical interactions. Additionally, the inherent nonlinearities with respect to the states and parameters associated with the system of differential equations further compound the challenges of parameter estimation. In this study, we propose a comprehensive framework for Bayesian parameter estimation and complete quantification of the effects of uncertainties in the data and models. We apply these methods to a series of signaling models of increasing mathematical complexity. Systematic analysis of these dynamical systems showed that parameter estimation depends on data sparsity, noise level, and model structure, including the existence of multiple steady states. These results highlight how focused uncertainty quantification can enrich systems biology modeling and enable additional quantitative analyses for parameter estimation.
[ { "created": "Mon, 11 Apr 2022 21:24:10 GMT", "version": "v1" }, { "created": "Mon, 24 Oct 2022 16:38:09 GMT", "version": "v2" }, { "created": "Thu, 5 Jan 2023 17:06:49 GMT", "version": "v3" } ]
2023-01-06
[ [ "Linden", "Nathaniel J.", "" ], [ "Kramer", "Boris", "" ], [ "Rangamani", "Padmini", "" ] ]
Dynamical systems modeling, particularly via systems of ordinary differential equations, has been used to effectively capture the temporal behavior of different biochemical components in signal transduction networks. Despite the recent advances in experimental measurements, including sensor development and '-omics' studies that have helped populate protein-protein interaction networks in great detail, modeling in systems biology lacks systematic methods to estimate kinetic parameters and quantify associated uncertainties. This is due to several factors, including sparse and noisy experimental measurements, a lack of detailed molecular mechanisms underlying the reactions, and missing biochemical interactions. Additionally, the inherent nonlinearities with respect to the states and parameters associated with the system of differential equations further compound the challenges of parameter estimation. In this study, we propose a comprehensive framework for Bayesian parameter estimation and complete quantification of the effects of uncertainties in the data and models. We apply these methods to a series of signaling models of increasing mathematical complexity. Systematic analysis of these dynamical systems showed that parameter estimation depends on data sparsity, noise level, and model structure, including the existence of multiple steady states. These results highlight how focused uncertainty quantification can enrich systems biology modeling and enable additional quantitative analyses for parameter estimation.
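A stripped-down version of the Bayesian workflow can be sketched with random-walk Metropolis on a one-parameter ODE. The exponential-decay model, the noise level, and the flat positive prior are placeholders; the paper applies the framework to far richer signaling models and quantifies additional sources of uncertainty.

# Minimal random-walk Metropolis sketch for an ODE parameter posterior.
import numpy as np

rng = np.random.default_rng(3)
t_obs = np.linspace(0, 5, 20)
k_true, sigma = 0.7, 0.05
y_obs = np.exp(-k_true * t_obs) + rng.normal(0, sigma, t_obs.size)

def log_post(k):
    if k <= 0:                      # flat prior on k > 0
        return -np.inf
    resid = y_obs - np.exp(-k * t_obs)   # analytic solution of dx/dt = -k*x
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

k, chain = 1.0, []
for _ in range(5000):
    prop = k + rng.normal(0, 0.05)       # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(k):
        k = prop                         # Metropolis accept
    chain.append(k)
post = np.array(chain[1000:])            # discard burn-in
print(post.mean(), post.std())           # posterior mean and uncertainty for k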
2109.06700
Carter Butts
Vy Duong, Elizabeth Diessner, Gianmarc Grazioli, Rachel W. Martin, and Carter T. Butts
Neural Upscaling from Residue-level Protein Structure Networks to Atomistic Structure
null
null
null
null
q-bio.BM cs.LG physics.data-an q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Coarse-graining is a powerful tool for extending the reach of dynamic models of proteins and other biological macromolecules. Topological coarse-graining, in which biomolecules or sets thereof are represented via graph structures, is a particularly useful way of obtaining highly compressed representations of molecular structure, and simulations operating via such representations can achieve substantial computational savings. A drawback of coarse-graining, however, is the loss of atomistic detail - an effect that is especially acute for topological representations such as protein structure networks (PSNs). Here, we introduce an approach based on a combination of machine learning and physically-guided refinement for inferring atomic coordinates from PSNs. This "neural upscaling" procedure exploits the constraints implied by PSNs on possible configurations, as well as differences in the likelihood of observing different configurations with the same PSN. Using a 1 $\mu$s atomistic molecular dynamics trajectory of A$\beta_{1-40}$, we show that neural upscaling is able to effectively recapitulate detailed structural information for intrinsically disordered proteins, being particularly successful in recovering features such as transient secondary structure. These results suggest that scalable network-based models for protein structure and dynamics may be used in settings where atomistic detail is desired, with upscaling employed to impute atomic coordinates from PSNs.
[ { "created": "Wed, 25 Aug 2021 23:43:57 GMT", "version": "v1" } ]
2021-09-15
[ [ "Duong", "Vy", "" ], [ "Diessner", "Elizabeth", "" ], [ "Grazioli", "Gianmarc", "" ], [ "Martin", "Rachel W.", "" ], [ "Butts", "Carter T.", "" ] ]
Coarse-graining is a powerful tool for extending the reach of dynamic models of proteins and other biological macromolecules. Topological coarse-graining, in which biomolecules or sets thereof are represented via graph structures, is a particularly useful way of obtaining highly compressed representations of molecular structure, and simulations operating via such representations can achieve substantial computational savings. A drawback of coarse-graining, however, is the loss of atomistic detail - an effect that is especially acute for topological representations such as protein structure networks (PSNs). Here, we introduce an approach based on a combination of machine learning and physically-guided refinement for inferring atomic coordinates from PSNs. This "neural upscaling" procedure exploits the constraints implied by PSNs on possible configurations, as well as differences in the likelihood of observing different configurations with the same PSN. Using a 1 $\mu$s atomistic molecular dynamics trajectory of A$\beta_{1-40}$, we show that neural upscaling is able to effectively recapitulate detailed structural information for intrinsically disordered proteins, being particularly successful in recovering features such as transient secondary structure. These results suggest that scalable network-based models for protein structure and dynamics may be used in settings where atomistic detail is desired, with upscaling employed to impute atomic coordinates from PSNs.
1409.4256
Stefan Engblom
Tomas Ekeberg, Stefan Engblom, and Jing Liu
Machine learning for ultrafast X-ray diffraction patterns on large-scale GPU clusters
null
Int. J. High Perf. Comput. Appl. 29(2):233--243 (2015)
10.1177/1094342015572030
null
q-bio.BM cs.DC cs.LG physics.bio-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The classical method of determining the atomic structure of complex molecules by analyzing diffraction patterns is currently undergoing drastic developments. Modern techniques for producing extremely bright and coherent X-ray lasers allow a beam of streaming particles to be intercepted and hit by an ultrashort, high-energy X-ray beam. Through machine learning methods the data thus collected can be transformed into a three-dimensional volumetric intensity map of the particle itself. The computational complexity associated with this problem is so high that clusters of data-parallel accelerators are required. We have implemented a distributed and highly efficient algorithm for inversion of large collections of diffraction patterns targeting clusters of hundreds of GPUs. With the expected enormous amount of diffraction data to be produced in the foreseeable future, this is the scale required to approach real-time processing of data at the beam site. Using both real and synthetic data we look at the scaling properties of the application and discuss the overall computational viability of this exciting and novel imaging technique.
[ { "created": "Thu, 11 Sep 2014 20:26:23 GMT", "version": "v1" }, { "created": "Tue, 16 Dec 2014 12:53:13 GMT", "version": "v2" } ]
2015-10-12
[ [ "Ekeberg", "Tomas", "" ], [ "Engblom", "Stefan", "" ], [ "Liu", "Jing", "" ] ]
The classical method of determining the atomic structure of complex molecules by analyzing diffraction patterns is currently undergoing drastic developments. Modern techniques for producing extremely bright and coherent X-ray lasers allow a beam of streaming particles to be intercepted and hit by an ultrashort, high-energy X-ray beam. Through machine learning methods the data thus collected can be transformed into a three-dimensional volumetric intensity map of the particle itself. The computational complexity associated with this problem is so high that clusters of data-parallel accelerators are required. We have implemented a distributed and highly efficient algorithm for inversion of large collections of diffraction patterns targeting clusters of hundreds of GPUs. With the expected enormous amount of diffraction data to be produced in the foreseeable future, this is the scale required to approach real-time processing of data at the beam site. Using both real and synthetic data we look at the scaling properties of the application and discuss the overall computational viability of this exciting and novel imaging technique.
2309.09557
bastien chassagnol
Bastien Chassagnol (LPSM), Gr\'egory Nuel (LPMA), Etienne Becht
DeCovarT, a multidimensional probabilistic model for the deconvolution of heterogeneous transcriptomic samples
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although bulk transcriptomic analyses have greatly contributed to a better understanding of complex diseases, their sensitivity is hampered by the highly heterogeneous cellular composition of biological samples. To address this limitation, computational deconvolution methods have been designed to automatically estimate the frequencies of the cellular components that make up tissues, typically using reference samples of physically purified populations. However, they perform poorly at differentiating closely related cell populations. We hypothesised that the integration of the covariance matrices of the reference samples could improve the performance of deconvolution algorithms. We therefore developed a new tool, DeCovarT, that integrates the structure of individual cellular transcriptomic networks to reconstruct the bulk profile. Specifically, we infer the ratios of the mixture components by a standard maximum likelihood estimation (MLE) method, using the Levenberg-Marquardt algorithm to recover the maximum of the parametric convolutional distribution of our model. We then consider a reparametrisation of the log-likelihood to explicitly incorporate the simplex constraint on the ratios. Preliminary numerical simulations suggest that this new algorithm outperforms previously published methods, particularly when individual cellular transcriptomic profiles strongly overlap.
[ { "created": "Mon, 18 Sep 2023 08:10:37 GMT", "version": "v1" } ]
2023-09-19
[ [ "Chassagnol", "Bastien", "", "LPSM" ], [ "Nuel", "Grégory", "", "LPMA" ], [ "Becht", "Etienne", "" ] ]
Although bulk transcriptomic analyses have greatly contributed to a better understanding of complex diseases, their sensitivity is hampered by the highly heterogeneous cellular composition of biological samples. To address this limitation, computational deconvolution methods have been designed to automatically estimate the frequencies of the cellular components that make up tissues, typically using reference samples of physically purified populations. However, they perform poorly at differentiating closely related cell populations. We hypothesised that the integration of the covariance matrices of the reference samples could improve the performance of deconvolution algorithms. We therefore developed a new tool, DeCovarT, that integrates the structure of individual cellular transcriptomic networks to reconstruct the bulk profile. Specifically, we infer the ratios of the mixture components by a standard maximum likelihood estimation (MLE) method, using the Levenberg-Marquardt algorithm to recover the maximum of the parametric convolutional distribution of our model. We then consider a reparametrisation of the log-likelihood to explicitly incorporate the simplex constraint on the ratios. Preliminary numerical simulations suggest that this new algorithm outperforms previously published methods, particularly when individual cellular transcriptomic profiles strongly overlap.
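The simplex reparametrisation mentioned above can be illustrated by optimizing unconstrained scores and mapping them to mixture ratios through a softmax, so nonnegativity and the sum-to-one constraint hold by construction. The isotropic-Gaussian stand-in likelihood below is an assumption for illustration; DeCovarT's actual likelihood models the covariance structure of the reference profiles, which is precisely what this sketch omits.

# Sketch: MLE of mixture ratios on the simplex via a softmax reparametrization.
import numpy as np
from scipy.optimize import minimize

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def neg_log_lik(z, y, means):        # y: bulk profile; means: (cell types x genes)
    p = softmax(np.append(z, 0.0))   # K-1 free scores for K cell types
    resid = y - p @ means            # mixture of reference expression profiles
    return 0.5 * np.sum(resid ** 2)  # isotropic-Gaussian placeholder likelihood

rng = np.random.default_rng(4)
means = rng.random((3, 100))                  # 3 cell types, 100 genes
p_true = np.array([0.5, 0.3, 0.2])
y = p_true @ means + rng.normal(0, 0.01, 100)
res = minimize(neg_log_lik, np.zeros(2), args=(y, means))
print(softmax(np.append(res.x, 0.0)))         # estimated mixture ratios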
2405.05462
Reihaneh Hassanzadeh
Reihaneh Hassanzadeh, Anees Abrol, Hamid Reza Hassanzadeh, Vince D. Calhoun
Cross-Modality Translation with Generative Adversarial Networks to Unveil Alzheimer's Disease Biomarkers
null
null
null
null
q-bio.NC cs.LG eess.IV
http://creativecommons.org/licenses/by/4.0/
Generative approaches for cross-modality transformation have recently gained significant attention in neuroimaging. While most previous work has focused on case-control data, the application of generative models to disorder-specific datasets and their ability to preserve diagnostic patterns remain relatively unexplored. Hence, in this study, we investigated the use of a generative adversarial network (GAN) in the context of Alzheimer's disease (AD) to generate functional network connectivity (FNC) and T1-weighted structural magnetic resonance imaging data from each other. We employed a cycle-GAN to synthesize data in an unpaired data transition and enhanced the transition by integrating weak supervision in cases where paired data were available. Our findings revealed that our model could offer remarkable capability, achieving a structural similarity index measure (SSIM) of $0.89 \pm 0.003$ for T1s and a correlation of $0.71 \pm 0.004$ for FNCs. Moreover, our qualitative analysis revealed similar patterns between generated and actual data when comparing AD to cognitively normal (CN) individuals. In particular, we observed significantly increased functional connectivity in cerebellar-sensory motor and cerebellar-visual networks and reduced connectivity in cerebellar-subcortical, auditory-sensory motor, sensory motor-visual, and cerebellar-cognitive control networks. Additionally, the T1 images generated by our model showed a similar pattern of atrophy in the hippocampal and other temporal regions of Alzheimer's patients.
[ { "created": "Wed, 8 May 2024 23:38:02 GMT", "version": "v1" } ]
2024-05-10
[ [ "Hassanzadeh", "Reihaneh", "" ], [ "Abrol", "Anees", "" ], [ "Hassanzadeh", "Hamid Reza", "" ], [ "Calhoun", "Vince D.", "" ] ]
Generative approaches for cross-modality transformation have recently gained significant attention in neuroimaging. While most previous work has focused on case-control data, the application of generative models to disorder-specific datasets and their ability to preserve diagnostic patterns remain relatively unexplored. Hence, in this study, we investigated the use of a generative adversarial network (GAN) in the context of Alzheimer's disease (AD) to generate functional network connectivity (FNC) and T1-weighted structural magnetic resonance imaging data from each other. We employed a cycle-GAN to synthesize data in an unpaired data transition and enhanced the transition by integrating weak supervision in cases where paired data were available. Our findings revealed that our model could offer remarkable capability, achieving a structural similarity index measure (SSIM) of $0.89 \pm 0.003$ for T1s and a correlation of $0.71 \pm 0.004$ for FNCs. Moreover, our qualitative analysis revealed similar patterns between generated and actual data when comparing AD to cognitively normal (CN) individuals. In particular, we observed significantly increased functional connectivity in cerebellar-sensory motor and cerebellar-visual networks and reduced connectivity in cerebellar-subcortical, auditory-sensory motor, sensory motor-visual, and cerebellar-cognitive control networks. Additionally, the T1 images generated by our model showed a similar pattern of atrophy in the hippocampal and other temporal regions of Alzheimer's patients.
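For readers unfamiliar with the cycle-GAN machinery referenced above, its unpaired transition rests on the standard cycle-consistency objective, with generators $G$ (FNC to T1) and $F$ (T1 to FNC):

\[
\mathcal{L}_{\mathrm{cyc}}(G,F) \;=\;
\mathbb{E}_{x}\bigl[\lVert F(G(x)) - x \rVert_1\bigr] +
\mathbb{E}_{y}\bigl[\lVert G(F(y)) - y \rVert_1\bigr],
\]

added to the usual adversarial terms. When paired scans are available, the weak supervision described in the abstract can enter as an additional paired reconstruction penalty; the exact weighting used in the paper is not reproduced here.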
2107.03307
Juan Rocha
Juan C. Rocha
Ecosystems are showing symptoms of resilience loss
19 pages (including SM), 7 figures on main text, 7 SM figures
null
10.1088/1748-9326/ac73a8
null
q-bio.PE physics.soc-ph
http://creativecommons.org/licenses/by/4.0/
Ecosystems around the world are at risk of critical transitions due to increasing anthropogenic pressures and climate change. Yet it is unclear where the risks are higher, or where in the world ecosystems are more vulnerable. Here I measure the resilience of primary productivity proxies for marine and terrestrial ecosystems globally. Up to 29% of global terrestrial ecosystems, and 24% of marine ones, show symptoms of resilience loss. These symptoms appear in all biomes, but the Arctic tundra and boreal forest are the most affected, as well as the Indian Ocean and Eastern Pacific. Although the results are likely an underestimation, they enable the identification of risk areas as well as the potential synchrony of some transitions, helping prioritize areas for management interventions and conservation.
[ { "created": "Wed, 7 Jul 2021 15:36:18 GMT", "version": "v1" }, { "created": "Fri, 7 Jan 2022 04:03:16 GMT", "version": "v2" } ]
2022-06-29
[ [ "Rocha", "Juan C.", "" ] ]
Ecosystems around the world are at risk of critical transitions due to increasing anthropogenic pressures and climate change. Yet it is unclear where the risks are higher, or where in the world ecosystems are more vulnerable. Here I measure the resilience of primary productivity proxies for marine and terrestrial ecosystems globally. Up to 29% of global terrestrial ecosystems, and 24% of marine ones, show symptoms of resilience loss. These symptoms appear in all biomes, but the Arctic tundra and boreal forest are the most affected, as well as the Indian Ocean and Eastern Pacific. Although the results are likely an underestimation, they enable the identification of risk areas as well as the potential synchrony of some transitions, helping prioritize areas for management interventions and conservation.
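One common way such symptoms of resilience loss are quantified is through critical-slowing-down indicators, for example rising lag-1 autocorrelation in a detrended productivity series. The sketch below demonstrates the indicator on a synthetic AR(1) signal whose memory increases over time; it illustrates the class of diagnostics involved, not the paper's exact estimation pipeline.

# Rolling lag-1 autocorrelation as an early-warning / resilience-loss indicator.
import numpy as np

def rolling_ar1(x, window=50):
    out = []
    for i in range(window, len(x) + 1):
        w = x[i - window:i] - x[i - window:i].mean()   # local detrending
        out.append(np.corrcoef(w[:-1], w[1:])[0, 1])   # lag-1 autocorrelation
    return np.array(out)

rng = np.random.default_rng(5)
phi = np.linspace(0.2, 0.95, 500)       # memory creeping up as resilience erodes
x = np.zeros(500)
for t in range(1, 500):
    x[t] = phi[t] * x[t - 1] + rng.normal()
print(rolling_ar1(x)[[0, -1]])          # the indicator drifts upward over time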
q-bio/0612045
Dr. Paul J. Werbos
Paul J. Werbos
Using Adaptive Dynamic Programming to Understand and Replicate Brain Intelligence: the Next Level Design
13p. Preprint for invited talk, IEEE Conference on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL07), expanded into chapter in R. Kozma, Neurodynamics of Higher-Level Cognition and Consciousness, Springer, 2007. New version adds appendix E, clarifications, and a new paragraph on clocks in the brain
null
null
null
q-bio.NC
null
Since the 1960s, I have proposed that we could understand and replicate the highest level of intelligence seen in the brain by building ever more capable and general systems for adaptive dynamic programming (ADP), which is like reinforcement learning but based on approximating the Bellman equation and allowing the controller to know its utility function. Growing empirical evidence on the brain supports this approach. Adaptive critic systems now meet tough engineering challenges and provide a kind of first-generation model of the brain. Lewis, Prokhorov, and I have early second-generation work. Mammal brains possess three core capabilities, creativity/imagination and ways to manage spatial and temporal complexity, which go even beyond the second generation. This paper reviews previous progress, and describes new tools and approaches to overcome the spatial complexity gap.
[ { "created": "Sun, 24 Dec 2006 17:04:33 GMT", "version": "v1" }, { "created": "Tue, 8 May 2007 17:09:36 GMT", "version": "v2" } ]
2007-05-23
[ [ "Werbos", "Paul J.", "" ] ]
Since the 1960s, I have proposed that we could understand and replicate the highest level of intelligence seen in the brain by building ever more capable and general systems for adaptive dynamic programming (ADP), which is like reinforcement learning but based on approximating the Bellman equation and allowing the controller to know its utility function. Growing empirical evidence on the brain supports this approach. Adaptive critic systems now meet tough engineering challenges and provide a kind of first-generation model of the brain. Lewis, Prokhorov, and I have early second-generation work. Mammal brains possess three core capabilities, creativity/imagination and ways to manage spatial and temporal complexity, which go even beyond the second generation. This paper reviews previous progress, and describes new tools and approaches to overcome the spatial complexity gap.
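The Bellman equation that ADP approximates, $J(s) = \max_a \,[\,U(s,a) + \gamma \sum_{s'} P(s' \mid s,a)\, J(s')\,]$, can be solved exactly for tiny problems; adaptive critics replace the table $J$ with a learned approximator when the state space is too large to enumerate. The two-state MDP below is purely illustrative.

# Value iteration on a tiny MDP: the exact dynamic-programming baseline that
# adaptive critic systems approximate at scale.
import numpy as np

U = np.array([[0.0, 1.0],      # U[s, a]: utility of action a in state s
              [2.0, 0.0]])
P = np.array([[[0.9, 0.1], [0.2, 0.8]],   # P[s, a, s']: transition probabilities
              [[0.5, 0.5], [0.1, 0.9]]])
gamma, J = 0.9, np.zeros(2)
for _ in range(200):
    J = np.max(U + gamma * (P @ J), axis=1)   # Bellman backup
print(J)   # the value table a learned critic would approximate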
1612.09351
Ricard Sole
Ricard Sole, Raul Montanez, Salva Duran-Nebreda, Daniel R. Amor, Blai Vidiella and Josep Sardanyes
Population dynamics of synthetic Terraformation motifs
23 pages, 11 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ecosystems are complex systems, currently experiencing several threats associated with global warming, intensive exploitation, and human-driven habitat degradation. Such threats are pushing ecosystems to the brink of collapse. Because of a general presence of multiple stable states, including states involving population extinction, and due to intrinsic nonlinearities associated with feedback loops, collapse can occur in a catastrophic manner. Such catastrophic shifts have been suggested to pervade many of the future transitions affecting ecosystems at many different scales. Many studies have tried to delineate potential warning signals predicting such ongoing shifts but little is known about how such transitions might be effectively prevented. It has been recently suggested that a potential path to prevent or modify the outcome of these transitions would involve designing synthetic organisms and synthetic ecological interactions that could push these endangered systems out of the critical boundaries. Four classes of such ecological engineering designs or {\em Terraformation motifs} have been defined in a qualitative way. Here we develop the simplest mathematical models associated with these motifs, defining the expected stability conditions and domains where the motifs shall properly work.
[ { "created": "Fri, 30 Dec 2016 00:02:36 GMT", "version": "v1" } ]
2017-01-02
[ [ "Sole", "Ricard", "" ], [ "Montanez", "Raul", "" ], [ "Duran-Nebreda", "Salva", "" ], [ "Amor", "Daniel R.", "" ], [ "Vidiella", "Blai", "" ], [ "Sardanyes", "Josep", "" ] ]
Ecosystems are complex systems, currently experiencing several threats associated with global warming, intensive exploitation, and human-driven habitat degradation. Such threats are pushing ecosystems to the brink of collapse. Because of a general presence of multiple stable states, including states involving population extinction, and due to intrinsic nonlinearities associated with feedback loops, collapse can occur in a catastrophic manner. Such catastrophic shifts have been suggested to pervade many of the future transitions affecting ecosystems at many different scales. Many studies have tried to delineate potential warning signals predicting such ongoing shifts but little is known about how such transitions might be effectively prevented. It has been recently suggested that a potential path to prevent or modify the outcome of these transitions would involve designing synthetic organisms and synthetic ecological interactions that could push these endangered systems out of the critical boundaries. Four classes of such ecological engineering designs or {\em Terraformation motifs} have been defined in a qualitative way. Here we develop the simplest mathematical models associated with these motifs, defining the expected stability conditions and domains where the motifs shall properly work.
1309.3521
Liane Gabora
Liane Gabora
Cultural Evolution Entails (Creativity Entails (Concept Combination Entails Quantum Structure))
null
Gabora, L. (2007). Proceedings of the Association for the Advancement of Artificial Intelligence (AAAI) Spring Symposium 8: Quantum Interaction, March 26-28, Stanford University, pp. 106-113
null
null
q-bio.PE q-bio.NC quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The theory of natural selection cannot describe how early life evolved, in part because acquired characteristics are passed on through horizontal exchange. It has been proposed that culture, like life, began with the emergence of autopoietic form, thus its evolution too cannot be described by natural selection. The evolution of autopoietic form can be described using a framework referred to as Context-driven Actualization of Potential (CAP), which grew out of a generalization of the formalisms of quantum mechanics, and encompasses nondeterministic as well as deterministic change of state. The autopoietic structure that evolves through culture is the mind, or more accurately the conceptual network that yields an individual's internal model of the world. A branch of CAP research referred to as the state-context-property (SCOP) formalism provides a mathematical framework for reconciling the stability of conceptual structure with its susceptibility to context-driven change. The combination of two or more concepts (an extreme case of contextual influence), as occurs in insight, is modeled as a state of entanglement. Theoretical and empirical findings are presented that challenge assumptions underlying virtually all of cognitive science, such as the notion of spreading activation and the assumption that cognitive processes can be described with a Kolmogorovian probability model.
[ { "created": "Wed, 11 Sep 2013 21:56:38 GMT", "version": "v1" } ]
2013-09-16
[ [ "Gabora", "Liane", "" ] ]
The theory of natural selection cannot describe how early life evolved, in part because acquired characteristics are passed on through horizontal exchange. It has been proposed that culture, like life, began with the emergence of autopoietic form, thus its evolution too cannot be described by natural selection. The evolution of autopoietic form can be described using a framework referred to as Context-driven Actualization of Potential (CAP), which grew out of a generalization of the formalisms of quantum mechanics, and encompasses nondeterministic as well as deterministic change of state. The autopoietic structure that evolves through culture is the mind, or more accurately the conceptual network that yields an individual's internal model of the world. A branch of CAP research referred to as the state-context-property (SCOP) formalism provides a mathematical framework for reconciling the stability of conceptual structure with its susceptibility to context-driven change. The combination of two or more concepts (an extreme case of contextual influence), as occurs in insight, is modeled as a state of entanglement. Theoretical and empirical findings are presented that challenge assumptions underlying virtually all of cognitive science, such as the notion of spreading activation and the assumption that cognitive processes can be described with a Kolmogorovian probability model.
1206.2332
Joseph Pickrell
Joseph K. Pickrell, Jonathan K. Pritchard
Inference of population splits and mixtures from genome-wide allele frequency data
28 pages, 6 figures in main text. Attached supplement is 22 pages, 15 figures. This is an updated version of the preprint available at http://precedings.nature.com/documents/6956/version/1
PLoS Genet 8(11): e1002967 (2012)
10.1371/journal.pgen.1002967
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many aspects of the historical relationships between populations in a species are reflected in genetic data. Inferring these relationships from genetic data, however, remains a challenging task. In this paper, we present a statistical model for inferring the patterns of population splits and mixtures in multiple populations. In this model, the sampled populations in a species are related to their common ancestor through a graph of ancestral populations. Using genome-wide allele frequency data and a Gaussian approximation to genetic drift, we infer the structure of this graph. We applied this method to a set of 55 human populations and a set of 82 dog breeds and wild canids. In both species, we show that a simple bifurcating tree does not fully describe the data; in contrast, we infer many migration events. While some of the migration events that we find have been detected previously, many have not. For example, in the human data we infer that Cambodians trace approximately 16% of their ancestry to a population ancestral to other extant East Asian populations. In the dog data, we infer that both the boxer and basenji trace a considerable fraction of their ancestry (9% and 25%, respectively) to wolves subsequent to domestication, and that East Asian toy breeds (the Shih Tzu and the Pekingese) result from admixture between modern toy breeds and "ancient" Asian breeds. Software implementing the model described here, called TreeMix, is available at http://treemix.googlecode.com
[ { "created": "Mon, 11 Jun 2012 19:39:18 GMT", "version": "v1" } ]
2012-11-20
[ [ "Pickrell", "Joseph K.", "" ], [ "Pritchard", "Jonathan K.", "" ] ]
Many aspects of the historical relationships between populations in a species are reflected in genetic data. Inferring these relationships from genetic data, however, remains a challenging task. In this paper, we present a statistical model for inferring the patterns of population splits and mixtures in multiple populations. In this model, the sampled populations in a species are related to their common ancestor through a graph of ancestral populations. Using genome-wide allele frequency data and a Gaussian approximation to genetic drift, we infer the structure of this graph. We applied this method to a set of 55 human populations and a set of 82 dog breeds and wild canids. In both species, we show that a simple bifurcating tree does not fully describe the data; in contrast, we infer many migration events. While some of the migration events that we find have been detected previously, many have not. For example, in the human data we infer that Cambodians trace approximately 16% of their ancestry to a population ancestral to other extant East Asian populations. In the dog data, we infer that both the boxer and basenji trace a considerable fraction of their ancestry (9% and 25%, respectively) to wolves subsequent to domestication, and that East Asian toy breeds (the Shih Tzu and the Pekingese) result from admixture between modern toy breeds and "ancient" Asian breeds. Software implementing the model described here, called TreeMix, is available at http://treemix.googlecode.com
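The key modelling ingredient named in this abstract is a Gaussian approximation to genetic drift along the branches of a population graph. The sketch below forward-simulates that approximation on a tiny made-up tree (two sister populations plus an outgroup) to show how shared branches induce covariance in allele-frequency deviations; it is not the TreeMix inference procedure, and all drift parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def drift(p, c):
    """Gaussian approximation to genetic drift: the child frequency is drawn
    from N(p, c * p * (1 - p)), where c summarizes drift along the branch.
    Clipped to [0, 1]; the c values used below are illustrative."""
    return np.clip(rng.normal(p, np.sqrt(c * p * (1 - p))), 0, 1)

n_snps = 10_000
p_anc = rng.uniform(0.05, 0.95, n_snps)   # ancestral allele frequencies
p_ab  = drift(p_anc, 0.02)                # internal branch to (A, B) ancestor
pA, pB = drift(p_ab, 0.01), drift(p_ab, 0.03)
pC     = drift(p_anc, 0.05)               # outgroup-like population

# Covariance of frequency deviations reflects shared branches of the graph:
dev = np.vstack([pA, pB, pC]) - p_anc
print(np.round(np.cov(dev), 4))
```

The printed covariance shows cov(A, B) exceeding cov(A, C), the shared-drift signal that graph-based inference methods exploit.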
1204.6353
Fernando Antoneli Jr
Fernando Antoneli, Francisco Bosco, Diogo Castro and Luiz Mario Janini
Virus Replication as a Phenotypic Version of Polynucleotide Evolution
23 pages, 1 figure, 2 tables. arXiv admin note: substantial text overlap with arXiv:1110.3368
Bulletin of Mathematical Biology, April 2013, Volume 75, Issue 4, pp 602-628
10.1007/s11538-013-9822-9
null
q-bio.PE math.PR physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we revisit, and adapt to viral evolution, an approach based on the theory of branching processes advanced by Demetrius, Schuster and Sigmund ("Polynucleotide evolution and branching processes", Bull. Math. Biol. 46 (1985) 239-262) in their study of polynucleotide evolution. By taking into account beneficial effects we obtain a non-trivial multivariate generalization of their single-type branching process model. Perturbative techniques allow us to obtain analytical asymptotic expressions for the main global parameters of the model, which lead to the following rigorous results: (i) a new criterion for "no sure extinction"; (ii) a generalization and proof, for this particular class of models, of the lethal mutagenesis criterion proposed by Bull, Sanju\'an and Wilke ("Theory of lethal mutagenesis for viruses", J. Virology 18 (2007) 2930-2939); (iii) a new proposal for the notion of relaxation time with a quantitative prescription for its evaluation; (iv) a quantitative description of the evolution of the expected values in four distinct "stages": extinction threshold, lethal mutagenesis, stationary "equilibrium" and transient. Finally, based on these quantitative results we are able to draw some qualitative conclusions.
[ { "created": "Sat, 28 Apr 2012 00:36:43 GMT", "version": "v1" }, { "created": "Thu, 11 Oct 2012 21:33:23 GMT", "version": "v2" }, { "created": "Thu, 10 Jan 2013 00:35:04 GMT", "version": "v3" }, { "created": "Sat, 8 Feb 2014 21:10:22 GMT", "version": "v4" } ]
2014-02-11
[ [ "Antoneli", "Fernando", "" ], [ "Bosco", "Francisco", "" ], [ "Castro", "Diogo", "" ], [ "Janini", "Luiz Mario", "" ] ]
In this paper we revisit, and adapt to viral evolution, an approach based on the theory of branching processes advanced by Demetrius, Schuster and Sigmund ("Polynucleotide evolution and branching processes", Bull. Math. Biol. 46 (1985) 239-262) in their study of polynucleotide evolution. By taking into account beneficial effects we obtain a non-trivial multivariate generalization of their single-type branching process model. Perturbative techniques allow us to obtain analytical asymptotic expressions for the main global parameters of the model, which lead to the following rigorous results: (i) a new criterion for "no sure extinction"; (ii) a generalization and proof, for this particular class of models, of the lethal mutagenesis criterion proposed by Bull, Sanju\'an and Wilke ("Theory of lethal mutagenesis for viruses", J. Virology 18 (2007) 2930-2939); (iii) a new proposal for the notion of relaxation time with a quantitative prescription for its evaluation; (iv) a quantitative description of the evolution of the expected values in four distinct "stages": extinction threshold, lethal mutagenesis, stationary "equilibrium" and transient. Finally, based on these quantitative results we are able to draw some qualitative conclusions.
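To make the branching-process setting concrete, the following toy simulation tracks a population structured by the number of deleterious mutations, with occasional beneficial (back) mutations, and checks whether the lineage dies out. It is a loose illustrative sketch of a multitype branching process, not the authors' model; all rates are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy multitype branching sketch: an individual carrying k deleterious
# mutations leaves Poisson(w0 * (1 - s) ** k) offspring; each offspring gains
# a deleterious mutation with prob u_d or a beneficial (back) mutation with
# prob u_b. All parameter values are arbitrary assumptions for the sketch.
w0, s, u_d, u_b = 1.5, 0.1, 0.3, 0.01

pop = {0: 10}                        # mutation class k -> number of individuals
for gen in range(60):
    new = {}
    for k, n in pop.items():
        kids = int(rng.poisson(w0 * (1 - s) ** k, size=n).sum())
        for r in rng.random(kids):
            kk = k + 1 if r < u_d else (max(k - 1, 0) if r < u_d + u_b else k)
            new[kk] = new.get(kk, 0) + 1
    pop = new
    size = sum(pop.values())
    if size == 0 or size > 50_000:   # extinct, or clearly supercritical
        break

print(f"generation {gen}: {size} individuals,",
      "extinct" if size == 0 else "surviving")
```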
1008.4025
Peter Csermely
Tamas Korcsmaros, Illes J. Farkas, Mat\'e S. Szalay, Petra Rovo, David Fazekas, Zoltan Spiro, Csaba Bode, Katalin Lenti, Tibor Vellai and Peter Csermely
Uniformly curated signaling pathways reveal tissue-specific cross-talks and support drug target discovery
9 pages, 4 figures, 2 tables and a supplementary info with 5 Figures and 13 Tables
Bioinformatics 26, 2042-2050 (2010)
10.1093/bioinformatics/btq310
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: Signaling pathways control a large variety of cellular processes. However, currently, even within the same database signaling pathways are often curated at different levels of detail. This makes comparative and cross-talk analyses difficult. Results: We present SignaLink, a database containing 8 major signaling pathways from Caenorhabditis elegans, Drosophila melanogaster, and humans. Based on 170 review and approx. 800 research articles, we have compiled pathways with semi-automatic searches and uniform, well-documented curation rules. We found that in humans any two of the 8 pathways can cross-talk. We quantified the possible tissue- and cancer-specific activity of cross-talks and found pathway-specific expression profiles. In addition, we identified 327 proteins relevant for drug target discovery. Conclusions: We provide a novel resource for comparative and cross-talk analyses of signaling pathways. The identified multi-pathway and tissue-specific cross-talks contribute to the understanding of the signaling complexity in health and disease and underscore its importance in network-based drug target selection. Availability: http://SignaLink.org
[ { "created": "Tue, 24 Aug 2010 12:18:00 GMT", "version": "v1" } ]
2010-08-25
[ [ "Korcsmaros", "Tamas", "" ], [ "Farkas", "Illes J.", "" ], [ "Szalay", "Maté S.", "" ], [ "Rovo", "Petra", "" ], [ "Fazekas", "David", "" ], [ "Spiro", "Zoltan", "" ], [ "Bode", "Csaba", "" ], [ "Le...
Motivation: Signaling pathways control a large variety of cellular processes. However, currently, even within the same database signaling pathways are often curated at different levels of detail. This makes comparative and cross-talk analyses difficult. Results: We present SignaLink, a database containing 8 major signaling pathways from Caenorhabditis elegans, Drosophila melanogaster, and humans. Based on 170 review and approx. 800 research articles, we have compiled pathways with semi-automatic searches and uniform, well-documented curation rules. We found that in humans any two of the 8 pathways can cross-talk. We quantified the possible tissue- and cancer-specific activity of cross-talks and found pathway-specific expression profiles. In addition, we identified 327 proteins relevant for drug target discovery. Conclusions: We provide a novel resource for comparative and cross-talk analyses of signaling pathways. The identified multi-pathway and tissue-specific cross-talks contribute to the understanding of the signaling complexity in health and disease and underscore its importance in network-based drug target selection. Availability: http://SignaLink.org
2103.07226
Mar\'ia Vallet-Regi
Rafaela Garcia-Alvarez, Isabel Izquierdo-Barba, Maria Vallet-Regi
3D scaffold with effective multidrug sequential release against bacteria biofilm
40 pages, 14 figures
Acta Biomaterialia. 49, 113-126 (2017)
10.1016/j.actbio.2016.11.028
null
q-bio.TO
http://creativecommons.org/licenses/by-nc-nd/4.0/
Bone infection is a feared complication following surgery or trauma that remains an extremely difficult disease to deal with. The outcome of therapy could be improved with the design of 3D implants that combine the merits of osseous regeneration and local multidrug therapy, so as to avoid bacterial growth, drug resistance and the feared side effects. Herein, hierarchical 3D multidrug scaffolds based on a nanocomposite bioceramic and polyvinyl alcohol (PVA), prepared by rapid prototyping with an external coating of gelatin-glutaraldehyde (Gel-Glu), have been fabricated. These 3D scaffolds contain three antimicrobial agents (rifampin, levofloxacin and vancomycin), which have been localized in different compartments of the scaffold to obtain different release kinetics and a more effective combined therapy. Levofloxacin was loaded into the mesopores of the nanocomposite bioceramic part, vancomycin was localized in the PVA biopolymer part, and rifampin was loaded in the external coating of Gel-Glu. The obtained results show an early and fast release of rifampin followed by sustained and prolonged release of vancomycin and levofloxacin, respectively, which is mainly governed by the progressive in vitro degradability of these scaffolds. This combined therapy is able to destroy Gram-positive and Gram-negative bacterial biofilms as well as inhibit bacterial growth; in addition, these multifunctional scaffolds exhibit excellent bioactivity and good biocompatibility, with complete colonization by preosteoblasts over the entire surface, ensuring good bone regeneration. These findings suggest that these hierarchical 3D multidrug scaffolds are promising candidates as platforms for local bone infection therapy.
[ { "created": "Fri, 12 Mar 2021 12:06:46 GMT", "version": "v1" } ]
2021-03-15
[ [ "Garcia-Alvarez", "Rafaela", "" ], [ "Izquierdo-Barba", "Isabel", "" ], [ "Vallet-Regi", "Maria", "" ] ]
Bone infection is a feared complication following surgery or trauma that remains an extremely difficult disease to deal with. The outcome of therapy could be improved with the design of 3D implants that combine the merits of osseous regeneration and local multidrug therapy, so as to avoid bacterial growth, drug resistance and the feared side effects. Herein, hierarchical 3D multidrug scaffolds based on a nanocomposite bioceramic and polyvinyl alcohol (PVA), prepared by rapid prototyping with an external coating of gelatin-glutaraldehyde (Gel-Glu), have been fabricated. These 3D scaffolds contain three antimicrobial agents (rifampin, levofloxacin and vancomycin), which have been localized in different compartments of the scaffold to obtain different release kinetics and a more effective combined therapy. Levofloxacin was loaded into the mesopores of the nanocomposite bioceramic part, vancomycin was localized in the PVA biopolymer part, and rifampin was loaded in the external coating of Gel-Glu. The obtained results show an early and fast release of rifampin followed by sustained and prolonged release of vancomycin and levofloxacin, respectively, which is mainly governed by the progressive in vitro degradability of these scaffolds. This combined therapy is able to destroy Gram-positive and Gram-negative bacterial biofilms as well as inhibit bacterial growth; in addition, these multifunctional scaffolds exhibit excellent bioactivity and good biocompatibility, with complete colonization by preosteoblasts over the entire surface, ensuring good bone regeneration. These findings suggest that these hierarchical 3D multidrug scaffolds are promising candidates as platforms for local bone infection therapy.
1409.5280
Marcus Kaiser
Marc-Thorsten Huett and Marcus Kaiser and Claus C. Hilgetag
Perspective: network-guided pattern formation of neural dynamics
null
Phil. Trans. R. Soc. B 20130522, 2014
10.1098/rstb.2013.0522
null
q-bio.NC nlin.AO physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The understanding of neural activity patterns is fundamentally linked to an understanding of how the brain's network architecture shapes dynamical processes. Established approaches rely mostly on deviations of a given network from certain classes of random graphs. Hypotheses about the supposed role of prominent topological features (for instance, the roles of modularity, network motifs, or hierarchical network organization) are derived from these deviations. An alternative strategy could be to study deviations of network architectures from regular graphs (rings, lattices) and consider the implications of such deviations for self-organized dynamic patterns on the network. Following this strategy, we draw on the theory of spatiotemporal pattern formation and propose a novel perspective for analyzing dynamics on networks, by evaluating how the self-organized dynamics are confined by network architecture to a small set of permissible collective states. In particular, we discuss the role of prominent topological features of brain connectivity, such as hubs, modules and hierarchy, in shaping activity patterns. We illustrate the notion of network-guided pattern formation with numerical simulations and outline how it can facilitate the understanding of neural dynamics.
[ { "created": "Thu, 18 Sep 2014 12:17:16 GMT", "version": "v1" } ]
2014-09-19
[ [ "Huett", "Marc-Thorsten", "" ], [ "Kaiser", "Marcus", "" ], [ "Hilgetag", "Claus C.", "" ] ]
The understanding of neural activity patterns is fundamentally linked to an understanding of how the brain's network architecture shapes dynamical processes. Established approaches rely mostly on deviations of a given network from certain classes of random graphs. Hypotheses about the supposed role of prominent topological features (for instance, the roles of modularity, network motifs, or hierarchical network organization) are derived from these deviations. An alternative strategy could be to study deviations of network architectures from regular graphs (rings, lattices) and consider the implications of such deviations for self-organized dynamic patterns on the network. Following this strategy, we draw on the theory of spatiotemporal pattern formation and propose a novel perspective for analyzing dynamics on networks, by evaluating how the self-organized dynamics are confined by network architecture to a small set of permissible collective states. In particular, we discuss the role of prominent topological features of brain connectivity, such as hubs, modules and hierarchy, in shaping activity patterns. We illustrate the notion of network-guided pattern formation with numerical simulations and outline how it can facilitate the understanding of neural dynamics.
1804.04090
Pierre Degond
Alina Chertock, Pierre Degond, Sophie Hecht, Jean-Paul Vincent
Incompressible limit of a continuum model of tissue growth with segregation for two cell populations
null
null
null
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes a model for the growth of two interacting populations of cells that do not mix. The dynamics is driven by pressure and cohesion forces on the one hand and proliferation on the other. Following earlier works on the single-population case, we show that the model approximates a free-boundary Hele-Shaw-type model, which we characterise using both analytical and numerical arguments.
[ { "created": "Wed, 11 Apr 2018 16:53:33 GMT", "version": "v1" } ]
2018-04-12
[ [ "Chertock", "Alina", "" ], [ "Degond", "Pierre", "" ], [ "Hecht", "Sophie", "" ], [ "Vincent", "Jean-Paul", "" ] ]
This paper proposes a model for the growth of two interacting populations of cells that do not mix. The dynamics is driven by pressure and cohesion forces on the one hand and proliferation on the other. Following earlier works on the single-population case, we show that the model approximates a free-boundary Hele-Shaw-type model, which we characterise using both analytical and numerical arguments.
1505.01375
Jicun Wang-Michelitsch
Jicun Wang-Michelitsch and Thomas M. Michelitsch
Cell transformation in tumor-development: a result of accumulation of Misrepairs of DNA through many generations of cells
14 pages, 3 figures
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Development of a tumor is known to be a result of the accumulation of DNA changes in somatic cells. However, the processes by which DNA changes are produced and accumulate in somatic cells are not clear. DNA changes are of two types: point DNA mutations and chromosome changes. Point DNA mutations (DNA mutations) are the main type of DNA change that can remain and accumulate in cells. Severe DNA injuries are the causes of DNA mutations; however, Misrepair of DNA is an essential process for transforming a DNA injury into a survivable and inheritable DNA mutation. In somatic cells, Misrepair of DNA is the main source of DNA mutations. Since the chance that a cell survives by Misrepair of DNA is low, accumulation of DNA mutations can only take place in cells that can proliferate, and tumors can only develop in tissues that are regenerable. The accumulation of Misrepairs of DNA needs to proceed over many generations of cells, so the transformation of a normal cell into a tumor cell is a slow and long process. However, once a cell is transformed, especially when it is malignantly transformed, the deficiency of DNA repair and the rapid cell proliferation will accelerate the accumulation of DNA mutations. The process of accumulation of DNA mutations is in fact the process of aging of the genomic DNA. Repeated cell injuries and repeated cell regenerations are the two preconditions for tumor development. For cancer prevention, a moderate and flexible living style is advised.
[ { "created": "Wed, 6 May 2015 14:14:42 GMT", "version": "v1" }, { "created": "Wed, 1 Jun 2016 11:36:57 GMT", "version": "v2" }, { "created": "Wed, 4 Jan 2017 10:00:00 GMT", "version": "v3" } ]
2017-01-05
[ [ "Wang-Michelitsch", "Jicun", "" ], [ "Michelitsch", "Thomas M.", "" ] ]
Development of a tumor is known to be a result of the accumulation of DNA changes in somatic cells. However, the processes by which DNA changes are produced and accumulate in somatic cells are not clear. DNA changes are of two types: point DNA mutations and chromosome changes. Point DNA mutations (DNA mutations) are the main type of DNA change that can remain and accumulate in cells. Severe DNA injuries are the causes of DNA mutations; however, Misrepair of DNA is an essential process for transforming a DNA injury into a survivable and inheritable DNA mutation. In somatic cells, Misrepair of DNA is the main source of DNA mutations. Since the chance that a cell survives by Misrepair of DNA is low, accumulation of DNA mutations can only take place in cells that can proliferate, and tumors can only develop in tissues that are regenerable. The accumulation of Misrepairs of DNA needs to proceed over many generations of cells, so the transformation of a normal cell into a tumor cell is a slow and long process. However, once a cell is transformed, especially when it is malignantly transformed, the deficiency of DNA repair and the rapid cell proliferation will accelerate the accumulation of DNA mutations. The process of accumulation of DNA mutations is in fact the process of aging of the genomic DNA. Repeated cell injuries and repeated cell regenerations are the two preconditions for tumor development. For cancer prevention, a moderate and flexible living style is advised.
1209.4487
Konstantin Blyuss
K.B. Blyuss and L.B. Nicholson
The role of tunable activation thresholds in the dynamics of autoimmunity
23 pages, 6 figures
J. Theor. Biol. 308, 45-55 (2012)
null
null
q-bio.PE nlin.CD q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It has been known for some time that human autoimmune diseases can be triggered by viral infections. Several possible mechanisms of interaction between a virus and the immune system have been analysed, with the prevailing opinion being that the onset of autoimmunity can in many cases be attributed to "molecular mimicry", where linear peptide epitopes processed from viral proteins mimic normal host self proteins, thus leading to a cross-reaction of the immune response against the virus with host cells. In this paper we present a mathematical model for the dynamics of an immune response to a viral infection and autoimmunity, which takes into account T cells with different activation thresholds. We show how the infection can be cleared by the immune system, as well as how it can lead to a chronic infection or a recurrent infection with relapses and remissions. Numerical simulations of the model are performed to illustrate various dynamical regimes, as well as to analyse the potential impact of treatment of autoimmune disease in the chronic and recurrent states. The results provide good qualitative agreement with available data on immune responses to viral infections and the progression of autoimmune diseases.
[ { "created": "Thu, 20 Sep 2012 10:19:38 GMT", "version": "v1" } ]
2012-09-21
[ [ "Blyuss", "K. B.", "" ], [ "Nicholson", "L. B.", "" ] ]
It has been known for some time that human autoimmune diseases can be triggered by viral infections. Several possible mechanisms of interaction between a virus and the immune system have been analysed, with the prevailing opinion being that the onset of autoimmunity can in many cases be attributed to "molecular mimicry", where linear peptide epitopes processed from viral proteins mimic normal host self proteins, thus leading to a cross-reaction of the immune response against the virus with host cells. In this paper we present a mathematical model for the dynamics of an immune response to a viral infection and autoimmunity, which takes into account T cells with different activation thresholds. We show how the infection can be cleared by the immune system, as well as how it can lead to a chronic infection or a recurrent infection with relapses and remissions. Numerical simulations of the model are performed to illustrate various dynamical regimes, as well as to analyse the potential impact of treatment of autoimmune disease in the chronic and recurrent states. The results provide good qualitative agreement with available data on immune responses to viral infections and the progression of autoimmune diseases.
2402.16763
Laura Kriener
Laura Kriener, Kristin V\"olk, Ben von H\"unerbein, Federico Benitez, Walter Senn, Mihai A. Petrovici
Order from chaos: Interplay of development and learning in recurrent networks of structured neurons
4 pages, 2 figures
null
null
null
q-bio.NC cs.AI cs.NE
http://creativecommons.org/licenses/by/4.0/
Behavior can be described as a temporal sequence of actions driven by neural activity. To learn complex sequential patterns in neural networks, memories of past activities need to persist on significantly longer timescales than relaxation times of single-neuron activity. While recurrent networks can produce such long transients, training these networks in a biologically plausible way is challenging. One approach has been reservoir computing, where only weights from a recurrent network to a readout are learned. Other models achieve learning of recurrent synaptic weights using propagated errors. However, their biological plausibility typically suffers from issues with locality, resource allocation or parameter scales and tuning. We suggest that many of these issues can be alleviated by considering dendritic information storage and computation. By applying a fully local, always-on plasticity rule we are able to learn complex sequences in a recurrent network comprised of two populations. Importantly, our model is resource-efficient, enabling the learning of complex sequences using only a small number of neurons. We demonstrate these features in a mock-up of birdsong learning, in which our networks first learn a long, non-Markovian sequence that they can then reproduce robustly despite external disturbances.
[ { "created": "Mon, 26 Feb 2024 17:30:34 GMT", "version": "v1" } ]
2024-02-27
[ [ "Kriener", "Laura", "" ], [ "Völk", "Kristin", "" ], [ "von Hünerbein", "Ben", "" ], [ "Benitez", "Federico", "" ], [ "Senn", "Walter", "" ], [ "Petrovici", "Mihai A.", "" ] ]
Behavior can be described as a temporal sequence of actions driven by neural activity. To learn complex sequential patterns in neural networks, memories of past activities need to persist on significantly longer timescales than relaxation times of single-neuron activity. While recurrent networks can produce such long transients, training these networks in a biologically plausible way is challenging. One approach has been reservoir computing, where only weights from a recurrent network to a readout are learned. Other models achieve learning of recurrent synaptic weights using propagated errors. However, their biological plausibility typically suffers from issues with locality, resource allocation or parameter scales and tuning. We suggest that many of these issues can be alleviated by considering dendritic information storage and computation. By applying a fully local, always-on plasticity rule we are able to learn complex sequences in a recurrent network comprised of two populations. Importantly, our model is resource-efficient, enabling the learning of complex sequences using only a small number of neurons. We demonstrate these features in a mock-up of birdsong learning, in which our networks first learn a long, non-Markovian sequence that they can then reproduce robustly despite external disturbances.
2004.04735
Pedro Teles Dr
Pedro Teles
A time-dependent SEIR model to analyse the evolution of the SARS-CoV-2 epidemic outbreak in Portugal
7 figures. arXiv admin note: text overlap with arXiv:2003.10047
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by-sa/4.0/
Background: The analysis of the SARS-CoV-2 epidemic is of paramount importance to understand the dynamics of the coronavirus spread. This can help health and government authorities take appropriate measures and implement suitable policies aimed at fighting and preventing it. Methods: A time-dependent dynamic SEIR model, inspired by a model previously used during the MERS outbreak in South Korea, was used to analyse the time trajectories of active and hospitalized cases in Portugal. Results: The time evolution of the virus spread in the country was adequately modelled. The model's parameters are allowed to change every five days from the onset of mitigation measures. A peak of about 22,000 active cases is estimated, although the official value for recovered cases is out of date. Hospitalized cases could reach a peak of about 1,250, of which 200-300 in ICU units. Conclusion: With appropriate measures, the number of active cases in Portugal can be controlled at about 22,000 people, of which about 1,250 hospitalized and 200-300 in ICU units. This seems manageable by the country's national health service, with an estimated 1,140 ventilators.
[ { "created": "Thu, 9 Apr 2020 01:32:51 GMT", "version": "v1" }, { "created": "Mon, 13 Apr 2020 03:29:20 GMT", "version": "v2" }, { "created": "Sat, 18 Apr 2020 22:11:07 GMT", "version": "v3" }, { "created": "Wed, 29 Apr 2020 02:51:27 GMT", "version": "v4" }, { "cr...
2020-05-04
[ [ "Teles", "Pedro", "" ] ]
Background: The analysis of the SARS-CoV-2 epidemic is of paramount importance to understand the dynamics of the coronavirus spread. This can help health and government authorities take appropriate measures and implement suitable policies aimed at fighting and preventing it. Methods: A time-dependent dynamic SEIR model, inspired by a model previously used during the MERS outbreak in South Korea, was used to analyse the time trajectories of active and hospitalized cases in Portugal. Results: The time evolution of the virus spread in the country was adequately modelled. The model's parameters are allowed to change every five days from the onset of mitigation measures. A peak of about 22,000 active cases is estimated, although the official value for recovered cases is out of date. Hospitalized cases could reach a peak of about 1,250, of which 200-300 in ICU units. Conclusion: With appropriate measures, the number of active cases in Portugal can be controlled at about 22,000 people, of which about 1,250 hospitalized and 200-300 in ICU units. This seems manageable by the country's national health service, with an estimated 1,140 ventilators.
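A minimal sketch of the kind of time-dependent SEIR dynamics described above, assuming a piecewise-constant transmission rate that drops after mitigation; the population size, rates, and switch date are illustrative placeholders, not the paper's fitted values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal time-dependent SEIR sketch: the transmission rate beta(t) is
# piecewise constant and drops after mitigation on day 20. Population size,
# rates, and the switch date are illustrative assumptions, not fitted values.
N = 10_000_000
beta = lambda t: 0.6 if t < 20 else 0.15
sigma, gamma = 1 / 5.0, 1 / 7.0        # incubation and recovery rates

def seir(t, y):
    S, E, I, R = y
    new_inf = beta(t) * S * I / N
    return [-new_inf, new_inf - sigma * E, sigma * E - gamma * I, gamma * I]

sol = solve_ivp(seir, (0, 180), [N - 100, 0, 100, 0],
                max_step=1.0, dense_output=True)
t = np.linspace(0, 180, 181)
I = sol.sol(t)[2]
print(f"peak active cases ~ {I.max():,.0f} on day {t[I.argmax()]:.0f}")
```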
1202.5931
M. Angeles Serrano
Oriol G\"uell, Francesc Sagu\'es, and M. \'Angeles Serrano
Predicting effects of structural stress in a genome-reduced model bacterial metabolism
null
null
null
null
q-bio.MN cond-mat.dis-nn physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We studied in silico effects of structural stress in Mycoplasma pneumoniae, a genome-reduced model bacterial organism, by tracking the damage propagating on its metabolic network after a deleterious perturbation. First, we analyzed failure cascades spreading from individual reactions and pairs of reactions and compared the results to those in Staphylococcus aureus and Escherichia coli. To alert to the potential damage caused by the failure of individual reactions, we propose a generic predictor based on local information that identifies target reactions for structural vulnerability. With respect to the simultaneous failure of pairs of reactions, we detected strong non-linear amplification effects that can be predicted by the presence of specific motifs in the intersection of single cascades. We further connected the metabolic and gene co-expression networks of M. pneumoniae through enzyme activities, and studied the consequences of knocking out individual genes and clusters of genes. Damage caused by single gene knockouts reveals a strong correlation between genome-scale cascades of large impact and gene essentiality. At the same time, we found that genes controlling high-damage reactions tend to operate in functional isolation, as a metabolic protection mechanism. We conclude that the architecture of M. pneumoniae, both at the level of metabolism and genome, seems to have evolved towards increased structural robustness, similarly to other more complex model bacterial organisms, despite its reduced genome size and its greater metabolic network linearity. Our approach, although motivated biochemically, is generic enough to be of potential use toward analyzing and predicting spreading of structural stress in any bipartite complex network.
[ { "created": "Mon, 27 Feb 2012 13:37:10 GMT", "version": "v1" } ]
2012-02-28
[ [ "Güell", "Oriol", "" ], [ "Sagués", "Francesc", "" ], [ "Serrano", "M. Ángeles", "" ] ]
We studied in silico effects of structural stress in Mycoplasma pneumoniae, a genome-reduced model bacterial organism, by tracking the damage propagating on its metabolic network after a deleterious perturbation. First, we analyzed failure cascades spreading from individual reactions and pairs of reactions and compared the results to those in Staphylococcus aureus and Escherichia coli. To alert to the potential damage caused by the failure of individual reactions, we propose a generic predictor based on local information that identifies target reactions for structural vulnerability. With respect to the simultaneous failure of pairs of reactions, we detected strong non-linear amplification effects that can be predicted by the presence of specific motifs in the intersection of single cascades. We further connected the metabolic and gene co-expression networks of M. pneumoniae through enzyme activities, and studied the consequences of knocking out individual genes and clusters of genes. Damage caused by single gene knockouts reveals a strong correlation between genome-scale cascades of large impact and gene essentiality. At the same time, we found that genes controlling high-damage reactions tend to operate in functional isolation, as a metabolic protection mechanism. We conclude that the architecture of M. pneumoniae, both at the level of metabolism and genome, seems to have evolved towards increased structural robustness, similarly to other more complex model bacterial organisms, despite its reduced genome size and its greater metabolic network linearity. Our approach, although motivated biochemically, is generic enough to be of potential use toward analyzing and predicting spreading of structural stress in any bipartite complex network.
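A small sketch of a failure cascade on a bipartite reaction-metabolite graph, in the spirit of the damage propagation described above. The propagation rule and the five-node network are illustrative assumptions; note how the pair {R1, R2} triggers a cascade that neither single failure does, mirroring the non-linear pair amplification reported in the abstract.

```python
import networkx as nx

# Toy failure-cascade sketch on a bipartite reaction-metabolite graph.
# Propagation rule (an illustrative assumption, not the paper's exact rule):
# a metabolite is lost once all reactions producing it have failed, and a
# reaction fails once any of its substrate metabolites is lost.
G = nx.DiGraph([("R1", "m1"), ("R2", "m1"),   # R1, R2 both produce m1
                ("m1", "R3"), ("R3", "m2"),   # R3 consumes m1, produces m2
                ("m2", "R4")])                # R4 consumes m2

def cascade(G, seeds):
    failed, changed = set(seeds), True
    while changed:
        changed = False
        for n in G:
            if n in failed:
                continue
            preds = list(G.predecessors(n))
            if n.startswith("m"):             # metabolite: needs one producer
                hit = preds and all(p in failed for p in preds)
            else:                             # reaction: fails on any lost input
                hit = any(p in failed for p in preds)
            if hit:
                failed.add(n); changed = True
    return failed

print(cascade(G, {"R1"}))         # m1 still produced by R2: no propagation
print(cascade(G, {"R1", "R2"}))   # joint failure cascades to m1, R3, m2, R4
```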
1208.5729
F. Matthew Mihelic
F. Matthew Mihelic
A Theoretical Mechanism of Szilard Engine Function in Nucleic Acids and the Implications for Quantum Coherence in Biological Systems
Search for Fundamental Theory: The VII International Symposium Honoring French Mathematical Physicist Jean-Pierre Vigier, 12-14 July 2010, London. 4 pages
AIP Conf. Proc. 1316, July 2010, pp. 287-290
10.1063/1.3536440
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Nucleic acids theoretically possess a Szilard engine function that can convert the energy associated with the Shannon entropy of molecules for which they have coded recognition into the useful work of geometric reconfiguration of the nucleic acid molecule. This function is logically reversible because its mechanism is literally and physically constructed out of the information necessary to reduce the Shannon entropy of such molecules, which means that this information exists on both sides of the theoretical engine, and because information is retained in the geometric degrees of freedom of the nucleic acid molecule, a quantum gate is formed through which multi-state nucleic acid qubits can interact. Entangled biophotons emitted as a consequence of symmetry-breaking nucleic acid Szilard engine (NASE) function can be used to coordinate the relative positioning of different nucleic acid locations, both within and between cells, thus providing the potential for quantum coherence of an entire biological system. Theoretical implications of understanding biological systems as such "quantum adaptive systems" include the potential for multi-agent-based quantum computing, and a better understanding of systemic pathologies such as cancer as being related to a loss of systemic quantum coherence.
[ { "created": "Mon, 13 Aug 2012 17:33:45 GMT", "version": "v1" } ]
2012-08-29
[ [ "Mihelic", "F. Matthew", "" ] ]
Nucleic acids theoretically possess a Szilard engine function that can convert the energy associated with the Shannon entropy of molecules for which they have coded recognition into the useful work of geometric reconfiguration of the nucleic acid molecule. This function is logically reversible because its mechanism is literally and physically constructed out of the information necessary to reduce the Shannon entropy of such molecules, which means that this information exists on both sides of the theoretical engine, and because information is retained in the geometric degrees of freedom of the nucleic acid molecule, a quantum gate is formed through which multi-state nucleic acid qubits can interact. Entangled biophotons emitted as a consequence of symmetry-breaking nucleic acid Szilard engine (NASE) function can be used to coordinate the relative positioning of different nucleic acid locations, both within and between cells, thus providing the potential for quantum coherence of an entire biological system. Theoretical implications of understanding biological systems as such "quantum adaptive systems" include the potential for multi-agent-based quantum computing, and a better understanding of systemic pathologies such as cancer as being related to a loss of systemic quantum coherence.
1403.2575
Miguel A. Fortuna
Miguel A. Fortuna, Raul Ortega, and Jordi Bascompte
The Web of Life
www.web-of-life.es
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-sa/3.0/
The Web of Life (www.web-of-life.es) provides a graphical user interface, based on Google Maps, for easily visualizing and downloading data on ecological networks of species interactions. It is designed and implemented in a relational database management system, allowing sophisticated and user-friendly searching across networks. Users can access the database by any web browser using a variety of operating systems. Data can be downloaded in several common formats, and a data transmission webservice in JavaScript Object Notation is also provided.
[ { "created": "Tue, 11 Mar 2014 13:46:46 GMT", "version": "v1" } ]
2014-03-12
[ [ "Fortuna", "Miguel A.", "" ], [ "Ortega", "Raul", "" ], [ "Bascompte", "Jordi", "" ] ]
The Web of Life (www.web-of-life.es) provides a graphical user interface, based on Google Maps, for easily visualizing and downloading data on ecological networks of species interactions. It is designed and implemented in a relational database management system, allowing sophisticated and user-friendly searching across networks. Users can access the database by any web browser using a variety of operating systems. Data can be downloaded in several common formats, and a data transmission webservice in JavaScript Object Notation is also provided.
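A sketch of how one might pull network data from such a JSON web service with Python. The endpoint path and query parameters below are hypothetical placeholders (the actual API should be checked at www.web-of-life.es); only the requests calls themselves are standard.

```python
import requests

# Sketch of downloading network data from the advertised JSON web service.
# CAUTION: the endpoint path and parameters below are hypothetical
# placeholders; consult www.web-of-life.es for the real API before use.
BASE = "http://www.web-of-life.es"
resp = requests.get(f"{BASE}/networkslist.php",          # hypothetical path
                    params={"type": "All", "data": "All"},
                    timeout=30)
resp.raise_for_status()
networks = resp.json()  # expected: a list of network descriptors (assumption)
print(f"{len(networks)} networks listed")
```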
2111.03748
Mark Lowell
Mark Lowell
Graph topology determines coexistence in the rock-paper-scissors system
15 pages, 9 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Species survival in the $(3, 1)$ May-Leonard system is determined by the mobility, with a critical mobility threshold between long-term coexistence and extinction. We show experimentally that the critical mobility threshold is determined by the topology of the graph, with the critical mobility threshold of the periodic lattice twice that of a topological 2-sphere. We use topological techniques to monitor the evolution of patterns in various graphs and estimate their mean persistence time, and show that the difference in critical mobility threshold is due to a specific pattern that can form on the lattice but not on the sphere. Finally, we release the software toolkit we developed to perform these experiments to the community.
[ { "created": "Fri, 5 Nov 2021 22:42:01 GMT", "version": "v1" } ]
2021-11-09
[ [ "Lowell", "Mark", "" ] ]
Species survival in the $(3, 1)$ May-Leonard system is determined by the mobility, with a critical mobility threshold between long-term coexistence and extinction. We show experimentally that the critical mobility threshold is determined by the topology of the graph, with the critical mobility threshold of the periodic lattice twice that of a topological 2-sphere. We use topological techniques to monitor the evolution of patterns in various graphs and estimate their mean persistence time, and show that the difference in critical mobility threshold is due to a specific pattern that can form on the lattice but not on the sphere. Finally, we release the software toolkit we developed to perform these experiments to the community.
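For readers unfamiliar with the setup, here is a minimal stochastic rock-paper-scissors simulation on a periodic lattice with exchange mobility, in the May-Leonard spirit. The lattice size, rates, and step count are small illustrative choices, and this sketch is unrelated to the author's released toolkit.

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal stochastic rock-paper-scissors (May-Leonard style) sketch on a
# periodic LxL lattice: 0 = empty, species 1..3, species k preys on k % 3 + 1.
# Rates and lattice size are illustrative, far smaller than a real experiment.
L, steps = 32, 200_000
sel, rep, mob = 1.0, 1.0, 5.0            # selection, reproduction, mobility
dirs = np.array([(-1, 0), (1, 0), (0, -1), (0, 1)])
grid = rng.integers(0, 4, size=(L, L))
total = sel + rep + mob

for _ in range(steps):
    x, y = rng.integers(0, L, size=2)    # pick a random site and a neighbor
    dx, dy = dirs[rng.integers(4)]
    nx_, ny_ = (x + dx) % L, (y + dy) % L
    a, b = grid[x, y], grid[nx_, ny_]
    r = rng.random() * total
    if r < mob:                                     # exchange: mobility move
        grid[x, y], grid[nx_, ny_] = b, a
    elif r < mob + sel and a != 0 and b == a % 3 + 1:
        grid[nx_, ny_] = 0                          # predation empties the site
    elif a != 0 and b == 0:
        grid[nx_, ny_] = a                          # reproduce into empty site

print("surviving species:", sorted(set(grid.ravel()) - {0}))
```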
2012.02679
Wouter Boomsma
Nicki Skafte Detlefsen, S{\o}ren Hauberg, Wouter Boomsma
What is a meaningful representation of protein sequences?
17 pages, 8 figures, 2 tables
Nature Communications 13, 1914 (2022)
10.1038/s41467-022-29443-w
null
q-bio.BM cs.LG q-bio.QM
http://creativecommons.org/licenses/by/4.0/
How we choose to represent our data has a fundamental impact on our ability to subsequently extract information from them. Machine learning promises to automatically determine efficient representations from large unstructured datasets, such as those arising in biology. However, empirical evidence suggests that seemingly minor changes to these machine learning models yield drastically different data representations that result in different biological interpretations of data. This begs the question of what even constitutes the most meaningful representation. Here, we approach this question for representations of protein sequences, which have received considerable attention in the recent literature. We explore two key contexts in which representations naturally arise: transfer learning and interpretable learning. In the first context, we demonstrate that several contemporary practices yield suboptimal performance, and in the latter we demonstrate that taking representation geometry into account significantly improves interpretability and lets the models reveal biological information that is otherwise obscured.
[ { "created": "Sat, 28 Nov 2020 19:37:22 GMT", "version": "v1" }, { "created": "Wed, 5 May 2021 19:49:18 GMT", "version": "v2" }, { "created": "Thu, 21 Oct 2021 21:32:04 GMT", "version": "v3" }, { "created": "Mon, 7 Mar 2022 08:55:04 GMT", "version": "v4" } ]
2022-05-31
[ [ "Detlefsen", "Nicki Skafte", "" ], [ "Hauberg", "Søren", "" ], [ "Boomsma", "Wouter", "" ] ]
How we choose to represent our data has a fundamental impact on our ability to subsequently extract information from them. Machine learning promises to automatically determine efficient representations from large unstructured datasets, such as those arising in biology. However, empirical evidence suggests that seemingly minor changes to these machine learning models yield drastically different data representations that result in different biological interpretations of data. This begs the question of what even constitutes the most meaningful representation. Here, we approach this question for representations of protein sequences, which have received considerable attention in the recent literature. We explore two key contexts in which representations naturally arise: transfer learning and interpretable learning. In the first context, we demonstrate that several contemporary practices yield suboptimal performance, and in the latter we demonstrate that taking representation geometry into account significantly improves interpretability and lets the models reveal biological information that is otherwise obscured.
2209.12625
Manvi Jain Ms
Manvi Jain, C.M. Markan
Effect of Brief Meditation Intervention on Attention: An ERP Investigation
5 pages, 3 figures, National Systems Conference, 2022
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fast and efficient strategies for the modulation of attention have been extensively studied in recent years. The present study attempts to observe the effect of brief meditation practice on the executive control of the attention system. The study recruits cognitive control by introducing conflict using the Stroop task in twenty-six novice participants. Behavioral responses indicate a positive effect on response time and accuracy on the Stroop task after ten minutes of meditation intervention. Neurophysiological findings suggest a more efficient allocation of attentional resources. An increase in positive ERP components (P200, P300) and the expected decrease in the inhibitory or negative component (N200) after the intervention show positive results. The findings suggest a positive impact of meditation intervention on attention, even for brief periods, in a non-meditating population.
[ { "created": "Mon, 26 Sep 2022 12:14:33 GMT", "version": "v1" }, { "created": "Sun, 9 Oct 2022 09:22:37 GMT", "version": "v2" } ]
2022-10-11
[ [ "Jain", "Manvi", "" ], [ "Markan", "C. M.", "" ] ]
Fast and efficient strategies for the modulation of attention have been extensively studied in recent years. The present study attempts to observe the effect of brief meditation practice on the executive control of the attention system. The study recruits cognitive control by introducing conflict using the Stroop task in twenty-six novice participants. Behavioral responses indicate a positive effect on response time and accuracy on the Stroop task after ten minutes of meditation intervention. Neurophysiological findings suggest a more efficient allocation of attentional resources. An increase in positive ERP components (P200, P300) and the expected decrease in the inhibitory or negative component (N200) after the intervention show positive results. The findings suggest a positive impact of meditation intervention on attention, even for brief periods, in a non-meditating population.
2006.15010
Ravi Kiran
Ravi Kiran, Madhumita Roy, Syed Abbas, A. Taraphder
Effect of population migration and punctuated lockdown on the spread of infectious diseases
17 pages, 14 figures
Nonauton. Dyn. Syst. 2021; 8:251-266
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the critical measures to control infectious diseases is a lockdown. With much of the world now past the initial lockdown stage, the crucial question concerns the effects of relaxing the lockdown and the best ways to implement further lockdown(s), if required, to control the spread. With the relaxation of a lockdown, people migrate between cities and enhance the spread of the disease. This work presents a population migration model for n cities and applies it to migration between two and three cities. The reproduction number is calculated, and the effect of the migration rate is analyzed. A punctuated lockdown is implemented to simulate a protocol of repeated lockdowns that limits the resurgence of infections. A damped oscillatory behavior with multiple peaks over time is observed.
[ { "created": "Fri, 26 Jun 2020 14:32:41 GMT", "version": "v1" }, { "created": "Mon, 22 Feb 2021 15:18:06 GMT", "version": "v2" }, { "created": "Thu, 9 Dec 2021 13:24:24 GMT", "version": "v3" } ]
2021-12-10
[ [ "Kiran", "Ravi", "" ], [ "Roy", "Madhumita", "" ], [ "Abbas", "Syed", "" ], [ "Taraphder", "A.", "" ] ]
One of the critical measures to control infectious diseases is a lockdown. With much of the world now past the initial lockdown stage, the crucial question concerns the effects of relaxing the lockdown and the best ways to implement further lockdown(s), if required, to control the spread. With the relaxation of a lockdown, people migrate between cities and enhance the spread of the disease. This work presents a population migration model for n cities and applies it to migration between two and three cities. The reproduction number is calculated, and the effect of the migration rate is analyzed. A punctuated lockdown is implemented to simulate a protocol of repeated lockdowns that limits the resurgence of infections. A damped oscillatory behavior with multiple peaks over time is observed.
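A minimal two-city version of the migration idea, assuming SIR dynamics within each city and diffusive migration between them; all rates and population sizes are illustrative assumptions rather than the paper's values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-city SIR sketch with diffusive migration at rate m between the cities,
# a minimal instance of the n-city migration model; every number here is an
# illustrative assumption, not a value from the paper.
beta, gamma, m = 0.4, 0.1, 0.02
N = np.array([1.0e6, 5.0e5])           # city populations

def rhs(t, y):
    S, I, R = y[:2], y[2:4], y[4:]
    inf = beta * S * I / N
    mig = lambda v: m * (v[::-1] - v)  # net inflow from the other city
    return np.concatenate([-inf + mig(S),
                           inf - gamma * I + mig(I),
                           gamma * I + mig(R)])

y0 = np.concatenate([N - np.array([10.0, 0.0]), [10.0, 0.0], [0.0, 0.0]])
sol = solve_ivp(rhs, (0, 300), y0, max_step=1.0)
print("peak infections per city:", sol.y[2:4].max(axis=1).round(0))
```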
1611.09245
G Manjunath
G Manjunath
Evolving Network Model that Almost Regenerates Epileptic Data
To appear in Neural Computation
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In many realistic networks, the edges representing the interactions between the nodes are time-varying. There is growing evidence that the complex network that models the dynamics of the human brain has time-varying interconnections, i.e., the network is evolving. Based on this evidence, we construct a patient- and data-specific evolving network model (comprising discrete-time dynamical systems) in which epileptic seizures or their terminations in the brain are also determined by the nature of the time-varying interconnections between the nodes. A novel and unique feature of our methodology is that the evolving network model remembers the data from which it was conceived, in the sense that it evolves to almost regenerate the patient data even upon being presented with an arbitrary initial condition. We illustrate a potential utility of our methodology by constructing an evolving network from clinical data that aids in identifying an approximate seizure focus -- nodes in such a theoretically determined seizure focus are outgoing hubs that apparently act as spreaders of seizures. We also point out the efficacy of removing such spreaders in limiting seizures.
[ { "created": "Thu, 17 Nov 2016 14:25:45 GMT", "version": "v1" } ]
2016-11-29
[ [ "Manjunath", "G", "" ] ]
In many realistic networks, the edges representing the interactions between the nodes are time-varying. There is growing evidence that the complex network that models the dynamics of the human brain has time-varying interconnections, i.e., the network is evolving. Based on this evidence, we construct a patient- and data-specific evolving network model (comprising discrete-time dynamical systems) in which epileptic seizures or their terminations in the brain are also determined by the nature of the time-varying interconnections between the nodes. A novel and unique feature of our methodology is that the evolving network model remembers the data from which it was conceived, in the sense that it evolves to almost regenerate the patient data even upon being presented with an arbitrary initial condition. We illustrate a potential utility of our methodology by constructing an evolving network from clinical data that aids in identifying an approximate seizure focus -- nodes in such a theoretically determined seizure focus are outgoing hubs that apparently act as spreaders of seizures. We also point out the efficacy of removing such spreaders in limiting seizures.
1412.5062
Manoj Gopalakrishnan
V. Jemseena and Manoj Gopalakrishnan (IIT Madras)
Effects of aging in catastrophe on the steady state and dynamics of a microtubule population
More detailed comparisons with experimental data and other changes in text, change in title
Phys. Rev. E 91, 052704 (2015)
null
null
q-bio.SC cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Several independent observations have suggested that the catastrophe transition in microtubules is not a first-order process, as is usually assumed. Recent {\it in vitro} observations by Gardner et al. [M. K. Gardner et al., Cell {\bf 147}, 1092 (2011)] showed that microtubule catastrophe takes place via multiple steps and that its frequency increases with the age of the filament. Here, we investigate, via numerical simulations and mathematical calculations, some of the consequences of the age dependence of catastrophe for the dynamics of microtubules as a function of the aging rate, for two different models of aging: exponential growth that saturates asymptotically, and purely linear growth. The boundary demarcating the steady-state and non-steady-state regimes of the dynamics is derived analytically in both cases. Numerical simulations, supported by analytical calculations in the linear model, show that aging leads to non-exponential length distributions in steady state. More importantly, oscillations ensue in microtubule length and velocity. The regularity of the oscillations, as characterized by the negative dip in the autocorrelation function, is reduced by increasing the frequency of rescue events. Our study shows that the age dependence of catastrophe could function as an intrinsic mechanism to generate oscillatory dynamics in a microtubule population, distinct from hitherto identified ones.
[ { "created": "Tue, 16 Dec 2014 16:19:56 GMT", "version": "v1" }, { "created": "Wed, 1 Jul 2015 06:42:19 GMT", "version": "v2" } ]
2015-07-02
[ [ "Jemseena", "V.", "", "IIT Madras" ], [ "Gopalakrishnan", "Manoj", "", "IIT Madras" ] ]
Several independent observations have suggested that the catastrophe transition in microtubules is not a first-order process, as is usually assumed. Recent {\it in vitro} observations by Gardner et al. [M. K. Gardner et al., Cell {\bf 147}, 1092 (2011)] showed that microtubule catastrophe takes place via multiple steps and that its frequency increases with the age of the filament. Here, we investigate, via numerical simulations and mathematical calculations, some of the consequences of the age dependence of catastrophe for the dynamics of microtubules as a function of the aging rate, for two different models of aging: exponential growth that saturates asymptotically, and purely linear growth. The boundary demarcating the steady-state and non-steady-state regimes of the dynamics is derived analytically in both cases. Numerical simulations, supported by analytical calculations in the linear model, show that aging leads to non-exponential length distributions in steady state. More importantly, oscillations ensue in microtubule length and velocity. The regularity of the oscillations, as characterized by the negative dip in the autocorrelation function, is reduced by increasing the frequency of rescue events. Our study shows that the age dependence of catastrophe could function as an intrinsic mechanism to generate oscillatory dynamics in a microtubule population, distinct from hitherto identified ones.
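To illustrate age-dependent catastrophe concretely, the sketch below simulates a single filament whose catastrophe rate grows linearly with the time since the last rescue (one of the two aging forms discussed above). All speeds and rates are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)

# Single-microtubule sketch: the catastrophe rate k0 + k1 * age grows linearly
# with the time since the last rescue. Growth/shrink speeds, rates, and the
# time step are arbitrary illustrative assumptions, not fitted values.
v_grow, v_shrink, rescue, k0, k1 = 1.0, 3.0, 0.2, 0.05, 0.02
dt, T = 0.01, 2000.0

length, age, growing, t = 0.0, 0.0, True, 0.0
lengths = []
while t < T:
    t += dt
    if growing:
        length += v_grow * dt
        age += dt
        if rng.random() < (k0 + k1 * age) * dt:   # age-dependent catastrophe
            growing, age = False, 0.0
    else:
        length = max(length - v_shrink * dt, 0.0)
        if rng.random() < rescue * dt or length == 0.0:
            growing = True                        # rescue, or regrow from zero
    lengths.append(length)

print(f"mean length {np.mean(lengths):.2f}, max {np.max(lengths):.2f}")
```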
1909.10045
Daniel Juliano Pamplona da Silva
Daniel Juliano Pamplona da Silva and Edmundo Capelas de Oliveira
Habitat fragmentation: the possibility of a patch disrupting its neighbor
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper starts from the Fisher-Kolmogorov-Petrovskii-Piskunov equation to model diffusive populations. The main result, according to this model, is that two connected patches in a system do not always benefit each other. Specifically, inserting a large fragment next to a small one is always positive for life inside the small patch, whereas inserting a very small patch next to a large one can be negative for life inside the large fragment. This result, obtained for homogeneously fragmented regions, follows from the general-case expression for the minimum sizes in a system of two patches. This expression is an interesting result in itself, because it allows the study of other characteristics not covered in the present work.
[ { "created": "Sun, 22 Sep 2019 16:56:22 GMT", "version": "v1" } ]
2019-09-24
[ [ "da Silva", "Daniel Juliano Pamplona", "" ], [ "de Oliveira", "Edmundo Capelas", "" ] ]
This paper starts from the Fisher-Kolmogorov-Petrovskii-Piskunov equation to model diffusive populations. The main result, according to this model, is that two connected patches in a system do not always benefit each other. Specifically, inserting a large fragment next to a small one is always positive for life inside the small patch, whereas inserting a very small patch next to a large one can be negative for life inside the large fragment. This result, obtained for homogeneously fragmented regions, follows from the general-case expression for the minimum sizes in a system of two patches. This expression is an interesting result in itself, because it allows the study of other characteristics not covered in the present work.
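A minimal finite-difference sketch of the FKPP setting described above: a population on a line with two favorable patches (positive growth rate inside, negative outside, i.e., a hostile matrix). Patch sizes, the gap between them, and all rates are illustrative assumptions, not the paper's analytical setup.

```python
import numpy as np

D, dx, dt = 1.0, 0.1, 0.001          # diffusion constant and grid (illustrative)
x = np.arange(0.0, 40.0, dx)
r = np.full_like(x, -1.0)            # hostile matrix: negative growth rate
r[(x > 5) & (x < 15)] = 1.0          # large patch
r[(x > 17) & (x < 19)] = 1.0         # small patch placed next to it
u = np.full_like(x, 0.1)             # initial population density

for step in range(20000):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    u = u + dt * (D * lap + r * u * (1.0 - u))   # explicit FKPP update
    u[0] = u[-1] = 0.0               # absorbing boundaries far from the patches
    u = np.clip(u, 0.0, None)

# with the large neighbor present, the small patch can sustain a population
print("population in small patch:", u[(x > 17) & (x < 19)].sum() * dx)
```

The explicit scheme is stable here since D*dt/dx**2 = 0.1 < 0.5; moving or resizing the patches lets one probe how one patch helps or harms its neighbor.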
1912.08340
Kai-Chih Huang
Kai-Chih Huang, Junjie Li, Chi Zhang, Yuying Tan, and Ji-Xin Cheng
Multiplex stimulated Raman scattering imaging cytometry reveals cancer metabolic signatures in a spatially, temporally, and spectrally resolved manner
42 pages, 21 figures
null
null
null
q-bio.QM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In situ measurement of cellular metabolites remains a challenge in biology. Conventional methods, such as mass spectrometry or fluorescence microscopy, either destroy the sample or introduce strong perturbations to the functions of target molecules. Here, we present multiplex stimulated Raman scattering (SRS) imaging cytometry as a label-free single-cell analysis platform with chemical specificity and high-throughput capability. Cellular compartments such as lipid droplets, endoplasmic reticulum, and nuclei are separated from the cytoplasm. Based on these chemical segmentations, 260 features covering both morphology and molecular composition were generated and analyzed for each cell. Using SRS imaging cytometry, we studied the metabolic responses of human pancreatic cancer cells under stress from starvation and chemotherapy drug treatments. Through statistical analysis of thousands of cells, we unveiled lipid-facilitated protrusion as a metabolic marker for stress-resistant cancer cells. Our findings also demonstrate the potential of targeting lipid metabolism for the selective treatment of starvation-resistant and chemotherapy-resistant cancers. These results highlight SRS imaging cytometry as a powerful label-free tool for biological discovery with high-throughput, high-content capacity.
[ { "created": "Wed, 18 Dec 2019 02:12:03 GMT", "version": "v1" } ]
2019-12-19
[ [ "Huang", "Kai-Chih", "" ], [ "Li", "Junjie", "" ], [ "Zhang", "Chi", "" ], [ "Tan", "Yuying", "" ], [ "Cheng", "Ji-Xin", "" ] ]
In situ measurement of cellular metabolites remains a challenge in biology. Conventional methods, such as mass spectrometry or fluorescence microscopy, either destroy the sample or introduce strong perturbations to the functions of target molecules. Here, we present multiplex stimulated Raman scattering (SRS) imaging cytometry as a label-free single-cell analysis platform with chemical specificity and high-throughput capability. Cellular compartments such as lipid droplets, endoplasmic reticulum, and nuclei are separated from the cytoplasm. Based on these chemical segmentations, 260 features covering both morphology and molecular composition were generated and analyzed for each cell. Using SRS imaging cytometry, we studied the metabolic responses of human pancreatic cancer cells under stress from starvation and chemotherapy drug treatments. Through statistical analysis of thousands of cells, we unveiled lipid-facilitated protrusion as a metabolic marker for stress-resistant cancer cells. Our findings also demonstrate the potential of targeting lipid metabolism for the selective treatment of starvation-resistant and chemotherapy-resistant cancers. These results highlight SRS imaging cytometry as a powerful label-free tool for biological discovery with high-throughput, high-content capacity.
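A minimal sketch of the kind of per-cell feature extraction the abstract implies: given a chemical segmentation (a label image) and one intensity channel, compute a few morphology and composition features per cell. The paper's actual 260-feature set is not reproduced here, and scikit-image is an assumed tool choice, not necessarily the authors'.

```python
import numpy as np
from skimage.measure import regionprops

def cell_features(label_img, intensity_img):
    """Per-cell morphology and composition features from a segmentation."""
    rows = []
    for region in regionprops(label_img, intensity_image=intensity_img):
        rows.append({
            "label": region.label,
            "area": region.area,                      # morphology
            "eccentricity": region.eccentricity,      # morphology
            "mean_intensity": region.mean_intensity,  # composition proxy
        })
    return rows

# toy example: two "cells" in a 10x10 field
labels = np.zeros((10, 10), dtype=int)
labels[1:4, 1:4] = 1
labels[6:9, 5:9] = 2
signal = np.random.default_rng(2).random((10, 10))
for row in cell_features(labels, signal):
    print(row)
```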
2112.11908
Mauro Salazar
Sander Tonkens, Paul de Klaver, and Mauro Salazar
Optimizing Vaccine Allocation Strategies in Pandemic Outbreaks: An Optimal Control Approach
ECC 2022
null
null
null
q-bio.PE cs.SI cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Since early 2020, the world has been dealing with a raging pandemic outbreak: COVID-19. A year later, vaccines have become accessible, but in limited quantities, so that governments needed to devise a strategy to decide which part of the population to prioritize when assigning the available doses, and how to manage the interval between doses for multi-dose vaccines. In this paper, we present an optimization framework to address the dynamic double-dose vaccine allocation problem whereby the available vaccine doses must be administered to different age-groups to minimize specific societal objectives. In particular, we first identify an age-dependent Susceptible-Exposed-Infected-Recovered (SEIR) epidemic model including an extension capturing partially and fully vaccinated people, whereby we account for age-dependent immunity and infectiousness levels together with disease severity. Second, we leverage our model to frame the dynamic age-dependent vaccine allocation problem for different societal objectives, such as the minimization of infections or fatalities, and solve it with nonlinear programming techniques. Finally, we carry out a numerical case study with real-world data from The Netherlands. Our results show how different societal objectives can significantly alter the optimal vaccine allocation strategy. For instance, we find that minimizing the overall number of infections results in delaying second doses, whilst to minimize fatalities it is important to fully vaccinate the elderly first.
[ { "created": "Mon, 13 Dec 2021 09:19:01 GMT", "version": "v1" }, { "created": "Tue, 12 Apr 2022 08:26:46 GMT", "version": "v2" } ]
2022-04-13
[ [ "Tonkens", "Sander", "" ], [ "de Klaver", "Paul", "" ], [ "Salazar", "Mauro", "" ] ]
Since early 2020, the world has been dealing with a raging pandemic outbreak: COVID-19. A year later, vaccines have become accessible, but in limited quantities, so that governments needed to devise a strategy to decide which part of the population to prioritize when assigning the available doses, and how to manage the interval between doses for multi-dose vaccines. In this paper, we present an optimization framework to address the dynamic double-dose vaccine allocation problem whereby the available vaccine doses must be administered to different age-groups to minimize specific societal objectives. In particular, we first identify an age-dependent Susceptible-Exposed-Infected-Recovered (SEIR) epidemic model including an extension capturing partially and fully vaccinated people, whereby we account for age-dependent immunity and infectiousness levels together with disease severity. Second, we leverage our model to frame the dynamic age-dependent vaccine allocation problem for different societal objectives, such as the minimization of infections or fatalities, and solve it with nonlinear programming techniques. Finally, we carry out a numerical case study with real-world data from The Netherlands. Our results show how different societal objectives can significantly alter the optimal vaccine allocation strategy. For instance, we find that minimizing the overall number of infections results in delaying second doses, whilst to minimize fatalities it is important to fully vaccinate the elderly first.
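A minimal forward-simulation sketch of an age-structured SEIR model with a vaccination flow, the kind of dynamics the optimization in the abstract above would be wrapped around. The two age groups, contact matrix, rates, and dose allocation are illustrative assumptions, and no optimal-control layer or two-dose structure is included.

```python
import numpy as np

# two age groups (young, elderly); all parameters illustrative
N = np.array([8e6, 2e6])
C = np.array([[8.0, 2.0], [2.0, 3.0]])   # daily contacts between groups
beta, sigma, gamma = 0.03, 1 / 4.0, 1 / 7.0
doses_per_day = 5e4
alloc = np.array([0.3, 0.7])             # one candidate allocation policy

S = N - np.array([100.0, 10.0])
E = np.array([100.0, 10.0])
I = np.zeros(2)
R = np.zeros(2)
dt = 0.25

for step in range(int(300 / dt)):
    lam = beta * (C @ (I / N))           # age-dependent force of infection
    vax = np.minimum(alloc * doses_per_day, S / dt)  # cannot exceed remaining S
    dS = -lam * S - vax
    dE = lam * S - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I + vax
    S, E, I, R = S + dt * dS, E + dt * dE, I + dt * dI, R + dt * dR

print("final recovered/vaccinated fraction per group:", R / N)
```

Sweeping `alloc` and re-running is the brute-force analogue of the paper's question: different allocations trade infections in the young against fatalities in the elderly.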
2406.08163
Misha Chai
Misha Chai and Holger Kantz
A conceptual predator-prey model with super-long transients
null
null
null
null
q-bio.PE nlin.CD physics.bio-ph
http://creativecommons.org/publicdomain/zero/1.0/
Drawing on the understanding of the logistic map, we propose a simple predator-prey model in which predators and prey adapt to each other, leading to the co-evolution of the system. The special dynamics observed in periodic windows contribute to the coexistence of multiple time scales, adding to the complexity of the system. Typical ecosystem dynamics, such as the persistence and coexistence of population cycles and chaotic behavior, the emergence of super-long transients, regime shifts, and the quantification of resilience, are encapsulated within this single model. The simplicity of our model allows for detailed analysis, including linear analysis, reinforcing its potential as a conceptual tool for a deeper understanding of ecosystems. Additionally, our results suggest that longer lifetimes in ecosystems may come at the expense of reduced populations due to limited resources.
[ { "created": "Wed, 12 Jun 2024 12:52:09 GMT", "version": "v1" } ]
2024-06-13
[ [ "Chai", "Misha", "" ], [ "Kantz", "Holger", "" ] ]
Drawing on the understanding of the logistic map, we propose a simple predator-prey model in which predators and prey adapt to each other, leading to the co-evolution of the system. The special dynamics observed in periodic windows contribute to the coexistence of multiple time scales, adding to the complexity of the system. Typical ecosystem dynamics, such as the persistence and coexistence of population cycles and chaotic behavior, the emergence of super-long transients, regime shifts, and the quantification of resilience, are encapsulated within this single model. The simplicity of our model allows for detailed analysis, including linear analysis, reinforcing its potential as a conceptual tool for a deeper understanding of ecosystems. Additionally, our results suggest that longer lifetimes in ecosystems may come at the expense of reduced populations due to limited resources.
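A minimal sketch of one common logistic-map-based predator-prey map, in the spirit of the abstract above; this is not necessarily the authors' exact model, and all parameters are illustrative.

```python
import numpy as np

r, a, b, d = 3.6, 2.5, 2.0, 0.9   # illustrative parameters, not fitted
x, y = 0.4, 0.2                   # prey and predator densities
floor = 1e-6                      # tiny refuge avoids absolute extinction

traj = []
for n in range(5000):
    x_next = r * x * (1.0 - x) - a * x * y   # logistic prey minus predation
    y_next = b * x * y + (1.0 - d) * y       # predator grows on captured prey
    x, y = max(x_next, floor), max(y_next, floor)
    traj.append((x, y))

tail = np.array(traj[-1000:])
print("long-run prey mean:", tail[:, 0].mean(),
      "predator mean:", tail[:, 1].mean())
```

Depending on the parameters, maps of this form produce boom-bust cycles, chaos, and very long transients before settling, which is the qualitative behavior the abstract emphasizes.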