Dataset schema (fields appear in each record below in this order):

  id              string, length 9-13
  submitter       string, length 4-48
  authors         string, length 4-9.62k
  title           string, length 4-343
  comments        string, length 2-480
  journal-ref     string, length 9-309
  doi             string, length 12-138
  report-no       string, 277 distinct values
  categories      string, length 8-87
  license         string, 9 distinct values
  orig_abstract   string, length 27-3.76k
  versions        list, length 1-15
  update_date     string, length 10
  authors_parsed  list, length 1-147
  abstract        string, length 24-3.75k
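The records below follow this schema. As a minimal sketch, assuming each record is available as a JSON object with these field names, one could parse the structured fields like this (the sample values are copied from record 1410.1278 below; the helper expressions are illustrative, not part of any particular dataset API):

```python
import json

# Abridged sample record (values taken from record 1410.1278 below).
record_json = '''
{
  "id": "1410.1278",
  "submitter": "Vince Grolmusz",
  "title": "Giant Viruses of the Kutch Desert",
  "categories": "q-bio.GN q-bio.PE",
  "versions": [
    {"created": "Mon, 6 Oct 2014 07:50:22 GMT", "version": "v1"},
    {"created": "Tue, 7 Oct 2014 10:46:39 GMT", "version": "v2"}
  ],
  "update_date": "2014-10-08",
  "authors_parsed": [["Kerepesi", "Csaba", ""], ["Grolmusz", "Vince", ""]]
}
'''

record = json.loads(record_json)

# "categories" is a single space-separated string of arXiv category codes.
categories = record["categories"].split()

# "authors_parsed" is a list of [last, first, suffix] triples.
authors = [" ".join(p for p in (first, last) if p)
           for last, first, _ in record["authors_parsed"]]

print(categories)               # ['q-bio.GN', 'q-bio.PE']
print(authors)                  # ['Csaba Kerepesi', 'Vince Grolmusz']
print(len(record["versions"]))  # 2
```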
1410.1278
Vince Grolmusz
Csaba Kerepesi and Vince Grolmusz
Giant Viruses of the Kutch Desert
null
null
null
null
q-bio.GN q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Kutch desert (Great Rann of Kutch, Gujarat, India) is a unique ecosystem: in the larger part of the year it is a hot, salty desert that is flooded regularly in the Indian monsoon season. In the dry season, the crystallized salt deposits form the "white desert" in large regions. The first metagenomic analysis of the soil samples of Kutch was published in 2013, and the data was deposited in the NCBI Sequence Read Archive. The sequences were analyzed at the same time phylogenetically for prokaryotes, especially for bacterial taxa. In the present work, we are searching for the DNA sequences of the recently discovered giant viruses in the soil samples of the Kutch desert. Since most giant viruses were discovered in biofilms in industrial cooling towers, ocean water and freshwater ponds, we were surprised to find their DNA sequences in the soil samples of a seasonally very hot and arid, salty environment.
[ { "created": "Mon, 6 Oct 2014 07:50:22 GMT", "version": "v1" }, { "created": "Tue, 7 Oct 2014 10:46:39 GMT", "version": "v2" } ]
2014-10-08
[ [ "Kerepesi", "Csaba", "" ], [ "Grolmusz", "Vince", "" ] ]
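The "created" strings in each record's versions field are RFC 2822 timestamps. A small sketch of converting them to timezone-aware datetimes with the standard library, using the version list from record 1410.1278 above as sample data:

```python
from email.utils import parsedate_to_datetime

# Sample versions list, copied from record 1410.1278 above.
versions = [
    {"created": "Mon, 6 Oct 2014 07:50:22 GMT", "version": "v1"},
    {"created": "Tue, 7 Oct 2014 10:46:39 GMT", "version": "v2"},
]

# parsedate_to_datetime handles RFC 2822 dates, including the GMT zone.
created = [parsedate_to_datetime(v["created"]) for v in versions]

print(created[0].isoformat())          # 2014-10-06T07:50:22+00:00
print((created[1] - created[0]).days)  # 1
```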
1912.05502
Alejandro Abarca-Blanco
Juan F. Yee-de Le\'on, Brenda Soto-Garc\'ia, Diana Ar\'aiz-Hern\'andez, Jes\'us Rolando Delgado-Balderas, Miguel A. Esparza, Carlos Aguilar-Avelar, J. D. Wong-Campos, Franco Chac\'on, Jos\'e Y. L\'opez-Hern\'andez, A. Mauricio Gonz\'alez-Trevi\~no, Jos\'e R. Yee-de Le\'on, Jorge L. Zamora-Mendoza, Mario M. Alvarez, Grissel Trujillo-de Santiago, Lauro S. G\'omez-Guerra, Celia N. S\'anchez-Dom\'inguez, Liza P. Velarde-Calvillo and Alejandro Abarca-Blanco
Characterization of a novel automated microfiltration device for the efficient isolation and analysis of circulating tumor cells from clinical blood samples
13 pages, 6 figures, under review
null
10.1038/s41598-020-63672-7
null
q-bio.QM q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The detection and analysis of circulating tumor cells (CTCs) may enable a broad range of cancer-related applications, including the identification of acquired drug resistance during treatments. However, the non-scalable fabrication, prolonged sample processing times, and lack of automation associated with most of the technologies developed to isolate these rare cells have impeded their transition into clinical practice. This work describes a novel membrane-based microfiltration device composed of a fully automated sample processing unit and a machine-vision-enabled imaging system that allows the efficient isolation and rapid analysis of CTCs from blood. The device performance was characterized using four prostate cancer cell lines, including PC-3, VCaP, DU-145, and LNCaP, obtaining high assay reproducibility and capture efficiencies greater than 93% after processing 7.5 mL blood samples from healthy donors, spiked with 100 cancer cells. Cancer cells remained viable after filtration due to the minimal shear stress exerted over cells during the procedure, while the identification of cancer cells by immunostaining was not affected by the number of non-specific events captured on the membrane. We were also able to identify the androgen receptor (AR) point mutation T878A from 7.5 mL blood samples spiked with 50 LNCaP cells using RT-PCR and Sanger sequencing. Finally, CTCs were detected in 8 of 8 samples from patients diagnosed with metastatic prostate cancer (mean $\pm$ SEM = 21 $\pm$ 2.957 CTCs/mL, median = 21 CTC/mL), thereby validating the potential clinical utility of the device.
[ { "created": "Wed, 11 Dec 2019 18:05:18 GMT", "version": "v1" } ]
2020-05-07
[ [ "León", "Juan F. Yee-de", "" ], [ "Soto-García", "Brenda", "" ], [ "Aráiz-Hernández", "Diana", "" ], [ "Delgado-Balderas", "Jesús Rolando", "" ], [ "Esparza", "Miguel A.", "" ], [ "Aguilar-Avelar", "Carlos", "" ], [ ...
2109.01347
Chiyin Zheng
Chiyin Zheng
Account for Neuronal Representations from the Perspective of Neurons
49 pages, 6 figures
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Mounting evidence in neuroscience suggests the possibility of neuronal representations: that individual neurons serve as the substrates of different mental representations in a point-to-point way. Combined with associationism, this idea can potentially address a range of theoretical problems and provide a straightforward explanation of our cognition. However, it remains a hypothesis with many questions unsolved. In this paper, I propose a new framework to defend the idea of neuronal representations. The strategy proceeds from the micro- to the macro-level. Specifically, at the micro-level, I first propose that our brain prefers and preserves more active neurons. Yet, since the total chance of discharge is limited, neurons must adopt strategies to fire more strongly and frequently. I then describe how they adopt synaptic plasticity, inhibition, and synchronization as their strategies, and demonstrate how the execution of these strategies turns them into specialized neurons that respond selectively but strongly to familiar entities. At the macro-level, I further discuss how these specialized neurons underlie various cognitive functions and phenomena. Significantly, by defending neuronal representation, this paper introduces a novel way to understand the relationship between brain and cognition.
[ { "created": "Fri, 3 Sep 2021 07:13:29 GMT", "version": "v1" } ]
2021-09-06
[ [ "Zheng", "Chiyin", "" ] ]
2310.01768
Yikai Liu
Yikai Liu, Ming Chen, Guang Lin
Backdiff: a diffusion model for generalized transferable protein backmapping
22 pages, 5 figures
null
null
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by/4.0/
Coarse-grained (CG) models play a crucial role in the study of protein structures, protein thermodynamic properties, and protein conformation dynamics. Due to the information loss in the coarse-graining process, backmapping from CG to all-atom configurations is essential in many protein design and drug discovery applications when detailed atomic representations are needed for in-depth studies. Despite recent progress in data-driven backmapping approaches, devising a backmapping method that can be universally applied across various CG models and proteins remains unresolved. In this work, we propose BackDiff, a new generative model designed to achieve generalization and reliability in the protein backmapping problem. BackDiff leverages the conditional score-based diffusion model with geometric representations. Since different CG models can contain different coarse-grained sites which include selected atoms (CG atoms) and simple CG auxiliary functions of atomistic coordinates (CG auxiliary variables), we design a self-supervised training framework to adapt to different CG atoms, and constrain the diffusion sampling paths with arbitrary CG auxiliary variables as conditions. Our method facilitates end-to-end training and allows efficient sampling across different proteins and diverse CG models without the need for retraining. Comprehensive experiments over multiple popular CG models demonstrate BackDiff's superior performance to existing state-of-the-art approaches, and generalization and flexibility that these approaches cannot achieve. A pretrained BackDiff model can offer a convenient yet reliable plug-and-play solution for protein researchers, enabling them to investigate further from their own CG models.
[ { "created": "Tue, 3 Oct 2023 03:32:07 GMT", "version": "v1" }, { "created": "Wed, 29 Nov 2023 03:43:56 GMT", "version": "v2" } ]
2023-11-30
[ [ "Liu", "Yikai", "" ], [ "Chen", "Ming", "" ], [ "Lin", "Guang", "" ] ]
q-bio/0402017
Manuel Middendorf
Manuel Middendorf, Etay Ziv, Carter Adams, Jen Hom, Robin Koytcheff, Chaya Levovitz, Gregory Woods, Linda Chen, Chris Wiggins
Discriminative Topological Features Reveal Biological Network Mechanisms
supplemental website: http://www.columbia.edu/itc/applied/wiggins/netclass/
BMC Bioinformatics 2004, 5:181 (22 November 2004)
null
null
q-bio.MN
null
Recent genomic and bioinformatic advances have motivated the development of numerous random network models purporting to describe graphs of biological, technological, and sociological origin. The success of a model has been evaluated by how well it reproduces a few key features of the real-world data, such as degree distributions, mean geodesic lengths, and clustering coefficients. Often pairs of models can reproduce these features with indistinguishable fidelity despite being generated by vastly different mechanisms. In such cases, these few target features are insufficient to distinguish which of the different models best describes real world networks of interest; moreover, it is not clear a priori that any of the presently-existing algorithms for network generation offers a predictive description of the networks inspiring them. To derive discriminative classifiers, we construct a mapping from the set of all graphs to a high-dimensional (in principle infinite-dimensional) ``word space.'' This map defines an input space for classification schemes which allow us for the first time to state unambiguously which models are most descriptive of the networks they purport to describe. Our training sets include networks generated from 17 models either drawn from the literature or introduced in this work, source code for which is freely available. We anticipate that this new approach to network analysis will be of broad impact to a number of communities.
[ { "created": "Mon, 9 Feb 2004 06:56:43 GMT", "version": "v1" } ]
2007-05-23
[ [ "Middendorf", "Manuel", "" ], [ "Ziv", "Etay", "" ], [ "Adams", "Carter", "" ], [ "Hom", "Jen", "" ], [ "Koytcheff", "Robin", "" ], [ "Levovitz", "Chaya", "" ], [ "Woods", "Gregory", "" ], [ "Chen",...
1203.4482
Samir Suweis Dr.
S. Suweis, E. Bertuzzo, L. Mari, I. Rodriguez-Iturbe, A. Maritan and A. Rinaldo
On Species Persistence-Time Distributions
30 pages, 5 figures
null
null
null
q-bio.PE cond-mat.stat-mech physics.bio-ph physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present new theoretical and empirical results on the probability distributions of species persistence times in natural ecosystems. Persistence times, defined as the timespans between species' colonization and local extinction in a given geographic region, are empirically estimated from local observations of species' presence/absence. A connected sampling problem is presented, generalized and solved analytically. Species persistence is shown to provide a direct connection with key spatial macroecological patterns such as species-area and endemics-area relationships. Our empirical analysis pertains to two different ecosystems and taxa: a herbaceous plant community and an estuarine fish database. Despite the substantial differences in ecological interactions and spatial scales, we confirm earlier evidence on the general properties of the scaling of persistence times, including the predicted effects of the structure of the spatial interaction network. The framework tested here allows us to directly investigate the nature and extent of spatial effects in the context of ecosystem dynamics. The notable coherence between spatial and temporal macroecological patterns, theoretically derived and empirically verified, is suggested to underlie general features of the dynamic evolution of ecosystems.
[ { "created": "Mon, 19 Mar 2012 09:46:27 GMT", "version": "v1" } ]
2012-03-21
[ [ "Suweis", "S.", "" ], [ "Bertuzzo", "E.", "" ], [ "Mari", "L.", "" ], [ "Rodriguez-Iturbe", "I.", "" ], [ "Maritan", "A.", "" ], [ "Rinaldo", "A.", "" ] ]
1710.02876
Lucas Daniel Wittwer
Lucas D. Wittwer, Michael Peters, Sebastian Aland, Dagmar Iber
Simulating Organogenesis in COMSOL: Comparison Of Methods For Simulating Branching Morphogenesis
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
During organogenesis tissue grows and deforms. The growth processes are controlled by diffusible proteins, so-called morphogens. Many different patterning mechanisms have been proposed. The stereotypic branching program during lung development can be recapitulated by a receptor-ligand based Turing model. Our group has previously used the Arbitrary Lagrangian-Eulerian (ALE) framework for solving the receptor-ligand Turing model on growing lung domains. However, complex mesh deformations which occur during lung growth severely limit the number of branch generations that can be simulated. A new Phase-Field implementation avoids mesh deformations by considering the surface of the modelling domains as interfaces between phases, and by coupling the reaction-diffusion framework to these surfaces. In this paper, we present a rigorous comparison between the Phase-Field approach and the ALE-based simulation.
[ { "created": "Sun, 8 Oct 2017 19:51:09 GMT", "version": "v1" } ]
2017-10-10
[ [ "Wittwer", "Lucas D.", "" ], [ "Peters", "Michael", "" ], [ "Aland", "Sebastian", "" ], [ "Iber", "Dagmar", "" ] ]
2207.03805
Ferran Larroya
Ferran Larroya, Tobias Galla
Demographic noise in complex ecological communities
20 pages, 10 figures
J. Phys. Complex. 4 025012 (2023)
10.1088/2632-072X/acd21b
null
q-bio.PE cond-mat.dis-nn
http://creativecommons.org/licenses/by/4.0/
We introduce an individual-based model of a complex ecological community with random interactions. The model contains a large number of species, each with a finite population of individuals, subject to discrete reproduction and death events. The interaction coefficients determining the rates of these events are chosen from an ensemble of random matrices and are kept fixed in time. The set-up is such that the model reduces to the known generalised Lotka-Volterra equations with random interaction coefficients in the limit of an infinite population for each species. Demographic noise in the individual-based model means that species which would survive in the Lotka-Volterra model can become extinct. These noise-driven extinctions are the focus of the paper. We find that, for increasing complexity of interactions, ecological communities generally become less prone to extinctions induced by demographic noise. An exception is provided by systems composed entirely of predator-prey pairs. These systems are known to be stable in deterministic Lotka-Volterra models with random interactions, but, as we show, they are nevertheless particularly vulnerable to fluctuations.
[ { "created": "Fri, 8 Jul 2022 10:22:05 GMT", "version": "v1" } ]
2023-07-19
[ [ "Larroya", "Ferran", "" ], [ "Galla", "Tobias", "" ] ]
0912.5175
William Bialek
Thierry Mora, Aleksandra Walczak, William Bialek and Curtis G. Callan Jr
Maximum entropy models for antibody diversity
null
PNAS 107(12) 5405-5410 (2010)
10.1073/pnas.1001705107
null
q-bio.GN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recognition of pathogens relies on families of proteins showing great diversity. Here we construct maximum entropy models of the sequence repertoire, building on recent experiments that provide a nearly exhaustive sampling of the IgM sequences in zebrafish. These models are based solely on pairwise correlations between residue positions, but correctly capture the higher order statistical properties of the repertoire. Exploiting the interpretation of these models as statistical physics problems, we make several predictions for the collective properties of the sequence ensemble: the distribution of sequences obeys Zipf's law, the repertoire decomposes into several clusters, and there is a massive restriction of diversity due to the correlations. These predictions are completely inconsistent with models in which amino acid substitutions are made independently at each site, and are in good agreement with the data. Our results suggest that antibody diversity is not limited by the sequences encoded in the genome, and may reflect rapid adaptation to antigenic challenges. This approach should be applicable to the study of the global properties of other protein families.
[ { "created": "Mon, 28 Dec 2009 14:42:08 GMT", "version": "v1" } ]
2011-11-28
[ [ "Mora", "Thierry", "" ], [ "Walczak", "Aleksandra", "" ], [ "Bialek", "William", "" ], [ "Callan", "Curtis G.", "Jr" ] ]
2006.15336
Sashikumaar Ganesan Prof.
Sashikumaar Ganesan and Deepak Subramani
Spatio-temporal predictive modeling framework for infectious disease spread
9 pages, 4 figures
null
null
null
q-bio.PE math.DS physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A novel predictive modeling framework for the spread of infectious diseases using high dimensional partial differential equations is developed and implemented. A scalar function representing the infected population is defined on a high-dimensional space and its evolution over all directions is described by a population balance equation (PBE). New infections are introduced among the susceptible population from non-quarantined infected population based on their interaction, adherence to distancing norms, hygiene levels and any other societal interventions. Moreover, recovery, death, immunity and all aforementioned parameters are modeled on the high-dimensional space. To epitomize the capabilities and features of the above framework, prognostic estimates of Covid-19 spread using a six-dimensional (time, 2D space, infection severity, duration of infection, and population age) PBE is presented. Further, scenario analysis for different policy interventions and population behavior is presented, throwing more insights into the spatio-temporal spread of infections across disease age, intensity and age of population. These insights could be used for science-informed policy planning.
[ { "created": "Sat, 27 Jun 2020 10:36:39 GMT", "version": "v1" }, { "created": "Fri, 3 Jul 2020 01:24:35 GMT", "version": "v2" } ]
2020-07-06
[ [ "Ganesan", "Sashikumaar", "" ], [ "Subramani", "Deepak", "" ] ]
q-bio/0405024
Luis Diambra
Luis Diambra
Modeling stochastic Ca$^{2+}$ release from a cluster of IP$_3$-sensitive receptors
25 pages 10 figures, revised version
null
null
null
q-bio.SC
null
We focus our attention on Ca$^{2+}$ release from the endoplasmic reticulum through a cluster of inositol 1,4,5-trisphosphate (IP$_3$) receptor channels. The random opening and closing of these receptors introduce stochastic effects that have been observed experimentally. Here, we present a stochastic version of the Othmer-Tang model for IP$_3$ receptor clusters. We address the average behavior of the channels in response to IP$_3$ stimuli. We found, by stochastic simulation, that the shape of the receptor response to IP$_3$ (fraction of open channels versus [IP$_3$]) is affected by the cytosolic Ca$^{2+}$ level. We also study several aspects of the stochastic properties of Ca$^{2+}$ release and compare them with experimental observations.
[ { "created": "Mon, 31 May 2004 20:06:35 GMT", "version": "v1" }, { "created": "Thu, 29 Jul 2004 23:28:24 GMT", "version": "v2" } ]
2009-09-29
[ [ "Diambra", "Luis", "" ] ]
We focused our attention on Ca$^{2+}$ release from the endoplasmic reticulum through a cluster of inositol 1,4,5-trisphosphate (IP$_3$) receptor channels. The random opening and closing of these receptors introduce stochastic effects that have been observed experimentally. Here, we present a stochastic version of the Othmer-Tang model for IP$_3$ receptor clusters. We address the average behavior of the channels in response to IP$_3$ stimuli. We found, by stochastic simulation, that the shape of the receptor response to IP$_3$ (fraction of open channels versus [IP$_3$]) is affected by the cytosolic Ca$^{2+}$ level. We also study several aspects of the stochastic properties of Ca$^{2+}$ release and compare them with experimental observations.
q-bio/0412005
Carl Troein
Stuart Kauffman, Carsten Peterson, Bj\"orn Samuelsson and Carl Troein
Genetic networks with canalyzing Boolean rules are always stable
Final version available through PNAS open access at http://www.pnas.org/cgi/content/abstract/0407783101v1
Proc. Natl. Acad. Sci. USA 101 (2004), 17102-17107
10.1073/pnas.0407783101
LU TP 04-10
q-bio.MN cond-mat.soft
null
We determine stability and attractor properties of random Boolean genetic network models with canalyzing rules for a variety of architectures. For all power law, exponential, and flat in-degree distributions, we find that the networks are dynamically stable. Furthermore, for architectures with few inputs per node, the dynamics of the networks is close to critical. In addition, the fraction of genes that are active decreases with the number of inputs per node. These results are based upon investigating ensembles of networks using analytical methods. Also, for different in-degree distributions, the numbers of fixed points and cycles are calculated, with results intuitively consistent with stability analysis; fewer inputs per node implies more cycles, and vice versa. There are hints that genetic networks acquire broader degree distributions with evolution, and hence our results indicate that for single cells, the dynamics should become more stable with evolution. However, such an effect is very likely compensated for by multicellular dynamics, because one expects less stability when interactions among cells are included. We verify this by simulations of a simple model for interactions among cells.
[ { "created": "Thu, 2 Dec 2004 16:35:45 GMT", "version": "v1" } ]
2007-05-23
[ [ "Kauffman", "Stuart", "" ], [ "Peterson", "Carsten", "" ], [ "Samuelsson", "Björn", "" ], [ "Troein", "Carl", "" ] ]
We determine stability and attractor properties of random Boolean genetic network models with canalyzing rules for a variety of architectures. For all power law, exponential, and flat in-degree distributions, we find that the networks are dynamically stable. Furthermore, for architectures with few inputs per node, the dynamics of the networks is close to critical. In addition, the fraction of genes that are active decreases with the number of inputs per node. These results are based upon investigating ensembles of networks using analytical methods. Also, for different in-degree distributions, the numbers of fixed points and cycles are calculated, with results intuitively consistent with stability analysis; fewer inputs per node implies more cycles, and vice versa. There are hints that genetic networks acquire broader degree distributions with evolution, and hence our results indicate that for single cells, the dynamics should become more stable with evolution. However, such an effect is very likely compensated for by multicellular dynamics, because one expects less stability when interactions among cells are included. We verify this by simulations of a simple model for interactions among cells.
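The stability analysis in the abstract above can be probed numerically. Below is a minimal, hypothetical sketch (not the authors' analytical ensemble method) of a random Boolean network with canalyzing rules, which follows the synchronous dynamics from a random initial state until a state repeats and reports the attractor cycle length. All function names and parameter values are illustrative.

```python
import random

def make_canalyzing_rule(k, rng):
    """Random canalyzing Boolean rule on k inputs: one designated input,
    when equal to its canalyzing value, forces the output; the remaining
    input combinations get random (memoized) outputs."""
    canal_input = rng.randrange(k)
    canal_value = rng.randrange(2)
    forced_output = rng.randrange(2)
    table = {}  # lazily filled truth table for non-canalyzed inputs

    def rule(inputs):
        if inputs[canal_input] == canal_value:
            return forced_output
        key = tuple(inputs)
        if key not in table:
            table[key] = rng.randrange(2)
        return table[key]

    return rule

def simulate(n=12, k=2, steps=5000, seed=1):
    """Synchronous update of an n-node network with k inputs per node.
    Returns the length of the attractor cycle reached from a random state
    (a repeat is guaranteed within 2^n steps)."""
    rng = random.Random(seed)
    wiring = [rng.sample(range(n), k) for _ in range(n)]
    rules = [make_canalyzing_rule(k, rng) for _ in range(n)]
    state = tuple(rng.randrange(2) for _ in range(n))
    seen = {state: 0}
    for t in range(1, steps + 1):
        state = tuple(rules[i]([state[j] for j in wiring[i]])
                      for i in range(n))
        if state in seen:
            return t - seen[state]  # attractor (cycle) length
        seen[state] = t
    return None
```

With few inputs per node (k = 2 here), such ensembles tend to reach short cycles quickly, consistent with the stability picture described in the abstract.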
2101.02698
Daniel Han Mr.
Sergei Fedotov, Daniel Han, Andrey Yu. Zubarev, Mark Johnston and Victoria J Allan
Variable-order fractional master equation and clustering of particles: non-uniform lysosome distribution
arXiv admin note: text overlap with arXiv:1902.03087
null
10.1098/rsta.2020.0317
null
q-bio.SC cond-mat.stat-mech physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
In this paper, we formulate the space-dependent variable-order fractional master equation to model the clustering of particles, such as organelles, inside living cells. We find its solution in the long-time limit, which describes a non-uniform distribution due to a space-dependent fractional exponent. In the continuous space limit, the solution of this fractional master equation is found to be exactly the same as that of the space-dependent variable-order fractional diffusion equation. In addition, we show that the clustering of lysosomes, organelles essential for the healthy functioning of mammalian cells, exhibits space-dependent fractional exponents. Furthermore, we demonstrate that the non-uniform distribution of lysosomes in living cells is accurately described by the asymptotic solution of the space-dependent variable-order fractional master equation. Finally, Monte Carlo simulations of the fractional master equation validate our analytical solution.
[ { "created": "Thu, 7 Jan 2021 18:58:52 GMT", "version": "v1" } ]
2021-07-22
[ [ "Fedotov", "Sergei", "" ], [ "Han", "Daniel", "" ], [ "Zubarev", "Andrey Yu.", "" ], [ "Johnston", "Mark", "" ], [ "Allan", "Victoria J", "" ] ]
In this paper, we formulate the space-dependent variable-order fractional master equation to model the clustering of particles, such as organelles, inside living cells. We find its solution in the long-time limit, which describes a non-uniform distribution due to a space-dependent fractional exponent. In the continuous space limit, the solution of this fractional master equation is found to be exactly the same as that of the space-dependent variable-order fractional diffusion equation. In addition, we show that the clustering of lysosomes, organelles essential for the healthy functioning of mammalian cells, exhibits space-dependent fractional exponents. Furthermore, we demonstrate that the non-uniform distribution of lysosomes in living cells is accurately described by the asymptotic solution of the space-dependent variable-order fractional master equation. Finally, Monte Carlo simulations of the fractional master equation validate our analytical solution.
1304.4515
Ewan Birney
Mikhail Spivakov, Thomas O. Auer, Ravindra Peravali, Ian Dunham, Dirk Dolle, Asao Fujiyama, Atsushi Toyoda, Tomoyuki Aizu, Yohei Minakuchi, Felix Loosli, Kiyoshi Naruse, Ewan Birney, Joachim Wittbrodt
Genomic and phenotypic characterisation of a wild Medaka population: Establishing an isogenic population genetic resource in fish
5 figures, 30 pages
null
null
null
q-bio.GN q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background Oryzias latipes (Medaka) has been established as a vertebrate genetic model for over a century, and has recently been rediscovered outside its native Japan. The power of new sequencing methods now makes it possible to reinvigorate Medaka genetics, in particular by establishing a near-isogenic panel derived from a single wild population. Results Here we characterise the genomes of wild Medaka catches obtained from a single Southern Japanese population in Kiyosu as a precursor for the establishment of a near-isogenic panel of wild lines. The population is free of significant detrimental population structure, and has advantageous linkage disequilibrium properties suitable for establishment of the proposed panel. Analysis of morphometric traits in five representative inbred strains suggests phenotypic mapping will be feasible in the panel. In addition, high-throughput genome sequencing of these Medaka strains confirms their evolutionary relationships on lines of geographic separation and provides further evidence that there has been little significant interbreeding between the Southern and Northern Medaka populations since the Southern/Northern population split. The sequence data suggest that the Southern Japanese Medaka existed as a larger, older population which went through a relatively recent bottleneck around 10,000 years ago. In addition, we detect patterns of recent positive selection in the Southern population. Conclusions These data indicate that the genetic structure of the Kiyosu Medaka samples is suitable for the establishment of a vertebrate near-isogenic panel, and therefore inbreeding of 200 lines based on this population has commenced. Progress of this project can be tracked at http://www.ebi.ac.uk/birney-srv/medaka-ref-panel
[ { "created": "Tue, 16 Apr 2013 16:40:59 GMT", "version": "v1" }, { "created": "Tue, 19 Nov 2013 15:19:40 GMT", "version": "v2" } ]
2013-11-20
[ [ "Spivakov", "Mikhail", "" ], [ "Auer", "Thomas O.", "" ], [ "Peravali", "Ravindra", "" ], [ "Dunham", "Ian", "" ], [ "Dolle", "Dirk", "" ], [ "Fujiyama", "Asao", "" ], [ "Toyoda", "Atsushi", "" ], [ ...
Background Oryzias latipes (Medaka) has been established as a vertebrate genetic model for over a century, and has recently been rediscovered outside its native Japan. The power of new sequencing methods now makes it possible to reinvigorate Medaka genetics, in particular by establishing a near-isogenic panel derived from a single wild population. Results Here we characterise the genomes of wild Medaka catches obtained from a single Southern Japanese population in Kiyosu as a precursor for the establishment of a near-isogenic panel of wild lines. The population is free of significant detrimental population structure, and has advantageous linkage disequilibrium properties suitable for establishment of the proposed panel. Analysis of morphometric traits in five representative inbred strains suggests phenotypic mapping will be feasible in the panel. In addition, high-throughput genome sequencing of these Medaka strains confirms their evolutionary relationships on lines of geographic separation and provides further evidence that there has been little significant interbreeding between the Southern and Northern Medaka populations since the Southern/Northern population split. The sequence data suggest that the Southern Japanese Medaka existed as a larger, older population which went through a relatively recent bottleneck around 10,000 years ago. In addition, we detect patterns of recent positive selection in the Southern population. Conclusions These data indicate that the genetic structure of the Kiyosu Medaka samples is suitable for the establishment of a vertebrate near-isogenic panel, and therefore inbreeding of 200 lines based on this population has commenced. Progress of this project can be tracked at http://www.ebi.ac.uk/birney-srv/medaka-ref-panel
1509.06123
Ricardo Oliveros-Ramos
Ricardo Oliveros-Ramos, Philippe Verley and Yunne-Jai Shin
A sequential approach to calibrate ecosystem models with multiple time series data
33 pages, 4 tables, 13 figures, 2 appendices
null
10.1016/j.pocean.2017.01.002
null
q-bio.QM q-bio.PE stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ecosystem approach to fisheries requires a thorough understanding of fishing impacts on ecosystem status and processes as well as predictive tools such as ecosystem models to provide useful information for management. The credibility of such models is essential when used as decision making tools, and model fitting to observed data is one major criterion to assess such credibility. However, more attention has been given to the exploration of model behavior than to a rigorous confrontation to observations, as the calibration of ecosystem models is challenging in many ways. First, ecosystem models can only be simulated numerically and are generally too complex for mathematical analysis and explicit parameter estimation; secondly, the complex dynamics represented in ecosystem models allow species-specific parameters to impact other species parameters through ecological interactions; thirdly, critical data about non-commercial species are often poor; lastly, technical aspects can be impediments to the calibration with regard to the high computational cost potentially involved and the scarce documentation published on fitting complex ecosystem models to data. This work highlights some issues related to the confrontation of complex ecosystem models to data and proposes a methodology for a sequential multi-phase calibration of ecosystem models. We propose criteria to classify the parameters of a model: model dependency and time variability of the parameters. These criteria and the availability of approximate initial estimates are used as decision rules to determine which parameters need to be estimated, and their precedence order in the sequential calibration process. The end-to-end ecosystem model ROMS-PISCES-OSMOSE applied to the Northern Humboldt Current Ecosystem is used as an illustrative case study.
[ { "created": "Mon, 21 Sep 2015 06:59:23 GMT", "version": "v1" } ]
2024-04-30
[ [ "Oliveros-Ramos", "Ricardo", "" ], [ "Verley", "Philippe", "" ], [ "Shin", "Yunne-Jai", "" ] ]
The ecosystem approach to fisheries requires a thorough understanding of fishing impacts on ecosystem status and processes as well as predictive tools such as ecosystem models to provide useful information for management. The credibility of such models is essential when used as decision making tools, and model fitting to observed data is one major criterion to assess such credibility. However, more attention has been given to the exploration of model behavior than to a rigorous confrontation to observations, as the calibration of ecosystem models is challenging in many ways. First, ecosystem models can only be simulated numerically and are generally too complex for mathematical analysis and explicit parameter estimation; secondly, the complex dynamics represented in ecosystem models allow species-specific parameters to impact other species parameters through ecological interactions; thirdly, critical data about non-commercial species are often poor; lastly, technical aspects can be impediments to the calibration with regard to the high computational cost potentially involved and the scarce documentation published on fitting complex ecosystem models to data. This work highlights some issues related to the confrontation of complex ecosystem models to data and proposes a methodology for a sequential multi-phase calibration of ecosystem models. We propose criteria to classify the parameters of a model: model dependency and time variability of the parameters. These criteria and the availability of approximate initial estimates are used as decision rules to determine which parameters need to be estimated, and their precedence order in the sequential calibration process. The end-to-end ecosystem model ROMS-PISCES-OSMOSE applied to the Northern Humboldt Current Ecosystem is used as an illustrative case study.
2010.04864
Nathan Baker
Arun V. Sathanur, Nathan A. Baker
A clustering-based biased Monte Carlo approach to protein titration curve prediction
null
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we developed an efficient approach to compute ensemble averages in systems with pairwise-additive energetic interactions between the entities. Methods involving full enumeration of the configuration space result in exponential complexity. Sampling methods such as Markov Chain Monte Carlo (MCMC) algorithms have been proposed to tackle the exponential complexity of these problems; however, in certain scenarios where significant energetic coupling exists between the entities, the efficiency of such algorithms can be diminished. We used a strategy to improve the efficiency of MCMC by taking advantage of the cluster structure in the interaction energy matrix to bias the sampling. We pursued two different schemes for the biased MCMC runs and show that they are valid MCMC schemes. We used both synthesized and real-world systems to show the improved performance of our biased MCMC methods when compared to the regular MCMC method. In particular, we applied these algorithms to the problem of estimating protonation ensemble averages and titration curves of residues in a protein.
[ { "created": "Sat, 10 Oct 2020 01:32:52 GMT", "version": "v1" } ]
2020-10-13
[ [ "Sathanur", "Arun V.", "" ], [ "Baker", "Nathan A.", "" ] ]
In this work, we developed an efficient approach to compute ensemble averages in systems with pairwise-additive energetic interactions between the entities. Methods involving full enumeration of the configuration space result in exponential complexity. Sampling methods such as Markov Chain Monte Carlo (MCMC) algorithms have been proposed to tackle the exponential complexity of these problems; however, in certain scenarios where significant energetic coupling exists between the entities, the efficiency of such algorithms can be diminished. We used a strategy to improve the efficiency of MCMC by taking advantage of the cluster structure in the interaction energy matrix to bias the sampling. We pursued two different schemes for the biased MCMC runs and show that they are valid MCMC schemes. We used both synthesized and real-world systems to show the improved performance of our biased MCMC methods when compared to the regular MCMC method. In particular, we applied these algorithms to the problem of estimating protonation ensemble averages and titration curves of residues in a protein.
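The pairwise-additive setting in the abstract above lends itself to a plain Metropolis sampler. The sketch below estimates mean site occupancies for binary states $s_i \in \{0,1\}$; it is a generic baseline, not the authors' clustering-biased scheme, and the energy model E(s) = sum_i h_i s_i + (1/2) sum_{i!=j} J_ij s_i s_j (J symmetric, zero diagonal) is an assumption for illustration.

```python
import math
import random

def metropolis_average(J, h, beta=1.0, nsteps=20000, seed=0):
    """Plain Metropolis sampling of binary site states s_i in {0, 1}.
    Energy: E(s) = sum_i h[i]*s[i] + 0.5 * sum_{i != j} J[i][j]*s[i]*s[j],
    with J symmetric and zero on the diagonal.
    Returns the estimated mean occupancy of each site."""
    rng = random.Random(seed)
    n = len(h)
    s = [rng.randrange(2) for _ in range(n)]
    counts = [0] * n
    for _ in range(nsteps):
        i = rng.randrange(n)
        ds = 1 - 2 * s[i]  # +1 for a 0->1 flip, -1 for 1->0
        # Energy change for flipping site i under the pairwise-additive model.
        dE = ds * (h[i] + sum(J[i][j] * s[j] for j in range(n) if j != i))
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            s[i] = 1 - s[i]
        for k in range(n):
            counts[k] += s[k]
    return [c / nsteps for c in counts]
```

For two uncoupled sites with strongly favorable and unfavorable intrinsic energies, the sampler recovers occupancies near 1 and 0, which is the sanity check one would run before adding any biasing.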
1807.00844
Yasser A. Ahmed
Nashwa Araby, Soha Soliman, Eman Abdel Raheem and Yasser Ahmed
Morphogenesis of the Sternum in Quail Embryos
null
null
null
null
q-bio.TO
http://creativecommons.org/licenses/by-nc-sa/4.0/
Flat bones develop through intramembranous ossification, in which mesenchymal cells are driven directly towards the osteogenic lineage without the formation of a cartilage template, while long bones develop through endochondral ossification, where a cartilage template acts as an intermediate stage between mesenchymal and bone tissues. Although the avian sternum is a flat bone, some studies describe the formation of a cartilage template during its development. The aim of the current study was to observe the mechanism of ossification in the quail sternum during embryonic development. Thirty quail embryos were collected for the current study (5 embryos/day) during the period between Day (D) 5 and D10 of embryonic development and processed for light microscopy. The differentiation of the mesenchymal condensation into chondrogenic cells was observed at D5, whereas the secretion of extracellular matrix was evident at D6. The cartilage primordia, consisting of chondrocytes embedded in matrix and surrounded by perichondrium, were observed by D7. These primordia later developed into a cartilage template by D8, in which the chondrocytes were present in their lacunae. This template attained the shape of the future sternum by D9 and was more distinct at D10. These preliminary observations suggest that the quail sternum grows through endochondral ossification. A future study will further explore the histological changes of the quail sternum during post-hatching development.
[ { "created": "Mon, 2 Jul 2018 18:07:05 GMT", "version": "v1" } ]
2018-07-04
[ [ "Araby", "Nashwa", "" ], [ "Soliman", "Soha", "" ], [ "Raheem", "Eman Abdel", "" ], [ "Ahmed", "Yasser", "" ] ]
Flat bones develop through intramembranous ossification, in which mesenchymal cells are driven directly towards the osteogenic lineage without the formation of a cartilage template, while long bones develop through endochondral ossification, where a cartilage template acts as an intermediate stage between mesenchymal and bone tissues. Although the avian sternum is a flat bone, some studies describe the formation of a cartilage template during its development. The aim of the current study was to observe the mechanism of ossification in the quail sternum during embryonic development. Thirty quail embryos were collected for the current study (5 embryos/day) during the period between Day (D) 5 and D10 of embryonic development and processed for light microscopy. The differentiation of the mesenchymal condensation into chondrogenic cells was observed at D5, whereas the secretion of extracellular matrix was evident at D6. The cartilage primordia, consisting of chondrocytes embedded in matrix and surrounded by perichondrium, were observed by D7. These primordia later developed into a cartilage template by D8, in which the chondrocytes were present in their lacunae. This template attained the shape of the future sternum by D9 and was more distinct at D10. These preliminary observations suggest that the quail sternum grows through endochondral ossification. A future study will further explore the histological changes of the quail sternum during post-hatching development.
1403.1034
Sang-Yoon Kim
Sang-Yoon Kim and Woochang Lim
Effect of Small-World Connectivity on Fast Sparsely Synchronized Cortical Rhythms
null
null
null
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fast cortical rhythms with stochastic and intermittent neural discharges have been observed in electric recordings of brain activity. Recently, Brunel et al. developed a framework to describe this kind of fast sparse synchronization in both random and globally-coupled networks of suprathreshold spiking neurons. However, in a real cortical circuit, synaptic connections are known to have complex topology which is neither regular nor random. Hence, in order to extend the works of Brunel et al. to realistic neural networks, we study the effect of network architecture on these fast sparsely synchronized rhythms in an inhibitory population of suprathreshold fast spiking (FS) Izhikevich interneurons. We first employ the conventional Erd\H{o}s-R\'enyi random graph of suprathreshold FS Izhikevich interneurons for modeling the complex connectivity in neural systems, and study the emergence of population synchronized states by varying both the synaptic inhibition strength $J$ and the noise intensity $D$. Thus, fast sparsely synchronized states of relatively high degree are found to appear for large values of $J$ and $D$. Second, for fixed values of $J$ and $D$ where fast sparse synchronization occurs in the random network, we consider the Watts-Strogatz small-world network of suprathreshold FS Izhikevich interneurons which interpolates between regular lattice and random graph via rewiring, and investigate the effect of small-world synaptic connectivity on the emergence of fast sparsely synchronized rhythms by varying the rewiring probability $p$ from short-range to long-range connections. When passing a small critical value $p^*_c$ $(\simeq 0.12)$, fast sparsely synchronized population rhythms are found to emerge in small-world networks with predominantly local connections and rare long-range connections.
[ { "created": "Wed, 5 Mar 2014 08:11:42 GMT", "version": "v1" }, { "created": "Mon, 16 Jun 2014 08:25:03 GMT", "version": "v2" }, { "created": "Thu, 10 Jul 2014 05:46:20 GMT", "version": "v3" }, { "created": "Tue, 2 Sep 2014 01:56:01 GMT", "version": "v4" } ]
2014-09-03
[ [ "Kim", "Sang-Yoon", "" ], [ "Lim", "Woochang", "" ] ]
Fast cortical rhythms with stochastic and intermittent neural discharges have been observed in electric recordings of brain activity. Recently, Brunel et al. developed a framework to describe this kind of fast sparse synchronization in both random and globally-coupled networks of suprathreshold spiking neurons. However, in a real cortical circuit, synaptic connections are known to have complex topology which is neither regular nor random. Hence, in order to extend the works of Brunel et al. to realistic neural networks, we study the effect of network architecture on these fast sparsely synchronized rhythms in an inhibitory population of suprathreshold fast spiking (FS) Izhikevich interneurons. We first employ the conventional Erd\H{o}s-R\'enyi random graph of suprathreshold FS Izhikevich interneurons for modeling the complex connectivity in neural systems, and study the emergence of population synchronized states by varying both the synaptic inhibition strength $J$ and the noise intensity $D$. Thus, fast sparsely synchronized states of relatively high degree are found to appear for large values of $J$ and $D$. Second, for fixed values of $J$ and $D$ where fast sparse synchronization occurs in the random network, we consider the Watts-Strogatz small-world network of suprathreshold FS Izhikevich interneurons which interpolates between regular lattice and random graph via rewiring, and investigate the effect of small-world synaptic connectivity on the emergence of fast sparsely synchronized rhythms by varying the rewiring probability $p$ from short-range to long-range connections. When passing a small critical value $p^*_c$ $(\simeq 0.12)$, fast sparsely synchronized population rhythms are found to emerge in small-world networks with predominantly local connections and rare long-range connections.
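The Watts-Strogatz interpolation used in the abstract above can be sketched directly: start from a ring lattice and rewire each original edge with probability p. The function below is a generic illustration of that construction (not the authors' simulation code); the name and defaults are hypothetical.

```python
import random

def watts_strogatz(n, k, p, seed=0):
    """Watts-Strogatz small-world graph as an adjacency-set dict.
    Start from a ring lattice of n nodes, each linked to its k nearest
    neighbors (k even), then rewire each original edge with probability p
    to a uniformly chosen non-neighbor, preserving the edge count."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    # Ring lattice: connect each node to its k/2 clockwise neighbors.
    for i in range(n):
        for d in range(1, k // 2 + 1):
            j = (i + d) % n
            adj[i].add(j)
            adj[j].add(i)
    # Rewiring pass over the original lattice edges.
    for i in range(n):
        for d in range(1, k // 2 + 1):
            j = (i + d) % n
            if rng.random() < p:
                candidates = [t for t in range(n)
                              if t != i and t not in adj[i]]
                if candidates:
                    new = rng.choice(candidates)
                    adj[i].discard(j)
                    adj[j].discard(i)
                    adj[i].add(new)
                    adj[new].add(i)
    return adj
```

At p = 0 this is a pure ring lattice (every node has degree k); small p > 0 introduces the rare long-range shortcuts that the abstract associates with the onset of fast sparse synchronization.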
1705.03738
Fengyan Wu
Fengyan Wu, Xiaoli Chen, Yayun Zheng, Jinqiao Duan, J\"urgen Kurths, Xiaofan Li
L\'{e}vy noise-induced transitions in gene regulatory networks
null
null
null
null
q-bio.MN math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Important effects of noise on a one-dimensional gene expression model involving a single gene have recently been discussed. However, few works have been devoted to the transition in two-dimensional models which include the interaction of genes. Therefore, we investigate here a quantitative two-dimensional model (the MeKS network) of gene expression dynamics describing competence development in B. subtilis under the influence of L\'evy as well as Brownian motions, where noise can benefit B. subtilis under nutrient depletion. To analyze the transitions between the vegetative and the competence regions therein, two deterministic quantities, the mean first exit time (MFET) and the first escape probability (FEP) from a microscopic perspective, as well as their averaged versions from a macroscopic perspective, are applied. The relative contribution factor (RCF), the ratio of non-Gaussian and Gaussian noise strengths, is adopted to implement optimal control in these transitions. Schematic representations indicate that there exists an optimum choice that makes the transition occur with the highest probability. Additionally, we use a geometric concept, the stochastic basin of attraction, to give a pictorial understanding of the influence of the L\'{e}vy motion on the basin stability of the competence state.
[ { "created": "Wed, 10 May 2017 13:04:57 GMT", "version": "v1" } ]
2017-05-11
[ [ "Wu", "Fengyan", "" ], [ "Chen", "Xiaoli", "" ], [ "Zheng", "Yayun", "" ], [ "Duan", "Jinqiao", "" ], [ "Kurths", "Jürgen", "" ], [ "Li", "Xiaofan", "" ] ]
Important effects of noise on a one-dimensional gene expression model involving a single gene have recently been discussed. However, few works have been devoted to the transition in two-dimensional models which include the interaction of genes. Therefore, we investigate here a quantitative two-dimensional model (the MeKS network) of gene expression dynamics describing competence development in B. subtilis under the influence of L\'evy as well as Brownian motions, where noise can benefit B. subtilis under nutrient depletion. To analyze the transitions between the vegetative and the competence regions therein, two deterministic quantities, the mean first exit time (MFET) and the first escape probability (FEP) from a microscopic perspective, as well as their averaged versions from a macroscopic perspective, are applied. The relative contribution factor (RCF), the ratio of non-Gaussian and Gaussian noise strengths, is adopted to implement optimal control in these transitions. Schematic representations indicate that there exists an optimum choice that makes the transition occur with the highest probability. Additionally, we use a geometric concept, the stochastic basin of attraction, to give a pictorial understanding of the influence of the L\'{e}vy motion on the basin stability of the competence state.
2404.08711
Pratham Kankariya
Pratham Kankariya, Rachita Rode, Kevin Mudaliar, Prof. Pranali Hatode
Drug Repurposing for Parkinson's Disease Using Random Walk With Restart Algorithm and the Parkinson's Disease Ontology Database
5 pages, Final Year Engineering Project on Machine Learning and Healthcare Industry
null
null
null
q-bio.QM cs.LG q-bio.BM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Parkinson's disease is a progressive and slowly developing neurodegenerative disease, characterized by dopaminergic neuron loss in the substantia nigra region of the brain. Despite extensive research by scientists, there is not yet a cure, and the available therapies mainly help to reduce some of the Parkinson's symptoms. Drug repurposing (that is, the process of finding new uses for existing drugs) is increasingly recognized as an efficient way to reduce the time, resources, and risks associated with the development of new drugs. In this research, we design a novel computational platform that integrates gene expression data, biological networks, and the PDOD database to identify possible drug-repositioning agents for PD therapy. By using machine learning approaches like the RWR algorithm and the PDOD scoring system, we rank drug-disease associations and sort the potential candidates according to their possible efficacy. We propose gene expression analysis, network prioritization, and drug target data analysis to arrive at a comprehensive evaluation of drug repurposing chances. Our study results highlight such therapies as promising drug candidates for further research on PD treatment. We also provide the rationale for promising drug repurposing ideas by using various sources of data and computational approaches.
[ { "created": "Thu, 11 Apr 2024 20:11:25 GMT", "version": "v1" } ]
2024-04-16
[ [ "Kankariya", "Pratham", "" ], [ "Rode", "Rachita", "" ], [ "Mudaliar", "Kevin", "" ], [ "Hatode", "Prof. Pranali", "" ] ]
Parkinson's disease is a progressive and slowly developing neurodegenerative disease, characterized by dopaminergic neuron loss in the substantia nigra region of the brain. Despite extensive research by scientists, there is not yet a cure, and the available therapies mainly help to reduce some of the Parkinson's symptoms. Drug repurposing (that is, the process of finding new uses for existing drugs) is increasingly recognized as an efficient way to reduce the time, resources, and risks associated with the development of new drugs. In this research, we design a novel computational platform that integrates gene expression data, biological networks, and the PDOD database to identify possible drug-repositioning agents for PD therapy. By using machine learning approaches like the RWR algorithm and the PDOD scoring system, we rank drug-disease associations and sort the potential candidates according to their possible efficacy. We propose gene expression analysis, network prioritization, and drug target data analysis to arrive at a comprehensive evaluation of drug repurposing chances. Our study results highlight such therapies as promising drug candidates for further research on PD treatment. We also provide the rationale for promising drug repurposing ideas by using various sources of data and computational approaches.
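The random walk with restart (RWR) mentioned in the abstract above iterates p <- (1 - c) W p + c e until convergence, where W is the column-normalized adjacency matrix and e is the restart distribution over seed nodes. Below is a generic dense-matrix sketch (assuming a connected, undirected graph), not the authors' pipeline; names and defaults are illustrative.

```python
import numpy as np

def rwr(adj, seeds, restart=0.15, tol=1e-10, max_iter=1000):
    """Random walk with restart on an undirected graph given as a dense
    adjacency matrix (assumed connected, so every column sum is nonzero).
    Returns the steady-state visiting probabilities, which rank nodes by
    proximity to the seed set."""
    A = np.asarray(adj, dtype=float)
    W = A / A.sum(axis=0, keepdims=True)   # column-stochastic transition matrix
    e = np.zeros(A.shape[0])
    e[list(seeds)] = 1.0 / len(seeds)      # restart vector over seed nodes
    p = e.copy()
    for _ in range(max_iter):
        p_new = (1 - restart) * (W @ p) + restart * e
        if np.abs(p_new - p).sum() < tol:
            break
        p = p_new
    return p
```

In a repurposing setting, the seeds would typically be disease-associated nodes, and the converged probabilities provide the ranking signal.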
2212.00168
Vince Grolmusz
Daniel Hegedus and Vince Grolmusz
Robust Circuitry-Based Scores of Structural Importance of Human Brain Areas
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
We consider the 1015-vertex human consensus connectome computed from the diffusion MRI data of 1064 subjects. We define seven different orders on these 1015 graph vertices, where the orders depend on parameters derived from the brain circuitry, that is, from the properties of the edges (or connections) incident to the vertices ordered. We order the vertices according to their degree, the sum, the maximum, and the average of the fiber counts on the incident edges, and the sum, the maximum and the average length of the fibers in the incident edges. We analyze the similarities of these seven orders by the Spearman correlation coefficient and by their inversion numbers and have found that all of these seven orders have great similarities. In other words, if we interpret the orders as scoring of the importance of the vertices in the consensus connectome, then the scores of the vertices will be similar in all seven orderings. That is, important vertices of the human connectome typically have many neighbors, connected with long and thick axonal fibers (where thickness is measured by fiber numbers), and their incident edges have high maximum and average values of length and fiber-number parameters, too. Therefore, these parameters may yield robust ways of deciding which vertices are more important in the anatomy of our brain circuitry than the others.
[ { "created": "Wed, 30 Nov 2022 23:32:26 GMT", "version": "v1" } ]
2022-12-02
[ [ "Hegedus", "Daniel", "" ], [ "Grolmusz", "Vince", "" ] ]
We consider the 1015-vertex human consensus connectome computed from the diffusion MRI data of 1064 subjects. We define seven different orders on these 1015 graph vertices, where the orders depend on parameters derived from the brain circuitry, that is, from the properties of the edges (or connections) incident to the vertices ordered. We order the vertices according to their degree, the sum, the maximum, and the average of the fiber counts on the incident edges, and the sum, the maximum and the average length of the fibers in the incident edges. We analyze the similarities of these seven orders by the Spearman correlation coefficient and by their inversion numbers and have found that all of these seven orders have great similarities. In other words, if we interpret the orders as scoring of the importance of the vertices in the consensus connectome, then the scores of the vertices will be similar in all seven orderings. That is, important vertices of the human connectome typically have many neighbors, connected with long and thick axonal fibers (where thickness is measured by fiber numbers), and their incident edges have high maximum and average values of length and fiber-number parameters, too. Therefore, these parameters may yield robust ways of deciding which vertices are more important in the anatomy of our brain circuitry than the others.
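The abstract above compares seven vertex orderings with the Spearman correlation coefficient and with inversion numbers. A minimal, illustrative sketch of those two similarity measures (not code from the paper; function names and the toy ranks are assumptions):

```python
# Hedged sketch: compare two vertex orderings with Spearman's rho and the
# inversion number, the two similarity measures named in the abstract.
# Pure Python; rankings are assumed to have no ties.

def spearman_rho(rank_a, rank_b):
    """Spearman's rho for two equal-length rank lists without ties."""
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

def inversion_number(perm):
    """Count pairs (i, j), i < j, with perm[i] > perm[j] (O(n^2) for clarity)."""
    n = len(perm)
    return sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])

# Toy example: five vertices ranked by "degree" and by "total fiber count".
ranks_by_degree = [1, 2, 3, 4, 5]
ranks_by_fibers = [1, 3, 2, 4, 5]   # two vertices swap places
print(spearman_rho(ranks_by_degree, ranks_by_fibers))  # 0.9 -- very similar orders
print(inversion_number([3, 1, 2]))                     # 2
```

A rho near 1 and a small inversion count both say the same thing the paper reports: the orderings largely agree on which vertices are important.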
1504.00033
Guo-Wei Wei
Kelin Xia and Zhixiong Zhao and Guo-Wei Wei
Multiresolution topological simplification
22 pages and 14 figures
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Persistent homology has been devised as a promising tool for the topological simplification of complex data. However, it is computationally intractable for large data sets. In this work, we introduce multiresolution persistent homology for tackling large data sets. Our basic idea is to match the resolution with the scale of interest so as to create a topological microscopy for the underlying data. We utilize flexibility-rigidity index (FRI) to assess the topological connectivity of the data set and define a rigidity density for the filtration analysis. By appropriately tuning the resolution, we are able to focus the topological lens on a desirable scale. The proposed multiresolution topological analysis is validated by a hexagonal fractal image which has three distinct scales. We further demonstrate the proposed method for extracting topological fingerprints from DNA and RNA molecules. In particular, the topological persistence of a virus capsid with 240 protein monomers is successfully analyzed which would otherwise be inaccessible to the normal point cloud method and unreliable by using coarse-grained multiscale persistent homology. The proposed method has also been successfully applied to the protein domain classification, which is the first time that persistent homology is used for practical protein domain analysis, to our knowledge. The proposed multiresolution topological method has potential applications in arbitrary data sets, such as social networks, biological networks and graphs.
[ { "created": "Tue, 31 Mar 2015 20:47:59 GMT", "version": "v1" } ]
2015-04-02
[ [ "Xia", "Kelin", "" ], [ "Zhao", "Zhixiong", "" ], [ "Wei", "Guo-Wei", "" ] ]
Persistent homology has been devised as a promising tool for the topological simplification of complex data. However, it is computationally intractable for large data sets. In this work, we introduce multiresolution persistent homology for tackling large data sets. Our basic idea is to match the resolution with the scale of interest so as to create a topological microscopy for the underlying data. We utilize flexibility-rigidity index (FRI) to assess the topological connectivity of the data set and define a rigidity density for the filtration analysis. By appropriately tuning the resolution, we are able to focus the topological lens on a desirable scale. The proposed multiresolution topological analysis is validated by a hexagonal fractal image which has three distinct scales. We further demonstrate the proposed method for extracting topological fingerprints from DNA and RNA molecules. In particular, the topological persistence of a virus capsid with 240 protein monomers is successfully analyzed which would otherwise be inaccessible to the normal point cloud method and unreliable by using coarse-grained multiscale persistent homology. The proposed method has also been successfully applied to the protein domain classification, which is the first time that persistent homology is used for practical protein domain analysis, to our knowledge. The proposed multiresolution topological method has potential applications in arbitrary data sets, such as social networks, biological networks and graphs.
2310.08345
Shesha Gopal Marehalli Srinivas
Shesha Gopal Marehalli Srinivas, Francesco Avanzini, Massimiliano Esposito
Characterizing the Conditions for Indefinite Growth in Open Chemical Reaction Networks
null
null
null
null
q-bio.MN cond-mat.stat-mech
http://creativecommons.org/licenses/by/4.0/
The thermodynamic and dynamical conditions necessary to observe indefinite growth in homogeneous open chemical reaction networks (CRNs) satisfying mass action kinetics were presented in Srinivas et al. (2023): Unimolecular CRNs can only accumulate equilibrium concentrations of species while multimolecular CRNs are needed to produce indefinite growth with nonequilibrium concentrations. Within multimolecular CRNs, pseudo-unimolecular CRNs produce nonequilibrium concentrations with zero efficiencies. Nonequilibrium growth with finite efficiencies requires dynamically nonlinear CRNs. In this paper, we provide a detailed analysis supporting these results. Mathematical proofs are provided for growth in unimolecular and pseudo-unimolecular CRNs. For multimolecular CRNs, four models displaying very distinctive topological properties are extensively studied, both numerically and partly analytically.
[ { "created": "Thu, 12 Oct 2023 14:08:50 GMT", "version": "v1" } ]
2023-10-13
[ [ "Srinivas", "Shesha Gopal Marehalli", "" ], [ "Avanzini", "Francesco", "" ], [ "Esposito", "Massimiliano", "" ] ]
The thermodynamic and dynamical conditions necessary to observe indefinite growth in homogeneous open chemical reaction networks (CRNs) satisfying mass action kinetics were presented in Srinivas et al. (2023): Unimolecular CRNs can only accumulate equilibrium concentrations of species while multimolecular CRNs are needed to produce indefinite growth with nonequilibrium concentrations. Within multimolecular CRNs, pseudo-unimolecular CRNs produce nonequilibrium concentrations with zero efficiencies. Nonequilibrium growth with finite efficiencies requires dynamically nonlinear CRNs. In this paper, we provide a detailed analysis supporting these results. Mathematical proofs are provided for growth in unimolecular and pseudo-unimolecular CRNs. For multimolecular CRNs, four models displaying very distinctive topological properties are extensively studied, both numerically and partly analytically.
2311.17965
Manal Helal
Manal Helal, Fanrong Kong, Sharon C. A. Chen, Michael Bain, Richard Christen, Vitali Sintchenko
Defining Reference Sequences for Nocardia Species by Similarity and Clustering Analyses of 16S rRNA Gene Sequence Data
null
PLoS ONE June 2011 | Volume 6 | Issue 6 | e19517
10.1371/journal.pone.0019517
null
q-bio.GN cs.LG
http://creativecommons.org/licenses/by/4.0/
The intra- and inter-species genetic diversity of bacteria and the absence of 'reference', or the most representative, sequences of individual species present a significant challenge for sequence-based identification. The aims of this study were to determine the utility, and compare the performance of several clustering and classification algorithms to identify the species of 364 sequences of 16S rRNA gene with a defined species in GenBank, and 110 sequences of 16S rRNA gene with no defined species, all within the genus Nocardia. A total of 364 16S rRNA gene sequences of Nocardia species were studied. In addition, 110 16S rRNA gene sequences assigned only to the Nocardia genus level at the time of submission to GenBank were used for machine learning classification experiments. Different clustering algorithms were compared with a novel algorithm, the linear mapping (LM) of the distance matrix. Principal Components Analysis was used for the dimensionality reduction and visualization. Results: The LM algorithm achieved the highest performance and classified the set of 364 16S rRNA sequences into 80 clusters, the majority of which (83.52%) corresponded with the original species. The most representative 16S rRNA sequences for individual Nocardia species have been identified as 'centroids' in respective clusters from which the distances to all other sequences were minimized; 110 16S rRNA gene sequences with identifications recorded only at the genus level were classified using machine learning methods. Simple kNN machine learning demonstrated the highest performance and classified Nocardia species sequences with an accuracy of 92.7% and a mean frequency of 0.578.
[ { "created": "Wed, 29 Nov 2023 12:09:02 GMT", "version": "v1" } ]
2023-12-01
[ [ "Helal", "Manal", "" ], [ "Kong", "Fanrong", "" ], [ "Chen", "Sharon C. A.", "" ], [ "Bain", "Michael", "" ], [ "Christen", "Richard", "" ], [ "Sintchenko", "Vitali", "" ] ]
The intra- and inter-species genetic diversity of bacteria and the absence of 'reference', or the most representative, sequences of individual species present a significant challenge for sequence-based identification. The aims of this study were to determine the utility, and compare the performance of several clustering and classification algorithms to identify the species of 364 sequences of 16S rRNA gene with a defined species in GenBank, and 110 sequences of 16S rRNA gene with no defined species, all within the genus Nocardia. A total of 364 16S rRNA gene sequences of Nocardia species were studied. In addition, 110 16S rRNA gene sequences assigned only to the Nocardia genus level at the time of submission to GenBank were used for machine learning classification experiments. Different clustering algorithms were compared with a novel algorithm, the linear mapping (LM) of the distance matrix. Principal Components Analysis was used for the dimensionality reduction and visualization. Results: The LM algorithm achieved the highest performance and classified the set of 364 16S rRNA sequences into 80 clusters, the majority of which (83.52%) corresponded with the original species. The most representative 16S rRNA sequences for individual Nocardia species have been identified as 'centroids' in respective clusters from which the distances to all other sequences were minimized; 110 16S rRNA gene sequences with identifications recorded only at the genus level were classified using machine learning methods. Simple kNN machine learning demonstrated the highest performance and classified Nocardia species sequences with an accuracy of 92.7% and a mean frequency of 0.578.
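The abstract above reports that "simple kNN" performed best. A hedged sketch of that classification step, majority vote among the k nearest training items; here plain Euclidean distance on numeric feature vectors stands in for the paper's 16S rRNA sequence distances, and all names and data are illustrative:

```python
# Hedged sketch of k-nearest-neighbors classification by majority vote.
# Distances and the toy "species" data are illustrative, not from the study.
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Label x by majority vote of its k nearest neighbors in train_X."""
    nearest = sorted(
        range(len(train_X)),
        key=lambda i: sum((a - b) ** 2 for a, b in zip(train_X[i], x)),
    )[:k]
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]

# Toy data: two well-separated groups standing in for two Nocardia species.
X = [(0.0, 0.1), (0.1, 0.0), (0.9, 1.0), (1.0, 0.9)]
y = ["N. farcinica", "N. farcinica", "N. nova", "N. nova"]
print(knn_predict(X, y, (0.05, 0.05), k=3))  # "N. farcinica"
```

With k=3 the query near the first cluster gets two "N. farcinica" votes against one "N. nova", so the majority label wins.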
2103.02048
Javier Rubio-Herrero
Javier Rubio-Herrero and Yuchen Wang
A Flexible Rolling Regression Framework for Time-Varying SIRD models: Application to COVID-19
null
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The present paper introduces a data-driven framework for describing the time-varying nature of an SIRD model in the context of COVID-19. By embedding a rolling regression in a mixed integer bilevel nonlinear programming problem, our aim is to provide the research community with a model that reproduces accurately the observed changes in the number of infected, recovered, and death cases, while providing information about the time dependency of the parameters that govern the SIRD model. We propose this optimization model and a genetic algorithm to tackle its solution. Moreover, we test this algorithm with 2020 COVID-19 data from the state of Minnesota and found that our results are consistent both qualitatively and quantitatively, thus proving that the framework proposed is an effective and flexible tool to describe the dynamics of a pandemic.
[ { "created": "Tue, 2 Mar 2021 21:53:32 GMT", "version": "v1" } ]
2021-03-04
[ [ "Rubio-Herrero", "Javier", "" ], [ "Wang", "Yuchen", "" ] ]
The present paper introduces a data-driven framework for describing the time-varying nature of an SIRD model in the context of COVID-19. By embedding a rolling regression in a mixed integer bilevel nonlinear programming problem, our aim is to provide the research community with a model that reproduces accurately the observed changes in the number of infected, recovered, and death cases, while providing information about the time dependency of the parameters that govern the SIRD model. We propose this optimization model and a genetic algorithm to tackle its solution. Moreover, we test this algorithm with 2020 COVID-19 data from the state of Minnesota and found that our results are consistent both qualitatively and quantitatively, thus proving that the framework proposed is an effective and flexible tool to describe the dynamics of a pandemic.
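To make the SIRD dynamics referenced above concrete, here is a hedged sketch (not the paper's bilevel program) of one explicit-Euler step of an SIRD model whose rates beta, gamma, and mu may change over time; these are the quantities a rolling regression would re-estimate on each window. All names are illustrative:

```python
# Hedged sketch: one explicit-Euler step of a discrete SIRD model with
# (possibly time-varying) transmission beta, recovery gamma, death mu.
# Parameter values below are arbitrary, for illustration only.

def sird_step(S, I, R, D, beta, gamma, mu, N, dt=1.0):
    new_infections = beta * S * I / N          # mass-action incidence
    S_next = S + dt * (-new_infections)
    I_next = I + dt * (new_infections - (gamma + mu) * I)
    R_next = R + dt * (gamma * I)
    D_next = D + dt * (mu * I)
    return S_next, I_next, R_next, D_next

S, I, R, D = sird_step(990.0, 10.0, 0.0, 0.0,
                       beta=0.3, gamma=0.1, mu=0.01, N=1000.0)
print(round(S + I + R + D, 6))  # 1000.0 -- the step conserves total population
```

In a time-varying fit, (beta, gamma, mu) would be re-estimated on each rolling window of observed case counts rather than held fixed.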
1805.05359
Laura Ellwein
Laura Ellwein Fix (1), Joseph Khoury (2), Russell Moores (2), Lauren Linkous (1), Matthew Brandes (3), and Henry J. Rozycki (2) ((1) Department of Mathematics and Applied Mathematics, Virginia Commonwealth University, Richmond, VA, (2) Division of Neonatal Medicine, Children's Hospital of Richmond, Virginia Commonwealth University, Richmond, VA, (3) VCU School of Medicine, Virginia Commonwealth University, Richmond, VA)
Theoretical open-loop model of respiratory mechanics in the extremely preterm infant
22 pages, 5 figures
null
10.1371/journal.pone.0198425
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Non-invasive ventilation is increasingly used for respiratory support in preterm infants, and is associated with a lower risk of chronic lung disease. However, this mode is often not successful in the extremely preterm infant in part due to their markedly increased chest wall compliance that does not provide enough structure against which the forces of inhalation can generate sufficient pressure. To address the continued challenge of studying treatments in this fragile population, we developed a nonlinear lumped-parameter model of respiratory system mechanics of the extremely preterm infant that incorporates nonlinear lung and chest wall compliances and lung volume parameters tuned to this population. In particular we developed a novel empirical representation of progressive volume loss based on compensatory alveolar pressure increase resulting from collapsed alveoli. The model demonstrates increased rate of volume loss related to high chest wall compliance, and simulates laryngeal braking for elevation of end-expiratory lung volume and constant positive airway pressure (CPAP). The model predicts that low chest wall compliance (chest stiffening) in addition to laryngeal braking and CPAP enhance breathing and delay lung volume loss. These results motivate future data collection strategies and investigation into treatments for chest wall stiffening.
[ { "created": "Mon, 14 May 2018 18:03:42 GMT", "version": "v1" } ]
2018-07-04
[ [ "Fix", "Laura Ellwein", "" ], [ "Khoury", "Joseph", "" ], [ "Moores", "Russell", "" ], [ "Linkous", "Lauren", "" ], [ "Brandes", "Matthew", "" ], [ "Rozycki", "Henry J.", "" ] ]
Non-invasive ventilation is increasingly used for respiratory support in preterm infants, and is associated with a lower risk of chronic lung disease. However, this mode is often not successful in the extremely preterm infant in part due to their markedly increased chest wall compliance that does not provide enough structure against which the forces of inhalation can generate sufficient pressure. To address the continued challenge of studying treatments in this fragile population, we developed a nonlinear lumped-parameter model of respiratory system mechanics of the extremely preterm infant that incorporates nonlinear lung and chest wall compliances and lung volume parameters tuned to this population. In particular we developed a novel empirical representation of progressive volume loss based on compensatory alveolar pressure increase resulting from collapsed alveoli. The model demonstrates increased rate of volume loss related to high chest wall compliance, and simulates laryngeal braking for elevation of end-expiratory lung volume and constant positive airway pressure (CPAP). The model predicts that low chest wall compliance (chest stiffening) in addition to laryngeal braking and CPAP enhance breathing and delay lung volume loss. These results motivate future data collection strategies and investigation into treatments for chest wall stiffening.
1909.08553
Asohan Amarasingham
Jonathan Platkiewicz, Zachary Saccomano, Sam McKenzie, Daniel English and Asohan Amarasingham
Monosynaptic inference via finely-timed spikes
45 pages, 11 figures
J Comput Neurosci (2021)
10.1007/s10827-020-00770-5
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Observations of finely-timed spike relationships in population recordings have been used to support partial reconstruction of neural microcircuit diagrams. In this approach, fine-timescale components of paired spike train interactions are isolated and subsequently attributed to synaptic parameters. Recent perturbation studies strengthen the case for such an inference, yet the complete set of measurements needed to calibrate statistical models is unavailable. To address this gap, we study features of pairwise spiking in a large-scale in vivo dataset where presynaptic neurons were explicitly decoupled from network activity by juxtacellular stimulation. We then construct biophysical models of paired spike trains to reproduce the observed phenomenology of in vivo monosynaptic interactions, including both fine-timescale spike-spike correlations and firing irregularity. A key characteristic of these models is that the paired neurons are coupled by rapidly-fluctuating background inputs. We quantify a monosynapse's causal effect by comparing the postsynaptic train with its counterfactual, when the monosynapse is removed. Subsequently, we develop statistical techniques for estimating this causal effect from the pre- and post-synaptic spike trains. A particular focus is the justification and application of a nonparametric separation of timescale principle to implement synaptic inference. Using simulated data generated from the biophysical models, we characterize the regimes in which the estimators accurately identify the monosynaptic effect. A secondary goal is to initiate a critical exploration of neurostatistical assumptions in terms of biophysical mechanisms, particularly with regards to the challenging but arguably fundamental issue of fast, unobservable nonstationarities in background dynamics.
[ { "created": "Wed, 18 Sep 2019 16:22:38 GMT", "version": "v1" }, { "created": "Sat, 5 Sep 2020 23:55:40 GMT", "version": "v2" } ]
2021-02-15
[ [ "Platkiewicz", "Jonathan", "" ], [ "Saccomano", "Zachary", "" ], [ "McKenzie", "Sam", "" ], [ "English", "Daniel", "" ], [ "Amarasingham", "Asohan", "" ] ]
Observations of finely-timed spike relationships in population recordings have been used to support partial reconstruction of neural microcircuit diagrams. In this approach, fine-timescale components of paired spike train interactions are isolated and subsequently attributed to synaptic parameters. Recent perturbation studies strengthen the case for such an inference, yet the complete set of measurements needed to calibrate statistical models is unavailable. To address this gap, we study features of pairwise spiking in a large-scale in vivo dataset where presynaptic neurons were explicitly decoupled from network activity by juxtacellular stimulation. We then construct biophysical models of paired spike trains to reproduce the observed phenomenology of in vivo monosynaptic interactions, including both fine-timescale spike-spike correlations and firing irregularity. A key characteristic of these models is that the paired neurons are coupled by rapidly-fluctuating background inputs. We quantify a monosynapse's causal effect by comparing the postsynaptic train with its counterfactual, when the monosynapse is removed. Subsequently, we develop statistical techniques for estimating this causal effect from the pre- and post-synaptic spike trains. A particular focus is the justification and application of a nonparametric separation of timescale principle to implement synaptic inference. Using simulated data generated from the biophysical models, we characterize the regimes in which the estimators accurately identify the monosynaptic effect. A secondary goal is to initiate a critical exploration of neurostatistical assumptions in terms of biophysical mechanisms, particularly with regards to the challenging but arguably fundamental issue of fast, unobservable nonstationarities in background dynamics.
2202.11182
Matthew Macauley
Isadora Deal, Matthew Macauley, and Robin Davies
Boolean models of the transport, synthesis, and metabolism of tryptophan in Escherichia Coli
33 pages, 12 figures
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The tryptophan (trp) operon in E. coli codes for the proteins responsible for the synthesis of the amino acid tryptophan from chorismic acid, and has been one of the most well-studied gene networks since its discovery in the 1960s. The tryptophanase (tna) operon codes for proteins needed to transport and metabolize it. Both of these have been modeled individually with delay differential equations under the assumption of mass-action kinetics. Recent work has provided strong evidence for bistable behavior of the tna operon. The authors of (Orozco, 2019) identified a medium range of tryptophan in which the system has two stable steady-states, and they reproduced these experimentally. In this paper, we will show how a Boolean model can capture this bistability. We will also develop and analyze a Boolean model of the trp operon. Finally, we will combine these two to create a single Boolean model of the transport, synthesis, and metabolism of tryptophan. In this amalgamated model, the bistability disappears, presumably reflecting the ability of the trp operon to produce tryptophan and drive the system toward homeostasis. All of these models have longer attractors that we call "artifacts of synchrony", which disappear in the asynchronous automata. This curiously matches the behavior of a recent Boolean model of the arabinose operon in E. coli, and we discuss some open-ended questions that arise along these lines.
[ { "created": "Tue, 22 Feb 2022 21:22:29 GMT", "version": "v1" } ]
2022-02-24
[ [ "Deal", "Isadora", "" ], [ "Macauley", "Matthew", "" ], [ "Davies", "Robin", "" ] ]
The tryptophan (trp) operon in E. coli codes for the proteins responsible for the synthesis of the amino acid tryptophan from chorismic acid, and has been one of the most well-studied gene networks since its discovery in the 1960s. The tryptophanase (tna) operon codes for proteins needed to transport and metabolize it. Both of these have been modeled individually with delay differential equations under the assumption of mass-action kinetics. Recent work has provided strong evidence for bistable behavior of the tna operon. The authors of (Orozco, 2019) identified a medium range of tryptophan in which the system has two stable steady-states, and they reproduced these experimentally. In this paper, we will show how a Boolean model can capture this bistability. We will also develop and analyze a Boolean model of the trp operon. Finally, we will combine these two to create a single Boolean model of the transport, synthesis, and metabolism of tryptophan. In this amalgamated model, the bistability disappears, presumably reflecting the ability of the trp operon to produce tryptophan and drive the system toward homeostasis. All of these models have longer attractors that we call "artifacts of synchrony", which disappear in the asynchronous automata. This curiously matches the behavior of a recent Boolean model of the arabinose operon in E. coli, and we discuss some open-ended questions that arise along these lines.
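The abstract above mentions "artifacts of synchrony": attractors that exist under a synchronous Boolean update but disappear asynchronously. A hedged, generic illustration (a toy two-gene network, not the paper's tryptophan model; all names are assumptions):

```python
# Hedged illustration of an "artifact of synchrony": under a synchronous update
# the toy network x' = not y, y' = not x has a 2-cycle attractor that a
# one-gene-at-a-time (asynchronous) update cannot sustain.

def sync_attractor_length(update, state):
    """Iterate a synchronous update until a state repeats; return cycle length."""
    seen = {}
    t = 0
    while state not in seen:
        seen[state] = t
        state = update(state)
        t += 1
    return t - seen[state]

toggle = lambda s: (not s[1], not s[0])
print(sync_attractor_length(toggle, (False, False)))  # 2: the synchronous cycle
# Asynchronously, updating x alone from (False, False) reaches the fixed
# point (True, False), which the synchronous 2-cycle never visits.
```

The 2-cycle (False, False) -> (True, True) -> (False, False) requires both genes to flip at the same instant; updating one gene at a time lands on a fixed point instead, which is the generic mechanism behind such synchronous artifacts.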
2405.04011
Wei Xie
Keilung Choy and Wei Xie
Adjoint Sensitivity Analysis on Multi-Scale Bioprocess Stochastic Reaction Network
11 pages, 2 figures
null
null
null
q-bio.MN stat.ML
http://creativecommons.org/licenses/by/4.0/
Motivated by the pressing challenges in the digital twin development for biomanufacturing systems, we introduce an adjoint sensitivity analysis (SA) approach to expedite the learning of mechanistic model parameters. In this paper, we consider enzymatic stochastic reaction networks representing a multi-scale bioprocess mechanistic model that allows us to integrate disparate data from diverse production processes and leverage the information from existing macro-kinetic and genome-scale models. To support forward prediction and backward reasoning, we develop a convergent adjoint SA algorithm studying how the perturbations of model parameters and inputs (e.g., initial state) propagate through enzymatic reaction networks and impact output trajectory predictions. This SA can provide a sample-efficient and interpretable way to assess the sensitivities between inputs and outputs accounting for their causal dependencies. Our empirical study underscores the resilience of these sensitivities and illuminates a deeper comprehension of the regulatory mechanisms behind bioprocess through sensitivities.
[ { "created": "Tue, 7 May 2024 05:06:45 GMT", "version": "v1" }, { "created": "Fri, 28 Jun 2024 21:50:16 GMT", "version": "v2" } ]
2024-07-02
[ [ "Choy", "Keilung", "" ], [ "Xie", "Wei", "" ] ]
Motivated by the pressing challenges in the digital twin development for biomanufacturing systems, we introduce an adjoint sensitivity analysis (SA) approach to expedite the learning of mechanistic model parameters. In this paper, we consider enzymatic stochastic reaction networks representing a multi-scale bioprocess mechanistic model that allows us to integrate disparate data from diverse production processes and leverage the information from existing macro-kinetic and genome-scale models. To support forward prediction and backward reasoning, we develop a convergent adjoint SA algorithm studying how the perturbations of model parameters and inputs (e.g., initial state) propagate through enzymatic reaction networks and impact output trajectory predictions. This SA can provide a sample-efficient and interpretable way to assess the sensitivities between inputs and outputs accounting for their causal dependencies. Our empirical study underscores the resilience of these sensitivities and illuminates a deeper comprehension of the regulatory mechanisms behind bioprocess through sensitivities.
1706.03085
Jacqueline Wentz
J. M. Wentz (University of Colorado, Boulder), A. Mendenhall (University of Washington, Seattle), D. M. Bortz (University of Colorado, Boulder)
Pattern Formation in the Longevity-Related Expression of Heat Shock Protein-16.2 in Caenorhabditis elegans
32 pages with appendix, 10 figures, 1 table
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Aging in Caenorhabditis elegans is controlled, in part, by the insulin-like signaling and heat shock response pathways. Following thermal stress, expression levels of small heat shock protein 16.2 show a spatial patterning across the 20 intestinal cells that reside along the length of the worm. Here, we present a hypothesized mechanism that could lead to this patterned response and develop a mathematical model of this system to test our hypothesis. We propose that the patterned expression of heat shock protein is caused by a diffusion-driven instability within the pseudocoelom, or fluid-filled cavity, that borders the intestinal cells in C. elegans. This instability is due to the interactions between two classes of insulin-like peptides that serve antagonistic roles. We examine output from the developed model and compare it to experimental data on heat shock protein expression. Furthermore, we use the model to gain insight on possible biological parameters in the system. The model presented is capable of producing patterns similar to what is observed experimentally and provides a first step in mathematically modeling aging-related mechanisms in C. elegans.
[ { "created": "Fri, 9 Jun 2017 18:29:30 GMT", "version": "v1" } ]
2017-06-13
[ [ "Wentz", "J. M.", "", "University of Colorado, Boulder" ], [ "Mendenhall", "A.", "", "University of Washington, Seattle" ], [ "Bortz", "D. M.", "", "University of Colorado, Boulder" ] ]
Aging in Caenorhabditis elegans is controlled, in part, by the insulin-like signaling and heat shock response pathways. Following thermal stress, expression levels of small heat shock protein 16.2 show a spatial patterning across the 20 intestinal cells that reside along the length of the worm. Here, we present a hypothesized mechanism that could lead to this patterned response and develop a mathematical model of this system to test our hypothesis. We propose that the patterned expression of heat shock protein is caused by a diffusion-driven instability within the pseudocoelom, or fluid-filled cavity, that borders the intestinal cells in C. elegans. This instability is due to the interactions between two classes of insulin-like peptides that serve antagonistic roles. We examine output from the developed model and compare it to experimental data on heat shock protein expression. Furthermore, we use the model to gain insight on possible biological parameters in the system. The model presented is capable of producing patterns similar to what is observed experimentally and provides a first step in mathematically modeling aging-related mechanisms in C. elegans.
q-bio/0606030
Herculano Martinho
Sergio Godoy Penteado, Claudio S. Meneses, Anderson de Oliveira Lobo, Airton Abrahao Martin, Herculano da Silva Martinho
Diagnosis of rotator cuff lesions by FT-Raman spectroscopy: a biochemical study
17 pages, presented on SPEC 2006-Heidelberg, Germany
null
null
null
q-bio.TO q-bio.BM
null
The biochemical changes in normal and degenerated tissues of rotator cuff supraspinatus tendons were probed by FT-Raman spectroscopy. The Raman spectra showed differences on the spectral regions of cysteine, amino acids, nucleic acids, carbohydrates, and lipids. These spectral differences were assigned to pathological biochemical alterations due to the degenerative process of the tendon. Principal Components Analysis was performed on the spectral data and enabled the correct classification of the spectra as normal (grade 1) and degenerated (grades 2 and 3). These findings indicate that Raman spectroscopy could be a very promising tool for the diagnosis of rotator cuff supraspinatus tendon lesions and for the quantification of their degree of degeneration.
[ { "created": "Thu, 22 Jun 2006 12:16:43 GMT", "version": "v1" } ]
2007-05-23
[ [ "Penteado", "Sergio Godoy", "" ], [ "Meneses", "Claudio S.", "" ], [ "Lobo", "Anderson de Oliveira", "" ], [ "Martin", "Airton Abrahao", "" ], [ "Martinho", "Herculano da Silva", "" ] ]
The biochemical changes in normal and degenerated tissues of rotator cuff supraspinatus tendons were probed by FT-Raman spectroscopy. The Raman spectra showed differences on the spectral regions of cysteine, amino acids, nucleic acids, carbohydrates, and lipids. These spectral differences were assigned to pathological biochemical alterations due to the degenerative process of the tendon. Principal Components Analysis was performed on the spectral data and enabled the correct classification of the spectra as normal (grade 1) and degenerated (grades 2 and 3). These findings indicate that Raman spectroscopy could be a very promising tool for the diagnosis of rotator cuff supraspinatus tendon lesions and for the quantification of their degree of degeneration.
1810.11522
Yoel Shkolnisky
Ido Greenberg and Yoel Shkolnisky
Common lines modeling for reference free ab-initio reconstruction in cryo-EM
null
Journal of Structural Biology, 200(2): 106-117, 2017
10.1016/j.jsb.2017.09.007
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of estimating an unbiased and reference-free ab-initio model for non-symmetric molecules from images generated by single-particle cryo-electron microscopy. The proposed algorithm finds the globally optimal assignment of orientations that simultaneously respects all common lines between all images. The contribution of each common line to the estimated orientations is weighted according to a statistical model for common lines' detection errors. The key property of the proposed algorithm is that it finds the global optimum for the orientations given the common lines. In particular, any local optima in the common lines energy landscape do not affect the proposed algorithm. As a result, it is applicable to thousands of images at once, very robust to noise, completely reference free, and not biased towards any initial model. A byproduct of the algorithm is a set of measures that allows assessing the reliability of the obtained ab-initio model. We demonstrate the algorithm using class averages from two experimental data sets, resulting in ab-initio models with resolutions of 20A or better, even from class averages consisting of as few as three raw images per class.
[ { "created": "Fri, 26 Oct 2018 20:35:15 GMT", "version": "v1" } ]
2018-10-30
[ [ "Greenberg", "Ido", "" ], [ "Shkolnisky", "Yoel", "" ] ]
We consider the problem of estimating an unbiased and reference-free ab-initio model for non-symmetric molecules from images generated by single-particle cryo-electron microscopy. The proposed algorithm finds the globally optimal assignment of orientations that simultaneously respects all common lines between all images. The contribution of each common line to the estimated orientations is weighted according to a statistical model for common lines' detection errors. The key property of the proposed algorithm is that it finds the global optimum for the orientations given the common lines. In particular, any local optima in the common lines energy landscape do not affect the proposed algorithm. As a result, it is applicable to thousands of images at once, very robust to noise, completely reference free, and not biased towards any initial model. A byproduct of the algorithm is a set of measures that allows assessing the reliability of the obtained ab-initio model. We demonstrate the algorithm using class averages from two experimental data sets, resulting in ab-initio models with resolutions of 20A or better, even from class averages consisting of as few as three raw images per class.
1903.07585
Marianna Colasuonno
Marianna Colasuonno, Anna Lisa Palange, Rachida Aid, Miguel Ferreira, Hilaria Mollica, Roberto Palomba, Michele Emdin, Massimo Del Sette, C\'edric Chauvierre, Didier Letourneur, Paolo Decuzzi
Erythrocyte-Inspired Discoidal Polymeric Nanoconstructs carrying Tissue Plasminogen Activator for the Enhanced Lysis of Blood Clots
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tissue plasminogen activator (tPA) is the sole approved therapeutic molecule for the treatment of acute ischemic stroke. Yet, only a small percentage of patients could benefit from this life-saving treatment because of medical contraindications and severe side effects, including brain hemorrhage, associated with delayed administration. Here, a nano therapeutic agent is realized by directly associating the clinical formulation of tPA to the porous structure of soft discoidal polymeric nanoconstructs (tPA-DPNs). The porous matrix of DPNs protects tPA from rapid degradation, allowing tPA-DPNs to preserve over 70 % of the tPA original activity after 3 h of exposure to serum proteins. Under dynamic conditions, tPA-DPNs dissolve clots more efficiently than free tPA, as demonstrated in a microfluidic chip where clots are formed mimicking in vivo conditions. At 60 min post treatment initiation, the clot area reduces by half (57 ± 8 %) with tPA-DPNs, whereas a similar result (56 ± 21 %) is obtained only after 90 min for free tPA. In murine mesentery venules, the intravenous administration of 2.5 mg/kg of tPA-DPNs resolves almost 90 % of the blood clots, whereas a similar dose of free tPA successfully recanalizes only about 40 % of the treated vessels. At about 1/10 of the clinical dose (1.0 mg/kg), tPA-DPNs still effectively dissolve 70 % of the clots, whereas free tPA works efficiently only on 16 % of the vessels. In vivo, discoidal tPA-DPNs outperform the lytic activity of 200 nm spherical tPA-coated nanoconstructs in terms of both percentage of successful recanalization events and clot area reduction. The conjugation of tPA with preserved lytic activity, the deformability and blood circulating time of DPNs, together with the faster blood clot dissolution, would make tPA-DPNs a promising nanotool for enhancing both the potency and safety of thrombolytic therapies.
[ { "created": "Fri, 8 Mar 2019 11:17:10 GMT", "version": "v1" } ]
2019-03-19
[ [ "Colasuonno", "Marianna", "" ], [ "Palange", "Anna Lisa", "" ], [ "Aid", "Rachida", "" ], [ "Ferreira", "Miguel", "" ], [ "Mollica", "Hilaria", "" ], [ "Palomba", "Roberto", "" ], [ "Emdin", "Michele", "" ...
Tissue plasminogen activator (tPA) is the sole approved therapeutic molecule for the treatment of acute ischemic stroke. Yet, only a small percentage of patients could benefit from this life-saving treatment because of medical contraindications and severe side effects, including brain hemorrhage, associated with delayed administration. Here, a nano therapeutic agent is realized by directly associating the clinical formulation of tPA to the porous structure of soft discoidal polymeric nanoconstructs (tPA-DPNs). The porous matrix of DPNs protects tPA from rapid degradation, allowing tPA-DPNs to preserve over 70 % of the tPA original activity after 3 h of exposure to serum proteins. Under dynamic conditions, tPA-DPNs dissolve clots more efficiently than free tPA, as demonstrated in a microfluidic chip where clots are formed mimicking in vivo conditions. At 60 min post treatment initiation, the clot area reduces by half (57 ± 8 %) with tPA-DPNs, whereas a similar result (56 ± 21 %) is obtained only after 90 min for free tPA. In murine mesentery venules, the intravenous administration of 2.5 mg/kg of tPA-DPNs resolves almost 90 % of the blood clots, whereas a similar dose of free tPA successfully recanalizes only about 40 % of the treated vessels. At about 1/10 of the clinical dose (1.0 mg/kg), tPA-DPNs still effectively dissolve 70 % of the clots, whereas free tPA works efficiently only on 16 % of the vessels. In vivo, discoidal tPA-DPNs outperform the lytic activity of 200 nm spherical tPA-coated nanoconstructs in terms of both percentage of successful recanalization events and clot area reduction. The conjugation of tPA with preserved lytic activity, the deformability and blood circulating time of DPNs, together with the faster blood clot dissolution, would make tPA-DPNs a promising nanotool for enhancing both the potency and safety of thrombolytic therapies.
q-bio/0412030
Wan Ahmad Tajuddin Wan Abdullah
Wan Ahmad Tajuddin Wan Abdullah (Universiti Malaya, Kuala Lumpur)
Love before Sex
Paper presented at PERFIK 2004, Oct. 2004, Kuala Lumpur. 8 pages. pdf only
null
null
null
q-bio.PE
null
Much has been debated about the benefit of sexual over asexual reproduction in terms of evolutionary fitness. Here we focus on the advantage that may be brought about by the process of mating, where the choosing of mates contributes to the increase in fitness in a constructive way. We carry out computer simulations of such mating systems and investigate, on one hand, how mate phenotypes contribute to offspring fitness, and, on the other hand, how selection affects mate phenotypes. We discuss how helpful such a mechanism may be in determining trajectories on rugged energy landscapes leading to global optimum.
[ { "created": "Thu, 16 Dec 2004 05:26:05 GMT", "version": "v1" } ]
2007-05-23
[ [ "Abdullah", "Wan Ahmad Tajuddin Wan", "", "Universiti Malaya, Kuala Lumpur" ] ]
Much has been debated about the benefit of sexual over asexual reproduction in terms of evolutionary fitness. Here we focus on the advantage that may be brought about by the process of mating, where the choosing of mates contributes to the increase in fitness in a constructive way. We carry out computer simulations of such mating systems and investigate, on one hand, how mate phenotypes contribute to offspring fitness, and, on the other hand, how selection affects mate phenotypes. We discuss how helpful such a mechanism may be in determining trajectories on rugged energy landscapes leading to global optimum.
1505.01138
Sebastian Kmiecik
Maciej Blaszczyk, Mateusz Kurcinski, Maksim Kouza, Lukasz Wieteska, Aleksander Debinski, Andrzej Kolinski, Sebastian Kmiecik
Modeling of protein-peptide interactions using the CABS-dock web server for binding site search and flexible docking
Published in Methods journal, available online 10 July 2015
Methods, 93:72-83, 2016
10.1016/j.ymeth.2015.07.004
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Protein-peptide interactions play essential functional roles in living organisms and their structural characterization is a hot subject of current experimental and theoretical research. Computational modeling of the structure of protein-peptide interactions is usually divided into two stages: prediction of the binding site at a protein receptor surface, and then docking (and modeling) the peptide structure into the known binding site. This paper presents a comprehensive CABS-dock method for the simultaneous search of binding sites and flexible protein-peptide docking, available as a user-friendly web server. We present example CABS-dock results obtained in the default CABS-dock mode and using its advanced options that enable the user to increase the range of flexibility for chosen receptor fragments or to exclude user-selected binding modes from the docking search. Furthermore, we demonstrate a strategy to improve CABS-dock performance by assessing the quality of models with classical molecular dynamics. Finally, we discuss the promising extensions and applications of the CABS-dock method and provide a tutorial appendix for the convenient analysis and visualization of CABS-dock results. The CABS-dock web server is freely available at http://biocomp.chem.uw.edu.pl/CABSdock/
[ { "created": "Tue, 5 May 2015 19:34:52 GMT", "version": "v1" }, { "created": "Tue, 28 Jul 2015 14:27:05 GMT", "version": "v2" } ]
2016-01-12
[ [ "Blaszczyk", "Maciej", "" ], [ "Kurcinski", "Mateusz", "" ], [ "Kouza", "Maksim", "" ], [ "Wieteska", "Lukasz", "" ], [ "Debinski", "Aleksander", "" ], [ "Kolinski", "Andrzej", "" ], [ "Kmiecik", "Sebastian", ...
Protein-peptide interactions play essential functional roles in living organisms and their structural characterization is a hot subject of current experimental and theoretical research. Computational modeling of the structure of protein-peptide interactions is usually divided into two stages: prediction of the binding site at a protein receptor surface, and then docking (and modeling) the peptide structure into the known binding site. This paper presents a comprehensive CABS-dock method for the simultaneous search of binding sites and flexible protein-peptide docking, available as a user-friendly web server. We present example CABS-dock results obtained in the default CABS-dock mode and using its advanced options that enable the user to increase the range of flexibility for chosen receptor fragments or to exclude user-selected binding modes from the docking search. Furthermore, we demonstrate a strategy to improve CABS-dock performance by assessing the quality of models with classical molecular dynamics. Finally, we discuss the promising extensions and applications of the CABS-dock method and provide a tutorial appendix for the convenient analysis and visualization of CABS-dock results. The CABS-dock web server is freely available at http://biocomp.chem.uw.edu.pl/CABSdock/
2104.05991
Nicol\'as Gallego
Nicol\'as Gallego-Molina, Marco Formoso, Andr\'es Ortiz, Francisco J. Mart\'inez-Murcia, Juan L. Luque
Temporal EigenPAC for dyslexia diagnosis
null
null
10.1007/978-3-030-85099-9_4
null
q-bio.NC cs.LG eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Electroencephalography signals allow exploring the functional activity of the brain cortex in a non-invasive way. However, the analysis of these signals is not straightforward due to the presence of different artifacts and the very low signal-to-noise ratio. Cross-Frequency Coupling (CFC) methods provide a way to extract information from EEG related to the synchronization among frequency bands. However, CFC methods are usually applied in a local way, computing the interaction between phase and amplitude at the same electrode. In this work we show a method to compute PAC features among electrodes to study the functional connectivity. Moreover, this has been applied jointly with Principal Component Analysis to explore patterns related to Dyslexia in 7-year-old children. The developed methodology reveals the temporal evolution of PAC-based connectivity. Directions of greatest variance computed by PCA are called eigenPACs here, since they resemble the classical \textit{eigenfaces} representation. The projection of PAC data onto the eigenPACs provides a set of features that demonstrates discriminative capability, specifically in the Beta-Gamma bands.
[ { "created": "Tue, 13 Apr 2021 07:51:07 GMT", "version": "v1" } ]
2022-01-24
[ [ "Gallego-Molina", "Nicolás", "" ], [ "Formoso", "Marco", "" ], [ "Ortiz", "Andrés", "" ], [ "Martínez-Murcia", "Francisco J.", "" ], [ "Luque", "Juan L.", "" ] ]
Electroencephalography signals allow exploring the functional activity of the brain cortex in a non-invasive way. However, the analysis of these signals is not straightforward due to the presence of different artifacts and the very low signal-to-noise ratio. Cross-Frequency Coupling (CFC) methods provide a way to extract information from EEG related to the synchronization among frequency bands. However, CFC methods are usually applied in a local way, computing the interaction between phase and amplitude at the same electrode. In this work we show a method to compute PAC features among electrodes to study the functional connectivity. Moreover, this has been applied jointly with Principal Component Analysis to explore patterns related to Dyslexia in 7-year-old children. The developed methodology reveals the temporal evolution of PAC-based connectivity. Directions of greatest variance computed by PCA are called eigenPACs here, since they resemble the classical \textit{eigenfaces} representation. The projection of PAC data onto the eigenPACs provides a set of features that demonstrates discriminative capability, specifically in the Beta-Gamma bands.
2001.09411
Farshid Mohammad-Rafiee
Fatemeh Khodabandeh, Hashem Fatemi, and Farshid Mohammad-Rafiee
Insight into the Unwrapping of the Dinucleosome
null
null
null
null
q-bio.BM cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The dynamics of nucleosomes, the building blocks of the chromatin, has crucial effects on the expression, replication and repair of genomes in eukaryotes. Besides the constant movements of nucleosomes by thermal fluctuations, ATP-dependent chromatin remodelling complexes cause their active displacements. Here we propose a theoretical analysis of dinucleosome wrapping and unwrapping dynamics in the presence of an external force. We explore the energy landscape and configurations of the dinucleosome in different unwrapped states. Moreover, using a dynamical Monte-Carlo simulation algorithm, we demonstrate the dynamical features of the system such as the unwrapping force for partial and full wrapping processes. Furthermore, we show that for short linker DNA lengths ($\sim 10 - 90$ bp), asymmetric unwrapping occurs. These findings could shed some light on chromatin dynamics and gene accessibility.
[ { "created": "Sun, 26 Jan 2020 06:30:19 GMT", "version": "v1" } ]
2020-01-28
[ [ "Khodabandeh", "Fatemeh", "" ], [ "Fatemi", "Hashem", "" ], [ "Mohammad-Rafiee", "Farshid", "" ] ]
The dynamics of nucleosomes, the building blocks of the chromatin, has crucial effects on the expression, replication and repair of genomes in eukaryotes. Besides the constant movements of nucleosomes by thermal fluctuations, ATP-dependent chromatin remodelling complexes cause their active displacements. Here we propose a theoretical analysis of dinucleosome wrapping and unwrapping dynamics in the presence of an external force. We explore the energy landscape and configurations of the dinucleosome in different unwrapped states. Moreover, using a dynamical Monte-Carlo simulation algorithm, we demonstrate the dynamical features of the system such as the unwrapping force for partial and full wrapping processes. Furthermore, we show that for short linker DNA lengths ($\sim 10 - 90$ bp), asymmetric unwrapping occurs. These findings could shed some light on chromatin dynamics and gene accessibility.
2205.02118
Antonio Batista
Matheus Hansen, Paulo R. Protachevicz, Kelly C. Iarosz, Ibere L. Caldas, Antonio M. Batista, Elbert E. N. Macau
The effect of time delay for synchronisation suppression in neuronal networks
null
null
10.1016/j.chaos.2022.112690
null
q-bio.NC nlin.CD
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the time delay in the synaptic conductance for suppression of spike synchronisation in a random network of Hodgkin-Huxley neurons coupled by means of chemical synapses. In the first part, we examine in detail how the time delay acts over the network during the synchronised and desynchronised neuronal activities. We observe a relation between the neuronal dynamics and the synaptic conductance distributions. We find parameter values in which the time delay has high effectiveness in promoting the suppression of spike synchronisation. In the second part, we analyse how the delayed neuronal networks react when pulsed inputs with different profiles (periodic, random, and mixed) are applied to the neurons. We show the main parameters responsible for inducing or not synchronous neuronal oscillations in delayed networks.
[ { "created": "Sat, 30 Apr 2022 14:10:04 GMT", "version": "v1" } ]
2022-10-19
[ [ "Hansen", "Matheus", "" ], [ "Protachevicz", "Paulo R.", "" ], [ "Iarosz", "Kelly C.", "" ], [ "Caldas", "Ibere L.", "" ], [ "Batista", "Antonio M.", "" ], [ "Macau", "Elbert E. N.", "" ] ]
We study the time delay in the synaptic conductance for suppression of spike synchronisation in a random network of Hodgkin-Huxley neurons coupled by means of chemical synapses. In the first part, we examine in detail how the time delay acts over the network during the synchronised and desynchronised neuronal activities. We observe a relation between the neuronal dynamics and the synaptic conductance distributions. We find parameter values in which the time delay has high effectiveness in promoting the suppression of spike synchronisation. In the second part, we analyse how the delayed neuronal networks react when pulsed inputs with different profiles (periodic, random, and mixed) are applied to the neurons. We show the main parameters responsible for inducing or not synchronous neuronal oscillations in delayed networks.
1802.02523
Zg Ma
John Z. G. Ma
Plasma Brain Dynamics (PBD): A Mechanism for EEG Waves Under Human Consciousness
null
Cosmos and History: The Journal of Natural and Social Philosophy, Vol 13, No 2 (2017)
null
null
q-bio.NC physics.med-ph
http://creativecommons.org/publicdomain/zero/1.0/
EEG signals are records of nonlinear solitary waves in human brains. The waves have several types (e.g., a, b, g, q, d) in response to different levels of consciousness. They are classified into two groups: Group-1 consists of complex storm-like waves (a, b, and g); Group-2 is composed of simple quasilinear waves (q and d). In order to elucidate the mechanism of EEG wave formation and propagation, this paper extends the Vlasov-Maxwell equations of Plasma Brain Dynamics (PBD) to a set of two-fluid, self-similar, nonlinear solitary wave equations. Numerical simulations are performed for different EEG signals. Main results include: (1) The excitation and propagation of the EEG wave packets are dependent on electric and magnetic fields, brain aqua-ions, electron and ion temperatures, masses, and their initial fluid speeds; (2) Group-1 complex waves contain three ingredients: the high-frequency ion-acoustic (IA) mode, the intermediate-frequency lower-hybrid (LH) mode, and the low-frequency ion-cyclotron (IC) mode; (3) Group-2 simple waves fall within the IA band, featured by one or a combination of the three envelopes: sinusoidal, sawtooth, and spiky/bipolar. The study proposes an alternative model to Quantum Brain Dynamics (QBD) by suggesting that the formation and propagation of the nonlinear solitary EEG waves in the brain have the same mechanism as that of the waves in space plasmas.
[ { "created": "Tue, 16 Jan 2018 01:51:18 GMT", "version": "v1" } ]
2019-07-22
[ [ "Ma", "John Z. G.", "" ] ]
EEG signals are records of nonlinear solitary waves in human brains. The waves have several types (e.g., a, b, g, q, d) in response to different levels of consciousness. They are classified into two groups: Group-1 consists of complex storm-like waves (a, b, and g); Group-2 is composed of simple quasilinear waves (q and d). In order to elucidate the mechanism of EEG wave formation and propagation, this paper extends the Vlasov-Maxwell equations of Plasma Brain Dynamics (PBD) to a set of two-fluid, self-similar, nonlinear solitary wave equations. Numerical simulations are performed for different EEG signals. Main results include: (1) The excitation and propagation of the EEG wave packets are dependent on electric and magnetic fields, brain aqua-ions, electron and ion temperatures, masses, and their initial fluid speeds; (2) Group-1 complex waves contain three ingredients: the high-frequency ion-acoustic (IA) mode, the intermediate-frequency lower-hybrid (LH) mode, and the low-frequency ion-cyclotron (IC) mode; (3) Group-2 simple waves fall within the IA band, featured by one or a combination of the three envelopes: sinusoidal, sawtooth, and spiky/bipolar. The study proposes an alternative model to Quantum Brain Dynamics (QBD) by suggesting that the formation and propagation of the nonlinear solitary EEG waves in the brain have the same mechanism as that of the waves in space plasmas.
2303.16743
Florian Jug
Damian Edward Dalle Nogare, Matthew Hartley, Joran Deschamps, Jan Ellenberg, Florian Jug
bAIoimage analysis: elevating the rate of scientific discovery -- as a community
5 pages, 1 figure, opinion
null
10.1038/s41592-023-01929-5
null
q-bio.OT eess.IV
http://creativecommons.org/licenses/by-nc-sa/4.0/
The future of bioimage analysis is increasingly defined by the development and use of tools that rely on deep learning and artificial intelligence (AI). For this trend to continue in a way most useful for stimulating scientific progress, it will require our multidisciplinary community to work together, establish FAIR data sharing and deliver usable, reproducible analytical tools.
[ { "created": "Wed, 29 Mar 2023 14:49:40 GMT", "version": "v1" } ]
2023-08-31
[ [ "Nogare", "Damian Edward Dalle", "" ], [ "Hartley", "Matthew", "" ], [ "Deschamps", "Joran", "" ], [ "Ellenberg", "Jan", "" ], [ "Jug", "Florian", "" ] ]
The future of bioimage analysis is increasingly defined by the development and use of tools that rely on deep learning and artificial intelligence (AI). For this trend to continue in a way most useful for stimulating scientific progress, it will require our multidisciplinary community to work together, establish FAIR data sharing and deliver usable, reproducible analytical tools.
1012.2025
Adriano Barra Dr.
Adriano Barra, Silvio Franz, Thiago Sabetta
Some thoughts on the ontogenesis in B-cell immune networks
null
null
null
null
q-bio.CB physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We are interested in modeling theoretical immunology with a statistical mechanics flavor: focusing on the antigen-independent maturation process of B-cells, in this paper we try to revise the problem of self vs non-self discrimination by mature B lymphocytes. We consider only B lymphocytes: although this is of course an oversimplification, such a toy model may help to highlight features of their interactions otherwise shadowed by the main driving mechanisms due to, e.g., helper T-cell signalling. By analyzing possible influences of the ontogenesis of the immune system on the final behavior of B lymphocytes, we try to merge the purely negative selection mechanism at their birth with the adult self-regulation process. The final goal is a "thermodynamical picture" in which both scenarios can exist and, actually, be synergistically complementary: Through numerical simulations we impose, on a recent scheme for B-cell interactions, that part of the self-reactive lymphocytes are killed during the ontogenesis, from which two observations stem: At first, the system so built is able to show anergy with respect to the previously encountered self even in its mature life; this then naturally leads to an increasing variance (and average) in the connectivity distribution of the resulting idiotypic network. As a consequence, following the Varela perspective, this shift may contribute to pushing to anergy those self-directed cells which are free to explore the body: identifying the latter as the highly connected ones, anergy is imposed even via the B-network regulation, and its strength is influenced by the negative selection.
[ { "created": "Thu, 9 Dec 2010 15:05:03 GMT", "version": "v1" } ]
2010-12-10
[ [ "Barra", "Adriano", "" ], [ "Franz", "Silvio", "" ], [ "Sabetta", "Thiago", "" ] ]
We are interested in modeling theoretical immunology with a statistical mechanics flavor: focusing on the antigen-independent maturation process of B-cells, in this paper we try to revise the problem of self vs non-self discrimination by mature B lymphocytes. We consider only B lymphocytes: although this is of course an oversimplification, such a toy model may help to highlight features of their interactions otherwise shadowed by the main driving mechanisms due to, e.g., helper T-cell signalling. By analyzing possible influences of the ontogenesis of the immune system on the final behavior of B lymphocytes, we try to merge the purely negative selection mechanism at their birth with the adult self-regulation process. The final goal is a "thermodynamical picture" in which both scenarios can exist and, actually, be synergistically complementary: Through numerical simulations we impose, on a recent scheme for B-cell interactions, that part of the self-reactive lymphocytes are killed during the ontogenesis, from which two observations stem: At first, the system so built is able to show anergy with respect to the previously encountered self even in its mature life; this then naturally leads to an increasing variance (and average) in the connectivity distribution of the resulting idiotypic network. As a consequence, following the Varela perspective, this shift may contribute to pushing to anergy those self-directed cells which are free to explore the body: identifying the latter as the highly connected ones, anergy is imposed even via the B-network regulation, and its strength is influenced by the negative selection.
1208.6497
Christel Kamp
Christel Kamp, Mathieu Moslonka-Lefebvre, Samuel Alizon
Predicting epidemics on weighted networks
20 pages, 3 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The contact structure between hosts has a critical influence on disease spread. However, most network-based models used in epidemiology tend to ignore heterogeneity in the weighting of contacts. This assumption is known to be at odds with the data for many contact networks (e.g. sexual contact networks) and to have a strong effect on the predictions of epidemiological models. One of the reasons why models usually ignore heterogeneity in transmission is that we currently lack tools to analyze weighted networks, such that most studies rely on numerical simulations. Here, we present a novel framework to estimate key epidemiological variables, such as the rate of early epidemic expansion and the basic reproductive ratio, from joint probability distributions of number of partners (contacts) and number of interaction events through which contacts are weighted. This framework also allows for a derivation of the full time course of epidemic prevalence and contact behaviour which is validated using numerical simulations. Our framework allows for the incorporation of more realistic contact networks into epidemiological models, thus improving predictions on the spread of emerging infectious diseases.
[ { "created": "Fri, 31 Aug 2012 13:47:29 GMT", "version": "v1" } ]
2012-09-03
[ [ "Kamp", "Christel", "" ], [ "Moslonka-Lefebvre", "Mathieu", "" ], [ "Alizon", "Samuel", "" ] ]
The contact structure between hosts has a critical influence on disease spread. However, most network-based models used in epidemiology tend to ignore heterogeneity in the weighting of contacts. This assumption is known to be at odds with the data for many contact networks (e.g. sexual contact networks) and to have a strong effect on the predictions of epidemiological models. One of the reasons why models usually ignore heterogeneity in transmission is that we currently lack tools to analyze weighted networks, such that most studies rely on numerical simulations. Here, we present a novel framework to estimate key epidemiological variables, such as the rate of early epidemic expansion and the basic reproductive ratio, from joint probability distributions of number of partners (contacts) and number of interaction events through which contacts are weighted. This framework also allows for a derivation of the full time course of epidemic prevalence and contact behaviour which is validated using numerical simulations. Our framework allows for the incorporation of more realistic contact networks into epidemiological models, thus improving predictions on the spread of emerging infectious diseases.
2010.03951
Kexin Huang
Kexin Huang, Tianfan Fu, Dawood Khan, Ali Abid, Ali Abdalla, Abubakar Abid, Lucas M. Glass, Marinka Zitnik, Cao Xiao, Jimeng Sun
MolDesigner: Interactive Design of Efficacious Drugs with Deep Learning
NeurIPS 2020 Demonstration Track
null
null
null
q-bio.QM cs.HC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The efficacy of a drug depends on its binding affinity to the therapeutic target and pharmacokinetics. Deep learning (DL) has demonstrated remarkable progress in predicting drug efficacy. We develop MolDesigner, a human-in-the-loop web user-interface (UI), to assist drug developers in leveraging DL predictions to design more effective drugs. A developer can draw a drug molecule in the interface. In the backend, more than 17 state-of-the-art DL models generate predictions on important indices that are crucial for a drug's efficacy. Based on these predictions, drug developers can edit the drug molecule and iterate until satisfied. MolDesigner can make predictions in real-time with a latency of less than a second.
[ { "created": "Mon, 5 Oct 2020 21:25:25 GMT", "version": "v1" } ]
2020-10-09
[ [ "Huang", "Kexin", "" ], [ "Fu", "Tianfan", "" ], [ "Khan", "Dawood", "" ], [ "Abid", "Ali", "" ], [ "Abdalla", "Ali", "" ], [ "Abid", "Abubakar", "" ], [ "Glass", "Lucas M.", "" ], [ "Zitnik", "...
The efficacy of a drug depends on its binding affinity to the therapeutic target and pharmacokinetics. Deep learning (DL) has demonstrated remarkable progress in predicting drug efficacy. We develop MolDesigner, a human-in-the-loop web user-interface (UI), to assist drug developers in leveraging DL predictions to design more effective drugs. A developer can draw a drug molecule in the interface. In the backend, more than 17 state-of-the-art DL models generate predictions on important indices that are crucial for a drug's efficacy. Based on these predictions, drug developers can edit the drug molecule and reiterate until satisfaction. MolDesigner can make predictions in real-time with a latency of less than a second.
1909.11070
R. Mulet
Jorge Fernandez-de-Cossio-Diaz and Roberto Mulet
Spin Glass Theory of Interacting Metabolic Networks
4 Figures
Phys. Rev. E 101, 042401 (2020)
10.1103/PhysRevE.101.042401
null
q-bio.MN cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We cast the metabolism of interacting cells within a statistical mechanics framework considering both the actual phenotypic capacities of each cell and its interaction with its neighbors. Reaction fluxes will be the components of high-dimensional spin vectors, whose values will be constrained by the stoichiometry and the energy requirements of the metabolism. Within this picture, finding the phenotypic states of the population turns out to be equivalent to searching for the equilibrium states of a disordered spin model. We provide a general solution of this problem for arbitrary metabolic networks and interactions. We apply this solution to a simplified model of metabolism and to a complex metabolic network, the central core of the \emph{E. coli} metabolism, and demonstrate that the combination of selective pressure and interactions defines a complex phenotypic space. Cells may specialize in producing or consuming metabolites complementing each other at the population level, and this is described by an equilibrium phase space with multiple minima, as in a spin-glass model.
[ { "created": "Tue, 24 Sep 2019 17:47:40 GMT", "version": "v1" } ]
2020-04-08
[ [ "Fernandez-de-Cossio-Diaz", "Jorge", "" ], [ "Mulet", "Roberto", "" ] ]
We cast the metabolism of interacting cells within a statistical mechanics framework considering both the actual phenotypic capacities of each cell and its interaction with its neighbors. Reaction fluxes will be the components of high-dimensional spin vectors, whose values will be constrained by the stoichiometry and the energy requirements of the metabolism. Within this picture, finding the phenotypic states of the population turns out to be equivalent to searching for the equilibrium states of a disordered spin model. We provide a general solution of this problem for arbitrary metabolic networks and interactions. We apply this solution to a simplified model of metabolism and to a complex metabolic network, the central core of the \emph{E. coli} metabolism, and demonstrate that the combination of selective pressure and interactions defines a complex phenotypic space. Cells may specialize in producing or consuming metabolites complementing each other at the population level, and this is described by an equilibrium phase space with multiple minima, as in a spin-glass model.
2109.10224
Gregory Rehm
Gregory Rehm, Jimmy Nguyen, Chelsea Gilbeau, Marc T Bomactao, Chen-Nee Chuah, Jason Adams
Clinical Validation of Single-Chamber Model-Based Algorithms Used to Estimate Respiratory Compliance
null
null
null
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by/4.0/
Non-invasive estimation of respiratory physiology using computational algorithms promises to be a valuable technique for future clinicians to detect detrimental changes in patient pathophysiology. However, few clinical algorithms used to non-invasively analyze lung physiology have undergone rigorous validation in a clinical setting, and are often validated either using mechanical devices or with small clinical validation datasets of 2-8 patients. This work aims to improve this situation by first establishing an open, clinically validated dataset comprising data from both mechanical lungs and nearly 40,000 breaths from 18 intubated patients. Next, we use this data to evaluate 15 different algorithms that use the "single chamber" model of estimating respiratory compliance. We evaluate these algorithms under varying clinical scenarios patients typically experience during hospitalization. In particular, we explore algorithm performance under four different types of patient-ventilator asynchrony. We also analyze algorithms under varying ventilation modes to benchmark algorithm performance and to determine if ventilation mode has any impact on the algorithms. Our approach yields several advances by 1) showing which specific algorithms work best clinically under varying mode and asynchrony scenarios, 2) developing a simple mathematical method to reduce variance in algorithmic results, and 3) presenting additional insights about single-chamber model algorithms. We hope that our paper, approach, dataset, and software framework can thus be used by future researchers to improve their work and allow future integration of "single chamber" algorithms into clinical practice.
[ { "created": "Sun, 19 Sep 2021 07:34:15 GMT", "version": "v1" } ]
2021-09-22
[ [ "Rehm", "Gregory", "" ], [ "Nguyen", "Jimmy", "" ], [ "Gilbeau", "Chelsea", "" ], [ "Bomactao", "Marc T", "" ], [ "Chuah", "Chen-Nee", "" ], [ "Adams", "Jason", "" ] ]
Non-invasive estimation of respiratory physiology using computational algorithms promises to be a valuable technique for future clinicians to detect detrimental changes in patient pathophysiology. However, few clinical algorithms used to non-invasively analyze lung physiology have undergone rigorous validation in a clinical setting, and are often validated either using mechanical devices or with small clinical validation datasets of 2-8 patients. This work aims to improve this situation by first establishing an open, clinically validated dataset comprising data from both mechanical lungs and nearly 40,000 breaths from 18 intubated patients. Next, we use this data to evaluate 15 different algorithms that use the "single chamber" model of estimating respiratory compliance. We evaluate these algorithms under varying clinical scenarios patients typically experience during hospitalization. In particular, we explore algorithm performance under four different types of patient-ventilator asynchrony. We also analyze algorithms under varying ventilation modes to benchmark algorithm performance and to determine if ventilation mode has any impact on the algorithms. Our approach yields several advances by 1) showing which specific algorithms work best clinically under varying mode and asynchrony scenarios, 2) developing a simple mathematical method to reduce variance in algorithmic results, and 3) presenting additional insights about single-chamber model algorithms. We hope that our paper, approach, dataset, and software framework can thus be used by future researchers to improve their work and allow future integration of "single chamber" algorithms into clinical practice.
1705.03457
Omer Faruk Gulban
Omer Faruk Gulban
The relation between color spaces and compositional data analysis demonstrated with magnetic resonance image processing applications
13 pages, 3 figures, short paper, submitted to Austrian Journal of Statistics compositional data analysis special issue, first revision, fix rendering error in fig2
null
null
null
q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a novel application of compositional data analysis methods in the context of color image processing. A vector decomposition method is proposed to reveal the compositional components of any vector with positive components, followed by compositional data analysis to demonstrate the relation of color space concepts such as hue and saturation to their compositional counterparts. The proposed methods are applied to a magnetic resonance imaging dataset acquired from a living human brain and a digital color photograph to perform image fusion. Potential future applications in magnetic resonance imaging are mentioned, and the benefits/disadvantages of the proposed methods are discussed in terms of color image processing.
[ { "created": "Tue, 9 May 2017 07:14:26 GMT", "version": "v1" }, { "created": "Mon, 16 Oct 2017 17:50:56 GMT", "version": "v2" }, { "created": "Tue, 27 Mar 2018 16:35:10 GMT", "version": "v3" }, { "created": "Mon, 11 Jun 2018 10:19:36 GMT", "version": "v4" } ]
2018-06-12
[ [ "Gulban", "Omer Faruk", "" ] ]
This paper presents a novel application of compositional data analysis methods in the context of color image processing. A vector decomposition method is proposed to reveal the compositional components of any vector with positive components, followed by compositional data analysis to demonstrate the relation of color space concepts such as hue and saturation to their compositional counterparts. The proposed methods are applied to a magnetic resonance imaging dataset acquired from a living human brain and a digital color photograph to perform image fusion. Potential future applications in magnetic resonance imaging are mentioned, and the benefits/disadvantages of the proposed methods are discussed in terms of color image processing.
1306.6129
Taichi Haruna
Taichi Haruna
Robustness and Directed Structures in Ecological Flow Networks
7 pages
null
null
null
q-bio.PE nlin.AO physics.soc-ph q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Robustness of ecological flow networks under random failure of arcs is considered with respect to two different functionalities: coherence and circulation. In our previous work, we showed that each functionality is associated with a natural path notion: lateral path for the former and directed path for the latter. Robustness of a network is measured in terms of the size of the giant laterally connected arc component and that of the giant strongly connected arc component, respectively. We study how realistic structures of ecological flow networks affect the robustness with respect to each functionality. To quantify the impact of realistic network structures, two null models are considered for a given real ecological flow network: one is random networks with the same degree distribution and the other is those with the same average degree. Robustness of the null models is calculated by theoretically solving the size of giant components for the configuration model. We show that realistic network structures have a positive effect on robustness for coherence, whereas they have a negative effect on robustness for circulation.
[ { "created": "Wed, 26 Jun 2013 05:00:52 GMT", "version": "v1" } ]
2013-06-27
[ [ "Haruna", "Taichi", "" ] ]
Robustness of ecological flow networks under random failure of arcs is considered with respect to two different functionalities: coherence and circulation. In our previous work, we showed that each functionality is associated with a natural path notion: lateral path for the former and directed path for the latter. Robustness of a network is measured in terms of the size of the giant laterally connected arc component and that of the giant strongly connected arc component, respectively. We study how realistic structures of ecological flow networks affect the robustness with respect to each functionality. To quantify the impact of realistic network structures, two null models are considered for a given real ecological flow network: one is random networks with the same degree distribution and the other is those with the same average degree. Robustness of the null models is calculated by theoretically solving the size of giant components for the configuration model. We show that realistic network structures have a positive effect on robustness for coherence, whereas they have a negative effect on robustness for circulation.
2008.05888
Murali Padmanabha
Murali Padmanabha, Alexander Kobelski, Arne-Jens Hempel, Stefan Streif
A comprehensive dynamic growth and development model of Hermetia illucens larvae
null
null
10.1371/journal.pone.0239084
null
q-bio.QM cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Larvae of Hermetia illucens, also commonly known as the black soldier fly (BSF), have gained significant importance in the feed industry, primarily used as feed for aquaculture and other livestock farming. Mathematical models such as the von Bertalanffy growth model and dynamic energy budget models are available for modelling the growth of various organisms but have their demerits when applied to the growth and development of BSF. Also, such dynamic models had not yet been applied to the growth of BSF larvae, despite models having proven useful for the automation of industrial production processes (e.g. feeding, heating/cooling, ventilation, harvesting, etc.). This work primarily focuses on developing a model, based on the principles of the aforementioned models from the literature, that can provide an accurate mathematical description of the dry mass changes throughout the life cycle and the transition of development phases of the larvae. To further improve the accuracy of these models, various factors affecting growth and development, such as temperature, feed quality, feeding rate, moisture content in feed, and airflow rate, are developed and integrated into the dynamic growth model. An extensive set of data was aggregated from various literature sources and used for model development, parameter estimation and validation. Models describing the environmental factors were individually validated based on the data sets collected. In addition, the dynamic growth model was also validated for dry mass evolution and development stage transition of larvae reared at different substrate feeding rates. The developed models with the estimated parameters performed well, highlighting their application in decision-support systems and automation for large-scale production.
[ { "created": "Thu, 13 Aug 2020 13:21:38 GMT", "version": "v1" } ]
2021-01-27
[ [ "Padmanabha", "Murali", "" ], [ "Kobelski", "Alexander", "" ], [ "Hempel", "Arne-Jens", "" ], [ "Streif", "Stefan", "" ] ]
Larvae of Hermetia illucens, also commonly known as the black soldier fly (BSF), have gained significant importance in the feed industry, primarily used as feed for aquaculture and other livestock farming. Mathematical models such as the von Bertalanffy growth model and dynamic energy budget models are available for modelling the growth of various organisms but have their demerits when applied to the growth and development of BSF. Also, such dynamic models had not yet been applied to the growth of BSF larvae, despite models having proven useful for the automation of industrial production processes (e.g. feeding, heating/cooling, ventilation, harvesting, etc.). This work primarily focuses on developing a model, based on the principles of the aforementioned models from the literature, that can provide an accurate mathematical description of the dry mass changes throughout the life cycle and the transition of development phases of the larvae. To further improve the accuracy of these models, various factors affecting growth and development, such as temperature, feed quality, feeding rate, moisture content in feed, and airflow rate, are developed and integrated into the dynamic growth model. An extensive set of data was aggregated from various literature sources and used for model development, parameter estimation and validation. Models describing the environmental factors were individually validated based on the data sets collected. In addition, the dynamic growth model was also validated for dry mass evolution and development stage transition of larvae reared at different substrate feeding rates. The developed models with the estimated parameters performed well, highlighting their application in decision-support systems and automation for large-scale production.
1305.0727
Daniel Beard
Klas H. Pettersen, Scott M. Bugenhagen, Javaid Nauman, Daniel A. Beard, and Stig W. Omholt
Arterial stiffening provides sufficient explanation for primary hypertension
19 pages, 4 figures
null
10.1371/journal.pcbi.1003634
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hypertension is one of the most common age-related chronic diseases and, by predisposing individuals to heart failure, stroke and kidney disease, it is a major source of morbidity and mortality. Its etiology remains enigmatic despite intense research efforts over many decades. By use of empirically well-constrained computer models describing the coupled function of the baroreceptor reflex and mechanics of the circulatory system, we demonstrate quantitatively that arterial stiffening seems sufficient to explain age-related emergence of hypertension. Specifically, the empirically observed chronic changes in pulse pressure with age, and the impaired capacity of hypertensive individuals to regulate short-term changes in blood pressure, arise as emergent properties of the integrated system. Results are consistent with available experimental data from chemical and surgical manipulation of the cardiovascular system. In contrast to widely held opinions, the results suggest that primary hypertension can be attributed to a mechanogenic etiology without challenging current conceptions of renal and sympathetic nervous system function. The results support the view that a major target for treating chronic hypertension in the elderly is the reestablishment of a proper baroreflex response.
[ { "created": "Fri, 3 May 2013 14:43:46 GMT", "version": "v1" }, { "created": "Mon, 6 May 2013 15:11:44 GMT", "version": "v2" } ]
2015-06-15
[ [ "Pettersen", "Klas H.", "" ], [ "Bugenhagen", "Scott M.", "" ], [ "Nauman", "Javaid", "" ], [ "Beard", "Daniel A.", "" ], [ "Omholt", "Stig W.", "" ] ]
Hypertension is one of the most common age-related chronic diseases and, by predisposing individuals to heart failure, stroke and kidney disease, it is a major source of morbidity and mortality. Its etiology remains enigmatic despite intense research efforts over many decades. By use of empirically well-constrained computer models describing the coupled function of the baroreceptor reflex and mechanics of the circulatory system, we demonstrate quantitatively that arterial stiffening seems sufficient to explain age-related emergence of hypertension. Specifically, the empirically observed chronic changes in pulse pressure with age, and the impaired capacity of hypertensive individuals to regulate short-term changes in blood pressure, arise as emergent properties of the integrated system. Results are consistent with available experimental data from chemical and surgical manipulation of the cardiovascular system. In contrast to widely held opinions, the results suggest that primary hypertension can be attributed to a mechanogenic etiology without challenging current conceptions of renal and sympathetic nervous system function. The results support the view that a major target for treating chronic hypertension in the elderly is the reestablishment of a proper baroreflex response.
1411.0733
Silvia Grigolon
Silvia Grigolon, Peter Sollich, Olivier C. Martin
Modeling the emergence of polarity patterns for the intercellular transport of auxin in plants
17 pages and 9 figures (Main Text), 9 pages and 4 figures (Supplementary Material), revised version with some rearrangements
J. R. Soc. Interface, 2015, 12, 20141223
10.1098/rsif.2014.1223
null
q-bio.TO cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The hormone auxin is actively transported throughout plants via protein machineries including the dedicated transporter known as PIN. The associated transport is ordered with nearby cells driving auxin flux in similar directions. Here we provide a model of both the auxin transport and of the dynamics of cellular polarisation based on flux sensing. Our main findings are: (i) spontaneous intracellular PIN polarisation arises if PIN recycling dynamics are sufficiently non-linear, (ii) there is no need for an auxin concentration gradient, and (iii) ordered multi-cellular patterns of PIN polarisation are favored by molecular noise.
[ { "created": "Mon, 3 Nov 2014 23:22:22 GMT", "version": "v1" }, { "created": "Fri, 13 Mar 2015 11:05:33 GMT", "version": "v2" } ]
2016-04-12
[ [ "Grigolon", "Silvia", "" ], [ "Sollich", "Peter", "" ], [ "Martin", "Olivier C.", "" ] ]
The hormone auxin is actively transported throughout plants via protein machineries including the dedicated transporter known as PIN. The associated transport is ordered with nearby cells driving auxin flux in similar directions. Here we provide a model of both the auxin transport and of the dynamics of cellular polarisation based on flux sensing. Our main findings are: (i) spontaneous intracellular PIN polarisation arises if PIN recycling dynamics are sufficiently non-linear, (ii) there is no need for an auxin concentration gradient, and (iii) ordered multi-cellular patterns of PIN polarisation are favored by molecular noise.
2406.05108
Dinh Viet Cuong
Dinh Viet Cuong, Branislava Lali\'c, Mina Petri\'c, Binh Nguyen, Mark Roantree
Adapting Physics-Informed Neural Networks To Optimize ODEs in Mosquito Population Dynamics
null
null
null
null
q-bio.PE cs.LG
http://creativecommons.org/licenses/by/4.0/
Physics-informed neural networks (PINNs) have been gaining popularity due to their unique ability to incorporate physics laws into data-driven models, ensuring that the predictions are not only consistent with empirical data but also align with domain-specific knowledge in the form of physics equations. The integration of physics principles enables the method to require less data while maintaining the robustness of deep learning in modeling complex dynamical systems. However, current PINN frameworks are not sufficiently mature for real-world ODE systems, especially those with extreme multi-scale behavior such as mosquito population dynamical modelling. In this research, we propose a PINN framework with several improvements for forward and inverse problems for ODE systems, with a case study application in modelling the dynamics of mosquito populations. The framework tackles the gradient imbalance and stiffness problems posed by mosquito ordinary differential equations. The method offers a simple but effective way to resolve the time causality issue in PINNs by gradually expanding the training time domain until it covers the entire domain of interest. As part of a robust evaluation, we conduct experiments using simulated data to evaluate the effectiveness of the approach. Preliminary results indicate that physics-informed machine learning holds significant potential for advancing the study of ecological systems.
[ { "created": "Fri, 7 Jun 2024 17:40:38 GMT", "version": "v1" } ]
2024-06-10
[ [ "Cuong", "Dinh Viet", "" ], [ "Lalić", "Branislava", "" ], [ "Petrić", "Mina", "" ], [ "Nguyen", "Binh", "" ], [ "Roantree", "Mark", "" ] ]
Physics-informed neural networks (PINNs) have been gaining popularity due to their unique ability to incorporate physics laws into data-driven models, ensuring that the predictions are not only consistent with empirical data but also align with domain-specific knowledge in the form of physics equations. The integration of physics principles enables the method to require less data while maintaining the robustness of deep learning in modeling complex dynamical systems. However, current PINN frameworks are not sufficiently mature for real-world ODE systems, especially those with extreme multi-scale behavior such as mosquito population dynamical modelling. In this research, we propose a PINN framework with several improvements for forward and inverse problems for ODE systems, with a case study application in modelling the dynamics of mosquito populations. The framework tackles the gradient imbalance and stiffness problems posed by mosquito ordinary differential equations. The method offers a simple but effective way to resolve the time causality issue in PINNs by gradually expanding the training time domain until it covers the entire domain of interest. As part of a robust evaluation, we conduct experiments using simulated data to evaluate the effectiveness of the approach. Preliminary results indicate that physics-informed machine learning holds significant potential for advancing the study of ecological systems.
2008.12027
Markus Lill
Amr H. Mahmoud, Jonas F. Lill, Markus A. Lill
Graph-convolution neural network-based flexible docking utilizing coarse-grained distance matrix
null
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Prediction of protein-ligand complexes for flexible proteins still remains a challenging problem in computational structural biology and drug design. Here we present two novel deep neural network approaches with significant improvements in the efficiency and accuracy of binding mode prediction on a large and diverse set of protein systems compared to standard docking. Whereas the first graph convolutional network is used for re-ranking poses, the second approach aims to generate and rank poses independently of standard docking approaches. This novel approach relies on the prediction of distance matrices between ligand atoms and protein C_alpha atoms, thus incorporating side-chain flexibility implicitly.
[ { "created": "Thu, 27 Aug 2020 10:04:51 GMT", "version": "v1" } ]
2020-08-28
[ [ "Mahmoud", "Amr H.", "" ], [ "Lill", "Jonas F.", "" ], [ "Lill", "Markus A.", "" ] ]
Prediction of protein-ligand complexes for flexible proteins still remains a challenging problem in computational structural biology and drug design. Here we present two novel deep neural network approaches with significant improvements in the efficiency and accuracy of binding mode prediction on a large and diverse set of protein systems compared to standard docking. Whereas the first graph convolutional network is used for re-ranking poses, the second approach aims to generate and rank poses independently of standard docking approaches. This novel approach relies on the prediction of distance matrices between ligand atoms and protein C_alpha atoms, thus incorporating side-chain flexibility implicitly.
2402.03529
Carlos Calvo Tapia
Carlos Calvo Tapia, Valeriy A. Makarov Slizneva, and Cees van Leeuwen
Basic principles drive self-organization of brain-like connectivity structure
null
Communications in Nonlinear Science and Numerical Simulation 82 105065, 2020
10.1016/j.cnsns.2019.105065
null
q-bio.NC math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The brain can be considered as a system that dynamically optimizes the structure of anatomical connections based on the efficiency requirements of functional connectivity. To illustrate the power of this principle in organizing the complexity of brain architecture, we portray the functional connectivity as diffusion on the current network structure. The diffusion drives adaptive rewiring, resulting in changes to the network to enhance its efficiency. This dynamic evolution of the network structure generates, and thus explains, modular small-worlds with rich club effects, features commonly observed in neural anatomy. Taking wiring length and propagating waves into account leads to the morphogenesis of more specific neural structures that are stalwarts of the detailed brain functional anatomy, such as parallelism, divergence, convergence, super-rings, and super-chains. By showing how such structures emerge, largely independently of their specific biological realization, we offer a new conjecture on how natural and artificial brain-like structures can be physically implemented.
[ { "created": "Mon, 5 Feb 2024 21:37:03 GMT", "version": "v1" } ]
2024-02-07
[ [ "Tapia", "Carlos Calvo", "" ], [ "Slizneva", "Valeriy A. Makarov", "" ], [ "van Leeuwen", "Cees", "" ] ]
The brain can be considered as a system that dynamically optimizes the structure of anatomical connections based on the efficiency requirements of functional connectivity. To illustrate the power of this principle in organizing the complexity of brain architecture, we portray the functional connectivity as diffusion on the current network structure. The diffusion drives adaptive rewiring, resulting in changes to the network to enhance its efficiency. This dynamic evolution of the network structure generates, and thus explains, modular small-worlds with rich club effects, features commonly observed in neural anatomy. Taking wiring length and propagating waves into account leads to the morphogenesis of more specific neural structures that are stalwarts of the detailed brain functional anatomy, such as parallelism, divergence, convergence, super-rings, and super-chains. By showing how such structures emerge, largely independently of their specific biological realization, we offer a new conjecture on how natural and artificial brain-like structures can be physically implemented.
2212.00146
Lorenzo Pareschi
Lorenzo Pareschi, Giuseppe Toscani
The kinetic theory of mutation rates
null
null
null
null
q-bio.PE math-ph math.MP math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Luria--Delbr\"uck mutation model is a cornerstone of evolution theory and has been mathematically formulated in a number of ways. In this paper we illustrate how this model of mutation rates can be derived by means of classical statistical mechanics tools, in particular by modeling the phenomenon using methodologies borrowed from the classical kinetic theory of rarefied gases. The aim is to construct a linear kinetic model that can reproduce the Luria--Delbr\"uck distribution starting from the elementary interactions that qualitatively and quantitatively describe the variation of mutated cells. The kinetic description is easily adaptable to different situations and makes it possible to clearly identify the differences between the elementary variations leading to the formulations of Luria--Delbr\"uck, Lea--Coulson, and Kendall, respectively. The kinetic approach additionally emphasizes basic principles which not only help to unify existing results but also allow for useful extensions.
[ { "created": "Wed, 30 Nov 2022 22:46:17 GMT", "version": "v1" } ]
2022-12-02
[ [ "Pareschi", "Lorenzo", "" ], [ "Toscani", "Giuseppe", "" ] ]
The Luria--Delbr\"uck mutation model is a cornerstone of evolution theory and has been mathematically formulated in a number of ways. In this paper we illustrate how this model of mutation rates can be derived by means of classical statistical mechanics tools, in particular by modeling the phenomenon using methodologies borrowed from the classical kinetic theory of rarefied gases. The aim is to construct a linear kinetic model that can reproduce the Luria--Delbr\"uck distribution starting from the elementary interactions that qualitatively and quantitatively describe the variation of mutated cells. The kinetic description is easily adaptable to different situations and makes it possible to clearly identify the differences between the elementary variations leading to the formulations of Luria--Delbr\"uck, Lea--Coulson, and Kendall, respectively. The kinetic approach additionally emphasizes basic principles which not only help to unify existing results but also allow for useful extensions.
2407.09488
Xin Li
Xin Li
Manifold Learning via Memory and Context
null
null
null
null
q-bio.NC cs.LG cs.NE
http://creativecommons.org/publicdomain/zero/1.0/
Given a memory with infinite capacity, can we solve the learning problem? Apparently, nature has solved this problem as evidenced by the evolution of mammalian brains. Inspired by the organizational principles underlying hippocampal-neocortical systems, we present a navigation-based approach to manifold learning using memory and context. The key insight is to navigate on the manifold and memorize the positions of each route as inductive/design bias of direct-fit-to-nature. We name it navigation-based because our approach can be interpreted as navigating in the latent space of sensorimotor learning via memory (local maps) and context (global indexing). The indexing to the library of local maps within global coordinates is collected by an associative memory serving as the librarian, which mimics the coupling between the hippocampus and the neocortex. In addition to breaking from the notorious bias-variance dilemma and the curse of dimensionality, we discuss the biological implementation of our navigation-based learning by episodic and semantic memories in neural systems. The energy efficiency of navigation-based learning makes it suitable for hardware implementation on non-von Neumann architectures, such as the emerging in-memory computing paradigm, including spiking neural networks and memristor neural networks.
[ { "created": "Fri, 17 May 2024 17:06:19 GMT", "version": "v1" } ]
2024-07-16
[ [ "Li", "Xin", "" ] ]
Given a memory with infinite capacity, can we solve the learning problem? Apparently, nature has solved this problem as evidenced by the evolution of mammalian brains. Inspired by the organizational principles underlying hippocampal-neocortical systems, we present a navigation-based approach to manifold learning using memory and context. The key insight is to navigate on the manifold and memorize the positions of each route as inductive/design bias of direct-fit-to-nature. We name it navigation-based because our approach can be interpreted as navigating in the latent space of sensorimotor learning via memory (local maps) and context (global indexing). The indexing to the library of local maps within global coordinates is collected by an associative memory serving as the librarian, which mimics the coupling between the hippocampus and the neocortex. In addition to breaking from the notorious bias-variance dilemma and the curse of dimensionality, we discuss the biological implementation of our navigation-based learning by episodic and semantic memories in neural systems. The energy efficiency of navigation-based learning makes it suitable for hardware implementation on non-von Neumann architectures, such as the emerging in-memory computing paradigm, including spiking neural networks and memristor neural networks.
1910.11194
Yi-Hsuan Lin
Alan N. Amin, Yi-Hsuan Lin, Suman Das, Hue Sun Chan
Analytical Theory for Sequence-Specific Binary Fuzzy Complexes of Charged Intrinsically Disordered Proteins
51 pages, 11 figures. Accepted for Publication in J. Phys. Chem. B
J. Phys. Chem. B 124, 6709--6720 (2020)
10.1021/acs.jpcb.0c04575
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Intrinsically disordered proteins (IDPs) are important for biological functions. In contrast to folded proteins, molecular recognition among certain IDPs is "fuzzy" in that their binding and/or phase separation are stochastically governed by the interacting IDPs' amino acid sequences while their assembled conformations remain largely disordered. To help elucidate a basic aspect of this fascinating yet poorly understood phenomenon, the binding of a homo- or hetero-dimeric pair of polyampholytic IDPs is modeled statistical-mechanically using cluster expansion. We find that the binding affinities of binary fuzzy complexes in the model correlate strongly with a newly derived simple "jSCD" parameter readily calculable from the pair of IDPs' sequence charge patterns. Predictions by our analytical theory are in essential agreement with coarse-grained explicit-chain simulations. This computationally efficient theoretical framework is expected to be broadly applicable to rationalizing and predicting sequence-specific IDP-IDP polyelectrostatic interactions.
[ { "created": "Thu, 24 Oct 2019 14:48:14 GMT", "version": "v1" }, { "created": "Fri, 15 May 2020 18:50:48 GMT", "version": "v2" }, { "created": "Tue, 7 Jul 2020 18:10:31 GMT", "version": "v3" } ]
2020-08-10
[ [ "Amin", "Alan N.", "" ], [ "Lin", "Yi-Hsuan", "" ], [ "Das", "Suman", "" ], [ "Chan", "Hue Sun", "" ] ]
Intrinsically disordered proteins (IDPs) are important for biological functions. In contrast to folded proteins, molecular recognition among certain IDPs is "fuzzy" in that their binding and/or phase separation are stochastically governed by the interacting IDPs' amino acid sequences while their assembled conformations remain largely disordered. To help elucidate a basic aspect of this fascinating yet poorly understood phenomenon, the binding of a homo- or hetero-dimeric pair of polyampholytic IDPs is modeled statistical-mechanically using cluster expansion. We find that the binding affinities of binary fuzzy complexes in the model correlate strongly with a newly derived simple "jSCD" parameter readily calculable from the pair of IDPs' sequence charge patterns. Predictions by our analytical theory are in essential agreement with coarse-grained explicit-chain simulations. This computationally efficient theoretical framework is expected to be broadly applicable to rationalizing and predicting sequence-specific IDP-IDP polyelectrostatic interactions.
2210.06303
Pakorn Uttayopas
Pakorn Uttayopas, Xiaoxiao Cheng, Udaya Bhaskar Rongala, Henrik J\"orntell, Etienne Burdet
Dynamic neuronal networks efficiently achieve classification in robotic interactions with real-world objects
9 pages, 8 figures
null
null
null
q-bio.NC cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
Biological cortical networks are potentially fully recurrent networks without any distinct output layer, where recognition may instead rely on the distribution of activity across its neurons. Because such biological networks can have rich dynamics, they are well suited to cope with dynamical interactions of the types that occur in nature, while traditional machine learning networks may struggle to make sense of such data. Here we connected a simple model neuronal network (based on the 'linear summation neuron model' featuring biologically realistic dynamics (LSM), consisting of 10 excitatory and 10 inhibitory neurons, randomly connected) to a robot finger with multiple types of force sensors when interacting with materials of different levels of compliance. Our aim was to explore the network's performance in terms of classification accuracy. Therefore, we compared the performance of the network output with principal component analysis of statistical features of the sensory data as well as its mechanical properties. Remarkably, even though the LSM was a very small and untrained network, merely designed to provide rich internal network dynamics while the neuron model itself was highly simplified, we found that the LSM outperformed these other statistical approaches in terms of accuracy.
[ { "created": "Wed, 12 Oct 2022 15:09:59 GMT", "version": "v1" }, { "created": "Fri, 11 Nov 2022 13:14:54 GMT", "version": "v2" } ]
2022-11-14
[ [ "Uttayopas", "Pakorn", "" ], [ "Cheng", "Xiaoxiao", "" ], [ "Rongala", "Udaya Bhaskar", "" ], [ "Jörntell", "Henrik", "" ], [ "Burdet", "Etienne", "" ] ]
Biological cortical networks are potentially fully recurrent networks without any distinct output layer, where recognition may instead rely on the distribution of activity across its neurons. Because such biological networks can have rich dynamics, they are well suited to cope with dynamical interactions of the types that occur in nature, while traditional machine learning networks may struggle to make sense of such data. Here we connected a simple model neuronal network (based on the 'linear summation neuron model' featuring biologically realistic dynamics (LSM), consisting of 10 excitatory and 10 inhibitory neurons, randomly connected) to a robot finger with multiple types of force sensors when interacting with materials of different levels of compliance. Our aim was to explore the network's performance in terms of classification accuracy. Therefore, we compared the performance of the network output with principal component analysis of statistical features of the sensory data as well as its mechanical properties. Remarkably, even though the LSM was a very small and untrained network, merely designed to provide rich internal network dynamics while the neuron model itself was highly simplified, we found that the LSM outperformed these other statistical approaches in terms of accuracy.
q-bio/0610033
Raffaele Vardavas
Raffaele Vardavas, Romulus Breban, Sally Blower
The Vaccinee's Dilemma: Individual-level Decisions, Self- Organization & Influenza Epidemics
null
null
null
null
q-bio.PE
null
Inspired by Minority Games, we constructed a novel individual-level game of adaptive decision-making based on the dilemma of deciding whether to participate in voluntary influenza vaccination programs. The proportion of the population vaccinated (i.e., the vaccination coverage) determines epidemic severity. Above a critical vaccination coverage, epidemics are prevented; hence individuals find it unnecessary to vaccinate. The adaptive dynamics of the decisions directly affect influenza epidemiology and, conversely, influenza epidemiology strongly influences decision-making. This feedback mechanism creates a unique self-organized state where epidemics are prevented. This state is attracting, but unstable; thus epidemics are rarely prevented. This result implies that vaccination will have to be mandatory if the public health objective is to prevent influenza epidemics. We investigated how collective behavior changes when public health programs are implemented. Surprisingly, programs requiring advance payment for several years of vaccination prevent severe epidemics, even with voluntary vaccination. Prevention is determined by the individuals' adaptability, memory, and number of pre-paid vaccinations. Notably, vaccinating families exacerbates and increases the frequency of severe epidemics.
[ { "created": "Tue, 17 Oct 2006 21:10:58 GMT", "version": "v1" } ]
2007-05-23
[ [ "Vardavas", "Raffaele", "" ], [ "Breban", "Romulus", "" ], [ "Blower", "Sally", "" ] ]
Inspired by Minority Games, we constructed a novel individual-level game of adaptive decision-making based on the dilemma of deciding whether to participate in voluntary influenza vaccination programs. The proportion of the population vaccinated (i.e., the vaccination coverage) determines epidemic severity. Above a critical vaccination coverage, epidemics are prevented; hence individuals find it unnecessary to vaccinate. The adaptive dynamics of the decisions directly affect influenza epidemiology and, conversely, influenza epidemiology strongly influences decision-making. This feedback mechanism creates a unique self-organized state where epidemics are prevented. This state is attracting, but unstable; thus epidemics are rarely prevented. This result implies that vaccination will have to be mandatory if the public health objective is to prevent influenza epidemics. We investigated how collective behavior changes when public health programs are implemented. Surprisingly, programs requiring advance payment for several years of vaccination prevent severe epidemics, even with voluntary vaccination. Prevention is determined by the individuals' adaptability, memory, and number of pre-paid vaccinations. Notably, vaccinating families exacerbates and increases the frequency of severe epidemics.
2012.14319
Pablo Moisset de Espan\'es
Patricio Cumsille, Oscar Rojas-D\'iaz, Pablo Moisset de Espan\'es
Forecasting COVID-19 Chile's second outbreak by a generalized SIR model with constant time delays and a fitted positivity rate
23 pages, 7 figures
null
null
null
q-bio.PE physics.soc-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
The COVID-19 disease has forced countries to make a considerable collaborative effort between scientists and governments to provide indicators to suitably follow up the pandemic's consequences. Mathematical modeling plays a crucial role in quantifying indicators describing diverse aspects of the pandemic. Consequently, this work aims to develop a clear, efficient, and reproducible methodology for parameter optimization, whose implementation is illustrated using data from three representative regions of Chile and a suitable generalized SIR model together with a fitted positivity rate. Our results reproduce the general trend of the infected curve, distinguishing between reported and real cases. Finally, our methodology is robust, and it allows us to qualitatively forecast a second outbreak of COVID-19 and the infection fatality rate of COVID-19 in accordance with the reported deaths.
[ { "created": "Mon, 28 Dec 2020 16:02:06 GMT", "version": "v1" } ]
2020-12-29
[ [ "Cumsille", "Patricio", "" ], [ "Rojas-Díaz", "Oscar", "" ], [ "de Espanés", "Pablo Moisset", "" ] ]
The COVID-19 disease has forced countries to make a considerable collaborative effort between scientists and governments to provide indicators to suitably follow up the pandemic's consequences. Mathematical modeling plays a crucial role in quantifying indicators describing diverse aspects of the pandemic. Consequently, this work aims to develop a clear, efficient, and reproducible methodology for parameter optimization, whose implementation is illustrated using data from three representative regions of Chile and a suitable generalized SIR model together with a fitted positivity rate. Our results reproduce the general trend of the infected curve, distinguishing between reported and real cases. Finally, our methodology is robust, and it allows us to qualitatively forecast a second outbreak of COVID-19 and the infection fatality rate of COVID-19 in accordance with the reported deaths.
1305.7147
Yucheng Hu
Yucheng Hu and Tianqi Zhu
Cell Growth and Size Homeostasis in Silico
null
null
10.1016/j.bpj.2014.01.038
null
q-bio.CB
http://creativecommons.org/licenses/by/3.0/
Cell growth in size is a complex process coordinated by intrinsic and environmental signals. In a recent work [Tzur et al., Science, 2009, 325:167-171], size distributions in an exponentially growing population of mammalian cells were used to infer the growth rate in size. The results suggest that cell growth is neither linear nor exponential, but subject to size-dependent regulation. To explain their data, we build a model in which the cell growth rate is controlled by the relative amount of mRNA and ribosomes in a cell. Together with a stochastic division rule, the evolutionary process of a population of cells can be simulated, and the statistics of the in-silico population agree well with the experimental data. To further explore the model space, alternative growth models and division rules are studied. This work may serve as a starting point for understanding the rationale behind cell growth and size regulation using predictive models.
[ { "created": "Thu, 30 May 2013 15:52:42 GMT", "version": "v1" }, { "created": "Fri, 7 Jun 2013 16:21:05 GMT", "version": "v2" }, { "created": "Mon, 4 Nov 2013 06:12:43 GMT", "version": "v3" } ]
2015-06-16
[ [ "Hu", "Yucheng", "" ], [ "Zhu", "Tianqi", "" ] ]
Cell growth in size is a complex process coordinated by intrinsic and environmental signals. In a recent work [Tzur et al., Science, 2009, 325:167-171], size distributions in an exponentially growing population of mammalian cells were used to infer the growth rate in size. The results suggest that cell growth is neither linear nor exponential, but subject to size-dependent regulation. To explain their data, we build a model in which the cell growth rate is controlled by the relative amount of mRNA and ribosomes in a cell. Together with a stochastic division rule, the evolutionary process of a population of cells can be simulated, and the statistics of the in-silico population agree well with the experimental data. To further explore the model space, alternative growth models and division rules are studied. This work may serve as a starting point for understanding the rationale behind cell growth and size regulation using predictive models.
1804.10925
Valmir C. Barbosa
Valmir C. Barbosa, Raul Donangelo, Sergio R. Souza
Co-evolution of the mitotic and meiotic modes of eukaryotic cellular division
null
Physical Review E 98 (2018), 032409
10.1103/PhysRevE.98.032409
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The genetic material of a eukaryotic cell comprises both nuclear DNA (ncDNA) and mitochondrial DNA (mtDNA). These differ markedly in several aspects but nevertheless must encode proteins that are compatible with one another. Here we introduce a network model of the hypothetical co-evolution of the two most common modes of cellular division for reproduction: by mitosis (supporting asexual reproduction) and by meiosis (supporting sexual reproduction). Our model is based on a random hypergraph, with two nodes for each possible genotype, each encompassing both ncDNA and mtDNA. One of the nodes is necessarily generated by mitosis occurring at a parent genotype, the other by meiosis occurring at two parent genotypes. A genotype's fitness depends on the compatibility of its ncDNA and mtDNA. The model has two probability parameters, $p$ and $r$, the former accounting for the diversification of ncDNA during meiosis, the latter for the diversification of mtDNA accompanying both meiosis and mitosis. Another parameter, $\lambda$, is used to regulate the relative rate at which mitosis- and meiosis-generated genotypes are produced. We have found that, even though $p$ and $r$ do affect the existence of evolutionary pathways in the network, the crucial parameter regulating the coexistence of the two modes of cellular division is $\lambda$. Depending on genotype size, $\lambda$ can be valued so that either mode of cellular division prevails. Our study is closely related to a recent hypothesis that views the appearance of cellular division by meiosis, as opposed to division by mitosis, as an evolutionary strategy for boosting ncDNA diversification to keep up with that of mtDNA. Our results indicate that this may well have been the case, thus lending support to the first hypothesis in the field to take into account the role of such ubiquitous and essential organelles as mitochondria.
[ { "created": "Sun, 29 Apr 2018 13:33:27 GMT", "version": "v1" } ]
2020-09-04
[ [ "Barbosa", "Valmir C.", "" ], [ "Donangelo", "Raul", "" ], [ "Souza", "Sergio R.", "" ] ]
The genetic material of a eukaryotic cell comprises both nuclear DNA (ncDNA) and mitochondrial DNA (mtDNA). These differ markedly in several aspects but nevertheless must encode proteins that are compatible with one another. Here we introduce a network model of the hypothetical co-evolution of the two most common modes of cellular division for reproduction: by mitosis (supporting asexual reproduction) and by meiosis (supporting sexual reproduction). Our model is based on a random hypergraph, with two nodes for each possible genotype, each encompassing both ncDNA and mtDNA. One of the nodes is necessarily generated by mitosis occurring at a parent genotype, the other by meiosis occurring at two parent genotypes. A genotype's fitness depends on the compatibility of its ncDNA and mtDNA. The model has two probability parameters, $p$ and $r$, the former accounting for the diversification of ncDNA during meiosis, the latter for the diversification of mtDNA accompanying both meiosis and mitosis. Another parameter, $\lambda$, is used to regulate the relative rate at which mitosis- and meiosis-generated genotypes are produced. We have found that, even though $p$ and $r$ do affect the existence of evolutionary pathways in the network, the crucial parameter regulating the coexistence of the two modes of cellular division is $\lambda$. Depending on genotype size, $\lambda$ can be valued so that either mode of cellular division prevails. Our study is closely related to a recent hypothesis that views the appearance of cellular division by meiosis, as opposed to division by mitosis, as an evolutionary strategy for boosting ncDNA diversification to keep up with that of mtDNA. Our results indicate that this may well have been the case, thus lending support to the first hypothesis in the field to take into account the role of such ubiquitous and essential organelles as mitochondria.
2101.10902
Brandon Carter
Ge Liu, Alexander Dimitrakakis, Brandon Carter, David Gifford
Maximum n-times Coverage for Vaccine Design
ICLR 2022
null
null
null
q-bio.QM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce the maximum $n$-times coverage problem that selects $k$ overlays to maximize the summed coverage of weighted elements, where each element must be covered at least $n$ times. We also define the min-cost $n$-times coverage problem where the objective is to select the minimum set of overlays such that the sum of the weights of elements that are covered at least $n$ times is at least $\tau$. Maximum $n$-times coverage is a generalization of the multi-set multi-cover problem, is NP-complete, and is not submodular. We introduce two new practical solutions for $n$-times coverage based on integer linear programming and sequential greedy optimization. We show that maximum $n$-times coverage is a natural way to frame peptide vaccine design, and find that it produces a pan-strain COVID-19 vaccine design that is superior to 29 other published designs in predicted population coverage and the expected number of peptides displayed by each individual's HLA molecules.
[ { "created": "Sun, 24 Jan 2021 22:20:24 GMT", "version": "v1" }, { "created": "Sat, 12 Jun 2021 00:46:04 GMT", "version": "v2" }, { "created": "Tue, 15 Jun 2021 15:07:02 GMT", "version": "v3" }, { "created": "Mon, 21 Feb 2022 18:12:51 GMT", "version": "v4" }, { "c...
2022-05-06
[ [ "Liu", "Ge", "" ], [ "Dimitrakakis", "Alexander", "" ], [ "Carter", "Brandon", "" ], [ "Gifford", "David", "" ] ]
We introduce the maximum $n$-times coverage problem that selects $k$ overlays to maximize the summed coverage of weighted elements, where each element must be covered at least $n$ times. We also define the min-cost $n$-times coverage problem where the objective is to select the minimum set of overlays such that the sum of the weights of elements that are covered at least $n$ times is at least $\tau$. Maximum $n$-times coverage is a generalization of the multi-set multi-cover problem, is NP-complete, and is not submodular. We introduce two new practical solutions for $n$-times coverage based on integer linear programming and sequential greedy optimization. We show that maximum $n$-times coverage is a natural way to frame peptide vaccine design, and find that it produces a pan-strain COVID-19 vaccine design that is superior to 29 other published designs in predicted population coverage and the expected number of peptides displayed by each individual's HLA molecules.
1504.07266
Sonya Ridden Miss
Sonya J. Ridden and Hannah H. Chang and Konstantinos C. Zygalakis and Ben D. MacArthur
Entropy, Ergodicity and Stem Cell Multipotency
6 pages, 3 figures
null
null
null
q-bio.CB physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Populations of mammalian stem cells commonly exhibit considerable cell-cell variability. However, the functional role of this diversity is unclear. Here, we analyze expression fluctuations of the stem cell surface marker Sca1 in mouse hematopoietic progenitor cells using a simple stochastic model and find that the observed dynamics naturally lie close to a critical state, thereby producing a diverse population that is able to respond rapidly to environmental changes. We propose an information-theoretic interpretation of these results that views cellular multipotency as an instance of maximum entropy statistical inference.
[ { "created": "Mon, 27 Apr 2015 20:22:12 GMT", "version": "v1" }, { "created": "Sat, 19 Sep 2015 16:24:19 GMT", "version": "v2" }, { "created": "Fri, 16 Oct 2015 21:56:09 GMT", "version": "v3" } ]
2015-10-20
[ [ "Ridden", "Sonya J.", "" ], [ "Chang", "Hannah H.", "" ], [ "Zygalakis", "Konstantinos C.", "" ], [ "MacArthur", "Ben D.", "" ] ]
Populations of mammalian stem cells commonly exhibit considerable cell-cell variability. However, the functional role of this diversity is unclear. Here, we analyze expression fluctuations of the stem cell surface marker Sca1 in mouse hematopoietic progenitor cells using a simple stochastic model and find that the observed dynamics naturally lie close to a critical state, thereby producing a diverse population that is able to respond rapidly to environmental changes. We propose an information-theoretic interpretation of these results that views cellular multipotency as an instance of maximum entropy statistical inference.
0812.4708
Christophe Deroulers
Christophe Deroulers (IMNC), Marine Aubert (IMNC), Mathilde Badoual (IMNC), Basil Grammaticos (IMNC)
Modeling tumor cell migration: from microscopic to macroscopic
Final published version; 14 pages, 7 figures
Physical Review E: Statistical, Nonlinear, and Soft Matter Physics 79, 3 (2009) 031917
10.1103/PhysRevE.79.031917
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It has been shown experimentally that contact interactions may influence the migration of cancer cells. Previous works have modeled this using stochastic, discrete models (cellular automata) at the cell level. However, for the study of the growth of real-size tumors with several million cells, it is best to use a macroscopic model having the form of a partial differential equation (PDE) for the density of cells. The difficulty is to predict the effect, at the macroscopic scale, of contact interactions that take place at the microscopic scale. To address this we use a multiscale approach: starting from a very simple, yet experimentally validated, microscopic model of migration with contact interactions, we derive a macroscopic model. We show that a diffusion equation arises, as is often postulated in the field of glioma modeling, but it is nonlinear because of the interactions. We give the explicit dependence of the diffusivity on the cell density and on a parameter governing cell-cell interactions. We discuss in detail the conditions of validity of the approximations used in the derivation, and we compare analytic results from our PDE to numerical simulations and to some in vitro experiments. We note that the family of microscopic models we started from includes as special cases some kinetically constrained models that were introduced for the study of the physics of glasses, supercooled liquids and jamming systems.
[ { "created": "Fri, 26 Dec 2008 21:01:07 GMT", "version": "v1" }, { "created": "Thu, 26 Mar 2009 15:50:27 GMT", "version": "v2" } ]
2009-03-26
[ [ "Deroulers", "Christophe", "", "IMNC" ], [ "Aubert", "Marine", "", "IMNC" ], [ "Badoual", "Mathilde", "", "IMNC" ], [ "Grammaticos", "Basil", "", "IMNC" ] ]
It has been shown experimentally that contact interactions may influence the migration of cancer cells. Previous works have modeled this using stochastic, discrete models (cellular automata) at the cell level. However, for the study of the growth of real-size tumors with several million cells, it is best to use a macroscopic model having the form of a partial differential equation (PDE) for the density of cells. The difficulty is to predict the effect, at the macroscopic scale, of contact interactions that take place at the microscopic scale. To address this we use a multiscale approach: starting from a very simple, yet experimentally validated, microscopic model of migration with contact interactions, we derive a macroscopic model. We show that a diffusion equation arises, as is often postulated in the field of glioma modeling, but it is nonlinear because of the interactions. We give the explicit dependence of the diffusivity on the cell density and on a parameter governing cell-cell interactions. We discuss in detail the conditions of validity of the approximations used in the derivation, and we compare analytic results from our PDE to numerical simulations and to some in vitro experiments. We note that the family of microscopic models we started from includes as special cases some kinetically constrained models that were introduced for the study of the physics of glasses, supercooled liquids and jamming systems.
q-bio/0505026
Sheng Bao
Shi Chen, Sheng Bao, Ling Yan, Cheng Huang
A Predation Behavior Model Based on Game Theory
5 pages,1 figure,1 table
null
null
null
q-bio.PE
null
This article adopts game theory to build a model for explaining the predation behavior of animals. We assume that both the prey and the predator have two strategies in this game, an active one and a passive one. By calculating the expenditure and intake of energy under the different strategies, we find the solution that allows us to analyze the different evolutionary paths of both the prey and the predator. A simulation result approximately demonstrates the correctness of our model.
[ { "created": "Sat, 14 May 2005 08:45:38 GMT", "version": "v1" } ]
2007-05-23
[ [ "Chen", "Shi", "" ], [ "Bao", "Sheng", "" ], [ "Yan", "Ling", "" ], [ "Huang", "Cheng", "" ] ]
This article adopts game theory to build a model for explaining the predation behavior of animals. We assume that both the prey and the predator have two strategies in this game, an active one and a passive one. By calculating the expenditure and intake of energy under the different strategies, we find the solution that allows us to analyze the different evolutionary paths of both the prey and the predator. A simulation result approximately demonstrates the correctness of our model.
1609.07462
Toni Valles-Catala
Borja Esteve-Altava, Toni Valles-Catala, Roger Guimera, Marta Sales-Pardo and Diego Rasskin-Gutman
Bone fusion in normal and pathological development is constrained by the network architecture of the human skull
15 pages, 2 figures
null
null
null
q-bio.TO q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The premature fusion of cranial bones, craniosynostosis, affects the correct development of the skull, producing morphological malformations in newborns. To assess the susceptibility of each craniofacial articulation to close prematurely, we used a network model of the skull to quantify the link reliability (an index based on stochastic block modeling and Bayesian inference) of each articulation. We show that, of the 93 human skull articulations at birth, the few articulations that are associated with nonsyndromic craniosynostosis conditions have statistically significantly lower reliability scores than the others. Similarly, articulations that close during the normal postnatal development of the skull also have lower reliability scores than those articulations that persist through adult life. These results indicate a relationship between the architecture of the skull network and the specific articulations that close during normal development and in pathological conditions. Our findings suggest that the topological arrangement of skull bones might act as an epigenetic factor, predisposing some articulations to closure, both in normal and pathological development, and also affecting the long-term evolution of the skull.
[ { "created": "Wed, 27 Jul 2016 10:17:55 GMT", "version": "v1" }, { "created": "Tue, 27 Dec 2016 20:16:32 GMT", "version": "v2" } ]
2016-12-28
[ [ "Esteve-Altava", "Borja", "" ], [ "Valles-Catala", "Toni", "" ], [ "Guimera", "Roger", "" ], [ "Sales-Pardo", "Marta", "" ], [ "Rasskin-Gutman", "Diego", "" ] ]
The premature fusion of cranial bones, craniosynostosis, affects the correct development of the skull, producing morphological malformations in newborns. To assess the susceptibility of each craniofacial articulation to close prematurely, we used a network model of the skull to quantify the link reliability (an index based on stochastic block modeling and Bayesian inference) of each articulation. We show that, of the 93 human skull articulations at birth, the few articulations that are associated with nonsyndromic craniosynostosis conditions have statistically significantly lower reliability scores than the others. Similarly, articulations that close during the normal postnatal development of the skull also have lower reliability scores than those articulations that persist through adult life. These results indicate a relationship between the architecture of the skull network and the specific articulations that close during normal development and in pathological conditions. Our findings suggest that the topological arrangement of skull bones might act as an epigenetic factor, predisposing some articulations to closure, both in normal and pathological development, and also affecting the long-term evolution of the skull.
2305.13821
Chaoran Chen
Chaoran Chen, Tanja Stadler
GenSpectrum Chat: Data Exploration in Public Health Using Large Language Models
null
null
null
null
q-bio.GN cs.AI cs.IR
http://creativecommons.org/licenses/by-nc-sa/4.0/
Introduction: The COVID-19 pandemic highlighted the importance of making epidemiological data and scientific insights easily accessible and explorable for public health agencies, the general public, and researchers. State-of-the-art approaches for sharing data and insights included regularly updated reports and web dashboards. However, they face a trade-off between the simplicity and flexibility of data exploration. With the capabilities of recent large language models (LLMs) such as GPT-4, this trade-off can be overcome. Results: We developed the chatbot "GenSpectrum Chat" (https://cov-spectrum.org/chat) which uses GPT-4 as the underlying large language model (LLM) to explore SARS-CoV-2 genomic sequencing data. Out of 500 inputs from real-world users, the chatbot provided a correct answer for 453 prompts, an incorrect answer for 13 prompts, and, for 34 prompts, no answer although the question was within scope. We also tested the chatbot with inputs from 10 different languages, and despite being provided solely with English instructions and examples, it successfully processed prompts in all tested languages. Conclusion: LLMs enable new ways of interacting with information systems. In the field of public health, GenSpectrum Chat can facilitate the analysis of real-time pathogen genomic data. With our chatbot supporting interactive exploration in different languages, we envision quick and direct access to the latest evidence for policymakers around the world.
[ { "created": "Tue, 23 May 2023 08:43:43 GMT", "version": "v1" } ]
2023-05-24
[ [ "Chen", "Chaoran", "" ], [ "Stadler", "Tanja", "" ] ]
Introduction: The COVID-19 pandemic highlighted the importance of making epidemiological data and scientific insights easily accessible and explorable for public health agencies, the general public, and researchers. State-of-the-art approaches for sharing data and insights included regularly updated reports and web dashboards. However, they face a trade-off between the simplicity and flexibility of data exploration. With the capabilities of recent large language models (LLMs) such as GPT-4, this trade-off can be overcome. Results: We developed the chatbot "GenSpectrum Chat" (https://cov-spectrum.org/chat) which uses GPT-4 as the underlying large language model (LLM) to explore SARS-CoV-2 genomic sequencing data. Out of 500 inputs from real-world users, the chatbot provided a correct answer for 453 prompts, an incorrect answer for 13 prompts, and, for 34 prompts, no answer although the question was within scope. We also tested the chatbot with inputs from 10 different languages, and despite being provided solely with English instructions and examples, it successfully processed prompts in all tested languages. Conclusion: LLMs enable new ways of interacting with information systems. In the field of public health, GenSpectrum Chat can facilitate the analysis of real-time pathogen genomic data. With our chatbot supporting interactive exploration in different languages, we envision quick and direct access to the latest evidence for policymakers around the world.
q-bio/0407010
Renaud Jolivet
Renaud Jolivet and Wulfram Gerstner
Predicting spike times of a detailed conductance-based neuron model driven by stochastic spike arrival
20 pages, 5 figures
Journal of Physiology - Paris 98 (2004) 442--451
10.1016/j.jphysparis.2005.09.010
null
q-bio.NC
null
Reduced models of neuronal activity such as Integrate-and-Fire models allow a description of neuronal dynamics in simple, intuitive terms and are easy to simulate numerically. We present a method to fit an Integrate-and-Fire-type model of neuronal activity, namely a modified version of the Spike Response Model, to a detailed Hodgkin-Huxley-type neuron model driven by stochastic spike arrival. In the Hodgkin-Huxley model, spike arrival at the synapse is modeled by a change of synaptic conductance. For such conductance spike input, more than 70% of the postsynaptic action potentials can be predicted with the correct timing by the Integrate-and-Fire-type model. The modified Spike Response Model is based upon a linearized theory of conductance-driven Integrate-and-Fire neurons.
[ { "created": "Tue, 6 Jul 2004 10:16:38 GMT", "version": "v1" } ]
2020-04-03
[ [ "Jolivet", "Renaud", "" ], [ "Gerstner", "Wulfram", "" ] ]
Reduced models of neuronal activity such as Integrate-and-Fire models allow a description of neuronal dynamics in simple, intuitive terms and are easy to simulate numerically. We present a method to fit an Integrate-and-Fire-type model of neuronal activity, namely a modified version of the Spike Response Model, to a detailed Hodgkin-Huxley-type neuron model driven by stochastic spike arrival. In the Hodgkin-Huxley model, spike arrival at the synapse is modeled by a change of synaptic conductance. For such conductance spike input, more than 70% of the postsynaptic action potentials can be predicted with the correct timing by the Integrate-and-Fire-type model. The modified Spike Response Model is based upon a linearized theory of conductance-driven Integrate-and-Fire neurons.
2004.04470
Arthur Genthon
Arthur Genthon, David Lacoste
Fluctuation relations and fitness landscapes of growing cell populations
null
Sci Rep 10, 11889 (2020)
10.1038/s41598-020-68444-x
null
q-bio.PE physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
We construct a pathwise formulation of a growing population of cells, based on two different samplings of lineages within the population, namely the forward and backward samplings. We show that a general symmetry relation, called the fluctuation relation, relates these two samplings, independently of the model used to generate divisions and growth in the cell population. These relations lead to estimators of the population growth rate, which can be very efficient as we demonstrate by an analysis of a set of mother machine data. These fluctuation relations lead to general and important inequalities between the mean number of divisions and the doubling time of the population. We also study the fitness landscape, a concept based on the two samplings mentioned above, which quantifies the correlations between a phenotypic trait of interest and the number of divisions. We obtain explicit results when the trait is the age or the size, for age- and size-controlled models.
[ { "created": "Thu, 9 Apr 2020 10:41:46 GMT", "version": "v1" }, { "created": "Tue, 2 Nov 2021 16:14:05 GMT", "version": "v2" } ]
2021-11-03
[ [ "Genthon", "Arthur", "" ], [ "Lacoste", "David", "" ] ]
We construct a pathwise formulation of a growing population of cells, based on two different samplings of lineages within the population, namely the forward and backward samplings. We show that a general symmetry relation, called the fluctuation relation, relates these two samplings, independently of the model used to generate divisions and growth in the cell population. These relations lead to estimators of the population growth rate, which can be very efficient as we demonstrate by an analysis of a set of mother machine data. These fluctuation relations lead to general and important inequalities between the mean number of divisions and the doubling time of the population. We also study the fitness landscape, a concept based on the two samplings mentioned above, which quantifies the correlations between a phenotypic trait of interest and the number of divisions. We obtain explicit results when the trait is the age or the size, for age- and size-controlled models.
2102.03025
Chikara Furusawa
Kunihiko Kaneko and Chikara Furusawa
Direction and Constraint in Phenotypic Evolution: Dimension Reduction and Global Proportionality in Phenotype Fluctuation and Responses
25 pages, 10 figures
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-sa/4.0/
A macroscopic theory for describing cellular states during steady growth is presented, which is based on the consistency between cellular growth and molecular replication, as well as the robustness of phenotypes against perturbations. Adaptive changes in high-dimensional phenotypes were shown to be restricted within a low-dimensional slow manifold, from which a macroscopic law for cellular states was derived, which was confirmed by adaptation experiments on bacteria under stress. Next, the theory was extended to phenotypic evolution, leading to proportionality between phenotypic responses against genetic evolution and environmental adaptation. The link between robustness to noise and mutation, as a result of robustness in developmental dynamics to perturbations, showed proportionality between phenotypic plasticity by genetic changes and by environmental noise. Accordingly, directionality and constraint in phenotypic evolution were quantitatively formulated in terms of phenotypic fluctuation and the response against environmental change. The evolutionary relevance of slow modes in controlling high-dimensional phenotypes is discussed.
[ { "created": "Fri, 5 Feb 2021 06:56:23 GMT", "version": "v1" } ]
2021-02-08
[ [ "Kaneko", "Kunihiko", "" ], [ "Furusawa", "Chikara", "" ] ]
A macroscopic theory for describing cellular states during steady growth is presented, which is based on the consistency between cellular growth and molecular replication, as well as the robustness of phenotypes against perturbations. Adaptive changes in high-dimensional phenotypes were shown to be restricted within a low-dimensional slow manifold, from which a macroscopic law for cellular states was derived, which was confirmed by adaptation experiments on bacteria under stress. Next, the theory was extended to phenotypic evolution, leading to proportionality between phenotypic responses against genetic evolution and environmental adaptation. The link between robustness to noise and mutation, as a result of robustness in developmental dynamics to perturbations, showed proportionality between phenotypic plasticity by genetic changes and by environmental noise. Accordingly, directionality and constraint in phenotypic evolution were quantitatively formulated in terms of phenotypic fluctuation and the response against environmental change. The evolutionary relevance of slow modes in controlling high-dimensional phenotypes is discussed.
2401.13022
Caterina Strambio-De-Castillia Ph.D.
Nikki Bialy, Frank Alber, Brenda Andrews, Michael Angelo, Brian Beliveau, Lacramioara Bintu, Alistair Boettiger, Ulrike Boehm, Claire M. Brown, Mahmoud Bukar Maina, James J. Chambers, Beth A. Cimini, Kevin Eliceiri, Rachel Errington, Orestis Faklaris, Nathalie Gaudreault, Ronald N. Germain, Wojtek Goscinski, David Grunwald, Michael Halter, Dorit Hanein, John W. Hickey, Judith Lacoste, Alex Laude, Emma Lundberg, Jian Ma, Leonel Malacrida, Josh Moore, Glyn Nelson, Elizabeth Kathleen Neumann, Roland Nitschke, Shuichi Onami, Jaime A. Pimentel, Anne L. Plant, Andrea J. Radtke, Bikash Sabata, Denis Schapiro, Johannes Sch\"oneberg, Jeffrey M. Spraggins, Damir Sudar, Wouter-Michiel Adrien Maria Vierdag, Niels Volkmann, Carolina W\"ahlby, Siyuan (Steven) Wang, Ziv Yaniv and Caterina Strambio-De-Castillia
Harmonizing the Generation and Pre-publication Stewardship of FAIR Image Data
This manuscript is published with a closely related companion entitled, Enabling Global Image Data Sharing in the Life Sciences, which can be found at the following link, arXiv:2401.13023 [q-bio.OT]
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by-nc-sa/4.0/
Together with the molecular knowledge of genes and proteins, biological images promise to significantly enhance the scientific understanding of complex cellular systems and to advance predictive and personalized therapeutic products for human health. For this potential to be realized, quality-assured image data must be shared among labs at a global scale to be compared, pooled, and reanalyzed, thus unleashing untold potential beyond the original purpose for which the data was generated. There are two broad sets of requirements to enable image data sharing in the life sciences. One set of requirements is articulated in the companion White Paper entitled Enabling Global Image Data Sharing in the Life Sciences, which is published in parallel and addresses the need to build the cyberinfrastructure for sharing the digital array data. In this White Paper, we detail a broad set of requirements, which involves collecting, managing, presenting, and propagating contextual information essential to assess the quality, understand the content, interpret the scientific implications, and reuse image data in the context of the experimental details. We start by providing an overview of the main lessons learned to date through international community activities, which have recently made considerable progress toward generating community standard practices for imaging Quality Control (QC) and metadata. We then provide a clear set of recommendations for amplifying this work. The driving goal is to address remaining challenges and democratize access to everyday practices and tools for a spectrum of biomedical researchers, regardless of their expertise, access to resources, and geographical location.
[ { "created": "Tue, 23 Jan 2024 18:47:50 GMT", "version": "v1" }, { "created": "Mon, 29 Jan 2024 14:06:25 GMT", "version": "v2" }, { "created": "Tue, 30 Jan 2024 17:50:28 GMT", "version": "v3" }, { "created": "Thu, 8 Feb 2024 23:06:17 GMT", "version": "v4" } ]
2024-02-12
[ [ "Bialy", "Nikki", "", "Steven" ], [ "Alber", "Frank", "", "Steven" ], [ "Andrews", "Brenda", "", "Steven" ], [ "Angelo", "Michael", "", "Steven" ], [ "Beliveau", "Brian", "", "Steven" ], [ "Bintu", "Lacrami...
Together with the molecular knowledge of genes and proteins, biological images promise to significantly enhance the scientific understanding of complex cellular systems and to advance predictive and personalized therapeutic products for human health. For this potential to be realized, quality-assured image data must be shared among labs at a global scale to be compared, pooled, and reanalyzed, thus unleashing untold potential beyond the original purpose for which the data was generated. There are two broad sets of requirements to enable image data sharing in the life sciences. One set of requirements is articulated in the companion White Paper entitled Enabling Global Image Data Sharing in the Life Sciences, which is published in parallel and addresses the need to build the cyberinfrastructure for sharing the digital array data. In this White Paper, we detail a broad set of requirements, which involves collecting, managing, presenting, and propagating contextual information essential to assess the quality, understand the content, interpret the scientific implications, and reuse image data in the context of the experimental details. We start by providing an overview of the main lessons learned to date through international community activities, which have recently made considerable progress toward generating community standard practices for imaging Quality Control (QC) and metadata. We then provide a clear set of recommendations for amplifying this work. The driving goal is to address remaining challenges and democratize access to everyday practices and tools for a spectrum of biomedical researchers, regardless of their expertise, access to resources, and geographical location.
2309.15844
Tomoko Matsui
Nourddine Azzaoui, Tomoko Matsui, and Daisuke Murakami
Data-Driven Framework for Uncovering Hidden Control Strategies in Evolutionary Analysis
18 pages
null
null
null
q-bio.PE stat.ME
http://creativecommons.org/licenses/by/4.0/
We have devised a data-driven framework for uncovering hidden control strategies used by an evolutionary system described by an evolutionary probability distribution. This innovative framework enables deciphering of the concealed mechanisms that contribute to the progression or mitigation of such situations as the spread of COVID-19. Novel algorithms are used to estimate the optimal control in tandem with the parameters for evolution in general dynamical systems, thereby extending the concept of model predictive control. This is a significant departure from conventional control methods, which require knowledge of the system to manipulate its evolution and of the controller's strategy or parameters. We used a generalized additive model, supplemented by extensive statistical testing, to identify a set of predictor covariates closely linked to the control. Using real-world COVID-19 data, we successfully delineated the descriptive behaviors of the COVID-19 epidemics in five prefectures in Japan and nine countries. We compared these nine countries and grouped them on the basis of shared profiles, providing valuable insights into their pandemic responses. Our findings underscore the potential of our framework as a powerful tool for understanding and managing complex evolutionary processes.
[ { "created": "Tue, 26 Sep 2023 13:58:54 GMT", "version": "v1" } ]
2023-09-28
[ [ "Azzaoui", "Nourddine", "" ], [ "Matsui", "Tomoko", "" ], [ "Murakami", "Daisuke", "" ] ]
We have devised a data-driven framework for uncovering hidden control strategies used by an evolutionary system described by an evolutionary probability distribution. This innovative framework enables deciphering of the concealed mechanisms that contribute to the progression or mitigation of such situations as the spread of COVID-19. Novel algorithms are used to estimate the optimal control in tandem with the parameters for evolution in general dynamical systems, thereby extending the concept of model predictive control. This is a significant departure from conventional control methods, which require knowledge of the system to manipulate its evolution and of the controller's strategy or parameters. We used a generalized additive model, supplemented by extensive statistical testing, to identify a set of predictor covariates closely linked to the control. Using real-world COVID-19 data, we successfully delineated the descriptive behaviors of the COVID-19 epidemics in five prefectures in Japan and nine countries. We compared these nine countries and grouped them on the basis of shared profiles, providing valuable insights into their pandemic responses. Our findings underscore the potential of our framework as a powerful tool for understanding and managing complex evolutionary processes.
0805.3368
Michael Stiber
M. Stiber, F. Kawasaki and D. Xu
A model of dissociated cortical tissue
4 pages, 8 PDF figure files, uses graphicx, mathptmx, helvet, courier, amsmath, and 1 custom style file
Proc. 7th Int. Workshop on Neural Coding, Montevideo, Uruguay, Nov. 7-12, 2007, pp. 24-27
null
null
q-bio.NC q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A powerful experimental approach for investigating computation in networks of biological neurons is the use of cultured dissociated cortical cells grown into networks on a multi-electrode array. Such preparations allow investigation of network development, activity, plasticity, responses to stimuli, and the effects of pharmacological agents. They also exhibit whole-culture pathological bursting; understanding the mechanisms that underlie this could allow creation of more useful cell cultures and possibly have medical applications.
[ { "created": "Wed, 21 May 2008 23:36:04 GMT", "version": "v1" } ]
2008-05-23
[ [ "Stiber", "M.", "" ], [ "Kawasaki", "F.", "" ], [ "Xu", "D.", "" ] ]
A powerful experimental approach for investigating computation in networks of biological neurons is the use of cultured dissociated cortical cells grown into networks on a multi-electrode array. Such preparations allow investigation of network development, activity, plasticity, responses to stimuli, and the effects of pharmacological agents. They also exhibit whole-culture pathological bursting; understanding the mechanisms that underlie this could allow creation of more useful cell cultures and possibly have medical applications.
2310.15729
Macoto Kikuchi
Macoto Kikuchi
Phenotype selection due to mutational robustness
6 pages, 3 figures
null
null
null
q-bio.PE cond-mat.stat-mech physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
Darwinian evolution gives rise not only to the adaptation to the environment but also to the enhancement of the robustness against mutation. Suppose more than one phenotype shares the same fitness value. We expect that some phenotypes are hardly selected as a consequence of the selection bias for mutational robustness. We investigated this phenotype selection for a model of gene regulatory networks (GRNs). We constructed a randomly generated set of GRNs using the multicanonical Monte Carlo method and compared it to the outcomes of evolutionary simulations. The results suggest that the mutationally least robust phenotype is suppressed in evolution.
[ { "created": "Tue, 24 Oct 2023 11:05:14 GMT", "version": "v1" }, { "created": "Mon, 11 Dec 2023 10:10:24 GMT", "version": "v2" } ]
2023-12-12
[ [ "Kikuchi", "Macoto", "" ] ]
Darwinian evolution gives rise not only to the adaptation to the environment but also to the enhancement of the robustness against mutation. Suppose more than one phenotype shares the same fitness value. We expect that some phenotypes are hardly selected as a consequence of the selection bias for mutational robustness. We investigated this phenotype selection for a model of gene regulatory networks (GRNs). We constructed a randomly generated set of GRNs using the multicanonical Monte Carlo method and compared it to the outcomes of evolutionary simulations. The results suggest that the mutationally least robust phenotype is suppressed in evolution.
2212.06052
Rafael De Andrade Moral
Rafael A. Moral, Rishabh Vishwakarma, John Connolly, Laura Byrne, Catherine Hurley, John A. Finn, Caroline Brophy
Going beyond richness: Modelling the BEF relationship using species identity, evenness, richness and species interactions via the DImodels R package
null
null
null
null
q-bio.PE stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
BEF studies aim to understand how ecosystems respond to a gradient of species diversity. Diversity-Interactions (DI) models are suitable for analysing the BEF relationship. These models relate an ecosystem function response of a community to the identity of the species in the community, their evenness (proportions) and interactions. The number of species in the community (richness) is also implicitly modelled through this approach. It is common in BEF studies to model an ecosystem function as a function of richness; while this can uncover trends in the BEF relationship, by definition, species diversity is much broader than richness alone, and important patterns in the BEF relationship may remain hidden. In this paper, we introduce the DImodels R package for implementing DI models. We also compare DI models to traditional modelling approaches to highlight the advantages of using a multi-dimensional definition of species diversity. We show that using DI models can lead to considerably improved model fit over other methods; it does this by incorporating variation due to the multiple facets of species diversity. Predicting from a DI model is not limited to the study design points, the model can extrapolate to predict for any species composition and proportions (assuming there is sufficient coverage of this space in the study design). Expressing the BEF relationship as a function of richness alone can be useful to capture overall trends. However, collapsing the multiple dimensions of species diversity to a single dimension (such as richness) can result in valuable ecological information being lost. DI modelling provides a framework to test the multiple components of species diversity in the BEF relationship. It facilitates uncovering a deeper ecological understanding of the BEF relationship and can lead to enhanced inference.
[ { "created": "Fri, 9 Dec 2022 16:22:15 GMT", "version": "v1" }, { "created": "Sat, 25 Feb 2023 12:54:07 GMT", "version": "v2" }, { "created": "Fri, 5 May 2023 08:20:43 GMT", "version": "v3" } ]
2023-05-08
[ [ "Moral", "Rafael A.", "" ], [ "Vishwakarma", "Rishabh", "" ], [ "Connolly", "John", "" ], [ "Byrne", "Laura", "" ], [ "Hurley", "Catherine", "" ], [ "Finn", "John A.", "" ], [ "Brophy", "Caroline", "" ] ]
BEF studies aim to understand how ecosystems respond to a gradient of species diversity. Diversity-Interactions (DI) models are suitable for analysing the BEF relationship. These models relate an ecosystem function response of a community to the identity of the species in the community, their evenness (proportions) and interactions. The number of species in the community (richness) is also implicitly modelled through this approach. It is common in BEF studies to model an ecosystem function as a function of richness; while this can uncover trends in the BEF relationship, by definition, species diversity is much broader than richness alone, and important patterns in the BEF relationship may remain hidden. In this paper, we introduce the DImodels R package for implementing DI models. We also compare DI models to traditional modelling approaches to highlight the advantages of using a multi-dimensional definition of species diversity. We show that using DI models can lead to considerably improved model fit over other methods; it does this by incorporating variation due to the multiple facets of species diversity. Predicting from a DI model is not limited to the study design points, the model can extrapolate to predict for any species composition and proportions (assuming there is sufficient coverage of this space in the study design). Expressing the BEF relationship as a function of richness alone can be useful to capture overall trends. However, collapsing the multiple dimensions of species diversity to a single dimension (such as richness) can result in valuable ecological information being lost. DI modelling provides a framework to test the multiple components of species diversity in the BEF relationship. It facilitates uncovering a deeper ecological understanding of the BEF relationship and can lead to enhanced inference.
2310.13723
Yaroslav Balytskyi
Yaroslav Balytskyi, Nataliia Kalashnyk, Inna Hubenko, Alina Balytska, Kelly McNear
Enhancing Open-World Bacterial Raman Spectra Identification by Feature Regularization for Improved Resilience against Unknown Classes
null
null
null
null
q-bio.QM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The combination of Deep Learning techniques and Raman spectroscopy shows great potential, offering precise and prompt identification of pathogenic bacteria in clinical settings. However, the traditional closed-set classification approaches assume that all test samples belong to one of the known pathogens, and their applicability is limited since the clinical environment is inherently unpredictable and dynamic: unknown or emerging pathogens may not be included in the available catalogs. We demonstrate that the current state-of-the-art Neural Networks identifying pathogens through Raman spectra are vulnerable to unknown inputs, resulting in an uncontrollable false positive rate. To address this issue, first, we developed a novel ensemble of ResNet architectures combined with the attention mechanism which outperforms existing closed-world methods, achieving an accuracy of $87.8 \pm 0.1\%$ compared to the best available model's accuracy of $86.7 \pm 0.4\%$. Second, through the integration of feature regularization by the Objectosphere loss function, our model both achieves high accuracy in identifying known pathogens from the catalog and effectively separates unknown samples, drastically reducing the false positive rate. Finally, the proposed feature regularization method during training significantly enhances the performance of out-of-distribution detectors during the inference phase, improving the reliability of the detection of unknown classes. Our novel algorithm for Raman spectroscopy enables the detection of unknown, uncatalogued, and emerging pathogens, providing the flexibility to adapt to future pathogens that may emerge, and has the potential to improve the reliability of Raman-based solutions in dynamic operating environments where accuracy is critical, such as public safety applications.
[ { "created": "Thu, 19 Oct 2023 17:19:47 GMT", "version": "v1" } ]
2023-10-24
[ [ "Balytskyi", "Yaroslav", "" ], [ "Kalashnyk", "Nataliia", "" ], [ "Hubenko", "Inna", "" ], [ "Balytska", "Alina", "" ], [ "McNear", "Kelly", "" ] ]
The combination of Deep Learning techniques and Raman spectroscopy shows great potential, offering precise and prompt identification of pathogenic bacteria in clinical settings. However, the traditional closed-set classification approaches assume that all test samples belong to one of the known pathogens, and their applicability is limited since the clinical environment is inherently unpredictable and dynamic: unknown or emerging pathogens may not be included in the available catalogs. We demonstrate that the current state-of-the-art Neural Networks identifying pathogens through Raman spectra are vulnerable to unknown inputs, resulting in an uncontrollable false positive rate. To address this issue, first, we developed a novel ensemble of ResNet architectures combined with the attention mechanism which outperforms existing closed-world methods, achieving an accuracy of $87.8 \pm 0.1\%$ compared to the best available model's accuracy of $86.7 \pm 0.4\%$. Second, through the integration of feature regularization by the Objectosphere loss function, our model both achieves high accuracy in identifying known pathogens from the catalog and effectively separates unknown samples, drastically reducing the false positive rate. Finally, the proposed feature regularization method during training significantly enhances the performance of out-of-distribution detectors during the inference phase, improving the reliability of the detection of unknown classes. Our novel algorithm for Raman spectroscopy enables the detection of unknown, uncatalogued, and emerging pathogens, providing the flexibility to adapt to future pathogens that may emerge, and has the potential to improve the reliability of Raman-based solutions in dynamic operating environments where accuracy is critical, such as public safety applications.
2301.13015
Fatih Gulec
Fatih Gulec and Andrew W. Eckford
A Stochastic Biofilm Disruption Model based on Quorum Sensing Mimickers
Accepted for publication in IEEE Transactions on Molecular, Biological, and Multi-Scale Communications
in IEEE Transactions on Molecular, Biological and Multi-Scale Communications, vol. 9, no. 3, pp. 346-350, Sept. 2023
10.1109/tmbmc.2023.3292321
null
q-bio.QM eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Quorum sensing (QS) mimickers can be used as an effective tool to disrupt biofilms, which consist of communicating bacteria and extracellular polymeric substances. In this paper, a stochastic biofilm disruption model based on the usage of QS mimickers is proposed. A chemical reaction network (CRN) involving four different states is employed to model the biological processes during the biofilm formation and its disruption via QS mimickers. In addition, a state-based stochastic simulation algorithm is proposed to simulate this CRN. The proposed model is validated by the in vitro experimental results of Pseudomonas aeruginosa biofilm and its disruption by rosmarinic acid as the QS mimicker. Our results show that there is an uncertainty in state transitions due to the effect of the randomness in the CRN. In addition to the QS activation threshold, the presented work demonstrates that there are two more underlying thresholds for the disruption of EPS and bacteria, which provides realistic modeling of biofilm disruption with QS mimickers.
[ { "created": "Mon, 30 Jan 2023 15:51:23 GMT", "version": "v1" }, { "created": "Tue, 27 Jun 2023 14:52:38 GMT", "version": "v2" } ]
2023-11-27
[ [ "Gulec", "Fatih", "" ], [ "Eckford", "Andrew W.", "" ] ]
Quorum sensing (QS) mimickers can be used as an effective tool to disrupt biofilms, which consist of communicating bacteria and extracellular polymeric substances (EPS). In this paper, a stochastic biofilm disruption model based on the usage of QS mimickers is proposed. A chemical reaction network (CRN) involving four different states is employed to model the biological processes during biofilm formation and its disruption via QS mimickers. In addition, a state-based stochastic simulation algorithm is proposed to simulate this CRN. The proposed model is validated by the in vitro experimental results of Pseudomonas aeruginosa biofilm and its disruption by rosmarinic acid as the QS mimicker. Our results show that there is uncertainty in state transitions due to the randomness in the CRN. In addition to the QS activation threshold, the presented work demonstrates that there are two more underlying thresholds for the disruption of EPS and bacteria, which provides realistic modeling of biofilm disruption with QS mimickers.
2011.07639
Margaret Cheung
Pengzhi Zhang, Jaebeom Han, Piotr Cieplak, Margaret. S. Cheung
Determining the atomic charge of calcium ion requires the information of its coordination geometry in an EF-hand motif
The following article has been accepted by Journal of Chemical Physics
J. Chem. Phys. 154, 124104 (2021)
10.1063/5.0037517
null
q-bio.BM q-bio.SC
http://creativecommons.org/licenses/by/4.0/
It is challenging to parameterize the force field for calcium ions (Ca2+) in calcium-binding proteins because of their unique coordination chemistry that involves the surrounding atoms required for stability. In this work, we observed wide variation in Ca2+ binding loop conformations of the Ca2+-binding protein calmodulin (CaM), which adopts the most populated ternary structures determined from the MD simulations, followed by ab initio quantum mechanical (QM) calculations on all twelve amino acids in the loop that coordinate Ca2+ in aqueous solution. Ca2+ charges were derived by fitting to the electrostatic potential (ESP) in the context of a classical or polarizable force field (PFF). We discovered that the atomic radius of Ca2+ in conventional force fields is too large for the QM calculation to capture the variation in the coordination geometry of Ca2+ in its ionic form, leading to unphysical charges. Specifically, we found that the fitted atomic charges of Ca2+ in the context of PFF depend on the coordinating geometry of electronegative atoms from the amino acids in the loop. Although nearby water molecules do not influence the atomic charge of Ca2+, they are crucial for compensating for the coordination of Ca2+ due to the conformational flexibility in the EF-hand loop. Our method advances the development of force fields for metal ions and protein binding sites in dynamic environments.
[ { "created": "Sun, 15 Nov 2020 22:28:12 GMT", "version": "v1" }, { "created": "Mon, 22 Mar 2021 13:56:36 GMT", "version": "v2" } ]
2021-08-03
[ [ "Zhang", "Pengzhi", "" ], [ "Han", "Jaebeom", "" ], [ "Cieplak", "Piotr", "" ], [ "Cheung", "Margaret. S.", "" ] ]
It is challenging to parameterize the force field for calcium ions (Ca2+) in calcium-binding proteins because of their unique coordination chemistry that involves the surrounding atoms required for stability. In this work, we observed wide variation in Ca2+ binding loop conformations of the Ca2+-binding protein calmodulin (CaM), which adopts the most populated ternary structures determined from the MD simulations, followed by ab initio quantum mechanical (QM) calculations on all twelve amino acids in the loop that coordinate Ca2+ in aqueous solution. Ca2+ charges were derived by fitting to the electrostatic potential (ESP) in the context of a classical or polarizable force field (PFF). We discovered that the atomic radius of Ca2+ in conventional force fields is too large for the QM calculation to capture the variation in the coordination geometry of Ca2+ in its ionic form, leading to unphysical charges. Specifically, we found that the fitted atomic charges of Ca2+ in the context of PFF depend on the coordinating geometry of electronegative atoms from the amino acids in the loop. Although nearby water molecules do not influence the atomic charge of Ca2+, they are crucial for compensating for the coordination of Ca2+ due to the conformational flexibility in the EF-hand loop. Our method advances the development of force fields for metal ions and protein binding sites in dynamic environments.
2004.12338
Giorgio Guzzetta
Giorgio Guzzetta, Flavia Riccardo, Valentina Marziano, Piero Poletti, Filippo Trentini, Antonino Bella, Xanthi Andrianou, Martina Del Manso, Massimo Fabiani, Stefania Bellino, Stefano Boros, Alberto Mateo Urdiales, Maria Fenicia Vescio, Andrea Piccioli, COVID-19 working group, Silvio Brusaferro, Giovanni Rezza, Patrizio Pezzotti, Marco Ajelli, Stefano Merler
The impact of a nation-wide lockdown on COVID-19 transmissibility in Italy
6 pages, 3 figures; submitted
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-sa/4.0/
On March 10, 2020, Italy imposed a national lockdown to curtail the spread of COVID-19. Here we estimate that, fourteen days after the implementation of the strategy, the net reproduction number has dropped below the epidemic threshold - estimated range 0.4-0.7. Our findings provide a timeline of the effectiveness of the implemented lockdown, which is relevant for a large number of countries that followed Italy in enforcing similar measures.
[ { "created": "Sun, 26 Apr 2020 10:04:31 GMT", "version": "v1" } ]
2020-04-28
[ [ "Guzzetta", "Giorgio", "" ], [ "Riccardo", "Flavia", "" ], [ "Marziano", "Valentina", "" ], [ "Poletti", "Piero", "" ], [ "Trentini", "Filippo", "" ], [ "Bella", "Antonino", "" ], [ "Andrianou", "Xanthi", "...
On March 10, 2020, Italy imposed a national lockdown to curtail the spread of COVID-19. Here we estimate that, fourteen days after the implementation of the strategy, the net reproduction number has dropped below the epidemic threshold - estimated range 0.4-0.7. Our findings provide a timeline of the effectiveness of the implemented lockdown, which is relevant for a large number of countries that followed Italy in enforcing similar measures.
2208.04162
Christoph Zechner
Anne-Lena Moor and Christoph Zechner
Dynamic Information Transfer in Stochastic Biochemical Networks
null
null
null
null
q-bio.MN q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
We develop numerical and analytical approaches to calculate mutual information between complete paths of two molecular components embedded into a larger reaction network. In particular, we focus on a continuous-time Markov chain formalism, frequently used to describe intracellular processes involving lowly abundant molecular species. Previously, we have shown how the path mutual information can be calculated for such systems when two molecular components interact directly with one another with no intermediate molecular components being present. In this work, we generalize this approach to biochemical networks involving an arbitrary number of molecular components. We present an efficient Monte Carlo method as well as an analytical approximation to calculate the path mutual information and show how it can be decomposed into a pair of transfer entropies that capture the causal flow of information between two network components. We apply our methodology to study information transfer in a simple three-node feedforward network, as well as a more complex positive feedback system that switches stochastically between two metastable modes.
[ { "created": "Mon, 8 Aug 2022 14:02:57 GMT", "version": "v1" } ]
2022-08-09
[ [ "Moor", "Anne-Lena", "" ], [ "Zechner", "Christoph", "" ] ]
We develop numerical and analytical approaches to calculate mutual information between complete paths of two molecular components embedded into a larger reaction network. In particular, we focus on a continuous-time Markov chain formalism, frequently used to describe intracellular processes involving lowly abundant molecular species. Previously, we have shown how the path mutual information can be calculated for such systems when two molecular components interact directly with one another with no intermediate molecular components being present. In this work, we generalize this approach to biochemical networks involving an arbitrary number of molecular components. We present an efficient Monte Carlo method as well as an analytical approximation to calculate the path mutual information and show how it can be decomposed into a pair of transfer entropies that capture the causal flow of information between two network components. We apply our methodology to study information transfer in a simple three-node feedforward network, as well as a more complex positive feedback system that switches stochastically between two metastable modes.
q-bio/0609022
Mikl\'os Cs\H{u}r\"os
Mikl\'os Cs\H{u}r\"os, Laurent No\'e and Gregory Kucherov
Reconsidering the significance of genomic word frequency
null
null
null
null
q-bio.GN
null
We propose that the distribution of DNA words in genomic sequences can be primarily characterized by a double Pareto-lognormal distribution, which explains lognormal and power-law features found across all known genomes. Such a distribution may be the result of completely random sequence evolution by duplication processes. The parametrization of genomic word frequencies allows for an assessment of significance for frequent or rare sequence motifs.
[ { "created": "Thu, 14 Sep 2006 17:18:30 GMT", "version": "v1" } ]
2007-05-23
[ [ "Csűrös", "Miklós", "" ], [ "Noé", "Laurent", "" ], [ "Kucherov", "Gregory", "" ] ]
We propose that the distribution of DNA words in genomic sequences can be primarily characterized by a double Pareto-lognormal distribution, which explains lognormal and power-law features found across all known genomes. Such a distribution may be the result of completely random sequence evolution by duplication processes. The parametrization of genomic word frequencies allows for an assessment of significance for frequent or rare sequence motifs.
2201.13406
Wensi Wu
Wensi Wu, Stephen Ching, Steve A. Maas, Andras Lasso, Patricia Sabin, Jeffrey A. Weiss, Matthew A. Jolley
A Computational Framework for Atrioventricular Valve Modeling using Open-Source Software
null
null
null
null
q-bio.TO
http://creativecommons.org/licenses/by/4.0/
Atrioventricular valve regurgitation is a significant cause of morbidity and mortality in patients with acquired and congenital cardiac valve disease. Image-derived computational modeling of atrioventricular valves has advanced substantially over the last decade and holds particular promise to inform valve repair in small and heterogeneous populations which are less likely to be optimized through empiric clinical application. While an abundance of computational biomechanics studies have investigated mitral and tricuspid valve disease in adults, few studies have investigated application to vulnerable pediatric and congenital heart populations. Further, to date, investigators have primarily relied upon a series of commercial applications that are neither designed for image-derived modeling of cardiac valves, nor freely available to facilitate transparent and reproducible valve science. To address this deficiency, we aimed to build an open-source computational framework for the image-derived biomechanical analysis of atrioventricular valves. In the present work, we integrated an open-source valve modeling platform, SlicerHeart, and an open-source biomechanics finite element modeling software, FEBio, to facilitate image-derived atrioventricular valve model creation and finite element analysis. We present a detailed verification and sensitivity analysis to demonstrate the fidelity of this modeling in application to 3D echocardiography-derived pediatric mitral and tricuspid valve models. Our analyses achieved excellent agreement with those reported in the literature. As such, this evolving computational framework offers a promising initial foundation for future development and investigation of valve mechanics, in particular collaborative efforts targeting the development of improved repairs for children with congenital heart disease.
[ { "created": "Mon, 31 Jan 2022 18:11:25 GMT", "version": "v1" } ]
2022-02-01
[ [ "Wu", "Wensi", "" ], [ "Ching", "Stephen", "" ], [ "Maas", "Steve A.", "" ], [ "Lasso", "Andras", "" ], [ "Sabin", "Patricia", "" ], [ "Weiss", "Jeffrey A.", "" ], [ "Jolley", "Matthew A.", "" ] ]
Atrioventricular valve regurgitation is a significant cause of morbidity and mortality in patients with acquired and congenital cardiac valve disease. Image-derived computational modeling of atrioventricular valves has advanced substantially over the last decade and holds particular promise to inform valve repair in small and heterogeneous populations which are less likely to be optimized through empiric clinical application. While an abundance of computational biomechanics studies have investigated mitral and tricuspid valve disease in adults, few studies have investigated application to vulnerable pediatric and congenital heart populations. Further, to date, investigators have primarily relied upon a series of commercial applications that are neither designed for image-derived modeling of cardiac valves, nor freely available to facilitate transparent and reproducible valve science. To address this deficiency, we aimed to build an open-source computational framework for the image-derived biomechanical analysis of atrioventricular valves. In the present work, we integrated an open-source valve modeling platform, SlicerHeart, and an open-source biomechanics finite element modeling software, FEBio, to facilitate image-derived atrioventricular valve model creation and finite element analysis. We present a detailed verification and sensitivity analysis to demonstrate the fidelity of this modeling in application to 3D echocardiography-derived pediatric mitral and tricuspid valve models. Our analyses achieved excellent agreement with those reported in the literature. As such, this evolving computational framework offers a promising initial foundation for future development and investigation of valve mechanics, in particular collaborative efforts targeting the development of improved repairs for children with congenital heart disease.
2102.03431
Benjamin Hollering
Joseph Cummings, Benjamin Hollering, Christopher Manon
Invariants for level-1 phylogenetic networks under the Cavendar-Farris-Neyman Model
29 pages, 6 figures
null
null
null
q-bio.PE math.AG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Phylogenetic networks can model more complicated evolutionary phenomena that trees fail to capture, such as horizontal gene transfer and hybridization. The same Markov models that are used to model evolution on trees can also be extended to networks, and similar questions, such as the identifiability of the network parameter or the invariants of the model, can be asked. In this paper we focus on finding the invariants of the Cavendar-Farris-Neyman (CFN) model on level-1 phylogenetic networks. We do this by reducing the problem to finding invariants of sunlet networks, which are level-1 networks consisting of a single cycle with leaves at each vertex. We then determine all quadratic invariants in the sunlet network ideal, which we conjecture generate the full ideal.
[ { "created": "Fri, 5 Feb 2021 22:00:44 GMT", "version": "v1" } ]
2021-02-09
[ [ "Cummings", "Joseph", "" ], [ "Hollering", "Benjamin", "" ], [ "Manon", "Christopher", "" ] ]
Phylogenetic networks can model more complicated evolutionary phenomena that trees fail to capture, such as horizontal gene transfer and hybridization. The same Markov models that are used to model evolution on trees can also be extended to networks, and similar questions, such as the identifiability of the network parameter or the invariants of the model, can be asked. In this paper we focus on finding the invariants of the Cavendar-Farris-Neyman (CFN) model on level-1 phylogenetic networks. We do this by reducing the problem to finding invariants of sunlet networks, which are level-1 networks consisting of a single cycle with leaves at each vertex. We then determine all quadratic invariants in the sunlet network ideal, which we conjecture generate the full ideal.
1902.10589
Claus Metzner
Claus Metzner
Principles of efficient chemotactic pursuit
null
null
null
null
q-bio.CB physics.bio-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In chemotaxis, cells modulate their migration patterns in response to concentration gradients of a guiding substance. Immune cells are believed to use such chemotactic sensing for remotely detecting and homing in on pathogens. Considering that an immune cell may encounter a multitude of targets with vastly different migration properties, ranging from immobile to highly mobile, it is not clear which strategies of chemotactic pursuit are simultaneously efficient and versatile. We tackle this problem theoretically and define a tunable response function that maps temporal or spatial concentration gradients to migration behavior. The seven free parameters of this response function are optimized numerically with the objective of maximizing search efficiency against a wide spectrum of target cell properties. Finally, we reverse-engineer the best-performing parameter sets to uncover the principles of efficient chemotactic pursuit under different biologically realistic boundary conditions. Remarkably, the numerical optimization rediscovers chemotactic strategies that are well-known in biological systems, such as the gradient-dependent swimming and tumbling modes of E. coli. Some of our results may also be useful for the design of chemotaxis experiments and for the development of algorithms that automatically detect and quantify goal-oriented behavior in measured immune cell trajectories.
[ { "created": "Wed, 27 Feb 2019 15:29:30 GMT", "version": "v1" } ]
2019-02-28
[ [ "Metzner", "Claus", "" ] ]
In chemotaxis, cells modulate their migration patterns in response to concentration gradients of a guiding substance. Immune cells are believed to use such chemotactic sensing for remotely detecting and homing in on pathogens. Considering that an immune cell may encounter a multitude of targets with vastly different migration properties, ranging from immobile to highly mobile, it is not clear which strategies of chemotactic pursuit are simultaneously efficient and versatile. We tackle this problem theoretically and define a tunable response function that maps temporal or spatial concentration gradients to migration behavior. The seven free parameters of this response function are optimized numerically with the objective of maximizing search efficiency against a wide spectrum of target cell properties. Finally, we reverse-engineer the best-performing parameter sets to uncover the principles of efficient chemotactic pursuit under different biologically realistic boundary conditions. Remarkably, the numerical optimization rediscovers chemotactic strategies that are well-known in biological systems, such as the gradient-dependent swimming and tumbling modes of E. coli. Some of our results may also be useful for the design of chemotaxis experiments and for the development of algorithms that automatically detect and quantify goal-oriented behavior in measured immune cell trajectories.
1602.04971
Sophie L\`ebre
Sophie Lebre and Olivier Gascuel
The combinatorics of overlapping genes
null
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Overlapping genes exist in all domains of life and are much more abundant than expected at their first discovery in the late 1970s. Assuming that the reference gene is read in frame +0, an overlapping gene can be encoded in two reading frames in the sense strand, denoted by +1 and +2, and in three reading frames in the opposite strand, denoted by -0, -1 and -2. This motivated numerous researchers to study the constraints induced by the genetic code on the various overlapping frames, mostly based on information theory. Our focus in this paper is on the constraints induced on two overlapping genes in terms of amino acids, as well as polypeptides. We show that simple linear constraints bind the amino acid composition of two proteins encoded by overlapping genes. Novel constraints are revealed when polypeptides are considered, and not just single amino acids. For example, in double-coding sequences with an overlapping reading frame -2, each Tyrosine (denoted as Tyr or Y) in the overlapping frame overlaps a Tyrosine in the reference frame +0 (and reciprocally), whereas specific words (e.g. YY) never occur. We thus distinguish between null constraints (YY = 0 in frame -2) and non-null constraints (Y in frame +0 <=> Y in frame -2). Our equivalence-based constraints are symmetrical and thus enable the characterization of the joint composition of overlapping proteins. We describe several formal frameworks and a graph algorithm to characterize and compute these constraints. These results yield support for understanding the mechanisms and evolution of overlapping genes, and for developing novel overlapping gene detection methods.
[ { "created": "Tue, 16 Feb 2016 10:18:04 GMT", "version": "v1" }, { "created": "Mon, 3 Oct 2016 15:33:49 GMT", "version": "v2" }, { "created": "Tue, 8 Nov 2016 08:45:53 GMT", "version": "v3" }, { "created": "Thu, 19 Jan 2017 09:15:19 GMT", "version": "v4" } ]
2017-01-20
[ [ "Lebre", "Sophie", "" ], [ "Gascuel", "Olivier", "" ] ]
Overlapping genes exist in all domains of life and are much more abundant than expected at their first discovery in the late 1970s. Assuming that the reference gene is read in frame +0, an overlapping gene can be encoded in two reading frames in the sense strand, denoted by +1 and +2, and in three reading frames in the opposite strand, denoted by -0, -1 and -2. This motivated numerous researchers to study the constraints induced by the genetic code on the various overlapping frames, mostly based on information theory. Our focus in this paper is on the constraints induced on two overlapping genes in terms of amino acids, as well as polypeptides. We show that simple linear constraints bind the amino acid composition of two proteins encoded by overlapping genes. Novel constraints are revealed when polypeptides are considered, and not just single amino acids. For example, in double-coding sequences with an overlapping reading frame -2, each Tyrosine (denoted as Tyr or Y) in the overlapping frame overlaps a Tyrosine in the reference frame +0 (and reciprocally), whereas specific words (e.g. YY) never occur. We thus distinguish between null constraints (YY = 0 in frame -2) and non-null constraints (Y in frame +0 <=> Y in frame -2). Our equivalence-based constraints are symmetrical and thus enable the characterization of the joint composition of overlapping proteins. We describe several formal frameworks and a graph algorithm to characterize and compute these constraints. These results yield support for understanding the mechanisms and evolution of overlapping genes, and for developing novel overlapping gene detection methods.
2011.05521
Antoine Nzeyimana
Antoine Nzeyimana, Kate EA Saunders, John R Geddes, Patrick E McSharry
Lamotrigine Therapy for Bipolar Depression: Analysis of Self-Reported Patient Data
null
JMIR mental health. 2018;5(4):e63
10.2196/mental.9026
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Background: Depression in people with bipolar disorder is a major cause of long-term disability, possibly leading to early mortality, and currently, limited safe and effective therapies exist. A double-blinded randomized placebo-controlled trial (CEQUEL study) was conducted to evaluate the efficacy of Lamotrigine plus Quetiapine versus Quetiapine monotherapy in patients with bipolar type I or type II disorders. Objective: The objective of our study was to reanalyze CEQUEL data and determine an unbiased classification accuracy for active lamotrigine versus placebo. We also wanted to establish the time it took for the drug to provide statistically significant outcomes. Methods: Between October 21, 2008 and April 27, 2012, 202 participants from 27 sites in the United Kingdom were randomly assigned to two treatments: 101 to lamotrigine and 101 to placebo. The primary variable used for estimating depressive symptoms was based on the Quick Inventory of Depressive Symptomatology-self report version 16 (QIDS-SR16). We analyzed the data using feature engineering and simple classifiers. Results: From weeks 10 to 14, the mean difference in QIDS-SR16 ratings between the groups was -1.6317 (P=.09; sample size=81, 77; 95% CI -0.2403 to 3.5036). From weeks 48 to 52, the mean difference was -2.0032 (P=.09; sample size=54, 48; 95% CI -0.3433 to 4.3497). The coefficient of variation and detrended fluctuation analysis (DFA) exponent alpha had the greatest explanatory power. The out-of-sample classification accuracy for the 138 participants who reported more than 10 times after week 12 was 62%. A consistent classification accuracy higher than the no-information benchmark was obtained in week 44. Conclusions: Lamotrigine plus Quetiapine treatment decreased depressive symptoms in patients, but with substantial temporal instability. A trial of at least 44 weeks was required to achieve consistent results.
[ { "created": "Wed, 11 Nov 2020 02:52:40 GMT", "version": "v1" } ]
2020-11-12
[ [ "Nzeyimana", "Antoine", "" ], [ "Saunders", "Kate EA", "" ], [ "Geddes", "John R", "" ], [ "McSharry", "Patrick E", "" ] ]
Background: Depression in people with bipolar disorder is a major cause of long-term disability, possibly leading to early mortality, and currently, limited safe and effective therapies exist. A double-blinded randomized placebo-controlled trial (CEQUEL study) was conducted to evaluate the efficacy of Lamotrigine plus Quetiapine versus Quetiapine monotherapy in patients with bipolar type I or type II disorders. Objective: The objective of our study was to reanalyze CEQUEL data and determine an unbiased classification accuracy for active lamotrigine versus placebo. We also wanted to establish the time it took for the drug to provide statistically significant outcomes. Methods: Between October 21, 2008 and April 27, 2012, 202 participants from 27 sites in the United Kingdom were randomly assigned to two treatments: 101 to lamotrigine and 101 to placebo. The primary variable used for estimating depressive symptoms was based on the Quick Inventory of Depressive Symptomatology-self report version 16 (QIDS-SR16). We analyzed the data using feature engineering and simple classifiers. Results: From weeks 10 to 14, the mean difference in QIDS-SR16 ratings between the groups was -1.6317 (P=.09; sample size=81, 77; 95% CI -0.2403 to 3.5036). From weeks 48 to 52, the mean difference was -2.0032 (P=.09; sample size=54, 48; 95% CI -0.3433 to 4.3497). The coefficient of variation and detrended fluctuation analysis (DFA) exponent alpha had the greatest explanatory power. The out-of-sample classification accuracy for the 138 participants who reported more than 10 times after week 12 was 62%. A consistent classification accuracy higher than the no-information benchmark was obtained in week 44. Conclusions: Lamotrigine plus Quetiapine treatment decreased depressive symptoms in patients, but with substantial temporal instability. A trial of at least 44 weeks was required to achieve consistent results.
1107.2504
Thierry Huillet
Thierry Huillet (LPTM)
A branching diffusion model of selection: from the neutral Wright-Fisher case to the one including mutations
To appear in: Intern. Math. Forum
null
null
null
q-bio.QM cond-mat.stat-mech math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider diffusion processes x_{t} on the unit interval. Doob-transformation techniques consist of a selection of x_{t}-paths procedure. The law of the transformed process is the one of a branching diffusion system of particles, each diffusing like a new process tilde{x}_{t}, superposing an additional drift to the one of x_{t}. Killing and/or branching of tilde{x}_{t}-particles occur at some space-dependent rate lambda. For this transformed process, so in the class of branching diffusions, the question arises as to whether the particle system is sub-critical, critical or super-critical. In the first two cases, extinction occurs with probability one. We apply this circle of ideas to diffusion processes arising in population genetics. In this setup, the process x_{t} is a Wright-Fisher (WF) diffusion, either neutral or with mutations. We study a particular Doob transform which is based on the exponential function in the usual fitness parameter sigma. We have in mind that this is an alternative way to introduce selection or fitness in both WF-like diffusions, leading to branching diffusion models ideas. For this Doob-transform model of fitness, the usual selection drift sigma x(1-x) should be superposed to the one of x_{t} to form tilde{x}_{t} which is the process that can branch, binarily. In the first neutral case, there is a trade-off between branching events giving birth to new particles and absorption at the boundaries, killing the particles. Under our assumptions, the branching diffusion process gets eventually globally extinct in finite time with exponential tails. In the second case with mutations, there is a trade-off between killing events removing some particles from the system and reflection at the boundaries where the particles survive. This branching diffusion process also gets eventually globally extinct but in very long finite time with power-law tails. Our approach relies on the spectral expansion of the transition probability kernels of both x_{t} and tilde{x}_{t}.
[ { "created": "Wed, 13 Jul 2011 09:49:52 GMT", "version": "v1" } ]
2011-07-15
[ [ "Huillet", "Thierry", "", "LPTM" ] ]
We consider diffusion processes x_{t} on the unit interval. Doob-transformation techniques consist of a selection of x_{t}-paths procedure. The law of the transformed process is the one of a branching diffusion system of particles, each diffusing like a new process tilde{x}_{t}, superposing an additional drift to the one of x_{t}. Killing and/or branching of tilde{x}_{t}-particles occur at some space-dependent rate lambda. For this transformed process, so in the class of branching diffusions, the question arises as to whether the particle system is sub-critical, critical or super-critical. In the first two cases, extinction occurs with probability one. We apply this circle of ideas to diffusion processes arising in population genetics. In this setup, the process x_{t} is a Wright-Fisher (WF) diffusion, either neutral or with mutations. We study a particular Doob transform which is based on the exponential function in the usual fitness parameter sigma. We have in mind that this is an alternative way to introduce selection or fitness in both WF-like diffusions, leading to branching diffusion models ideas. For this Doob-transform model of fitness, the usual selection drift sigma x(1-x) should be superposed to the one of x_{t} to form tilde{x}_{t} which is the process that can branch, binarily. In the first neutral case, there is a trade-off between branching events giving birth to new particles and absorption at the boundaries, killing the particles. Under our assumptions, the branching diffusion process gets eventually globally extinct in finite time with exponential tails. In the second case with mutations, there is a trade-off between killing events removing some particles from the system and reflection at the boundaries where the particles survive. This branching diffusion process also gets eventually globally extinct but in very long finite time with power-law tails. Our approach relies on the spectral expansion of the transition probability kernels of both x_{t} and tilde{x}_{t}.
1706.00247
Hossam Haick
Inbar Nardi Agmon, Manal Abud, Ori Liran, Naomi Gai-Mo, Maya Ilouze, Amir Onn, Jair Bar, Rossie Navon, Dekel Shlomi, Hossam Haick and Nir Peled
Exhaled Breath Analysis for Monitoring Response to Treatment in Advanced Lung Cancer
null
null
10.1016/j.jtho.2016.02.017
null
q-bio.TO physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
INTRODUCTION: The Response Evaluation Criteria in Solid Tumors (RECIST) serve as the accepted standard to monitor treatment efficacy in lung cancer. However, the time intervals between consecutive computerized tomography scans might be too long to allow early identification of treatment failure. This study examines the use of breath sampling to monitor responses to anticancer treatments in patients with advanced lung cancer. METHODS: A total of 143 breath samples were collected from 39 patients with advanced lung cancer. The exhaled breath signature, determined by gas chromatography/mass spectrometry and a nanomaterial-based array of sensors, was correlated with the response to therapy assessed by RECIST: complete response, partial response, stable disease, or progressive disease. RESULTS: Gas chromatography/mass spectrometry analysis identified three volatile organic compounds as significantly indicating disease control (PR/stable disease), with one of them also significantly discriminating PR/stable disease from progressive disease. The nanoarray had the ability to monitor changes in tumor response across therapy, also indicating any lack of further response to therapy. When one-sensor analysis was used, 59% of the follow-up samples were identified correctly. There was 85% success in monitoring disease control (stable disease/partial response). CONCLUSION: Breath analysis, using mainly the nanoarray, may serve as a surrogate marker for the response to systemic therapy in lung cancer. As a monitoring tool, it can provide the oncologist with a quick bedside method of identifying a lack of response to an anticancer treatment. This may allow quicker recognition than does the current RECIST analysis. Early recognition of treatment failure could improve patient care.
[ { "created": "Thu, 1 Jun 2017 10:35:03 GMT", "version": "v1" } ]
2017-06-02
[ [ "Agmon", "Inbar Nardi", "" ], [ "Abud", "Manal", "" ], [ "Liran", "Ori", "" ], [ "Gai-Mo", "Naomi", "" ], [ "Ilouze", "Maya", "" ], [ "Onn", "Amir", "" ], [ "Bar", "Jair", "" ], [ "Navon", "Ross...
INTRODUCTION: The Response Evaluation Criteria in Solid Tumors (RECIST) serve as the accepted standard to monitor treatment efficacy in lung cancer. However, the time intervals between consecutive computerized tomography scans might be too long to allow early identification of treatment failure. This study examines the use of breath sampling to monitor responses to anticancer treatments in patients with advanced lung cancer. METHODS: A total of 143 breath samples were collected from 39 patients with advanced lung cancer. The exhaled breath signature, determined by gas chromatography/mass spectrometry and a nanomaterial-based array of sensors, was correlated with the response to therapy assessed by RECIST: complete response, partial response, stable disease, or progressive disease. RESULTS: Gas chromatography/mass spectrometry analysis identified three volatile organic compounds as significantly indicating disease control (PR/stable disease), with one of them also significantly discriminating PR/stable disease from progressive disease. The nanoarray had the ability to monitor changes in tumor response across therapy, also indicating any lack of further response to therapy. When one-sensor analysis was used, 59% of the follow-up samples were identified correctly. There was 85% success in monitoring disease control (stable disease/partial response). CONCLUSION: Breath analysis, using mainly the nanoarray, may serve as a surrogate marker for the response to systemic therapy in lung cancer. As a monitoring tool, it can provide the oncologist with a quick bedside method of identifying a lack of response to an anticancer treatment. This may allow quicker recognition than does the current RECIST analysis. Early recognition of treatment failure could improve patient care.
2111.09981
Tom R\"oschinger
Tom R\"oschinger, Roberto Mor\'an Tovar, Simone Pompei, Michael L\"assig
Adaptive ratchets and the evolution of molecular complexity
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-nd/4.0/
Biological systems have evolved to amazingly complex states, yet we do not understand in general how evolution operates to generate increasing genetic and functional complexity. Molecular recognition sites are short genome segments or peptides binding a cognate recognition target of sufficient sequence similarity. Such sites are simple, ubiquitous modules of sequence information, cellular function, and evolution. Here we show that recognition sites, if coupled to a time-dependent target, can rapidly evolve to complex states with larger code length and smaller coding density than sites recognising a static target. The underlying fitness model contains selection for recognition, which depends on the sequence similarity between site and target, and a uniform cost per unit of code length. Site sequences are shown to evolve in a specific adaptive ratchet, which produces selection of different strength for code extensions and compressions. Ratchet evolution increases the adaptive width of evolved sites, accelerating the adaptation to moving targets and facilitating refinement and innovation of recognition functions. We apply these results to the recognition of fast-evolving antigens by the human immune system. Our analysis shows how molecular complexity can evolve as a collateral to selection for function in a dynamic environment.
[ { "created": "Thu, 18 Nov 2021 23:51:14 GMT", "version": "v1" } ]
2021-11-22
[ [ "Röschinger", "Tom", "" ], [ "Tovar", "Roberto Morán", "" ], [ "Pompei", "Simone", "" ], [ "Lässig", "Michael", "" ] ]
Biological systems have evolved to amazingly complex states, yet we do not understand in general how evolution operates to generate increasing genetic and functional complexity. Molecular recognition sites are short genome segments or peptides binding a cognate recognition target of sufficient sequence similarity. Such sites are simple, ubiquitous modules of sequence information, cellular function, and evolution. Here we show that recognition sites, if coupled to a time-dependent target, can rapidly evolve to complex states with larger code length and smaller coding density than sites recognising a static target. The underlying fitness model contains selection for recognition, which depends on the sequence similarity between site and target, and a uniform cost per unit of code length. Site sequences are shown to evolve in a specific adaptive ratchet, which produces selection of different strength for code extensions and compressions. Ratchet evolution increases the adaptive width of evolved sites, accelerating the adaptation to moving targets and facilitating refinement and innovation of recognition functions. We apply these results to the recognition of fast-evolving antigens by the human immune system. Our analysis shows how molecular complexity can evolve as a collateral to selection for function in a dynamic environment.
1208.3407
Aaron Quinlan Ph.D.
Ryan M. Layer, Kevin Skadron, Gabriel Robins, Ira M. Hall, and Aaron R. Quinlan
Binary Interval Search (BITS): A Scalable Algorithm for Counting Interval Intersections
null
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: The comparison of diverse genomic datasets is fundamental to understanding genome biology. Researchers must explore many large datasets of genome intervals (e.g., genes, sequence alignments) to place their experimental results in a broader context and to make new discoveries. Relationships between genomic datasets are typically measured by identifying intervals that intersect: that is, they overlap and thus share a common genome interval. Given the continued advances in DNA sequencing technologies, efficient methods for measuring statistically significant relationships between many sets of genomic features is crucial for future discovery. Results: We introduce the Binary Interval Search (BITS) algorithm, a novel and scalable approach to interval set intersection. We demonstrate that BITS outperforms existing methods at counting interval intersections. Moreover, we show that BITS is intrinsically suited to parallel computing architectures such as Graphics Processing Units (GPUs) by illustrating its utility for efficient Monte-Carlo simulations measuring the significance of relationships between sets of genomic intervals.
[ { "created": "Thu, 16 Aug 2012 16:12:48 GMT", "version": "v1" }, { "created": "Fri, 17 Aug 2012 12:31:24 GMT", "version": "v2" } ]
2012-08-20
[ [ "Layer", "Ryan M.", "" ], [ "Skadron", "Kevin", "" ], [ "Robins", "Gabriel", "" ], [ "Hall", "Ira M.", "" ], [ "Quinlan", "Aaron R.", "" ] ]
Motivation: The comparison of diverse genomic datasets is fundamental to understanding genome biology. Researchers must explore many large datasets of genome intervals (e.g., genes, sequence alignments) to place their experimental results in a broader context and to make new discoveries. Relationships between genomic datasets are typically measured by identifying intervals that intersect: that is, they overlap and thus share a common genome interval. Given the continued advances in DNA sequencing technologies, efficient methods for measuring statistically significant relationships between many sets of genomic features is crucial for future discovery. Results: We introduce the Binary Interval Search (BITS) algorithm, a novel and scalable approach to interval set intersection. We demonstrate that BITS outperforms existing methods at counting interval intersections. Moreover, we show that BITS is intrinsically suited to parallel computing architectures such as Graphics Processing Units (GPUs) by illustrating its utility for efficient Monte-Carlo simulations measuring the significance of relationships between sets of genomic intervals.
1412.1597
Iain Johnston
Iain G. Johnston, Benjamin C. Rickett, Nick S. Jones
Explicit tracking of uncertainty increases the power of quantitative rule-of-thumb reasoning in cell biology
8 pages, 3 figures
Biophys. J. 107 2612 (2014)
10.1016/j.bpj.2014.08.040
null
q-bio.QM stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
"Back-of-the-envelope" or "rule-of-thumb" calculations involving rough estimates of quantities play a central scientific role in developing intuition about the structure and behaviour of physical systems, for example in so-called `Fermi problems' in the physical sciences. Such calculations can be used to powerfully and quantitatively reason about biological systems, particularly at the interface between physics and biology. However, substantial uncertainties are often associated with values in cell biology, and performing calculations without taking this uncertainty into account may limit the extent to which results can be interpreted for a given problem. We present a means to facilitate such calculations where uncertainties are explicitly tracked through the line of reasoning, and introduce a `probabilistic calculator' called Caladis, a web tool freely available at www.caladis.org, designed to perform this tracking. This approach allows users to perform more statistically robust calculations in cell biology despite having uncertain values, and to identify which quantities need to be measured more precisely in order to make confident statements, facilitating efficient experimental design. We illustrate the use of our tool for tracking uncertainty in several example biological calculations, showing that the results yield powerful and interpretable statistics on the quantities of interest. We also demonstrate that the outcomes of calculations may differ from point estimates when uncertainty is accurately tracked. An integral link between Caladis and the Bionumbers repository of biological quantities further facilitates the straightforward location, selection, and use of a wealth of experimental data in cell biological calculations.
[ { "created": "Thu, 4 Dec 2014 09:29:48 GMT", "version": "v1" } ]
2014-12-05
[ [ "Johnston", "Iain G.", "" ], [ "Rickett", "Benjamin C.", "" ], [ "Jones", "Nick S.", "" ] ]
"Back-of-the-envelope" or "rule-of-thumb" calculations involving rough estimates of quantities play a central scientific role in developing intuition about the structure and behaviour of physical systems, for example in so-called `Fermi problems' in the physical sciences. Such calculations can be used to powerfully and quantitatively reason about biological systems, particularly at the interface between physics and biology. However, substantial uncertainties are often associated with values in cell biology, and performing calculations without taking this uncertainty into account may limit the extent to which results can be interpreted for a given problem. We present a means to facilitate such calculations where uncertainties are explicitly tracked through the line of reasoning, and introduce a `probabilistic calculator' called Caladis, a web tool freely available at www.caladis.org, designed to perform this tracking. This approach allows users to perform more statistically robust calculations in cell biology despite having uncertain values, and to identify which quantities need to be measured more precisely in order to make confident statements, facilitating efficient experimental design. We illustrate the use of our tool for tracking uncertainty in several example biological calculations, showing that the results yield powerful and interpretable statistics on the quantities of interest. We also demonstrate that the outcomes of calculations may differ from point estimates when uncertainty is accurately tracked. An integral link between Caladis and the Bionumbers repository of biological quantities further facilitates the straightforward location, selection, and use of a wealth of experimental data in cell biological calculations.
q-bio/0610027
Zhihui Wang
Thomas S. Deisboeck and Zhihui Wang
Cancer Dissemination: A Consequence of limited Carrying Capacity?
10 pages
null
null
null
q-bio.TO
null
Assuming that there is feedback between an expanding cancer system and its organ-typical microenvironment, we argue here that such local tumor growth is guided by co-existence rather than competition with the surrounding tissue. We then present a novel concept that understands cancer dissemination as a biological mechanism to evade the specific carrying capacity limit of its host organ. This conceptual framework allows us to relate the tumor system's volumetric growth rate to the host organ's functionality-conveying composite infrastructure, and, intriguingly, already provides useful insights into several clinical findings.
[ { "created": "Mon, 16 Oct 2006 16:32:11 GMT", "version": "v1" }, { "created": "Tue, 31 Oct 2006 15:10:08 GMT", "version": "v2" } ]
2007-05-23
[ [ "Deisboeck", "Thomas S.", "" ], [ "Wang", "Zhihui", "" ] ]
Assuming that there is feedback between an expanding cancer system and its organ-typical microenvironment, we argue here that such local tumor growth is guided by co-existence rather than competition with the surrounding tissue. We then present a novel concept that understands cancer dissemination as a biological mechanism to evade the specific carrying capacity limit of its host organ. This conceptual framework allows us to relate the tumor system's volumetric growth rate to the host organ's functionality-conveying composite infrastructure, and, intriguingly, already provides useful insights into several clinical findings.
q-bio/0702058
Gregory Batt
Gr\'egory Batt (INRIA Rh\^one-Alpes), Delphine Ropers (INRIA Rh\^one-Alpes), Hidde De Jong (INRIA Rh\^one-Alpes), Michel Page (INRIA Rh\^one-Alpes), Johannes Geiselmann
Symbolic Reachability Analysis of Genetic Regulatory Networks using Qualitative Abstractions
null
null
null
null
q-bio.QM
null
The switch-like character of gene regulation has motivated the use of hybrid, discrete-continuous models of genetic regulatory networks. While powerful techniques for the analysis, verification, and control of hybrid systems have been developed, the specificities of the biological application domain pose a number of challenges, notably the absence of quantitative information on parameter values and the size and complexity of networks of biological interest. We introduce a method for the analysis of reachability properties of genetic regulatory networks that is based on a class of discontinuous piecewise-affine (PA) differential equations well-adapted to the above constraints. More specifically, we introduce a hyperrectangular partition of the state space that forms the basis for a discrete abstraction preserving the sign of the derivatives of the state variables. The resulting discrete transition system provides a conservative approximation of the qualitative dynamics of the network and can be efficiently computed in a symbolic manner from inequality constraints on the parameters. The method has been implemented in the computer tool Genetic Network Analyzer (GNA), which has been applied to the analysis of a regulatory system whose functioning is not well-understood by biologists, the nutritional stress response in the bacterium Escherichia coli.
[ { "created": "Wed, 28 Feb 2007 13:39:13 GMT", "version": "v1" } ]
2016-08-14
[ [ "Batt", "Grégory", "", "INRIA Rhône-Alpes" ], [ "Ropers", "Delphine", "", "INRIA\n Rhône-Alpes" ], [ "De Jong", "Hidde", "", "INRIA Rhône-Alpes" ], [ "Page", "Michel", "", "INRIA\n Rhône-Alpes" ], [ "Geiselmann", "Johannes",...
The switch-like character of gene regulation has motivated the use of hybrid, discrete-continuous models of genetic regulatory networks. While powerful techniques for the analysis, verification, and control of hybrid systems have been developed, the specificities of the biological application domain pose a number of challenges, notably the absence of quantitative information on parameter values and the size and complexity of networks of biological interest. We introduce a method for the analysis of reachability properties of genetic regulatory networks that is based on a class of discontinuous piecewise-affine (PA) differential equations well-adapted to the above constraints. More specifically, we introduce a hyperrectangular partition of the state space that forms the basis for a discrete abstraction preserving the sign of the derivatives of the state variables. The resulting discrete transition system provides a conservative approximation of the qualitative dynamics of the network and can be efficiently computed in a symbolic manner from inequality constraints on the parameters. The method has been implemented in the computer tool Genetic Network Analyzer (GNA), which has been applied to the analysis of a regulatory system whose functioning is not well-understood by biologists, the nutritional stress response in the bacterium Escherichia coli.
2206.01092
Cameron Mura
Nikita Sivakumar, Cameron Mura, Shayn M. Peirce
Innovations in Integrating Machine Learning and Agent-Based Modeling of Biomedical Systems
32 pages, 1 table, 8 figures
null
10.3389/fsysb.2022.959665
null
q-bio.QM cs.LG cs.MA q-bio.CB
http://creativecommons.org/licenses/by-sa/4.0/
Agent-based modeling (ABM) is a well-established paradigm for simulating complex systems via interactions between constituent entities. Machine learning (ML) refers to approaches whereby statistical algorithms 'learn' from data on their own, without imposing a priori theories of system behavior. Biological systems -- from molecules, to cells, to entire organisms -- consist of vast numbers of entities, governed by complex webs of interactions that span many spatiotemporal scales and exhibit nonlinearity, stochasticity and intricate coupling between entities. The macroscopic properties and collective dynamics of such systems are difficult to capture via continuum modelling and mean-field formalisms. ABM takes a 'bottom-up' approach that obviates these difficulties by enabling one to easily propose and test a set of well-defined 'rules' to be applied to the individual entities (agents) in a system. Evaluating a system and propagating its state over discrete time-steps effectively simulates the system, allowing observables to be computed and system properties to be analyzed. Because the rules that govern an ABM can be difficult to abstract and formulate from experimental data, there is an opportunity to use ML to help infer optimal, system-specific ABM rules. Once such rule-sets are devised, ABM calculations can generate a wealth of data, and ML can be applied there too -- e.g., to probe statistical measures that meaningfully describe a system's stochastic properties. As an example of synergy in the other direction (from ABM to ML), ABM simulations can generate realistic datasets for training ML algorithms (e.g., for regularization, to mitigate overfitting). In these ways, one can envision various synergistic ABM$\rightleftharpoons$ML loops. This review summarizes how ABM and ML have been integrated in contexts that span spatiotemporal scales, from cellular to population-level epidemiology.
[ { "created": "Thu, 2 Jun 2022 15:19:09 GMT", "version": "v1" }, { "created": "Wed, 9 Nov 2022 18:36:48 GMT", "version": "v2" } ]
2022-11-10
[ [ "Sivakumar", "Nikita", "" ], [ "Mura", "Cameron", "" ], [ "Peirce", "Shayn M.", "" ] ]
Agent-based modeling (ABM) is a well-established paradigm for simulating complex systems via interactions between constituent entities. Machine learning (ML) refers to approaches whereby statistical algorithms 'learn' from data on their own, without imposing a priori theories of system behavior. Biological systems -- from molecules, to cells, to entire organisms -- consist of vast numbers of entities, governed by complex webs of interactions that span many spatiotemporal scales and exhibit nonlinearity, stochasticity and intricate coupling between entities. The macroscopic properties and collective dynamics of such systems are difficult to capture via continuum modelling and mean-field formalisms. ABM takes a 'bottom-up' approach that obviates these difficulties by enabling one to easily propose and test a set of well-defined 'rules' to be applied to the individual entities (agents) in a system. Evaluating a system and propagating its state over discrete time-steps effectively simulates the system, allowing observables to be computed and system properties to be analyzed. Because the rules that govern an ABM can be difficult to abstract and formulate from experimental data, there is an opportunity to use ML to help infer optimal, system-specific ABM rules. Once such rule-sets are devised, ABM calculations can generate a wealth of data, and ML can be applied there too -- e.g., to probe statistical measures that meaningfully describe a system's stochastic properties. As an example of synergy in the other direction (from ABM to ML), ABM simulations can generate realistic datasets for training ML algorithms (e.g., for regularization, to mitigate overfitting). In these ways, one can envision various synergistic ABM$\rightleftharpoons$ML loops. This review summarizes how ABM and ML have been integrated in contexts that span spatiotemporal scales, from cellular to population-level epidemiology.
1507.06433
Magali San Cristobal
Maria-Ines Fariello, Simon Boitard, Sabine Mercier, David Robelin, Thomas Faraut, C\'ecile Arnould, Julien Recoquillay, Olivier Bouchez, G\'erald Salin, Patrice Dehais, David Gourichon, Sophie Leroux, Fr\'ed\'erique Pitel, Christine Leterrier, Magali San Cristobal
A New Local Score Based Method Applied to Behavior-divergent Quail Lines Sequenced in Pools Precisely Detects Selection Signatures on Genes Related to Autism
32 pages, 4 figures
null
null
null
q-bio.PE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Detecting genomic footprints of selection is an important step in the understanding of evolution. Accounting for linkage disequilibrium in genome scans allows increasing the detection power, but haplotype-based methods require individual genotypes and are not applicable on pool-sequenced samples. We propose to take advantage of the local score approach to account for linkage disequilibrium, accumulating (possibly small) signals from single markers over a genomic segment, to clearly pinpoint a selection signal, avoiding windowing methods. This method provided results similar to haplotype-based methods on two benchmark data sets with individual genotypes. Results obtained for a divergent selection experiment on behavior in quail, where two lines were sequenced in pools, are precise and biologically coherent, while competing methods failed: our approach led to the detection of signals involving genes known to act on social responsiveness or autistic traits. This local score approach is general and can be applied to other genome-wide analyzes such as GWAS or genome scans for selection.
[ { "created": "Thu, 23 Jul 2015 10:14:35 GMT", "version": "v1" } ]
2015-07-24
[ [ "Fariello", "Maria-Ines", "" ], [ "Boitard", "Simon", "" ], [ "Mercier", "Sabine", "" ], [ "Robelin", "David", "" ], [ "Faraut", "Thomas", "" ], [ "Arnould", "Cécile", "" ], [ "Recoquillay", "Julien", "" ...
Detecting genomic footprints of selection is an important step in the understanding of evolution. Accounting for linkage disequilibrium in genome scans allows increasing the detection power, but haplotype-based methods require individual genotypes and are not applicable on pool-sequenced samples. We propose to take advantage of the local score approach to account for linkage disequilibrium, accumulating (possibly small) signals from single markers over a genomic segment, to clearly pinpoint a selection signal, avoiding windowing methods. This method provided results similar to haplotype-based methods on two benchmark data sets with individual genotypes. Results obtained for a divergent selection experiment on behavior in quail, where two lines were sequenced in pools, are precise and biologically coherent, while competing methods failed: our approach led to the detection of signals involving genes known to act on social responsiveness or autistic traits. This local score approach is general and can be applied to other genome-wide analyzes such as GWAS or genome scans for selection.
2203.00743
Qianqian Song
Minghan Chen, Chunrui Xu, Ziang Xu, Wei He, Haorui Zhang, Jing Su, and Qianqian Song
Uncovering the dynamic effects of DEX treatment on lung cancer by integrating bioinformatic inference and multiscale modeling of scRNA-seq and proteomics data
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: Lung cancer is one of the leading causes for cancer-related death, with a five-year survival rate of 18%. It is a priority for us to understand the underlying mechanisms that affect the implementation and effectiveness of lung cancer therapeutics. In this study, we combine the power of Bioinformatics and Systems Biology to comprehensively uncover functional and signaling pathways of drug treatment using bioinformatics inference and multiscale modeling of both scRNA-seq data and proteomics data. The innovative and cross-disciplinary approach can be further applied to other computational studies in tumorigenesis and oncotherapy. Results: A time series of lung adenocarcinoma-derived A549 cells after DEX treatment were analysed. (1) We first discovered the differentially expressed genes in those lung cancer cells. Then through the interrogation of their regulatory network, we identified key hub genes including TGF-$\beta$, MYC, and SMAD3 varied underlie DEX treatment. Further enrichment analysis revealed the TGF-$\beta$ signaling pathway as the top enriched term. Those genes involved in the TGF-$\beta$ pathway and their crosstalk with the ERBB pathway presented a strong survival prognosis in clinical lung cancer samples. (2) Based on biological validation and further curation, a multiscale model of tumor regulation centered on both TGF-$\beta$-induced and ERBB-amplified signaling pathways was developed to characterize the dynamic effects of DEX therapy on lung cancer cells. Our simulation results were well matched to available data of SMAD2, FOXO3, TGF$\beta$1, and TGF$\beta$R1 over the time course. Moreover, we provided predictions of different doses to illustrate the trend and therapeutic potential of DEX treatment.
[ { "created": "Tue, 1 Mar 2022 21:00:46 GMT", "version": "v1" } ]
2022-03-03
[ [ "Chen", "Minghan", "" ], [ "Xu", "Chunrui", "" ], [ "Xu", "Ziang", "" ], [ "He", "Wei", "" ], [ "Zhang", "Haorui", "" ], [ "Su", "Jing", "" ], [ "Song", "Qianqian", "" ] ]
Motivation: Lung cancer is one of the leading causes for cancer-related death, with a five-year survival rate of 18%. It is a priority for us to understand the underlying mechanisms that affect the implementation and effectiveness of lung cancer therapeutics. In this study, we combine the power of Bioinformatics and Systems Biology to comprehensively uncover functional and signaling pathways of drug treatment using bioinformatics inference and multiscale modeling of both scRNA-seq data and proteomics data. The innovative and cross-disciplinary approach can be further applied to other computational studies in tumorigenesis and oncotherapy. Results: A time series of lung adenocarcinoma-derived A549 cells after DEX treatment were analysed. (1) We first discovered the differentially expressed genes in those lung cancer cells. Then through the interrogation of their regulatory network, we identified key hub genes including TGF-$\beta$, MYC, and SMAD3 varied underlie DEX treatment. Further enrichment analysis revealed the TGF-$\beta$ signaling pathway as the top enriched term. Those genes involved in the TGF-$\beta$ pathway and their crosstalk with the ERBB pathway presented a strong survival prognosis in clinical lung cancer samples. (2) Based on biological validation and further curation, a multiscale model of tumor regulation centered on both TGF-$\beta$-induced and ERBB-amplified signaling pathways was developed to characterize the dynamic effects of DEX therapy on lung cancer cells. Our simulation results were well matched to available data of SMAD2, FOXO3, TGF$\beta$1, and TGF$\beta$R1 over the time course. Moreover, we provided predictions of different doses to illustrate the trend and therapeutic potential of DEX treatment.
2407.07226
Hue Sun Chan
Tanmoy Pal, Jonas Wess\'en, Suman Das, and Hue Sun Chan
Differential Effects of Sequence-Local versus Nonlocal Charge Patterns on Phase Separation and Conformational Dimensions of Polyampholytes as Model Intrinsically Disordered Proteins
56 pages, 4 main-text figures, Supporting Information (containing supporting text, 1 supporting table, and 9 supporting figures), Table-of-Contents graphics, and 94 references. Accepted for publication The Journal of Physical Chemistry Letters
The Journal of Physical Chemistry Letters 15:8248-8256 (2024)
10.1021/acs.jpclett.4c01973
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Conformational properties of intrinsically disordered proteins (IDPs) are governed by a sequence-ensemble relationship. To differentiate the impact of sequence-local versus sequence-nonlocal features of an IDP's charge pattern on its conformational dimensions and its phase-separation propensity, the charge "blockiness'' $\kappa$ and the nonlocality-weighted sequence charge decoration (SCD) parameters are compared for their correlations with isolated-chain radii of gyration ($R_{\rm g}$s) and upper critical solution temperatures (UCSTs) of polyampholytes modeled by random phase approximation, field-theoretic simulation, and coarse-grained molecular dynamics. SCD is superior to $\kappa$ in predicting $R_{\rm g}$ because SCD accounts for effects of contact order, i.e., nonlocality, on dimensions of isolated chains. In contrast, $\kappa$ and SCD are comparably good, though nonideal, predictors of UCST because frequencies of interchain contacts in the multiple-chain condensed phase are less sensitive to sequence positions than frequencies of intrachain contacts of an isolated chain, as reflected by $\kappa$ correlating better with condensed-phase interaction energy than SCD.
[ { "created": "Tue, 9 Jul 2024 20:46:49 GMT", "version": "v1" }, { "created": "Fri, 26 Jul 2024 21:56:53 GMT", "version": "v2" } ]
2024-08-13
[ [ "Pal", "Tanmoy", "" ], [ "Wessén", "Jonas", "" ], [ "Das", "Suman", "" ], [ "Chan", "Hue Sun", "" ] ]
Conformational properties of intrinsically disordered proteins (IDPs) are governed by a sequence-ensemble relationship. To differentiate the impact of sequence-local versus sequence-nonlocal features of an IDP's charge pattern on its conformational dimensions and its phase-separation propensity, the charge "blockiness" $\kappa$ and the nonlocality-weighted sequence charge decoration (SCD) parameters are compared for their correlations with isolated-chain radii of gyration ($R_{\rm g}$s) and upper critical solution temperatures (UCSTs) of polyampholytes modeled by random phase approximation, field-theoretic simulation, and coarse-grained molecular dynamics. SCD is superior to $\kappa$ in predicting $R_{\rm g}$ because SCD accounts for effects of contact order, i.e., nonlocality, on dimensions of isolated chains. In contrast, $\kappa$ and SCD are comparably good, though nonideal, predictors of UCST because frequencies of interchain contacts in the multiple-chain condensed phase are less sensitive to sequence positions than frequencies of intrachain contacts of an isolated chain, as reflected by $\kappa$ correlating better with condensed-phase interaction energy than SCD.
2402.16390
Francesco Sannino
Baptiste Filoche, Stefan Hohenegger and Francesco Sannino
Information Theory Unification of Epidemiological and Population Dynamics
33 pages, 17 figures
null
null
null
q-bio.PE hep-th stat.AP
http://creativecommons.org/licenses/by/4.0/
We reformulate models in epidemiology and population dynamics in terms of probability distributions. This allows us to construct the Fisher information, which we interpret as the metric of a one-dimensional differentiable manifold. For systems that can be effectively described by a single degree of freedom, we show that their time evolution is fully captured by this metric. In this way, we discover universal features across seemingly very different models. This further motivates a reorganisation of the dynamics around zeroes of the Fisher metric, corresponding to extrema of the probability distribution. Concretely, we propose a simple form of the metric for which we can analytically solve the dynamics of the system that well approximates the time evolution of various established models in epidemiology and population dynamics, thus providing a unifying framework.
[ { "created": "Mon, 26 Feb 2024 08:28:51 GMT", "version": "v1" } ]
2024-02-27
[ [ "Filoche", "Baptiste", "" ], [ "Hohenegger", "Stefan", "" ], [ "Sannino", "Francesco", "" ] ]
We reformulate models in epidemiology and population dynamics in terms of probability distributions. This allows us to construct the Fisher information, which we interpret as the metric of a one-dimensional differentiable manifold. For systems that can be effectively described by a single degree of freedom, we show that their time evolution is fully captured by this metric. In this way, we discover universal features across seemingly very different models. This further motivates a reorganisation of the dynamics around zeroes of the Fisher metric, corresponding to extrema of the probability distribution. Concretely, we propose a simple form of the metric for which we can analytically solve the dynamics of the system that well approximates the time evolution of various established models in epidemiology and population dynamics, thus providing a unifying framework.
q-bio/0703008
Tom Chou
Tom Chou
The stochastic entry of enveloped viruses: Fusion vs. endocytosis
7 pages, 6 figures
Biophys. J., 93, 1116-1123, (2007)
10.1529/biophysj.107.106708
null
q-bio.SC
null
Viral infection requires the binding of receptors on the target cell membrane to glycoproteins, or ``spikes,'' on the viral membrane. The initial entry is usually classified as fusogenic or endocytotic. However, binding of viral spikes to cell surface receptors not only initiates the viral adhesion and the wrapping process necessary for internalization, but can simultaneously initiate direct fusion with the cell membrane. Both fusion and internalization have been observed to be viable pathways for many viruses. We develop a stochastic model for viral entry that incorporates a competition between receptor mediated fusion and endocytosis. The relative probabilities of fusion and endocytosis of a virus particle initially nonspecifically adsorbed on the host cell membrane are computed as functions of receptor concentration, binding strength, and number of spikes. We find different parameter regimes where the entry pathway probabilities can be analytically expressed. Experimental tests of our mechanistic hypotheses are proposed and discussed.
[ { "created": "Fri, 2 Mar 2007 19:29:08 GMT", "version": "v1" }, { "created": "Fri, 13 Apr 2007 02:14:35 GMT", "version": "v2" } ]
2015-06-26
[ [ "Chou", "Tom", "" ] ]
Viral infection requires the binding of receptors on the target cell membrane to glycoproteins, or ``spikes,'' on the viral membrane. The initial entry is usually classified as fusogenic or endocytotic. However, binding of viral spikes to cell surface receptors not only initiates the viral adhesion and the wrapping process necessary for internalization, but can simultaneously initiate direct fusion with the cell membrane. Both fusion and internalization have been observed to be viable pathways for many viruses. We develop a stochastic model for viral entry that incorporates a competition between receptor mediated fusion and endocytosis. The relative probabilities of fusion and endocytosis of a virus particle initially nonspecifically adsorbed on the host cell membrane are computed as functions of receptor concentration, binding strength, and number of spikes. We find different parameter regimes where the entry pathway probabilities can be analytically expressed. Experimental tests of our mechanistic hypotheses are proposed and discussed.
2006.12006
Siddhartha Chakrabarty
Dipankar Mondal, Siddhartha P. Chakrabarty
Did the lockdown curb the spread of COVID-19 infection rate in India: A data-driven analysis
null
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In order to analyze the effectiveness of three successive nationwide lockdowns enforced in India, we present a data-driven analysis of four key parameters: reducing the transmission rate, restraining the growth rate, flattening the epidemic curve and improving the health care system. These were quantified by the consideration of four different metrics, namely, reproduction rate, growth rate, doubling time and death-to-recovery ratio. The incidence data of the COVID-19 outbreak in India (during the period of 2nd March 2020 to 31st May 2020) was analyzed for the best fit to the epidemic curve, making use of the exponential growth, the maximum likelihood estimation, the sequential Bayesian method and the estimation of time-dependent reproduction. The best fit (based on the data considered) was for the time-dependent approach. Accordingly, this approach was used to assess the impact on the effective reproduction rate. The period from pre-lockdown to the end of lockdown 3 saw a $45\%$ reduction in the effective reproduction rate. During the same period the growth rate reduced from $393\%$ during the pre-lockdown to $33\%$ after lockdown 3, accompanied by the average doubling time increasing from $4$-$6$ days to $12$-$14$ days. Finally, the death-to-recovery ratio dropped from $0.28$ (pre-lockdown) to $0.08$ after lockdown 3. In conclusion, all the four metrics considered to assess the effectiveness of the lockdown exhibited significant favourable changes from the pre-lockdown period to the end of lockdown 3. Analysis of the data in the post-lockdown period with these metrics will provide greater clarity with regards to the extent of the success of the lockdown.
[ { "created": "Mon, 22 Jun 2020 04:50:29 GMT", "version": "v1" } ]
2020-06-23
[ [ "Mondal", "Dipankar", "" ], [ "Chakrabarty", "Siddhartha P.", "" ] ]
In order to analyze the effectiveness of three successive nationwide lockdowns enforced in India, we present a data-driven analysis of four key parameters: reducing the transmission rate, restraining the growth rate, flattening the epidemic curve and improving the health care system. These were quantified by the consideration of four different metrics, namely, reproduction rate, growth rate, doubling time and death-to-recovery ratio. The incidence data of the COVID-19 outbreak in India (during the period of 2nd March 2020 to 31st May 2020) was analyzed for the best fit to the epidemic curve, making use of the exponential growth, the maximum likelihood estimation, the sequential Bayesian method and the estimation of time-dependent reproduction. The best fit (based on the data considered) was for the time-dependent approach. Accordingly, this approach was used to assess the impact on the effective reproduction rate. The period from pre-lockdown to the end of lockdown 3 saw a $45\%$ reduction in the effective reproduction rate. During the same period the growth rate reduced from $393\%$ during the pre-lockdown to $33\%$ after lockdown 3, accompanied by the average doubling time increasing from $4$-$6$ days to $12$-$14$ days. Finally, the death-to-recovery ratio dropped from $0.28$ (pre-lockdown) to $0.08$ after lockdown 3. In conclusion, all the four metrics considered to assess the effectiveness of the lockdown exhibited significant favourable changes from the pre-lockdown period to the end of lockdown 3. Analysis of the data in the post-lockdown period with these metrics will provide greater clarity with regards to the extent of the success of the lockdown.