id stringlengths 9 13 | submitter stringlengths 4 48 | authors stringlengths 4 9.62k | title stringlengths 4 343 | comments stringlengths 2 480 ⌀ | journal-ref stringlengths 9 309 ⌀ | doi stringlengths 12 138 ⌀ | report-no stringclasses 277 values | categories stringlengths 8 87 | license stringclasses 9 values | orig_abstract stringlengths 27 3.76k | versions listlengths 1 15 | update_date stringlengths 10 10 | authors_parsed listlengths 1 147 | abstract stringlengths 24 3.75k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1308.5826 | Pedro Mendes | Simon Mitchell and Pedro Mendes | A Computational Model of Liver Iron Metabolism | null | null | 10.1371/journal.pcbi.1003299 | null | q-bio.MN | http://creativecommons.org/licenses/by/3.0/ | Iron is essential for all known life due to its redox properties, however
these same properties can also lead to its toxicity in overload through the
production of reactive oxygen species. Robust systemic and cellular control are
required to maintain safe levels of iron and the liver seems to be where this
regulation is mainly located. Iron misregulation is implicated in many diseases
and as our understanding of iron metabolism improves the list of iron-related
disorders grows. Recent developments have resulted in greater knowledge of the
fate of iron in the body and have led to a detailed map of its metabolism,
however a quantitative understanding at the systems level of how its components
interact to produce tight regulation remains elusive.
A mechanistic computational model of human liver iron metabolism, which
includes the core regulatory components, was constructed based on known
mechanisms of regulation and on their kinetic properties, obtained from several
publications. The model was then quantitatively validated by comparing its
results with previously published physiological data, and it is able to
reproduce multiple experimental findings. A time course simulation following an
oral dose of iron was compared to a clinical time course study and the
simulation was found to recreate the dynamics and time scale of the system's
response to iron challenge. A disease simulation of haemochromatosis was
created by altering a single reaction parameter that mimics a human
haemochromatosis gene (HFE) mutation. The simulation provides a quantitative
understanding of the liver iron overload that arises in this disease.
This model supports and supplements understanding of the role of the liver as
an iron sensor and provides a framework for further modelling, including
simulations to identify valuable drug targets and design of experiments to
improve further our knowledge of this system.
| [
{
"created": "Tue, 27 Aug 2013 11:21:39 GMT",
"version": "v1"
}
] | 2014-03-05 | [
[
"Mitchell",
"Simon",
""
],
[
"Mendes",
"Pedro",
""
]
] | Iron is essential for all known life due to its redox properties, however these same properties can also lead to its toxicity in overload through the production of reactive oxygen species. Robust systemic and cellular control are required to maintain safe levels of iron and the liver seems to be where this regulation is mainly located. Iron misregulation is implicated in many diseases and as our understanding of iron metabolism improves the list of iron-related disorders grows. Recent developments have resulted in greater knowledge of the fate of iron in the body and have led to a detailed map of its metabolism, however a quantitative understanding at the systems level of how its components interact to produce tight regulation remains elusive. A mechanistic computational model of human liver iron metabolism, which includes the core regulatory components, was constructed based on known mechanisms of regulation and on their kinetic properties, obtained from several publications. The model was then quantitatively validated by comparing its results with previously published physiological data, and it is able to reproduce multiple experimental findings. A time course simulation following an oral dose of iron was compared to a clinical time course study and the simulation was found to recreate the dynamics and time scale of the systems response to iron challenge. A disease simulation of haemochromatosis was created by altering a single reaction parameter that mimics a human haemochromatosis gene (HFE) mutation. The simulation provides a quantitative understanding of the liver iron overload that arises in this disease. This model supports and supplements understanding of the role of the liver as an iron sensor and provides a framework for further modelling, including simulations to identify valuable drug targets and design of experiments to improve further our knowledge of this system. |
1502.06659 | Somwrita Sarkar | Somwrita Sarkar, Sanjay Chawla and Donna Xu | On inferring structural connectivity from brain functional-MRI data | 10 pages, 6 Figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The anatomical structure of the brain can be observed via non-invasive
techniques such as diffusion imaging. However, these are imperfect because they
miss connections that are actually known to exist, especially long range
inter-hemispheric ones. In this paper we formulate the inverse problem of
inferring the structural connectivity of brain networks from experimentally
observed functional connectivity via functional Magnetic Resonance Imaging
(fMRI), by casting it as a convex optimization problem. We show that
structural connectivity can be modeled as an optimal sparse representation
derived from the much denser functional connectivity in the human brain. Using
only the functional connectivity data as input, we present (a) an optimization
problem that models constraints based on known physiological observations, and
(b) an ADMM algorithm for solving it. The algorithm not only recovers the known
structural connectivity of the brain, but is also able to robustly predict the
long range inter-hemispheric connections missed by DSI or DTI, including a very
good match with experimentally observed quantitative distributions of the
weights/strength of anatomical connections. We demonstrate results on both
synthetic model data and a fine-scale 998 node cortical dataset, and discuss
applications to other complex network domains where retrieving effective
structure from functional signatures is important.
| [
{
"created": "Tue, 24 Feb 2015 00:20:14 GMT",
"version": "v1"
}
] | 2015-02-25 | [
[
"Sarkar",
"Somwrita",
""
],
[
"Chawla",
"Sanjay",
""
],
[
"Xu",
"Donna",
""
]
] | The anatomical structure of the brain can be observed via non-invasive techniques such as diffusion imaging. However, these are imperfect because they miss connections that are actually known to exist, especially long range inter-hemispheric ones. In this paper we formulate the inverse problem of inferring the structural connectivity of brain networks from experimentally observed functional connectivity via functional Magnetic Resonance Imaging (fMRI), by formulating it as a convex optimization problem. We show that structural connectivity can be modeled as an optimal sparse representation derived from the much denser functional connectivity in the human brain. Using only the functional connectivity data as input, we present (a) an optimization problem that models constraints based on known physiological observations, and (b) an ADMM algorithm for solving it. The algorithm not only recovers the known structural connectivity of the brain, but is also able to robustly predict the long range inter-hemispheric connections missed by DSI or DTI, including a very good match with experimentally observed quantitative distributions of the weights/strength of anatomical connections. We demonstrate results on both synthetic model data and a fine-scale 998 node cortical dataset, and discuss applications to other complex network domains where retrieving effective structure from functional signatures are important. |
2306.13582 | Amin Zia | Amin Zia | Heat shock proteins may be a missing link between febrile infection and
cancer tumor rejection via autoantigen molecular mimicry | 16 pages, 3 figures, one table | null | null | null | q-bio.GN q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Numerous epidemiological studies suggest febrile infections could confer
long-term immunity to certain types of cancers, though the precise mechanisms
for this phenomenon remain unclear. Systemic heat-shock responses to fever may
be key to understanding the overlapping outcomes of immune responses to
infection and cancer. To investigate this hypothesis, we performed epitope
discovery between heat-shock proteins (HSP) and cancer-associated antigens
(CAA) and annotated the results with experimentally validated epitopes in the
Immune Epitope Database (IEDB) (Vita et al., 2019). Further, epitopes were
matched with their homologs in human pathogens. Results identified 94 epitopes
shared between HSPs and CAAs, with experimental evidence of presentation at MHC
molecules and with high homology to several epitopes of human pathogens. The
identified epitopes can be used as candidates for designing cancer vaccines.
They may also be used to identify autoreactive antibodies or TCR specificities
that, as antibody drugs and cell therapies, would reproduce the effect of
febrile infection in conferring cancer immunity. Our results support the
hypothesis that the loss of self-tolerance to HSPs during febrile infection
confers tumor immunity through molecular mimicry.
| [
{
"created": "Fri, 23 Jun 2023 16:09:31 GMT",
"version": "v1"
}
] | 2023-06-26 | [
[
"Zia",
"Amin",
""
]
] | Numerous epidemiological studies suggest febrile infections could confer long-term immunity to certain types of cancers, though the precise mechanisms for this phenomenon remain unclear. Systemic heat-shock responses to fever may be key to understanding the overlapping outcomes of immune responses to infection and cancer. To investigate this hypothesis, we performed epitope discovery between heat-shock proteins (HSP) and cancer-associated antigens (CAA) and annotated the results with experimentally validated epitopes in the Immune Epitope Database (IEDB) (Vita et al., 2019). Further, epitopes were matched with their homologs in human pathogens. Results identified 94 epitopes shared between HSPs and CAAs, with experimental evidence of presentation at MHC molecules and with high homology to several epitopes of human pathogens. The identified epitopes can be used as candidates for designing cancer vaccines. They may also be used to identify autoreactive antibodies or TCR specificities that, as antibody drugs and cell therapies, would reproduce the effect of febrile infection in conferring cancer immunity. Our results support the hypothesis that the loss of self-tolerance to HSPs during febrile infection confers tumor immunity through molecular mimicry. |
0707.3770 | Tibor Antal | Niko Beerenwinkel, Tibor Antal, David Dingli, Arne Traulsen, Kenneth
W. Kinzler, Victor E. Velculescu, Bert Vogelstein, Martin A. Nowak | Genetic progression and the waiting time to cancer | Details available as supplementary material at
http://www.people.fas.harvard.edu/~antal/publications.html | PLoS Comput Biol 3(11): e225 (2007) | 10.1371/journal.pcbi.0030225 | null | q-bio.PE q-bio.QM | null | Cancer results from genetic alterations that disturb the normal cooperative
behavior of cells. Recent high-throughput genomic studies of cancer cells have
shown that the mutational landscape of cancer is complex and that individual
cancers may evolve through mutations in as many as 20 different
cancer-associated genes. We use data published by Sjoblom et al. (2006) to
develop a new mathematical model for the somatic evolution of colorectal
cancers. We employ the Wright-Fisher process for exploring the basic parameters
of this evolutionary process and derive an analytical approximation for the
expected waiting time to the cancer phenotype. Our results highlight the
relative importance of selection over both the size of the cell population at
risk and the mutation rate. The model predicts that the observed genetic
diversity of cancer genomes can arise under a normal mutation rate if the
average selective advantage per mutation is on the order of 1%. Increased
mutation rates due to genetic instability would allow even smaller selective
advantages during tumorigenesis. The complexity of cancer progression thus can
be understood as the result of multiple sequential mutations, each of which has
a relatively small but positive effect on net cell growth.
| [
{
"created": "Wed, 25 Jul 2007 15:39:48 GMT",
"version": "v1"
}
] | 2011-11-10 | [
[
"Beerenwinkel",
"Niko",
""
],
[
"Antal",
"Tibor",
""
],
[
"Dingli",
"David",
""
],
[
"Traulsen",
"Arne",
""
],
[
"Kinzler",
"Kenneth W.",
""
],
[
"Velculescu",
"Victor E.",
""
],
[
"Vogelstein",
"Bert",
""
],
[
"Nowak",
"Martin A.",
""
]
] | Cancer results from genetic alterations that disturb the normal cooperative behavior of cells. Recent high-throughput genomic studies of cancer cells have shown that the mutational landscape of cancer is complex and that individual cancers may evolve through mutations in as many as 20 different cancer-associated genes. We use data published by Sjoblom et al. (2006) to develop a new mathematical model for the somatic evolution of colorectal cancers. We employ the Wright-Fisher process for exploring the basic parameters of this evolutionary process and derive an analytical approximation for the expected waiting time to the cancer phenotype. Our results highlight the relative importance of selection over both the size of the cell population at risk and the mutation rate. The model predicts that the observed genetic diversity of cancer genomes can arise under a normal mutation rate if the average selective advantage per mutation is on the order of 1%. Increased mutation rates due to genetic instability would allow even smaller selective advantages during tumorigenesis. The complexity of cancer progression thus can be understood as the result of multiple sequential mutations, each of which has a relatively small but positive effect on net cell growth. |
2004.03377 | Fabio Miletto Granozio Dr | Fabio Miletto Granozio | On the problem of comparing Covid-19 fatality rates | three pages, one table, three figures | null | null | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding Covid-19 lethality and its variation from country to country is
essential for supporting governments in the choice of appropriate strategies.
Adopting correct indicators to monitor the lethality of the infection in the
course of the outbreak is a crucial issue. This work highlights how far the
time-dependent case fatality rate is a misleading indicator for estimating the
fatality in the course of the outbreak, even if our attention is only
restricted to the subset of confirmed cases. Our analysis proves that the final
case fatality ratio for several major European countries is bound to largely
exceed 10%. The largely discussed difference in the fatality ratio between
Italy and other major European countries is to be mostly attributed (except
for the case of Germany) to the more advanced stage of the Italian epidemic.
| [
{
"created": "Mon, 6 Apr 2020 14:32:57 GMT",
"version": "v1"
}
] | 2020-04-08 | [
[
"Granozio",
"Fabio Miletto",
""
]
] | Understanding Covid-19 lethality and its variation from country to country is essential for supporting governments in the choice of appropriate strategies. Adopting correct indicators to monitor the lethality of the infection in the course of the outbreak is a crucial issue. This works highlights how far the time-dependent case fatality rate is a misleading indicator for estimating the fatality in the course of the outbreak, even if our attention is only restricted to the subset of confirmed cases. Our analysis proves that the final case fatality ratio for several major European countries is bound to largely exceed 10%. The largely discussed difference in the fatality ratio between Italy and other major European countries, is to be mostly attributed (except for the case of Germany) to the more advanced stage of the Italian epidemic. |
2309.14598 | Li Chen | Guozhong Zheng, Jiqiang Zhang, Jing Zhang, Weiran Cai, and Li Chen | Decoding trust: A reinforcement learning perspective | 12 pages, 11 figures. Comments are appreciated | null | null | null | q-bio.PE cond-mat.dis-nn nlin.AO stat.ML | http://creativecommons.org/licenses/by/4.0/ | Behavioral experiments on the trust game have shown that trust and
trustworthiness are universal among human beings, contradicting the prediction
by assuming \emph{Homo economicus} in orthodox Economics. This means some
mechanism must be at work that favors their emergence. Most previous
explanations however need to resort to some factors based upon imitative
learning, a simple version of social learning. Here, we turn to the paradigm of
reinforcement learning, where individuals update their strategies by evaluating
the long-term return through accumulated experience. Specifically, we
investigate the trust game with the Q-learning algorithm, where each
participant is associated with two evolving Q-tables that guide one's decision
making as trustor and trustee respectively. In the pairwise scenario, we reveal
that high levels of trust and trustworthiness emerge when individuals
appreciate both their historical experience and returns in the future.
Mechanistically, the evolution of the Q-tables shows a crossover that resembles
human's psychological changes. We also provide the phase diagram for the game
parameters, where the boundary analysis is conducted. These findings are robust
when the scenario is extended to a latticed population. Our results thus
provide a natural explanation for the emergence of trust and trustworthiness
without external factors involved. More importantly, the proposed paradigm
shows the potential in deciphering many puzzles in human behaviors.
| [
{
"created": "Tue, 26 Sep 2023 01:06:29 GMT",
"version": "v1"
},
{
"created": "Sun, 26 Nov 2023 11:37:35 GMT",
"version": "v2"
}
] | 2023-11-28 | [
[
"Zheng",
"Guozhong",
""
],
[
"Zhang",
"Jiqiang",
""
],
[
"Zhang",
"Jing",
""
],
[
"Cai",
"Weiran",
""
],
[
"Chen",
"Li",
""
]
] | Behavioral experiments on the trust game have shown that trust and trustworthiness are universal among human beings, contradicting the prediction by assuming \emph{Homo economicus} in orthodox Economics. This means some mechanism must be at work that favors their emergence. Most previous explanations however need to resort to some factors based upon imitative learning, a simple version of social learning. Here, we turn to the paradigm of reinforcement learning, where individuals update their strategies by evaluating the long-term return through accumulated experience. Specifically, we investigate the trust game with the Q-learning algorithm, where each participant is associated with two evolving Q-tables that guide one's decision making as trustor and trustee respectively. In the pairwise scenario, we reveal that high levels of trust and trustworthiness emerge when individuals appreciate both their historical experience and returns in the future. Mechanistically, the evolution of the Q-tables shows a crossover that resembles human's psychological changes. We also provide the phase diagram for the game parameters, where the boundary analysis is conducted. These findings are robust when the scenario is extended to a latticed population. Our results thus provide a natural explanation for the emergence of trust and trustworthiness without external factors involved. More importantly, the proposed paradigm shows the potential in deciphering many puzzles in human behaviors. |
q-bio/0406018 | Eleonora Alfinito | C. Pennetta, V. Akimov, E. Alfinito, L. Reggiani and G. Gomila | Fluctuations of Complex Networks: Electrical Properties of Single
Protein Nanodevices | 16 Pages and 10 Figures published in SPIE Proceedings of the II
International Symposium on Fluctuation and Noise, Maspalomas,Gran
Canaria,Spain, 25-28 May 2004 | null | 10.1117/12.547636 | null | q-bio.MN cond-mat.other physics.bio-ph q-bio.OT | null | We present for the first time a complex network approach to the study of the
electrical properties of single protein devices. In particular, we consider an
electronic nanobiosensor based on a G-protein coupled receptor. By adopting a
coarse grain description, the protein is modeled as a complex network of
elementary impedances. The positions of the alpha-carbon atoms of each amino
acid are taken as the nodes of the network. The amino acids are assumed to
interact electrically with one another. Consequently, a link is drawn between any
pair of nodes neighboring in space within a given distance and an elementary
impedance is associated with each link. The value of this impedance can be
related to the physical and chemical properties of the amino acid pair and to
their relative distance. Accordingly, the conformational changes of the
receptor induced by the capture of the ligand, are translated into a variation
of its electrical properties. Stochastic fluctuations in the value of the
elementary impedances of the network, which mimic different physical effects,
have also been considered. Preliminary results concerning the impedance
spectrum of the network and its fluctuations are presented and discussed for
different values of the model parameters.
| [
{
"created": "Tue, 8 Jun 2004 09:24:54 GMT",
"version": "v1"
}
] | 2009-11-10 | [
[
"Pennetta",
"C.",
""
],
[
"Akimov",
"V.",
""
],
[
"Alfinito",
"E.",
""
],
[
"Reggiani",
"L.",
""
],
[
"Gomila",
"G.",
""
]
] | We present for the first time a complex network approach to the study of the electrical properties of single protein devices. In particular, we consider an electronic nanobiosensor based on a G-protein coupled receptor. By adopting a coarse grain description, the protein is modeled as a complex network of elementary impedances. The positions of the alpha-carbon atoms of each amino acid are taken as the nodes of the network. The amino acids are assumed to interact electrically among them. Consequently, a link is drawn between any pair of nodes neighboring in space within a given distance and an elementary impedance is associated with each link. The value of this impedance can be related to the physical and chemical properties of the amino acid pair and to their relative distance. Accordingly, the conformational changes of the receptor induced by the capture of the ligand, are translated into a variation of its electrical properties. Stochastic fluctuations in the value of the elementary impedances of the network, which mimic different physical effects, have also been considered. Preliminary results concerning the impedance spectrum of the network and its fluctuations are presented and discussed for different values of the model parameters. |
1103.3294 | Daniel Jost | Daniel Jost, Asif Zubair and Ralf Everaers | Bubble statistics and positioning in superhelically stressed DNA | to appear in Physical Review E | null | 10.1103/PhysRevE.84.031912 | null | q-bio.BM cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a general framework to study the thermodynamic denaturation of
double-stranded DNA under superhelical stress. We report calculations of
position- and size-dependent opening probabilities for bubbles along the
sequence. Our results are obtained from transfer-matrix solutions of the
Zimm-Bragg model for unconstrained DNA and of a self-consistent linearization
of the Benham model for superhelical DNA. The numerical efficiency of our
method allows for the analysis of entire genomes and of random sequences of
corresponding length ($10^6-10^9$ base pairs). We show that, at physiological
conditions, opening in superhelical DNA is strongly cooperative with average
bubble sizes of $10^2-10^3$ base pairs (bp), and orders of magnitude higher
than in unconstrained DNA. In heterogeneous sequences, the average degree of
base-pair opening is self-averaging, while bubble localization and statistics
are dominated by sequence disorder. Compared to random sequences with identical
GC-content, genomic DNA has a significantly increased probability to open large
bubbles under superhelical stress. These bubbles are frequently located
directly upstream of transcription start sites.
| [
{
"created": "Wed, 16 Mar 2011 20:48:57 GMT",
"version": "v1"
},
{
"created": "Thu, 1 Sep 2011 14:03:20 GMT",
"version": "v2"
}
] | 2013-05-29 | [
[
"Jost",
"Daniel",
""
],
[
"Zubair",
"Asif",
""
],
[
"Everaers",
"Ralf",
""
]
] | We present a general framework to study the thermodynamic denaturation of double-stranded DNA under superhelical stress. We report calculations of position- and size-dependent opening probabilities for bubbles along the sequence. Our results are obtained from transfer-matrix solutions of the Zimm-Bragg model for unconstrained DNA and of a self-consistent linearization of the Benham model for superhelical DNA. The numerical efficiency of our method allows for the analysis of entire genomes and of random sequences of corresponding length ($10^6-10^9$ base pairs). We show that, at physiological conditions, opening in superhelical DNA is strongly cooperative with average bubble sizes of $10^2-10^3$ base pairs (bp), and orders of magnitude higher than in unconstrained DNA. In heterogeneous sequences, the average degree of base-pair opening is self-averaging, while bubble localization and statistics are dominated by sequence disorder. Compared to random sequences with identical GC-content, genomic DNA has a significantly increased probability to open large bubbles under superhelical stress. These bubbles are frequently located directly upstream of transcription start sites. |
0812.5114 | Peter Waddell | Peter J Waddell | Fit of Fossils and Mammalian Molecular Trees: Dating Inconsistencies
Revisited | null | null | null | null | q-bio.PE q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Divergence time estimation requires the reconciliation of two major sources
of data. These are fossil and/or biogeographic evidence that give estimates of
the absolute age of nodes (ancestors) and molecular estimates that give us
estimates of the relative ages of nodes in a molecular evolutionary tree. Both
forms of data are often best characterized as yielding continuous probability
distributions on nodes. Here, the distributions modeling older fossil
calibrations within the tree of placental (eutherian) mammals are reconsidered.
In particular the Horse/Rhino, Human/Tarsier, Whale/ Hippo, Rabbit/Pika and
Rodentia calibrations are reexamined and adjusted. Inferring the relative ages
of nodes in a phylogeny also requires the assumption of a model of evolutionary
rate change across the tree. Here nine models of evolutionary rate change are
combined with various continuous distributions modeling fossil calibrations.
Model fit is measured both relative to a normalized fit, which assumes that
all models fit well in the absence of multiple fossil calibrations, and also by
the linearity of their residuals. The normalized fit used attempts to track
twice the log likelihood difference from the best expected model. The results
suggest there is a very large difference in the age of the root proposed by
calibrations in Supraprimates (informally Euarchontoglires) versus
Laurasiatheria. Combining both sets of calibrations results in the penalty
function vastly increasing in all cases. These issues remain irrespective of
the model used or whether the newer calibrations are used.
| [
{
"created": "Tue, 30 Dec 2008 20:46:22 GMT",
"version": "v1"
}
] | 2008-12-31 | [
[
"Waddell",
"Peter J",
""
]
] | Divergence time estimation requires the reconciliation of two major sources of data. These are fossil and/or biogeographic evidence that give estimates of the absolute age of nodes (ancestors) and molecular estimates that give us estimates of the relative ages of nodes in a molecular evolutionary tree. Both forms of data are often best characterized as yielding continuous probability distributions on nodes. Here, the distributions modeling older fossil calibrations within the tree of placental (eutherian) mammals are reconsidered. In particular the Horse/Rhino, Human/Tarsier, Whale/ Hippo, Rabbit/Pika and Rodentia calibrations are reexamined and adjusted. Inferring the relative ages of nodes in a phylogeny also requires the assumption of a model of evolutionary rate change across the tree. Here nine models of evolutionary rate change, are combined with various continuous distributions modeling fossil calibrations. Fit of model is measured both relative to a normalized fit, which assumes that all models fit well in the absence of multiple fossil calibrations, and also by the linearity of their residuals. The normalized fit used attempts to track twice the log likelihood difference from the best expected model. The results suggest there is a very large difference in the age of the root proposed by calibrations in Supraprimates (informally Euarchontoglires) versus Laurasiatheria. Combining both sets of calibrations results in the penalty function vastly increasing in all cases. These issues remain irrespective of the model used or whether the newer calibrations are used. |
2105.06057 | Sang-Yoon Kim | Sang-Yoon Kim and Woochang Lim | Dynamical Origin for Winner-Take-All Competition in A Biological Network
of The Hippocampal Dentate Gyrus | null | null | 10.1103/PhysRevE.105.014418 | null | q-bio.NC physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a biological network of the hippocampal dentate gyrus (DG). The
DG is a pre-processor for pattern separation which facilitates pattern storage
and retrieval in the CA3 area of the hippocampus. The main encoding cells in
the DG are the granule cells (GCs) which receive the input from the entorhinal
cortex (EC) and send their output to the CA3. The activation degree of GCs is
very low (~5%). This sparsity has been thought to enhance the pattern
separation. We investigate the dynamical origin for winner-take-all (WTA)
competition which leads to sparse activation of the GCs. All the GCs are
grouped into lamellar clusters. In each GC cluster, there is one inhibitory (I)
basket cell (BC) along with excitatory (E) GCs. There are three kinds of
external inputs into the GCs: the direct excitatory EC input, the indirect
inhibitory EC input, mediated by the HIPP (hilar perforant path-associated)
cells, and the excitatory input from the hilar mossy cells (MCs). The firing
activities of the GCs are determined via competition between the external E and
I inputs. The E-I conductance ratio ${\cal{R}}_{\rm E-I}^{\rm (con)*}$ (given
by the time average of the external E to I conductances) may represent well
the degree of such external E-I input competition. GCs become active when their
$\cal{R}_{\rm E-I}^{\rm (con)*}$ is larger than a threshold ${\cal{R}}_{th}^*$,
and then the mean firing rates of the active GCs are strongly correlated with
$\cal{R}_{\rm E-I}^{\rm (con)*}$. In each GC cluster, the feedback inhibition
of the BC may select the winner GCs. GCs with larger $\cal{R}_{\rm E-I}^{\rm
(con)*}$ than the threshold ${\cal{R}}_{th}^*$ survive, and they become
winners; all the other GCs with smaller $\cal{R}_{\rm E-I}^{\rm (con)*}$ become
silent. The WTA competition occurs via competition between the firing activity
of the GCs and the feedback inhibition from the BC in each GC cluster.
| [
{
"created": "Thu, 13 May 2021 03:18:18 GMT",
"version": "v1"
}
] | 2022-02-09 | [
[
"Kim",
"Sang-Yoon",
""
],
[
"Lim",
"Woochang",
""
]
] | We consider a biological network of the hippocampal dentate gyrus (DG). The DG is a pre-processor for pattern separation which facilitates pattern storage and retrieval in the CA3 area of the hippocampus. The main encoding cells in the DG are the granule cells (GCs) which receive the input from the entorhinal cortex (EC) and send their output to the CA3. The activation degree of GCs is so low (~ 5%). This sparsity has been thought to enhance the pattern separation. We investigate the dynamical origin for winner-take-all (WTA) competition which leads to sparse activation of the GCs. The whole GCs are grouped into lamellar clusters. In each GC cluster, there is one inhibitory (I) basket cell (BC) along with excitatory (E) GCs. There are three kinds of external inputs into the GCs; the direct excitatory EC input, the indirect inhibitory EC input, mediated by the HIPP (hilar perforant path-associated) cells, and the excitatory input from the hilar mossy cells (MCs). The firing activities of the GCs are determined via competition between the external E and I inputs. The E-I conductance ratio ${\cal{R}}_{\rm E-I}^{\rm (con)*}$ (given by the time average of the external E to I conductances) may represents well the degree of such external E-I input competition. GCs become active when their $\cal{R}_{\rm E-I}^{\rm (con)*}$ is larger than a threshold ${\cal{R}}_{th}^*$, and then the mean firing rates of the active GCs are strongly correlated with $\cal{R}_{\rm E-I}^{\rm (con)*}$. In each GC cluster, the feedback inhibition of the BC may select the winner GCs. GCs with larger $\cal{R}_{\rm E-I}^{\rm (con)*}$ than the threshold ${\cal{R}}_{th}^*$ survive, and they become winners; all the other GCs with smaller $\cal{R}_{\rm E-I}^{\rm (con)*}$ become silent. The WTA competition occurs via competition between the firing activity of the GCs and the feedback inhibition from the BC in each GC cluster. |
1505.04432 | Harold P. de Vladar | Harold P. de Vladar and E\"ors Szathm\'ary | Coupled Hebbian learning and evolutionary dynamics in a formal model for
structural synaptic plasticity | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Theoretical models of neuronal function consider different mechanisms through
which networks learn, classify and discern inputs. A central focus of these
models is to understand how associations are established amongst neurons, in
order to predict spiking patterns that are compatible with empirical
observations. Although these models have led to major insights and advances,
they still do not account for the astonishing velocity with which the brain
solves certain problems and what lies behind its creativity, amongst other
features. We examine two important components that may crucially aid
comprehensive understanding of said neurodynamical processes. First, we argue
that once presented with a problem, different putative solutions are generated
in parallel by different groups or local neuronal complexes, with the
subsequent stabilization and spread of the best solutions. Using mathematical
models we show that this mechanism accelerates finding the right solutions.
This formalism is analogous to standard replicator-mutator models of evolution
where mutation is analogous to the probability of neuron state switching
(on/off). The second factor that we incorporate is structural synaptic
plasticity, i.e. the making of new and disbanding of old synapses, which we
apply as a dynamical reorganization of synaptic connections. We show that
Hebbian learning alone does not suffice to reach optimal solutions. However,
combining it with parallel evaluation and structural plasticity opens up
possibilities for efficient problem solving. In the resulting networks,
topologies converge to subsets of fully connected components. Imposing costs on
synapses reduces the connectivity, although the number of connected components
remains robust. The average lifetime of synapses is longer for connections that
are established early, and diminishes with synaptic cost.
| [
{
"created": "Sun, 17 May 2015 18:34:12 GMT",
"version": "v1"
}
] | 2015-05-19 | [
[
"de Vladar",
"Harold P.",
""
],
[
"Szathmáry",
"Eörs",
""
]
] | Theoretical models of neuronal function consider different mechanisms through which networks learn, classify and discern inputs. A central focus of these models is to understand how associations are established amongst neurons, in order to predict spiking patterns that are compatible with empirical observations. Although these models have led to major insights and advances, they still do not account for the astonishing velocity with which the brain solves certain problems and what lies behind its creativity, amongst other features. We examine two important components that may crucially aid comprehensive understanding of said neurodynamical processes. First, we argue that once presented with a problem, different putative solutions are generated in parallel by different groups or local neuronal complexes, with the subsequent stabilization and spread of the best solutions. Using mathematical models we show that this mechanism accelerates finding the right solutions. This formalism is analogous to standard replicator-mutator models of evolution where mutation is analogous to the probability of neuron state switching (on/off). The second factor that we incorporate is structural synaptic plasticity, i.e. the making of new and disbanding of old synapses, which we apply as a dynamical reorganization of synaptic connections. We show that Hebbian learning alone does not suffice to reach optimal solutions. However, combining it with parallel evaluation and structural plasticity opens up possibilities for efficient problem solving. In the resulting networks, topologies converge to subsets of fully connected components. Imposing costs on synapses reduces the connectivity, although the number of connected components remains robust. The average lifetime of synapses is longer for connections that are established early, and diminishes with synaptic cost. |
1607.04469 | Fernando Alcalde Cuesta | Fernando Alcalde Cuesta, Pablo Gonz\'alez Sequeiros, and \'Alvaro
Lozano Rojo | Suppressors of selection | New title, improved presentation, and further examples. Supporting
Information is also included | null | 10.1371/journal.pone.0180549 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inspired by recent works on evolutionary graph theory, an area of growing
interest in mathematical and computational biology, we present the first known
examples of undirected structures acting as suppressors of selection for any
fitness value $r > 1$. This means that the average fixation probability of an
advantageous mutant or invader individual placed at some node is strictly less
than that of this individual placed in a well-mixed population. This leads the
way to study more robust structures less prone to invasion, contrary to what
happens with the amplifiers of selection where the fixation probability is
increased on average for advantageous invader individuals. A few families of
amplifiers are known, although some effort was required to prove it. Here, we
use computer aided techniques to find an exact analytical expression of the
fixation probability for some graphs of small order (equal to $6$, $8$ and
$10$) proving that selection is effectively reduced for $r > 1$. Some numerical
experiments using Monte Carlo methods are also performed for larger graphs.
| [
{
"created": "Fri, 15 Jul 2016 11:35:39 GMT",
"version": "v1"
},
{
"created": "Tue, 20 Dec 2016 16:06:01 GMT",
"version": "v2"
}
] | 2017-11-01 | [
[
"Cuesta",
"Fernando Alcalde",
""
],
[
"Sequeiros",
"Pablo González",
""
],
[
"Rojo",
"Álvaro Lozano",
""
]
] | Inspired by recent works on evolutionary graph theory, an area of growing interest in mathematical and computational biology, we present the first known examples of undirected structures acting as suppressors of selection for any fitness value $r > 1$. This means that the average fixation probability of an advantageous mutant or invader individual placed at some node is strictly less than that of this individual placed in a well-mixed population. This leads the way to study more robust structures less prone to invasion, contrary to what happens with the amplifiers of selection where the fixation probability is increased on average for advantageous invader individuals. A few families of amplifiers are known, although some effort was required to prove it. Here, we use computer aided techniques to find an exact analytical expression of the fixation probability for some graphs of small order (equal to $6$, $8$ and $10$) proving that selection is effectively reduced for $r > 1$. Some numerical experiments using Monte Carlo methods are also performed for larger graphs. |
q-bio/0312014 | Edward R. Abraham | Marcus R. Frean and Edward R. Abraham | Adaptation and enslavement in endosymbiont-host associations | v2: Correction made to equations 5 & 6 v3: Revised version accepted
in Phys. Rev. E; New figure added | null | 10.1103/PhysRevE.69.051913 | null | q-bio.PE | null | The evolutionary persistence of symbiotic associations is a puzzle.
Adaptation should eliminate cooperative traits if it is possible to enjoy the
advantages of cooperation without reciprocating - a facet of cooperation known
in game theory as the Prisoner's Dilemma. Despite this barrier, symbioses are
widespread, and may have been necessary for the evolution of complex life. The
discovery of strategies such as tit-for-tat has been presented as a general
solution to the problem of cooperation. However, this only holds for
within-species cooperation, where a single strategy will come to dominate the
population. In a symbiotic association each species may have a different
strategy, and the theoretical analysis of the single species problem is no
guide to the outcome. We present basic analysis of two-species cooperation and
show that a species with a fast adaptation rate is enslaved by a slowly
evolving one. Paradoxically, the rapidly evolving species becomes highly
cooperative, whereas the slowly evolving one gives little in return. This helps
understand the occurrence of endosymbioses where the host benefits, but the
symbionts appear to gain little from the association.
| [
{
"created": "Wed, 10 Dec 2003 01:45:54 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Jan 2004 00:44:13 GMT",
"version": "v2"
},
{
"created": "Tue, 24 Feb 2004 04:38:48 GMT",
"version": "v3"
}
] | 2009-11-10 | [
[
"Frean",
"Marcus R.",
""
],
[
"Abraham",
"Edward R.",
""
]
] | The evolutionary persistence of symbiotic associations is a puzzle. Adaptation should eliminate cooperative traits if it is possible to enjoy the advantages of cooperation without reciprocating - a facet of cooperation known in game theory as the Prisoner's Dilemma. Despite this barrier, symbioses are widespread, and may have been necessary for the evolution of complex life. The discovery of strategies such as tit-for-tat has been presented as a general solution to the problem of cooperation. However, this only holds for within-species cooperation, where a single strategy will come to dominate the population. In a symbiotic association each species may have a different strategy, and the theoretical analysis of the single species problem is no guide to the outcome. We present basic analysis of two-species cooperation and show that a species with a fast adaptation rate is enslaved by a slowly evolving one. Paradoxically, the rapidly evolving species becomes highly cooperative, whereas the slowly evolving one gives little in return. This helps understand the occurrence of endosymbioses where the host benefits, but the symbionts appear to gain little from the association. |
0910.5240 | Mike Steel Prof. | Mike Steel and Arne Mooers | Expected length of pendant and interior edges of a Yule tree | 6 pages, 1 figure | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Yule (pure-birth) model is the simplest null model of speciation; each
lineage gives rise to a new lineage independently with the same rate $\lambda$.
We investigate the expected length of an edge chosen at random from the
resulting evolutionary tree. In particular, we compare the expected length of a
randomly selected edge with the expected length of a randomly selected pendant
edge. We provide some exact formulae, and show how our results depend slightly
on whether the depth of the tree or the number of leaves is conditioned on, and
whether $\lambda$ is known or is estimated using maximum likelihood.
| [
{
"created": "Tue, 27 Oct 2009 20:45:45 GMT",
"version": "v1"
}
] | 2009-10-29 | [
[
"Steel",
"Mike",
""
],
[
"Mooers",
"Arne",
""
]
] | The Yule (pure-birth) model is the simplest null model of speciation; each lineage gives rise to a new lineage independently with the same rate $\lambda$. We investigate the expected length of an edge chosen at random from the resulting evolutionary tree. In particular, we compare the expected length of a randomly selected edge with the expected length of a randomly selected pendant edge. We provide some exact formulae, and show how our results depend slightly on whether the depth of the tree or the number of leaves is conditioned on, and whether $\lambda$ is known or is estimated using maximum likelihood. |
2112.11664 | Victor Vikram Odouard | Victor Vikram Odouard, Diana Smirnova, Shimon Edelman | Polarize, Catalyze, Stabilize: How a minority of norm internalizers
amplify group selection and punishment | 27 pages, 4 pages of references, 14 page appendix | null | null | null | q-bio.PE cs.MA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many mechanisms behind the evolution of cooperation, such as reciprocity,
indirect reciprocity, and altruistic punishment, require group knowledge of
individual actions. But what keeps people cooperating when no one is looking?
Conformist norm internalization, the tendency to abide by the behavior of the
majority of the group, even when it is individually harmful, could be the
answer. In this paper, we analyze a world where (1) there is group selection
and punishment by indirect reciprocity but (2) many actions (half) go
unobserved, and therefore unpunished. Can norm internalization fill this
"observation gap" and lead to high levels of cooperation, even when agents may
in principle cooperate only when likely to be caught and punished?
Specifically, we seek to understand whether adding norm internalization to the
strategy space in a public goods game can lead to higher levels of cooperation
when both norm internalization and cooperation start out rare. We found the
answer to be positive, but, interestingly, not because norm internalizers end
up making up a substantial fraction of the population, nor because they
cooperate much more than other agent types. Instead, norm internalizers, by
polarizing, catalyzing, and stabilizing cooperation, can increase levels of
cooperation of other agent types, while only making up a minority of the
population themselves.
| [
{
"created": "Wed, 22 Dec 2021 04:35:23 GMT",
"version": "v1"
},
{
"created": "Thu, 20 Jan 2022 20:27:01 GMT",
"version": "v2"
},
{
"created": "Tue, 12 Apr 2022 21:02:20 GMT",
"version": "v3"
},
{
"created": "Mon, 27 Mar 2023 04:45:10 GMT",
"version": "v4"
},
{
"created": "Sun, 6 Aug 2023 10:55:23 GMT",
"version": "v5"
},
{
"created": "Thu, 21 Sep 2023 09:59:19 GMT",
"version": "v6"
}
] | 2023-09-22 | [
[
"Odouard",
"Victor Vikram",
""
],
[
"Smirnova",
"Diana",
""
],
[
"Edelman",
"Shimon",
""
]
] | Many mechanisms behind the evolution of cooperation, such as reciprocity, indirect reciprocity, and altruistic punishment, require group knowledge of individual actions. But what keeps people cooperating when no one is looking? Conformist norm internalization, the tendency to abide by the behavior of the majority of the group, even when it is individually harmful, could be the answer. In this paper, we analyze a world where (1) there is group selection and punishment by indirect reciprocity but (2) many actions (half) go unobserved, and therefore unpunished. Can norm internalization fill this "observation gap" and lead to high levels of cooperation, even when agents may in principle cooperate only when likely to be caught and punished? Specifically, we seek to understand whether adding norm internalization to the strategy space in a public goods game can lead to higher levels of cooperation when both norm internalization and cooperation start out rare. We found the answer to be positive, but, interestingly, not because norm internalizers end up making up a substantial fraction of the population, nor because they cooperate much more than other agent types. Instead, norm internalizers, by polarizing, catalyzing, and stabilizing cooperation, can increase levels of cooperation of other agent types, while only making up a minority of the population themselves. |
1602.08913 | Eugene Rosenfeld | E. V. Rosenfeld | X-ray experiment provides a way to reveal the distinction between
discrete and continuous conformation of myosin head | Anyone who understands the principles of interference can easily
check the results of very simple calculations in this text. However, the
editors of a number of biophysical journals rejected the MS and refused to
discuss details. I believe the result is important and stimulates an
experiment. So, I would be sincerely grateful to anyone who could explain to
me why it was rejected | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The cornerstone of the classical model of Huxley and Simmons is
the supposition that a myosin head can reside only in several discrete states
and irregularly jumps from one state to another. Until now, no way has been
found to verify this supposition experimentally, although confirmation or
refutation of the existence of discrete states is crucial for the solution of
the myosin motor problem. Here I show that a set of identical myosin heads
arranged equidistantly along an actin filament produces an X-ray pattern which
varies with the type of conformation. If the lever arms of all myosin heads
reside in one and the same position (continuous conformation), all the heads
have the same form-factor and scatter the electromagnetic wave equally. In
this case, only the geometric factor associated with the spatial ordering of
the heads will determine the X-ray pattern. The situation changes if the
average lever arm position is the same, but inherently every head can reside
only in several diverse discrete states, hopping irregularly from one to
another. In this case, the form-factors corresponding to distinct states are
dissimilar, and the increments in the phases of X-rays scattered by different
heads are different as well. Inasmuch as every quantum of radiation interacts
with heads residing in different states, this results in additional
interference, and some peaks in the X-ray pattern should weaken or even vanish
as compared with the pattern from heads with the continuous-type conformation.
The formulas describing both cases are compared in this article. In general,
the distinction between the X-ray patterns is insignificant, but they could be
appreciably different at some stages of the conformation process (the
respective lever arm position depends on the type of discrete model).
Consequently, one may, with luck, attempt to detect this difference using
high-sensitivity equipment.
| [
{
"created": "Mon, 29 Feb 2016 11:29:50 GMT",
"version": "v1"
}
] | 2016-03-01 | [
[
"Rosenfeld",
"E. V.",
""
]
] | The cornerstone of the classical model of Huxley and Simmons is the supposition that a myosin head can reside only in several discrete states and irregularly jumps from one state to another. Until now, no way has been found to verify this supposition experimentally, although confirmation or refutation of the existence of discrete states is crucial for the solution of the myosin motor problem. Here I show that a set of identical myosin heads arranged equidistantly along an actin filament produces an X-ray pattern which varies with the type of conformation. If the lever arms of all myosin heads reside in one and the same position (continuous conformation), all the heads have the same form-factor and scatter the electromagnetic wave equally. In this case, only the geometric factor associated with the spatial ordering of the heads will determine the X-ray pattern. The situation changes if the average lever arm position is the same, but inherently every head can reside only in several diverse discrete states, hopping irregularly from one to another. In this case, the form-factors corresponding to distinct states are dissimilar, and the increments in the phases of X-rays scattered by different heads are different as well. Inasmuch as every quantum of radiation interacts with heads residing in different states, this results in additional interference, and some peaks in the X-ray pattern should weaken or even vanish as compared with the pattern from heads with the continuous-type conformation. The formulas describing both cases are compared in this article. In general, the distinction between the X-ray patterns is insignificant, but they could be appreciably different at some stages of the conformation process (the respective lever arm position depends on the type of discrete model). Consequently, one may, with luck, attempt to detect this difference using high-sensitivity equipment. |
2109.14445 | Yen Ting Lin | Jacob Neumann, Yen Ting Lin, Abhishek Mallela, Ely F. Miller, Joshua
Colvin, Abell T. Duprat, Ye Chen, William S. Hlavacek and Richard G. Posner | Implementation of a practical Markov chain Monte Carlo sampling
algorithm in PyBioNetFit | null | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Bayesian inference in biological modeling commonly relies on Markov chain
Monte Carlo (MCMC) sampling of a multidimensional and non-Gaussian posterior
distribution that is not analytically tractable. Here, we present the
implementation of a practical MCMC method in the open-source software package
PyBioNetFit (PyBNF), which is designed to support parameterization of
mathematical models for biological systems. The new MCMC method, am,
incorporates an adaptive move proposal distribution. For warm starts, sampling
can be initiated at a specified location in parameter space and with a
multivariate Gaussian proposal distribution defined initially by a specified
covariance matrix. Multiple chains can be generated in parallel using a
computer cluster. We demonstrate that am can be used to successfully solve
real-world Bayesian inference problems, including forecasting of new
Coronavirus Disease 2019 case detection with Bayesian quantification of
forecast uncertainty. PyBNF version 1.1.9, the first stable release with am, is
available at PyPI and can be installed using the pip package-management system
on platforms that have a working installation of Python 3. PyBNF relies on
libRoadRunner and BioNetGen for simulations (e.g., numerical integration of
ordinary differential equations defined in SBML or BNGL files) and
Dask.Distributed for task scheduling on Linux computer clusters.
| [
{
"created": "Wed, 29 Sep 2021 14:27:10 GMT",
"version": "v1"
}
] | 2021-09-30 | [
[
"Neumann",
"Jacob",
""
],
[
"Lin",
"Yen Ting",
""
],
[
"Mallela",
"Abhishek",
""
],
[
"Miller",
"Ely F.",
""
],
[
"Colvin",
"Joshua",
""
],
[
"Duprat1",
"Abell T.",
""
],
[
"Chen",
"Ye",
""
],
[
"Hlavacek",
"William S.",
""
],
[
"Posner",
"Richard G.",
""
]
] | Bayesian inference in biological modeling commonly relies on Markov chain Monte Carlo (MCMC) sampling of a multidimensional and non-Gaussian posterior distribution that is not analytically tractable. Here, we present the implementation of a practical MCMC method in the open-source software package PyBioNetFit (PyBNF), which is designed to support parameterization of mathematical models for biological systems. The new MCMC method, am, incorporates an adaptive move proposal distribution. For warm starts, sampling can be initiated at a specified location in parameter space and with a multivariate Gaussian proposal distribution defined initially by a specified covariance matrix. Multiple chains can be generated in parallel using a computer cluster. We demonstrate that am can be used to successfully solve real-world Bayesian inference problems, including forecasting of new Coronavirus Disease 2019 case detection with Bayesian quantification of forecast uncertainty. PyBNF version 1.1.9, the first stable release with am, is available at PyPI and can be installed using the pip package-management system on platforms that have a working installation of Python 3. PyBNF relies on libRoadRunner and BioNetGen for simulations (e.g., numerical integration of ordinary differential equations defined in SBML or BNGL files) and Dask.Distributed for task scheduling on Linux computer clusters. |
2106.05181 | Cheng Qian | Cheng Qian | Condition Integration Memory Network: An Interpretation of the Meaning
of the Neuronal Design | 40 pages, 6 figures; added a section | null | null | null | q-bio.NC cs.NE | http://creativecommons.org/publicdomain/zero/1.0/ | Understanding the basic operational logics of the nervous system is essential
to advancing neuroscientific research. However, theoretical efforts to tackle
this fundamental problem are lacking, despite the abundant empirical data about
the brain that has been collected in the past few decades. To address this
shortcoming, this document introduces a hypothetical framework for the
functional nature of primitive neural networks. It analyzes the idea that the
activity of neurons and synapses can symbolically reenact the dynamic changes
in the world and thus enable an adaptive system of behavior. More
significantly, the network achieves this without participating in an
algorithmic structure. When a neuron's activation represents some symbolic
element in the environment, each of its synapses can indicate a potential
change to the element and its future state. The efficacy of a synaptic
connection further specifies the element's particular probability for, or
contribution to, such a change. As it fires, a neuron's activation is
transformed to its postsynaptic targets, resulting in a chronological shift of
the represented elements. As the inherent function of summation in a neuron
integrates the various presynaptic contributions, the neural network mimics the
collective causal relationship of events in the observed environment.
| [
{
"created": "Fri, 21 May 2021 05:59:27 GMT",
"version": "v1"
},
{
"created": "Mon, 6 Sep 2021 06:27:24 GMT",
"version": "v2"
}
] | 2021-09-07 | [
[
"Qian",
"Cheng",
""
]
] | Understanding the basic operational logics of the nervous system is essential to advancing neuroscientific research. However, theoretical efforts to tackle this fundamental problem are lacking, despite the abundant empirical data about the brain that has been collected in the past few decades. To address this shortcoming, this document introduces a hypothetical framework for the functional nature of primitive neural networks. It analyzes the idea that the activity of neurons and synapses can symbolically reenact the dynamic changes in the world and thus enable an adaptive system of behavior. More significantly, the network achieves this without participating in an algorithmic structure. When a neuron's activation represents some symbolic element in the environment, each of its synapses can indicate a potential change to the element and its future state. The efficacy of a synaptic connection further specifies the element's particular probability for, or contribution to, such a change. As it fires, a neuron's activation is transformed to its postsynaptic targets, resulting in a chronological shift of the represented elements. As the inherent function of summation in a neuron integrates the various presynaptic contributions, the neural network mimics the collective causal relationship of events in the observed environment. |
1104.0025 | Chris Adami | Christoph Adami, Jifeng Qian, Matthew Rupp, and Arend Hintze | Information content of colored motifs in complex networks | 21 pages, 8 figures, to appear in Artificial Life | Artificial Life 17 (2011) 375-390 | 10.1162/artl_a_00045 | null | q-bio.QM cs.IT math.IT nlin.AO q-bio.MN q-bio.NC q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study complex networks in which the nodes of the network are tagged with
different colors depending on the functionality of the nodes (colored graphs),
using information theory applied to the distribution of motifs in such
networks. We find that colored motifs can be viewed as the building blocks of
the networks (much more so than the uncolored structural motifs can be) and
that the relative frequency with which these motifs appear in the network can
be used to define the information content of the network. This information is
defined in such a way that a network with random coloration (but keeping the
relative number of nodes with different colors the same) has zero color
information content. Thus, colored motif information captures the
exceptionality of coloring in the motifs that is maintained via selection. We
study the motif information content of the C. elegans brain as well as the
evolution of colored motif information in networks that reflect the interaction
between instructions in genomes of digital life organisms. While we find that
colored motif information appears to capture essential functionality in the C.
elegans brain (where the color assignment of nodes is straightforward) it is
not obvious whether the colored motif information content always increases
during evolution, as would be expected from a measure that captures network
complexity. For a single choice of color assignment of instructions in the
digital life form Avida, we find rather that colored motif information content
increases or decreases during evolution, depending on how the genomes are
organized, and therefore could be an interesting tool to dissect genomic
rearrangements.
| [
{
"created": "Thu, 31 Mar 2011 20:35:44 GMT",
"version": "v1"
}
] | 2011-11-08 | [
[
"Adami",
"Christoph",
""
],
[
"Qian",
"Jifeng",
""
],
[
"Rupp",
"Matthew",
""
],
[
"Hintze",
"Arend",
""
]
] | We study complex networks in which the nodes of the network are tagged with different colors depending on the functionality of the nodes (colored graphs), using information theory applied to the distribution of motifs in such networks. We find that colored motifs can be viewed as the building blocks of the networks (much more so than the uncolored structural motifs can be) and that the relative frequency with which these motifs appear in the network can be used to define the information content of the network. This information is defined in such a way that a network with random coloration (but keeping the relative number of nodes with different colors the same) has zero color information content. Thus, colored motif information captures the exceptionality of coloring in the motifs that is maintained via selection. We study the motif information content of the C. elegans brain as well as the evolution of colored motif information in networks that reflect the interaction between instructions in genomes of digital life organisms. While we find that colored motif information appears to capture essential functionality in the C. elegans brain (where the color assignment of nodes is straightforward) it is not obvious whether the colored motif information content always increases during evolution, as would be expected from a measure that captures network complexity. For a single choice of color assignment of instructions in the digital life form Avida, we find rather that colored motif information content increases or decreases during evolution, depending on how the genomes are organized, and therefore could be an interesting tool to dissect genomic rearrangements. |
1210.4511 | Joel Miller | Joel C. Miller | Co-circulation of infectious diseases on networks | null | null | 10.1103/PhysRevE.87.060801 | null | q-bio.PE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider multiple diseases spreading in a static Configuration Model
network. We make standard assumptions that infection transmits from neighbor to
neighbor at a disease-specific rate and infected individuals recover at a
disease-specific rate. Infection by one disease confers immediate and permanent
immunity to infection by any disease. Under these assumptions, we find a
simple, low-dimensional ordinary differential equations model which captures
the global dynamics of the infection. The dynamics depend strongly on initial
conditions. Although we motivate this article with infectious disease, the
model may be adapted to the spread of other infectious agents such as competing
political beliefs, rumors, or adoption of new technologies if these are
influenced by contacts. As an example, we demonstrate how to model an
infectious disease which can be prevented by a behavior change.
| [
{
"created": "Tue, 16 Oct 2012 17:51:42 GMT",
"version": "v1"
}
] | 2015-06-11 | [
[
"Miller",
"Joel C.",
""
]
] | We consider multiple diseases spreading in a static Configuration Model network. We make standard assumptions that infection transmits from neighbor to neighbor at a disease-specific rate and infected individuals recover at a disease-specific rate. Infection by one disease confers immediate and permanent immunity to infection by any disease. Under these assumptions, we find a simple, low-dimensional ordinary differential equations model which captures the global dynamics of the infection. The dynamics depend strongly on initial conditions. Although we motivate this article with infectious disease, the model may be adapted to the spread of other infectious agents such as competing political beliefs, rumors, or adoption of new technologies if these are influenced by contacts. As an example, we demonstrate how to model an infectious disease which can be prevented by a behavior change. |
2202.12223 | Diego Ferreiro | Ezequiel A. Galpern, Jacopo Marchi, Thierry Mora, Aleksandra M.
Walczak, Diego U. Ferreiro | From evolution to folding of repeat proteins | 8 pages, 5 figures plus Supplementary text and Figures | null | 10.1073/pnas.2204131119 | null | q-bio.BM cond-mat.stat-mech | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Repeat proteins are made with tandem copies of similar amino acid stretches
that fold into elongated architectures. Due to their symmetry, these proteins
constitute excellent model systems to investigate how evolution relates to
structure, folding and function. Here, we propose a scheme to map evolutionary
information at the sequence level to a coarse-grained model for repeat-protein
folding and use it to investigate the folding of thousands of repeat-proteins.
We model the energetics by a combination of an inverse Potts model scheme with
an explicit mechanistic model of duplications and deletions of repeats to
calculate the evolutionary parameters of the system at single residue level.
This is used to inform an Ising-like model that allows for the generation of
folding curves, apparent domain emergence and occupation of intermediate states
that are highly compatible with experimental data in specific case studies. We
analyzed the folding of thousands of natural Ankyrin-repeat proteins and found
that a multiplicity of folding mechanisms is possible. Fully cooperative
all-or-none transitions are obtained for arrays with sufficiently
sequence-similar elements and strong interactions between them, while
non-cooperative element-by-element intermittent folding arises if the elements
are dissimilar and the interactions between them are energetically weak. In between, we
characterised nucleation-propagation and multi-domain folding mechanisms.
Finally, we showed that stability and cooperativity of a repeat-array can be
quantitatively predicted from a simple energy score, paving the way for guiding
protein folding design with a co-evolutionary model.
| [
{
"created": "Thu, 24 Feb 2022 17:37:10 GMT",
"version": "v1"
}
] | 2022-10-12 | [
[
"Galpern",
"Ezequiel A.",
""
],
[
"Marchi",
"Jacopo",
""
],
[
"Mora",
"Thierry",
""
],
[
"Walczak",
"Aleksandra M.",
""
],
[
"Ferreiro",
"Diego U.",
""
]
] | Repeat proteins are made with tandem copies of similar amino acid stretches that fold into elongated architectures. Due to their symmetry, these proteins constitute excellent model systems to investigate how evolution relates to structure, folding and function. Here, we propose a scheme to map evolutionary information at the sequence level to a coarse-grained model for repeat-protein folding and use it to investigate the folding of thousands of repeat-proteins. We model the energetics by a combination of an inverse Potts model scheme with an explicit mechanistic model of duplications and deletions of repeats to calculate the evolutionary parameters of the system at single residue level. This is used to inform an Ising-like model that allows for the generation of folding curves, apparent domain emergence and occupation of intermediate states that are highly compatible with experimental data in specific case studies. We analyzed the folding of thousands of natural Ankyrin-repeat proteins and found that a multiplicity of folding mechanisms is possible. Fully cooperative all-or-none transitions are obtained for arrays with sufficiently sequence-similar elements and strong interactions between them, while non-cooperative element-by-element intermittent folding arises if the elements are dissimilar and the interactions between them are energetically weak. In between, we characterised nucleation-propagation and multi-domain folding mechanisms. Finally, we showed that stability and cooperativity of a repeat-array can be quantitatively predicted from a simple energy score, paving the way for guiding protein folding design with a co-evolutionary model. |
2002.08433 | Chen Liao | Chen Liao, Tong Wang, Sergei Maslov, Joao B. Xavier | Modeling microbial cross-feeding at intermediate scale portrays
community dynamics and species coexistence | 6 figures | null | 10.1371/journal.pcbi.1008135 | null | q-bio.PE q-bio.CB | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Social interaction between microbes can be described at many levels of
detail, ranging from the biochemistry of cell-cell interactions to the
ecological dynamics of populations. Choosing the best level to model microbial
communities without losing generality remains a challenge. Here we propose to
model cross-feeding interactions at an intermediate level between genome-scale
metabolic models of individual species and consumer-resource models of
ecosystems, which is well suited to empirical data. We applied our method to three
published examples of multi-strain Escherichia coli communities with increasing
complexity consisting of uni-, bi-, and multi-directional cross-feeding of
either substitutable metabolic byproducts or essential nutrients. The
intermediate-scale model accurately described empirical data and could quantify
exchange rates elusive by other means, such as the byproduct secretions, even
for a complex community of 14 amino acid auxotrophs. We used the three models
to study each community's limits of robustness to perturbations such as
variations in resource supply, antibiotic treatments and invasion by other
"cheaters" species. Our analysis provides a foundation to quantify
cross-feeding interactions from experimental data, and highlights the
importance of metabolic exchanges in the dynamics and stability of microbial
communities.
| [
{
"created": "Wed, 19 Feb 2020 20:37:51 GMT",
"version": "v1"
}
] | 2020-09-09 | [
[
"Liao",
"Chen",
""
],
[
"Wang",
"Tong",
""
],
[
"Maslov",
"Sergei",
""
],
[
"Xavier",
"Joao B.",
""
]
] | Social interaction between microbes can be described at many levels of detail, ranging from the biochemistry of cell-cell interactions to the ecological dynamics of populations. Choosing the best level to model microbial communities without losing generality remains a challenge. Here we propose to model cross-feeding interactions at an intermediate level between genome-scale metabolic models of individual species and consumer-resource models of ecosystems, which is well suited to empirical data. We applied our method to three published examples of multi-strain Escherichia coli communities with increasing complexity consisting of uni-, bi-, and multi-directional cross-feeding of either substitutable metabolic byproducts or essential nutrients. The intermediate-scale model accurately described empirical data and could quantify exchange rates elusive by other means, such as the byproduct secretions, even for a complex community of 14 amino acid auxotrophs. We used the three models to study each community's limits of robustness to perturbations such as variations in resource supply, antibiotic treatments and invasion by other "cheater" species. Our analysis provides a foundation to quantify cross-feeding interactions from experimental data, and highlights the importance of metabolic exchanges in the dynamics and stability of microbial communities. |
0912.3482 | Luca Ciandrini | L. Ciandrini, I. Stansfield, M. C. Romano | Role of the particle's stepping cycle in an asymmetric exclusion
process: A model of mRNA translation | 9 pages, 9 figures | Phys. Rev. E 81, 051904 (2010) | 10.1103/PhysRevE.81.051904 | null | q-bio.SC cond-mat.stat-mech physics.bio-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Messenger RNA translation is often studied by means of statistical-mechanical
models based on the Asymmetric Simple Exclusion Process (ASEP), which considers
hopping particles (the ribosomes) on a lattice (the polynucleotide chain). In
this work we extend this class of models and consider the two fundamental steps
of the ribosome's biochemical cycle following a coarse-grained perspective. In
order to achieve a better understanding of the underlying biological processes
and compare the theoretical predictions with experimental results, we provide a
description lying between the minimal ASEP-like models and the more detailed
models, which are analytically hard to treat. We use a mean-field approach to
study the dynamics of particles associated with an internal stepping cycle. In
this framework it is possible to characterize analytically different phases of
the system (high density, low density or maximal current phase). Crucially, we
show that the transitions between these different phases occur at different
parameter values than the equivalent transitions in a standard ASEP, indicating
the importance of including the two fundamental steps of the ribosome's
biochemical cycle into the model.
| [
{
"created": "Thu, 17 Dec 2009 18:34:32 GMT",
"version": "v1"
},
{
"created": "Mon, 3 May 2010 20:56:51 GMT",
"version": "v2"
}
] | 2010-05-06 | [
[
"Ciandrini",
"L.",
""
],
[
"Stansfield",
"I.",
""
],
[
"Romano",
"M. C.",
""
]
] | Messenger RNA translation is often studied by means of statistical-mechanical models based on the Asymmetric Simple Exclusion Process (ASEP), which considers hopping particles (the ribosomes) on a lattice (the polynucleotide chain). In this work we extend this class of models and consider the two fundamental steps of the ribosome's biochemical cycle following a coarse-grained perspective. In order to achieve a better understanding of the underlying biological processes and compare the theoretical predictions with experimental results, we provide a description lying between the minimal ASEP-like models and the more detailed models, which are analytically hard to treat. We use a mean-field approach to study the dynamics of particles associated with an internal stepping cycle. In this framework it is possible to characterize analytically different phases of the system (high density, low density or maximal current phase). Crucially, we show that the transitions between these different phases occur at different parameter values than the equivalent transitions in a standard ASEP, indicating the importance of including the two fundamental steps of the ribosome's biochemical cycle into the model. |
2107.14073 | Mark Leake | Sarah Lecinski, Jack W Shepherd, Lewis Frame, Imogen Hayton, Chris
MacDonald, Mark C Leake | Investigating molecular crowding during cell division in budding yeast
with FRET | null | null | null | null | q-bio.CB physics.bio-ph q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cell division, aging, and stress recovery trigger spatial reorganization of
cellular components in the cytoplasm, including membrane-bound organelles, with
molecular changes in their compositions and structures. However, it is not
clear how these events are coordinated and how they integrate with regulation
of molecular crowding. We use the budding yeast Saccharomyces cerevisiae as a
model system to study these questions using recent progress in optical
fluorescence microscopy and crowding sensing probe technology. We used a
F\"{o}rster Resonance Energy Transfer (FRET) based sensor, illuminated by
confocal microscopy for high throughput analyses and Slimfield microscopy for
single-molecule resolution, to quantify molecular crowding. We determine
crowding in response to cellular growth of both mother and daughter cells, in
addition to osmotic stress, and reveal hot spots of crowding across the bud
neck in the burgeoning daughter cell. This crowding might be rationalized by
the packing of inherited material, like the vacuole, from mother cells. We
discuss recent advances in understanding the role of crowding in cellular
regulation and key current challenges and conclude by presenting our recent
advances in optimizing FRET-based measurements of crowding whilst
simultaneously imaging a third color, which can be used as a marker that labels
organelle membranes. Our approaches can be combined with synchronised cell
populations to increase experimental throughput and correlate molecular
crowding information with different stages in the cell cycle.
| [
{
"created": "Thu, 29 Jul 2021 15:05:04 GMT",
"version": "v1"
},
{
"created": "Thu, 2 Sep 2021 22:41:38 GMT",
"version": "v2"
}
] | 2021-09-06 | [
[
"Lecinski",
"Sarah",
""
],
[
"Shepherd",
"Jack W",
""
],
[
"Frame",
"Lewis",
""
],
[
"Hayton",
"Imogen",
""
],
[
"MacDonald",
"Chris",
""
],
[
"Leake",
"Mark C",
""
]
] | Cell division, aging, and stress recovery trigger spatial reorganization of cellular components in the cytoplasm, including membrane-bound organelles, with molecular changes in their compositions and structures. However, it is not clear how these events are coordinated and how they integrate with regulation of molecular crowding. We use the budding yeast Saccharomyces cerevisiae as a model system to study these questions using recent progress in optical fluorescence microscopy and crowding sensing probe technology. We used a F\"{o}rster Resonance Energy Transfer (FRET) based sensor, illuminated by confocal microscopy for high throughput analyses and Slimfield microscopy for single-molecule resolution, to quantify molecular crowding. We determine crowding in response to cellular growth of both mother and daughter cells, in addition to osmotic stress, and reveal hot spots of crowding across the bud neck in the burgeoning daughter cell. This crowding might be rationalized by the packing of inherited material, like the vacuole, from mother cells. We discuss recent advances in understanding the role of crowding in cellular regulation and key current challenges and conclude by presenting our recent advances in optimizing FRET-based measurements of crowding whilst simultaneously imaging a third color, which can be used as a marker that labels organelle membranes. Our approaches can be combined with synchronised cell populations to increase experimental throughput and correlate molecular crowding information with different stages in the cell cycle. |
1412.3800 | Lior Pachter | Akshay Tambe, Jennifer Doudna and Lior Pachter | Identifying RNA contacts from SHAPE-MaP by partial correlation analysis | null | null | null | null | q-bio.QM q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In a recent paper Siegfried et al. published a new sequence-based structural
RNA assay that utilizes mutational profiling to detect base pairing (MaP).
Output from MaP provides information about both pairing (via reactivities) and
contact (via correlations). Reactivities can be coupled to partition function
folding models for structural inference, while correlations can reveal pairs of
sites that may be in structural proximity. The possibility for inference of 3D
contacts via MaP suggests a novel approach to structural prediction for RNA
analogous to covariance structural prediction for proteins. We explore this
approach and show that partial correlation analysis outperforms na\"ive
correlation analysis. Our results should be applicable to a wide range of
high-throughput sequencing based RNA structural assays that are under
development.
| [
{
"created": "Mon, 8 Dec 2014 19:04:51 GMT",
"version": "v1"
}
] | 2014-12-12 | [
[
"Tambe",
"Akshay",
""
],
[
"Doudna",
"Jennifer",
""
],
[
"Pachter",
"Lior",
""
]
] | In a recent paper Siegfried et al. published a new sequence-based structural RNA assay that utilizes mutational profiling to detect base pairing (MaP). Output from MaP provides information about both pairing (via reactivities) and contact (via correlations). Reactivities can be coupled to partition function folding models for structural inference, while correlations can reveal pairs of sites that may be in structural proximity. The possibility for inference of 3D contacts via MaP suggests a novel approach to structural prediction for RNA analogous to covariance structural prediction for proteins. We explore this approach and show that partial correlation analysis outperforms na\"ive correlation analysis. Our results should be applicable to a wide range of high-throughput sequencing based RNA structural assays that are under development. |
1504.02816 | Greg Gloor Dr | Amy McMillan, Stephen Rulisa, Mark Sumarah, Jean M. Macklaim, Justin
Renaud, Jordan Bisanz, Gregory B. Gloor, and Gregor Reid | A multi-platform metabolomics approach identifies novel biomarkers
associated with bacterial diversity in the human vagina | null | null | 10.1038/srep14174 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bacterial vaginosis (BV) increases transmission of HIV, enhances the risk of
preterm labour, and its associated malodour impacts the quality of life for
many women. Clinical diagnosis primarily relies on microscopy to presumptively
detect a loss of lactobacilli and acquisition of anaerobes. This diagnostic
does not reflect the microbiota composition accurately as lactobacilli can
assume different morphotypes, and assigning BV associated morphotypes to
specific organisms is challenging. Using an untargeted metabolomics approach we
identify novel biomarkers for BV in a cohort of 131 Rwandan women, and
demonstrate that metabolic products in the vagina are strongly associated with
bacterial diversity. Metabolites associated with high diversity and clinical BV
include 2-hydroxyisovalerate and gamma-hydroxybutyrate (GHB), but not the
anaerobic end-product succinate. Low diversity, and high relative abundance of
lactobacilli, is characterized by lactate and amino acids. Biomarkers
associated with diversity and BV are independent of pregnancy status, and were
validated in a blinded replication cohort from Tanzania (n=45), in which we
predicted clinical BV with 91% accuracy. Correlations between the metabolome
and microbiota identified Gardnerella vaginalis as a putative producer of GHB,
and we demonstrate production by this species in vitro. This work provides a
deeper understanding of the relationship between the vaginal microbiota and
biomarkers of vaginal health and dysbiosis.
| [
{
"created": "Fri, 10 Apr 2015 23:40:14 GMT",
"version": "v1"
}
] | 2017-04-06 | [
[
"McMillan",
"Amy",
""
],
[
"Rulisa",
"Stephen",
""
],
[
"Sumarah",
"Mark",
""
],
[
"Macklaim",
"Jean M.",
""
],
[
"Renaud",
"Justin",
""
],
[
"Bisanz",
"Jordan",
""
],
[
"Gloor",
"Gregory B.",
""
],
[
"Reid",
"Gregor",
""
]
] | Bacterial vaginosis (BV) increases transmission of HIV, enhances the risk of preterm labour, and its associated malodour impacts the quality of life for many women. Clinical diagnosis primarily relies on microscopy to presumptively detect a loss of lactobacilli and acquisition of anaerobes. This diagnostic does not reflect the microbiota composition accurately as lactobacilli can assume different morphotypes, and assigning BV associated morphotypes to specific organisms is challenging. Using an untargeted metabolomics approach we identify novel biomarkers for BV in a cohort of 131 Rwandan women, and demonstrate that metabolic products in the vagina are strongly associated with bacterial diversity. Metabolites associated with high diversity and clinical BV include 2-hydroxyisovalerate and gamma-hydroxybutyrate (GHB), but not the anaerobic end-product succinate. Low diversity, and high relative abundance of lactobacilli, is characterized by lactate and amino acids. Biomarkers associated with diversity and BV are independent of pregnancy status, and were validated in a blinded replication cohort from Tanzania (n=45), in which we predicted clinical BV with 91% accuracy. Correlations between the metabolome and microbiota identified Gardnerella vaginalis as a putative producer of GHB, and we demonstrate production by this species in vitro. This work provides a deeper understanding of the relationship between the vaginal microbiota and biomarkers of vaginal health and dysbiosis. |
1407.8247 | John Vandermeer | John Vandermeer, Pejman Rohani | The interaction of regional and local in the dynamics of the coffee rust
disease | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A simple mean field model is proposed that captures the two scales of dynamic
processes involved in the spread of the coffee rust disease. At a local level
the state variable is the fraction of coffee plants on an average farm that are
infested with the disease and at the regional level the state variable is the
fraction of farms that are infested with the disease. These two levels are
interactive with one another in obvious ways, producing qualitatively
interesting results. Specifically there are three distinct outcomes: disease
disappears, disease is potentially persistent, and disease is inevitably
persistent. The basic structure of the model generates the potential for
tipping point behavior due to the presence of a blue sky bifurcation,
suggesting that the emergence of epizootics of the disease may be
unpredictable.
| [
{
"created": "Thu, 31 Jul 2014 00:44:25 GMT",
"version": "v1"
}
] | 2014-08-01 | [
[
"Vandermeer",
"John",
""
],
[
"Rohani",
"Pejman",
""
]
] | A simple mean field model is proposed that captures the two scales of dynamic processes involved in the spread of the coffee rust disease. At a local level the state variable is the fraction of coffee plants on an average farm that are infested with the disease and at the regional level the state variable is the fraction of farms that are infested with the disease. These two levels are interactive with one another in obvious ways, producing qualitatively interesting results. Specifically there are three distinct outcomes: disease disappears, disease is potentially persistent, and disease is inevitably persistent. The basic structure of the model generates the potential for tipping point behavior due to the presence of a blue sky bifurcation, suggesting that the emergence of epizootics of the disease may be unpredictable. |
2102.03746 | Jiayu Shang | Jiayu Shang and Jingzhe Jiang and Yanni Sun | Bacteriophage classification for assembled contigs using Graph
Convolutional Network | 15 pages, 10 figures | Bioinformatics, Volume 37, Issue Supplement1, July 2021, Pages
25-33 | 10.1093/bioinformatics/btab293 | null | q-bio.GN cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Motivation: Bacteriophages (aka phages), which mainly infect bacteria, play
key roles in the biology of microbes. As the most abundant biological entities
on the planet, the number of discovered phages is only the tip of the iceberg.
Recently, many new phages have been revealed using high throughput sequencing,
particularly metagenomic sequencing. Compared to the fast accumulation of
phage-like sequences, there is a serious lag in taxonomic classification of
phages. High diversity, abundance, and limited known phages pose great
challenges for taxonomic analysis. In particular, alignment-based tools have
difficulty in classifying fast accumulating contigs assembled from metagenomic
data. Results: In this work, we present a novel semi-supervised learning model,
named PhaGCN, to conduct taxonomic classification for phage contigs. In this
learning model, we construct a knowledge graph by combining the DNA sequence
features learned by convolutional neural network (CNN) and protein sequence
similarity gained from gene-sharing network. Then we apply graph convolutional
network (GCN) to utilize both the labeled and unlabeled samples in training to
enhance the learning ability. We tested PhaGCN on both simulated and real
sequencing data. The results clearly show that our method competes favorably
against available phage classification tools.
| [
{
"created": "Sun, 7 Feb 2021 08:58:35 GMT",
"version": "v1"
},
{
"created": "Sat, 4 Sep 2021 10:08:25 GMT",
"version": "v2"
}
] | 2021-09-07 | [
[
"Shang",
"Jiayu",
""
],
[
"Jiang",
"Jingzhe",
""
],
[
"Sun",
"Yanni",
""
]
] | Motivation: Bacteriophages (aka phages), which mainly infect bacteria, play key roles in the biology of microbes. As the most abundant biological entities on the planet, the number of discovered phages is only the tip of the iceberg. Recently, many new phages have been revealed using high throughput sequencing, particularly metagenomic sequencing. Compared to the fast accumulation of phage-like sequences, there is a serious lag in taxonomic classification of phages. High diversity, abundance, and limited known phages pose great challenges for taxonomic analysis. In particular, alignment-based tools have difficulty in classifying fast accumulating contigs assembled from metagenomic data. Results: In this work, we present a novel semi-supervised learning model, named PhaGCN, to conduct taxonomic classification for phage contigs. In this learning model, we construct a knowledge graph by combining the DNA sequence features learned by convolutional neural network (CNN) and protein sequence similarity gained from gene-sharing network. Then we apply graph convolutional network (GCN) to utilize both the labeled and unlabeled samples in training to enhance the learning ability. We tested PhaGCN on both simulated and real sequencing data. The results clearly show that our method competes favorably against available phage classification tools. |
1005.0311 | Favard Cyril | C. Favard, J. Ehrig, J. Wenger, P.-F. Lenne and H. Rigneault | FCS diffusion laws on two-phases lipid membranes : experimental and
Monte-carlo simulation determination of domain size | 20 pages, 10 figures | null | null | null | q-bio.QM cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For more than ten years now, many efforts have been made to identify and
characterize the nature of obstructed diffusion in model and cellular lipid
membranes. Amongst all the techniques developed for this purpose, FCS, by means
of determination of FCS diffusion laws, has been shown to be a very efficient
approach. In this paper, FCS diffusion laws are used to probe the behavior of a
pure lipid and a lipid mixture at temperatures below and above phase
transitions, both numerically, using a full thermodynamic model, and
experimentally. In both cases FCS diffusion laws exhibit deviation from free
diffusion and reveal the existence of domains. The variation of these domains'
mean size with temperature is in perfect correlation with the enthalpy
fluctuation.
| [
{
"created": "Mon, 3 May 2010 14:36:46 GMT",
"version": "v1"
}
] | 2010-05-04 | [
[
"Favard",
"C.",
""
],
[
"Ehrig",
"J.",
""
],
[
"Wenger",
"J.",
""
],
[
"Lenne",
"P. -F.",
""
],
[
"Rigneault",
"H.",
""
]
] | For more than ten years now, many efforts have been made to identify and characterize the nature of obstructed diffusion in model and cellular lipid membranes. Amongst all the techniques developed for this purpose, FCS, by means of determination of FCS diffusion laws, has been shown to be a very efficient approach. In this paper, FCS diffusion laws are used to probe the behavior of a pure lipid and a lipid mixture at temperatures below and above phase transitions, both numerically, using a full thermodynamic model, and experimentally. In both cases FCS diffusion laws exhibit deviation from free diffusion and reveal the existence of domains. The variation of these domains' mean size with temperature is in perfect correlation with the enthalpy fluctuation. |
1705.09868 | Andrei Khrennikov Yu | Ichiro Yamato, Takeshi Murata and Andrei Khrennikov | Energy flow in biological system: Bioenergy transduction of V1-ATPase
molecular rotary motor from E. hirae | Progress in Biophysics and Molecular Biology, 2017 | Progress in Biophysics and Molecular Biology 130, Part A, 33-38
(2017) | null | null | q-bio.BM quant-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We classify research fields in biology into those on flows of materials,
energy, and information. As a representative energy transducing machinery in
biology, our research target, V1-ATPase from a bacterium Enterococcus hirae, a
typical molecular rotary motor is introduced. Structures of several
intermediates of the rotary motor are described and the molecular mechanism of
the motor converting chemical energy into mechanical energy is discussed.
Comments and considerations on the information flow in biology, especially on
the thermodynamic entropy in quantum physical and biological systems, are added
in a separate section containing the biologist-friendly presentation of this
complicated question.
| [
{
"created": "Sat, 27 May 2017 21:31:53 GMT",
"version": "v1"
}
] | 2018-07-18 | [
[
"Yamato",
"Ichiro",
""
],
[
"Murata",
"Takeshi",
""
],
[
"Khrennikov",
"Andrei",
""
]
] | We classify research fields in biology into those on flows of materials, energy, and information. As a representative energy transducing machinery in biology, our research target, V1-ATPase from a bacterium Enterococcus hirae, a typical molecular rotary motor is introduced. Structures of several intermediates of the rotary motor are described and the molecular mechanism of the motor converting chemical energy into mechanical energy is discussed. Comments and considerations on the information flow in biology, especially on the thermodynamic entropy in quantum physical and biological systems, are added in a separate section containing the biologist-friendly presentation of this complicated question. |
1510.09035 | Jose A. Reinoso | Jose A. Reinoso, M. C. Torrent, Cristina Masoller | Emergence of spike correlations in periodically forced excitable systems | null | null | 10.1103/PhysRevE.94.032218 | null | q-bio.NC nlin.CD physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In sensory neurons the presence of noise can facilitate the detection of weak
information-carrying signals, which are encoded and transmitted via correlated
sequences of spikes. Here we investigate relative temporal order in spike
sequences induced by a subthreshold periodic input, in the presence of white
Gaussian noise. To simulate the spikes, we use the FitzHugh-Nagumo model, and
to investigate the output sequence of inter-spike intervals (ISIs), we use the
symbolic method of ordinal analysis. We find different types of relative
temporal order, in the form of preferred ordinal patterns which depend on both
the strength of the noise and the period of the input signal. We also
demonstrate a resonance-like behavior, as certain periods and noise levels
enhance temporal ordering in the ISI sequence, maximizing the probability of
the preferred patterns. Our findings could be relevant for understanding the
mechanisms underlying temporal coding, by which single sensory neurons
represent in spike sequences the information about weak periodic stimuli.
| [
{
"created": "Fri, 30 Oct 2015 10:07:59 GMT",
"version": "v1"
},
{
"created": "Wed, 9 Mar 2016 12:51:15 GMT",
"version": "v2"
}
] | 2016-10-12 | [
[
"Reinoso",
"Jose A.",
""
],
[
"Torrent",
"M. C.",
""
],
[
"Masoller",
"Cristina",
""
]
] | In sensory neurons the presence of noise can facilitate the detection of weak information-carrying signals, which are encoded and transmitted via correlated sequences of spikes. Here we investigate relative temporal order in spike sequences induced by a subthreshold periodic input, in the presence of white Gaussian noise. To simulate the spikes, we use the FitzHugh-Nagumo model, and to investigate the output sequence of inter-spike intervals (ISIs), we use the symbolic method of ordinal analysis. We find different types of relative temporal order, in the form of preferred ordinal patterns which depend on both the strength of the noise and the period of the input signal. We also demonstrate a resonance-like behavior, as certain periods and noise levels enhance temporal ordering in the ISI sequence, maximizing the probability of the preferred patterns. Our findings could be relevant for understanding the mechanisms underlying temporal coding, by which single sensory neurons represent in spike sequences the information about weak periodic stimuli. |
2110.05478 | Yi Zhang | Yi Zhang | An In-depth Summary of Recent Artificial Intelligence Applications in
Drug Design | 26 pages, 6 figures, 3 tables, 253 references | null | null | null | q-bio.QM cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | As a promising tool to navigate in the vast chemical space, artificial
intelligence (AI) is leveraged for drug design. From 2017 to 2021, the number
of applications of several recent AI models (i.e., graph neural network (GNN),
recurrent neural network (RNN), variational autoencoder (VAE), generative
adversarial network (GAN), flow models, and reinforcement learning (RL)) in
drug design increased significantly. Many relevant literature reviews exist. However, none
of them provides an in-depth summary of many applications of the recent AI
models in drug design. To complement the existing literature, this survey
includes the theoretical development of the previously mentioned AI models and
detailed summaries of 42 recent applications of AI in drug design. Concretely,
13 of them leverage GNN for molecular property prediction and 29 of them use RL
and/or deep generative models for molecule generation and optimization. In most
cases, the focus of the summary is the models, their variants, and
modifications for specific tasks in drug design. Moreover, 60 additional
applications of AI in molecule generation and optimization are briefly
summarized in a table. Finally, this survey provides a holistic discussion of
the abundant applications so that the tasks, potential solutions, and
challenges in AI-based drug design become evident.
| [
{
"created": "Sun, 10 Oct 2021 00:40:53 GMT",
"version": "v1"
}
] | 2021-10-13 | [
[
"Zhang",
"Yi",
""
]
] | As a promising tool to navigate in the vast chemical space, artificial intelligence (AI) is leveraged for drug design. From 2017 to 2021, the number of applications of several recent AI models (i.e., graph neural network (GNN), recurrent neural network (RNN), variational autoencoder (VAE), generative adversarial network (GAN), flow models, and reinforcement learning (RL)) in drug design increased significantly. Many relevant literature reviews exist. However, none of them provides an in-depth summary of many applications of the recent AI models in drug design. To complement the existing literature, this survey includes the theoretical development of the previously mentioned AI models and detailed summaries of 42 recent applications of AI in drug design. Concretely, 13 of them leverage GNN for molecular property prediction and 29 of them use RL and/or deep generative models for molecule generation and optimization. In most cases, the focus of the summary is the models, their variants, and modifications for specific tasks in drug design. Moreover, 60 additional applications of AI in molecule generation and optimization are briefly summarized in a table. Finally, this survey provides a holistic discussion of the abundant applications so that the tasks, potential solutions, and challenges in AI-based drug design become evident. |
1409.4607 | Caitlin Rivers | Caitlin M. Rivers, Eric T. Lofgren, Madhav Marathe, Stephen Eubank,
Bryan L. Lewis | Modeling the Impact of Interventions on an Epidemic of Ebola in Sierra
Leone and Liberia | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An Ebola outbreak of unparalleled size is currently affecting several
countries in West Africa, and international efforts to control the outbreak are
underway. However, the efficacy of these interventions, and their likely impact
on an Ebola epidemic of this size, is unknown. Forecasting and simulation of
these interventions may inform public health efforts. We use existing data from
Liberia and Sierra Leone to parameterize a mathematical model of Ebola and use
this model to forecast the progression of the epidemic, as well as the efficacy
of several interventions, including increased contact tracing, improved
infection control practices, and the use of a hypothetical pharmaceutical
intervention to improve survival in hospitalized patients. Model forecasts
until Dec. 31, 2014 show an increasingly severe epidemic with no sign of having
reached a peak. Modeling results suggest that increased contact tracing,
improved infection control, or a combination of the two can have a substantial
impact on the number of Ebola cases, but these interventions are not sufficient
to halt the progress of the epidemic. The hypothetical pharmaceutical
intervention, while impacting mortality, had a smaller effect on the forecasted
trajectory of the epidemic. Near-term, practical interventions to address the
ongoing Ebola epidemic may have a beneficial impact on public health, but they
will not result in the immediate halting, or even obvious slowing of the
epidemic. A long-term commitment of resources and support will be necessary to
address the outbreak.
| [
{
"created": "Tue, 16 Sep 2014 12:29:28 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Sep 2014 20:35:02 GMT",
"version": "v2"
}
] | 2014-09-23 | [
[
"Rivers",
"Caitlin M.",
""
],
[
"Lofgren",
"Eric T.",
""
],
[
"Marathe",
"Madhav",
""
],
[
"Eubank",
"Stephen",
""
],
[
"Lewis",
"Bryan L.",
""
]
] | An Ebola outbreak of unparalleled size is currently affecting several countries in West Africa, and international efforts to control the outbreak are underway. However, the efficacy of these interventions, and their likely impact on an Ebola epidemic of this size, is unknown. Forecasting and simulation of these interventions may inform public health efforts. We use existing data from Liberia and Sierra Leone to parameterize a mathematical model of Ebola and use this model to forecast the progression of the epidemic, as well as the efficacy of several interventions, including increased contact tracing, improved infection control practices, and the use of a hypothetical pharmaceutical intervention to improve survival in hospitalized patients. Model forecasts until Dec. 31, 2014 show an increasingly severe epidemic with no sign of having reached a peak. Modeling results suggest that increased contact tracing, improved infection control, or a combination of the two can have a substantial impact on the number of Ebola cases, but these interventions are not sufficient to halt the progress of the epidemic. The hypothetical pharmaceutical intervention, while impacting mortality, had a smaller effect on the forecasted trajectory of the epidemic. Near-term, practical interventions to address the ongoing Ebola epidemic may have a beneficial impact on public health, but they will not result in the immediate halting, or even obvious slowing of the epidemic. A long-term commitment of resources and support will be necessary to address the outbreak. |
q-bio/0701049 | Christian Hagendorf | Francois David, Christian Hagendorf, Kay Joerg Wiese | Random RNA under tension | 6 pages, 10 figures. v2: corrected typos, discussion on locking
argument improved | Europhys. Lett., 78 (2007) 68003 | 10.1209/0295-5075/78/68003 | LPTENS 07/04 | q-bio.BM cond-mat.dis-nn | null | The Laessig-Wiese (LW) field theory for the freezing transition of random RNA
secondary structures is generalized to the situation of an external force. We
find a second-order phase transition at a critical applied force f = f_c. For f
< f_c forces are irrelevant. For f > f_c, the extension L as a function of
pulling force f scales as (f-f_c)^(1/gamma-1). The exponent gamma is calculated
in an epsilon-expansion: At 1-loop order gamma = epsilon/2 = 1/2, equivalent to
the disorder-free case. 2-loop results yielding gamma = 0.6 are briefly
mentioned. Using a locking argument, we speculate that this result extends to
the strong-disorder phase.
| [
{
"created": "Mon, 29 Jan 2007 19:41:49 GMT",
"version": "v1"
},
{
"created": "Tue, 8 May 2007 16:19:32 GMT",
"version": "v2"
}
] | 2007-06-13 | [
[
"David",
"Francois",
""
],
[
"Hagendorf",
"Christian",
""
],
[
"Wiese",
"Kay Joerg",
""
]
] | The Laessig-Wiese (LW) field theory for the freezing transition of random RNA secondary structures is generalized to the situation of an external force. We find a second-order phase transition at a critical applied force f = f_c. For f < f_c forces are irrelevant. For f > f_c, the extension L as a function of pulling force f scales as (f-f_c)^(1/gamma-1). The exponent gamma is calculated in an epsilon-expansion: At 1-loop order gamma = epsilon/2 = 1/2, equivalent to the disorder-free case. 2-loop results yielding gamma = 0.6 are briefly mentioned. Using a locking argument, we speculate that this result extends to the strong-disorder phase. |
1905.11640 | Wenzhi Mao | Wenzhi Mao, Wenze Ding and Haipeng Gong | AmoebaContact and GDFold: a new pipeline for rapid prediction of protein
structures | null | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Native contacts between residues could be predicted from the amino acid
sequence of proteins, and the predicted contact information could assist the de
novo protein structure prediction. Here, we present a novel pipeline of a
residue contact predictor AmoebaContact and a contact-assisted folder GDFold
for rapid protein structure prediction. Unlike mainstream contact predictors
that utilize human-designed neural networks, AmoebaContact adopts a set of
network architectures that are found as optimal for contact prediction through
automatic searching and predicts the residue contacts at a series of cutoffs.
Different from conventional contact-assisted folders that only use top-scored
contact pairs, GDFold considers all residue pairs from the prediction results
of AmoebaContact in a differentiable loss function and optimizes the atom
coordinates using the gradient descent algorithm. Combination of AmoebaContact
and GDFold allows quick reconstruction of the protein structure, with
comparable model quality to the state-of-the-art protein structure prediction
methods.
| [
{
"created": "Tue, 28 May 2019 06:54:24 GMT",
"version": "v1"
}
] | 2019-05-29 | [
[
"Mao",
"Wenzhi",
""
],
[
"Ding",
"Wenze",
""
],
[
"Gong",
"Haipeng",
""
]
] | Native contacts between residues could be predicted from the amino acid sequence of proteins, and the predicted contact information could assist the de novo protein structure prediction. Here, we present a novel pipeline of a residue contact predictor AmoebaContact and a contact-assisted folder GDFold for rapid protein structure prediction. Unlike mainstream contact predictors that utilize human-designed neural networks, AmoebaContact adopts a set of network architectures that are found as optimal for contact prediction through automatic searching and predicts the residue contacts at a series of cutoffs. Different from conventional contact-assisted folders that only use top-scored contact pairs, GDFold considers all residue pairs from the prediction results of AmoebaContact in a differentiable loss function and optimizes the atom coordinates using the gradient descent algorithm. Combination of AmoebaContact and GDFold allows quick reconstruction of the protein structure, with comparable model quality to the state-of-the-art protein structure prediction methods. |
1504.07303 | Monique Tirion | Monique M. Tirion | On the Sensitivity of Protein Data Bank Normal Mode Analysis: An
Application to GH10 Xylanases | null | Phys. Biol. 12 (2015) 066013 | 10.1088/1478-3975/12/6/066013 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Protein data bank entries obtain distinct, reproducible flexibility
characteristics determined by normal mode analyses of their three dimensional
coordinate files. We study the effectiveness and sensitivity of this technique
by analyzing the results on one class of glycosidases: family 10 xylanases. A
conserved tryptophan that appears to affect access to the active site can be in
one of two conformations according to X-ray crystallographic electron density
data. The two alternate orientations of this active site tryptophan lead to
distinct flexibility spectra, with one orientation thwarting the oscillations
seen in the other. The particular orientation of this sidechain furthermore
affects the appearance of the motility of a distant, C terminal region we term
the mallet. The mallet region is known to separate members of this family of
enzymes into two classes.
| [
{
"created": "Mon, 27 Apr 2015 23:18:55 GMT",
"version": "v1"
},
{
"created": "Thu, 20 Aug 2015 16:58:02 GMT",
"version": "v2"
}
] | 2015-12-01 | [
[
"Tirion",
"Monique M.",
""
]
] | Protein data bank entries obtain distinct, reproducible flexibility characteristics determined by normal mode analyses of their three dimensional coordinate files. We study the effectiveness and sensitivity of this technique by analyzing the results on one class of glycosidases: family 10 xylanases. A conserved tryptophan that appears to affect access to the active site can be in one of two conformations according to X-ray crystallographic electron density data. The two alternate orientations of this active site tryptophan lead to distinct flexibility spectra, with one orientation thwarting the oscillations seen in the other. The particular orientation of this sidechain furthermore affects the appearance of the motility of a distant, C terminal region we term the mallet. The mallet region is known to separate members of this family of enzymes into two classes. |
2406.19969 | Kaiqi Zhang | Guanzhou Chen, Kaiqi Zhang, Xiaodong Zhang, Hong Xie, Haobo Yang,
Xiaoliang Tan, Tong Wang, Yule Ma, Qing Wang, Jinzhou Cao and Weihong Cui | Enhancing Terrestrial Net Primary Productivity Estimation with EXP-CASA:
A Novel Light Use Efficiency Model Approach | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Light Use Efficiency model, epitomized by the CASA model, is extensively
applied in the quantitative estimation of vegetation Net Primary Productivity.
However, the classic CASA model is marked by significant complexity: the
estimation of environmental stress parameters, in particular, necessitates
multi-source observation data, adding to the complexity and uncertainty of the
model's operation. Additionally, the saturation effect of the Normalized
Difference Vegetation Index (NDVI), a key variable in the CASA model, weakens
the accuracy of CASA's NPP predictions in densely vegetated areas. To address
these limitations, this study introduces the Exponential-CASA (EXP-CASA) model.
The EXP-CASA model effectively improves the CASA model by using novel functions
for estimating the fraction of absorbed photosynthetically active radiation
(FPAR) and environmental stress, by utilizing long-term observational data from
FLUXNET and MODIS surface reflectance data. In a comparative analysis of NPP
estimation accuracy among four different NPP products, EXP-CASA ($R^2 = 0.68,
RMSE= 1.1gC\cdot m^{-2} \cdot d^{-1}$) outperforms others, followed by
GLASS-NPP, and lastly MODIS-NPP and classic CASA. Additionally, this research
assesses the EXP-CASA model's adaptability to various vegetation indices,
evaluates the sensitivity and stability of its parameters over time, and
compares its accuracy against other leading NPP estimation products. The
findings reveal that the EXP-CASA model exhibits strong adaptability to diverse
vegetation indices and stability of model parameters over time series. By
introducing a novel estimation approach that optimizes model construction, the
EXP-CASA model remarkably improves the accuracy of NPP estimations and paves
the way for global-scale, consistent, and continuous assessment of vegetation
NPP.
| [
{
"created": "Fri, 28 Jun 2024 14:57:40 GMT",
"version": "v1"
}
] | 2024-07-01 | [
[
"Chen",
"Guanzhou",
""
],
[
"Zhang",
"Kaiqi",
""
],
[
"Zhang",
"Xiaodong",
""
],
[
"Xie",
"Hong",
""
],
[
"Yang",
"Haobo",
""
],
[
"Tan",
"Xiaoliang",
""
],
[
"Wang",
"Tong",
""
],
[
"Ma",
"Yule",
""
],
[
"Wang",
"Qing",
""
],
[
"Cao",
"Jinzhou",
""
],
[
"Cui",
"Weihong",
""
]
] | The Light Use Efficiency model, epitomized by the CASA model, is extensively applied in the quantitative estimation of vegetation Net Primary Productivity. However, the classic CASA model is marked by significant complexity: the estimation of environmental stress parameters, in particular, necessitates multi-source observation data, adding to the complexity and uncertainty of the model's operation. Additionally, the saturation effect of the Normalized Difference Vegetation Index (NDVI), a key variable in the CASA model, weakens the accuracy of CASA's NPP predictions in densely vegetated areas. To address these limitations, this study introduces the Exponential-CASA (EXP-CASA) model. The EXP-CASA model effectively improves the CASA model by using novel functions for estimating the fraction of absorbed photosynthetically active radiation (FPAR) and environmental stress, by utilizing long-term observational data from FLUXNET and MODIS surface reflectance data. In a comparative analysis of NPP estimation accuracy among four different NPP products, EXP-CASA ($R^2 = 0.68, RMSE= 1.1gC\cdot m^{-2} \cdot d^{-1}$) outperforms others, followed by GLASS-NPP, and lastly MODIS-NPP and classic CASA. Additionally, this research assesses the EXP-CASA model's adaptability to various vegetation indices, evaluates the sensitivity and stability of its parameters over time, and compares its accuracy against other leading NPP estimation products. The findings reveal that the EXP-CASA model exhibits strong adaptability to diverse vegetation indices and stability of model parameters over time series. By introducing a novel estimation approach that optimizes model construction, the EXP-CASA model remarkably improves the accuracy of NPP estimations and paves the way for global-scale, consistent, and continuous assessment of vegetation NPP. |
q-bio/0511046 | David Stockwell PhD | David R.B. Stockwell | Improving ecological niche models by data mining large environmental
datasets for surrogate models | 16 pages, 4 figures, to appear in Ecological Modelling | null | null | null | q-bio.QM cs.AI | null | WhyWhere is a new ecological niche modeling (ENM) algorithm for mapping and
explaining the distribution of species. The algorithm uses image processing
methods to efficiently sift through large amounts of data to find the few
variables that best predict species occurrence. The purpose of this paper is to
describe and justify the main parameterizations and to show preliminary success
at rapidly providing accurate, scalable, and simple ENMs. Preliminary results
for 6 species of plants and animals in different regions indicate a significant
(p<0.01) 14% increase in accuracy over the GARP algorithm using models with
few, typically two, variables. The increase is attributed to access to
additional data, particularly monthly vs. annual climate averages. WhyWhere is
also 6 times faster than GARP on large data sets. A data mining based approach
with transparent access to remote data archives is a new paradigm for ENM,
particularly suited to finding correlates in large databases of fine resolution
surfaces. Software for WhyWhere is freely available, both as a service and in a
desktop downloadable form from the web site http://biodi.sdsc.edu/ww_home.html.
| [
{
"created": "Mon, 28 Nov 2005 19:47:28 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Stockwell",
"David R. B.",
""
]
] | WhyWhere is a new ecological niche modeling (ENM) algorithm for mapping and explaining the distribution of species. The algorithm uses image processing methods to efficiently sift through large amounts of data to find the few variables that best predict species occurrence. The purpose of this paper is to describe and justify the main parameterizations and to show preliminary success at rapidly providing accurate, scalable, and simple ENMs. Preliminary results for 6 species of plants and animals in different regions indicate a significant (p<0.01) 14% increase in accuracy over the GARP algorithm using models with few, typically two, variables. The increase is attributed to access to additional data, particularly monthly vs. annual climate averages. WhyWhere is also 6 times faster than GARP on large data sets. A data mining based approach with transparent access to remote data archives is a new paradigm for ENM, particularly suited to finding correlates in large databases of fine resolution surfaces. Software for WhyWhere is freely available, both as a service and in a desktop downloadable form from the web site http://biodi.sdsc.edu/ww_home.html. |
q-bio/0503039 | Toshio Fukumi | Toshio Fukumi | Free energy adopted stochastic optimization protein folding | AMS-LaTeX 3 pages, no figure | null | null | null | q-bio.BM | null | Optimal structure of proteins is described by linear stochastic differential
equation with mean decrease of free energy and volatility. Structure
determining strategy is given by a twin of stochastic variables for which
empirical conditions are not postulated. Optimal structure determination will
be deformed to be adaptive to a trading strategy employing the martingale property
where stochastic integral w.r.t. analytical solution of stochastic differential
equation will be employed.
| [
{
"created": "Tue, 29 Mar 2005 04:17:27 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Fukumi",
"Toshio",
""
]
] | Optimal structure of proteins is described by linear stochastic differential equation with mean decrease of free energy and volatility. Structure determining strategy is given by a twin of stochastic variables for which empirical conditions are not postulated. Optimal structure determination will be deformed to be adaptive to a trading strategy employing the martingale property where stochastic integral w.r.t. analytical solution of stochastic differential equation will be employed. |
q-bio/0503003 | Tomoshiro Ochiai | T. Ochiai, J.C. Nacher, T. Akutsu | Symmetry and Dynamics in living organisms: The self-similarity principle
governs gene expression dynamics | 27 pages, 14 figures, Latex | null | null | null | q-bio.BM | null | The ambitious and ultimate research purpose in Systems Biology is the
understanding and modelling of the cell's system. Although a vast number of
models have been developed in order to extract biological knowledge from
complex systems composed of basic elements such as proteins, genes and chemical
compounds, a need remains for improving our understanding of dynamical features
of the systems (i.e., temporal-dependence).
In this article, we analyze the gene expression dynamics (i.e., how the gene
expression fluctuates in time) by using a new constructive approach. This
approach is based on only two fundamental ingredients: symmetry and the Markov
property of dynamics. First, by using experimental data of human and yeast gene
expression time series, we found a symmetry in short-time transition
probability from time $t$ to time $t+1$. We call it self-similarity symmetry
(i.e., surprisingly, the gene expression short-time fluctuations contain a
repeating pattern of smaller and smaller parts that are like the whole, but
different in size). Secondly, the Markov property of dynamics reflects that the
short-time fluctuation governs the full-time behaviour of the system. Here, we
succeed in reconstructing naturally the global behavior of the observed
distribution of gene expression (i.e., scaling-law) and the local behaviour of
the power-law tail of this distribution, by using only these two ingredients:
symmetry and the Markov property of dynamics. This approach may represent a
step forward toward an integrated image of the basic elements of the whole
cell.
| [
{
"created": "Tue, 1 Mar 2005 14:02:45 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Ochiai",
"T.",
""
],
[
"Nacher",
"J. C.",
""
],
[
"Akutsu",
"T.",
""
]
] | The ambitious and ultimate research purpose in Systems Biology is the understanding and modelling of the cell's system. Although a vast number of models have been developed in order to extract biological knowledge from complex systems composed of basic elements such as proteins, genes and chemical compounds, a need remains for improving our understanding of dynamical features of the systems (i.e., temporal-dependence). In this article, we analyze the gene expression dynamics (i.e., how the gene expression fluctuates in time) by using a new constructive approach. This approach is based on only two fundamental ingredients: symmetry and the Markov property of dynamics. First, by using experimental data of human and yeast gene expression time series, we found a symmetry in short-time transition probability from time $t$ to time $t+1$. We call it self-similarity symmetry (i.e., surprisingly, the gene expression short-time fluctuations contain a repeating pattern of smaller and smaller parts that are like the whole, but different in size). Secondly, the Markov property of dynamics reflects that the short-time fluctuation governs the full-time behaviour of the system. Here, we succeed in reconstructing naturally the global behavior of the observed distribution of gene expression (i.e., scaling-law) and the local behaviour of the power-law tail of this distribution, by using only these two ingredients: symmetry and the Markov property of dynamics. This approach may represent a step forward toward an integrated image of the basic elements of the whole cell. |
1101.2170 | Simone Linz | Maria Luisa Bonet, Simone Linz, and Katherine St. John | The Complexity of Finding Multiple Solutions to Betweenness and Quartet
Compatibility | 25 pages, 7 figures | null | null | null | q-bio.PE cs.CC cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We show that two important problems that have applications in computational
biology are ASP-complete, which implies that, given a solution to a problem, it
is NP-complete to decide if another solution exists. We show first that a
variation of Betweenness, which is the underlying problem of questions related
to radiation hybrid mapping, is ASP-complete. Subsequently, we use that result
to show that Quartet Compatibility, a fundamental problem in phylogenetics that
asks whether a set of quartets can be represented by a parent tree, is also
ASP-complete. The latter result shows that Steel's Quartet Challenge, which
asks whether a solution to Quartet Compatibility is unique, is coNP-complete.
| [
{
"created": "Tue, 11 Jan 2011 17:48:54 GMT",
"version": "v1"
},
{
"created": "Mon, 28 Mar 2011 07:57:11 GMT",
"version": "v2"
}
] | 2015-03-17 | [
[
"Bonet",
"Maria Luisa",
""
],
[
"Linz",
"Simone",
""
],
[
"John",
"Katherine St.",
""
]
] | We show that two important problems that have applications in computational biology are ASP-complete, which implies that, given a solution to a problem, it is NP-complete to decide if another solution exists. We show first that a variation of Betweenness, which is the underlying problem of questions related to radiation hybrid mapping, is ASP-complete. Subsequently, we use that result to show that Quartet Compatibility, a fundamental problem in phylogenetics that asks whether a set of quartets can be represented by a parent tree, is also ASP-complete. The latter result shows that Steel's Quartet Challenge, which asks whether a solution to Quartet Compatibility is unique, is coNP-complete. |
2009.06313 | Minghan Chen | Minghan Chen, Mansooreh Ahmadian, Layne Watson, Yang Cao | Finding Acceptable Parameter Regions of Stochastic Hill functions for
Multisite Phosphorylation Mechanism | null | The Journal of Chemical Physics 152.12 (2020): 124108 | 10.1063/1.5143004 | null | q-bio.MN math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multisite phosphorylation plays an important role in regulating switchlike
protein activity and has been used widely in mathematical models. With the
development of new experimental techniques and more molecular data, molecular
phosphorylation processes emerge in many systems with increasing complexity and
sizes. These developments call for simple yet valid stochastic models to
describe various multisite phosphorylation processes, especially in large and
complex biochemical networks. To reduce model complexity, this work aims to
simplify the multisite phosphorylation mechanism by a stochastic Hill function
model. Further, this work optimizes regions of parameter space to match
simulation results from the stochastic Hill function with the distributive
multisite phosphorylation process. While traditional parameter optimization
methods have been focusing on finding the best parameter vector, in most
circumstances modelers would like to find a set of parameter vectors that
generate similar system dynamics and results. This paper proposes a general
$\alpha$-$\beta$-$\gamma$ rule to return an acceptable parameter region of the
stochastic Hill function based on a quasi-Newton stochastic optimization
(QNSTOP) algorithm. Different objective functions are investigated
characterizing different features of the simulation-based empirical data, among
which the approximate maximum log-likelihood method is recommended for general
applications. Numerical results demonstrate that with an appropriate parameter
vector value, the stochastic Hill function model depicts the multisite
phosphorylation process well except the initial (transient) period.
| [
{
"created": "Mon, 14 Sep 2020 10:27:33 GMT",
"version": "v1"
}
] | 2020-09-15 | [
[
"Chen",
"Minghan",
""
],
[
"Ahmadian",
"Mansooreh",
""
],
[
"Watson",
"Layne",
""
],
[
"Cao",
"Yang",
""
]
] | Multisite phosphorylation plays an important role in regulating switchlike protein activity and has been used widely in mathematical models. With the development of new experimental techniques and more molecular data, molecular phosphorylation processes emerge in many systems with increasing complexity and sizes. These developments call for simple yet valid stochastic models to describe various multisite phosphorylation processes, especially in large and complex biochemical networks. To reduce model complexity, this work aims to simplify the multisite phosphorylation mechanism by a stochastic Hill function model. Further, this work optimizes regions of parameter space to match simulation results from the stochastic Hill function with the distributive multisite phosphorylation process. While traditional parameter optimization methods have been focusing on finding the best parameter vector, in most circumstances modelers would like to find a set of parameter vectors that generate similar system dynamics and results. This paper proposes a general $\alpha$-$\beta$-$\gamma$ rule to return an acceptable parameter region of the stochastic Hill function based on a quasi-Newton stochastic optimization (QNSTOP) algorithm. Different objective functions are investigated characterizing different features of the simulation-based empirical data, among which the approximate maximum log-likelihood method is recommended for general applications. Numerical results demonstrate that with an appropriate parameter vector value, the stochastic Hill function model depicts the multisite phosphorylation process well except the initial (transient) period. |
1412.0207 | Surojit Biswas | Surojit Biswas, Meredith McDonald, Derek S. Lundberg, Jeffery L.
Dangl, Vladimir Jojic | Learning microbial interaction networks from metagenomic count data | Submitted to RECOMB 2015 | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many microbes associate with higher eukaryotes and impact their vitality. In
order to engineer microbiomes for host benefit, we must understand the rules of
community assembly and maintenance, which, in large part, demands an
understanding of the direct interactions between community members. Toward this
end, we've developed a Poisson-multivariate normal hierarchical model to learn
direct interactions from the count-based output of standard metagenomics
sequencing experiments. Our model controls for confounding predictors at the
Poisson layer, and captures direct taxon-taxon interactions at the multivariate
normal layer using an $\ell_1$ penalized precision matrix. We show in a
synthetic experiment that our method handily outperforms state-of-the-art
methods such as SparCC and the graphical lasso (glasso). In a real, in planta
perturbation experiment of a nine member bacterial community, we show our
model, but not SparCC or glasso, correctly resolves a direct interaction
structure among three community members that associate with Arabidopsis
thaliana roots. We conclude that our method provides a structured, accurate,
and distributionally reasonable way of modeling correlated count based random
variables and capturing direct interactions among them.
| [
{
"created": "Sun, 30 Nov 2014 11:19:59 GMT",
"version": "v1"
}
] | 2014-12-02 | [
[
"Biswas",
"Surojit",
""
],
[
"McDonald",
"Meredith",
""
],
[
"Lundberg",
"Derek S.",
""
],
[
"Dangl",
"Jeffery L.",
""
],
[
"Jojic",
"Vladimir",
""
]
] | Many microbes associate with higher eukaryotes and impact their vitality. In order to engineer microbiomes for host benefit, we must understand the rules of community assembly and maintenance, which, in large part, demands an understanding of the direct interactions between community members. Toward this end, we've developed a Poisson-multivariate normal hierarchical model to learn direct interactions from the count-based output of standard metagenomics sequencing experiments. Our model controls for confounding predictors at the Poisson layer, and captures direct taxon-taxon interactions at the multivariate normal layer using an $\ell_1$ penalized precision matrix. We show in a synthetic experiment that our method handily outperforms state-of-the-art methods such as SparCC and the graphical lasso (glasso). In a real, in planta perturbation experiment of a nine member bacterial community, we show our model, but not SparCC or glasso, correctly resolves a direct interaction structure among three community members that associate with Arabidopsis thaliana roots. We conclude that our method provides a structured, accurate, and distributionally reasonable way of modeling correlated count based random variables and capturing direct interactions among them. |
2202.07218 | Susanna Gordleeva | Susanna Gordleeva, Yuliya A. Tsybina, Mikhail I. Krivonosov, Ivan Y.
Tyukin, Victor B. Kazantsev, Alexey A. Zaikin, Alexander N. Gorban | Situation-based memory in spiking neuron-astrocyte network | 38 pages, 11 figures, 4 tables | IEEE Transactions on Neural Networks and Learning Systems, 04
December 2023, Early Access | 10.1109/TNNLS.2023.33354 | null | q-bio.NC q-bio.CB | http://creativecommons.org/licenses/by/4.0/ | Mammalian brains operate in a very special surrounding: to survive they have
to react quickly and effectively to the pool of stimuli patterns previously
recognized as danger. Many learning tasks often encountered by living organisms
involve a specific set-up centered around a relatively small set of patterns
presented in a particular environment. For example, at a party, people
recognize friends immediately, without deep analysis, just by seeing a fragment
of their clothes. This set-up with reduced "ontology" is referred to as a
"situation". Situations are usually local in space and time. In this work, we
propose that neuron-astrocyte networks provide a network topology that is
effectively adapted to accommodate situation-based memory. In order to
illustrate this, we numerically simulate and analyze a well-established model
of a neuron-astrocyte network, which is subjected to stimuli conforming to the
situation-driven environment. Three pools of stimuli patterns are considered:
external patterns, patterns from the situation associative pool regularly
presented to the network and learned by the network, and patterns already
learned and remembered by astrocytes. Patterns from the external world are
added to and removed from the associative pool. Then we show that astrocytes
are structurally necessary for an effective function in such a learning and
testing set-up. To demonstrate this we present a novel neuromorphic model for
short-term memory implemented by a two-net spiking neural-astrocytic network.
Our results show that such a system tested on synthesized data with selective
astrocyte-induced modulation of neuronal activity provides an enhancement of
retrieval quality in comparison to standard spiking neural networks trained via
Hebbian plasticity only. We argue that the proposed set-up may offer a new way
to analyze, model, and understand neuromorphic artificial intelligence systems.
| [
{
"created": "Tue, 15 Feb 2022 06:16:30 GMT",
"version": "v1"
}
] | 2023-12-07 | [
[
"Gordleeva",
"Susanna",
""
],
[
"Tsybina",
"Yuliya A.",
""
],
[
"Krivonosov",
"Mikhail I.",
""
],
[
"Tyukin",
"Ivan Y.",
""
],
[
"Kazantsev",
"Victor B.",
""
],
[
"Zaikin",
"Alexey A.",
""
],
[
"Gorban",
"Alexander N.",
""
]
] | Mammalian brains operate in a very special surrounding: to survive they have to react quickly and effectively to the pool of stimuli patterns previously recognized as danger. Many learning tasks often encountered by living organisms involve a specific set-up centered around a relatively small set of patterns presented in a particular environment. For example, at a party, people recognize friends immediately, without deep analysis, just by seeing a fragment of their clothes. This set-up with reduced "ontology" is referred to as a "situation". Situations are usually local in space and time. In this work, we propose that neuron-astrocyte networks provide a network topology that is effectively adapted to accommodate situation-based memory. In order to illustrate this, we numerically simulate and analyze a well-established model of a neuron-astrocyte network, which is subjected to stimuli conforming to the situation-driven environment. Three pools of stimuli patterns are considered: external patterns, patterns from the situation associative pool regularly presented to the network and learned by the network, and patterns already learned and remembered by astrocytes. Patterns from the external world are added to and removed from the associative pool. Then we show that astrocytes are structurally necessary for an effective function in such a learning and testing set-up. To demonstrate this we present a novel neuromorphic model for short-term memory implemented by a two-net spiking neural-astrocytic network. Our results show that such a system tested on synthesized data with selective astrocyte-induced modulation of neuronal activity provides an enhancement of retrieval quality in comparison to standard spiking neural networks trained via Hebbian plasticity only. We argue that the proposed set-up may offer a new way to analyze, model, and understand neuromorphic artificial intelligence systems. |
2204.00007 | Joseph Whittaker Mr | Joseph Whittaker and Kexin Wu | Low-fat diets and testosterone in men: Systematic review and
meta-analysis of intervention studies | null | The Journal of steroid biochemistry and molecular biology (2021)
210: 105878 | 10.1016/j.jsbmb.2021.105878 | null | q-bio.QM | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Background: Higher endogenous testosterone levels are associated with reduced
chronic disease risk and mortality. Since the mid-20th century, there have been
significant changes in dietary patterns, and men's testosterone levels have
declined in western countries. Cross-sectional studies show inconsistent
associations between fat intake and testosterone in men.
Methods: Studies eligible for inclusion were intervention studies, with
minimal confounding variables, comparing the effect of low-fat vs high-fat
diets on men's sex hormones. Nine databases were searched from their inception to
October 2020, yielding 6 eligible studies, with a total of 206 participants.
Random effects meta-analyses were performed using Cochrane's Review Manager
software. Cochrane's risk of bias tool was used for quality assessment.
Results: There were significant decreases in sex hormones on low-fat vs
high-fat diets. Standardised mean differences with 95% confidence intervals
(CI) for outcomes were: total testosterone [-0.38 (95% CI -0.75 to -0.01) P =
0.04]; free testosterone [-0.37 (95% CI -0.63 to -0.11) P = 0.005]; urinary
testosterone [-0.38 (CI 95% -0.66 to -0.09) P = 0.009], and dihydrotestosterone
[-0.3 (CI 95% -0.56 to -0.03) P = 0.03]. There were no significant differences
for luteinising hormone or sex hormone binding globulin. Subgroup analysis for
total testosterone, European and American men, showed a stronger effect [-0.52
(95% CI -0.75 to -0.3) P < 0.001].
Conclusions: Low-fat diets appear to decrease testosterone levels in men, but
further randomised controlled trials are needed to confirm this effect. Men
with European ancestry may experience a greater decrease in testosterone, in
response to a low-fat diet.
| [
{
"created": "Thu, 31 Mar 2022 13:20:22 GMT",
"version": "v1"
}
] | 2022-04-04 | [
[
"Whittaker",
"Joseph",
""
],
[
"Wu",
"Kexin",
""
]
] | Background: Higher endogenous testosterone levels are associated with reduced chronic disease risk and mortality. Since the mid-20th century, there have been significant changes in dietary patterns, and men's testosterone levels have declined in western countries. Cross-sectional studies show inconsistent associations between fat intake and testosterone in men. Methods: Studies eligible for inclusion were intervention studies, with minimal confounding variables, comparing the effect of low-fat vs high-fat diets on men's sex hormones. Nine databases were searched from their inception to October 2020, yielding 6 eligible studies, with a total of 206 participants. Random effects meta-analyses were performed using Cochrane's Review Manager software. Cochrane's risk of bias tool was used for quality assessment. Results: There were significant decreases in sex hormones on low-fat vs high-fat diets. Standardised mean differences with 95% confidence intervals (CI) for outcomes were: total testosterone [-0.38 (95% CI -0.75 to -0.01) P = 0.04]; free testosterone [-0.37 (95% CI -0.63 to -0.11) P = 0.005]; urinary testosterone [-0.38 (CI 95% -0.66 to -0.09) P = 0.009], and dihydrotestosterone [-0.3 (CI 95% -0.56 to -0.03) P = 0.03]. There were no significant differences for luteinising hormone or sex hormone binding globulin. Subgroup analysis for total testosterone, European and American men, showed a stronger effect [-0.52 (95% CI -0.75 to -0.3) P < 0.001]. Conclusions: Low-fat diets appear to decrease testosterone levels in men, but further randomised controlled trials are needed to confirm this effect. Men with European ancestry may experience a greater decrease in testosterone, in response to a low-fat diet. |
2301.09097 | Angelo D'Ambrosio | Angelo D'Ambrosio, Raisa Kociurzynski, Alexis Papathanassopoulos,
Fabian B\"urkin, Hajo Grundmann, Tjibbe Donker | Forecasting local hospital bed demand for COVID-19 using on-request
simulations | 23 pages, 3 figures. Code available at
https://github.com/QUPI-IUK/Bed-demand-forecast | null | null | null | q-bio.QM stat.AP | http://creativecommons.org/licenses/by/4.0/ | For hospitals, realistic forecasting of bed demand during impending epidemics
of infectious diseases is essential to avoid being overwhelmed by a potential
sudden increase in the number of admitted patients. Short-term forecasting can
aid hospitals in adjusting their planning and freeing up beds in time. We
created an easy-to-use online on-request tool based on local data to forecast
COVID-19 bed demand for individual hospitals. The tool is flexible and
adaptable to different settings. It is based on a stochastic compartmental
model for estimating the epidemic dynamics and coupled with an exponential
smoothing model for forecasting. The models are written in R and Julia and
implemented as an R-shiny dashboard. The model is parameterized using COVID-19
incidence, vaccination, and bed occupancy data at customizable geographical
resolutions, loaded from official online sources or uploaded manually. Users
can select their hospital's catchment area and adjust the number of COVID-19
occupied beds at the start of the simulation. The tool provides short-term
forecasts of disease incidence and past and forecasted estimation of the
epidemic reproductive number at the chosen geographical level. These quantities
are then used to estimate the bed occupancy in both general wards and intensive
care unit beds. The platform has proven efficient, providing results within
seconds while coping with many concurrent users. By providing ad-hoc, local
data informed forecasts, this platform allows decision-makers to evaluate
realistic scenarios for allocating scarce resources, such as ICU beds, at
various geographic levels.
| [
{
"created": "Sun, 22 Jan 2023 10:37:01 GMT",
"version": "v1"
}
] | 2023-01-24 | [
[
"D'Ambrosio",
"Angelo",
""
],
[
"Kociurzynski",
"Raisa",
""
],
[
"Papathanassopoulos",
"Alexis",
""
],
[
"Bürkin",
"Fabian",
""
],
[
"Grundmann",
"Hajo",
""
],
[
"Donker",
"Tjibbe",
""
]
] | For hospitals, realistic forecasting of bed demand during impending epidemics of infectious diseases is essential to avoid being overwhelmed by a potential sudden increase in the number of admitted patients. Short-term forecasting can aid hospitals in adjusting their planning and freeing up beds in time. We created an easy-to-use online on-request tool based on local data to forecast COVID-19 bed demand for individual hospitals. The tool is flexible and adaptable to different settings. It is based on a stochastic compartmental model for estimating the epidemic dynamics and coupled with an exponential smoothing model for forecasting. The models are written in R and Julia and implemented as an R-shiny dashboard. The model is parameterized using COVID-19 incidence, vaccination, and bed occupancy data at customizable geographical resolutions, loaded from official online sources or uploaded manually. Users can select their hospital's catchment area and adjust the number of COVID-19 occupied beds at the start of the simulation. The tool provides short-term forecasts of disease incidence and past and forecasted estimation of the epidemic reproductive number at the chosen geographical level. These quantities are then used to estimate the bed occupancy in both general wards and intensive care unit beds. The platform has proven efficient, providing results within seconds while coping with many concurrent users. By providing ad-hoc, local data informed forecasts, this platform allows decision-makers to evaluate realistic scenarios for allocating scarce resources, such as ICU beds, at various geographic levels. |
0902.3784 | Emma Jin | Emma Y. Jin and Christian M. Reidys | Irreducibility in RNA structures | 17 figures 28 pages | null | null | null | q-bio.BM math.CO q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we study irreducibility in RNA structures. By RNA structure we
mean RNA secondary as well as RNA pseudoknot structures. In our analysis we
shall contrast random and minimum free energy (mfe) configurations. We compute
various distributions: of the numbers of irreducible substructures, their
locations and sizes, parameterized in terms of the maximal number of mutually
crossing arcs, $k-1$, and the minimal size of stacks $\sigma$. In particular,
we analyze the size of the largest irreducible substructure for random and mfe
structures, which is the key factor for the folding time of mfe configurations.
| [
{
"created": "Sun, 22 Feb 2009 10:09:10 GMT",
"version": "v1"
}
] | 2009-02-24 | [
[
"Jin",
"Emma Y.",
""
],
[
"Reidys",
"Christian M.",
""
]
] | In this paper we study irreducibility in RNA structures. By RNA structure we mean RNA secondary as well as RNA pseudoknot structures. In our analysis we shall contrast random and minimum free energy (mfe) configurations. We compute various distributions: of the numbers of irreducible substructures, their locations and sizes, parameterized in terms of the maximal number of mutually crossing arcs, $k-1$, and the minimal size of stacks $\sigma$. In particular, we analyze the size of the largest irreducible substructure for random and mfe structures, which is the key factor for the folding time of mfe configurations. |
0905.0375 | Michael Hagan | Michael F. Hagan | Understanding the Concentration Dependence of Viral Capsid Assembly
Kinetics - the Origin of the Lag Time and Identifying the Critical Nucleus
Size | Submitted to Biophysical Journal | null | 10.1016/j.bpj.2009.11.023 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The kinetics for the assembly of viral proteins into a population of capsids
can be measured in vitro with size exclusion chromatography or dynamic light
scattering, but extracting mechanistic information from these studies is
challenging. For example, it is not straightforward to determine the critical
nucleus size or the elongation time (the time required for a nucleated partial
capsid to grow to completion). We show that, for two theoretical models of capsid
assembly, the critical nucleus size can be determined from the concentration
dependence of the assembly reaction half-life and the elongation time is
revealed by the length of the lag phase. Furthermore, we find that the system
becomes kinetically trapped when nucleation becomes fast compared to
elongation. Implications of this constraint for determining elongation
mechanisms from experimental assembly data are discussed.
| [
{
"created": "Mon, 4 May 2009 12:51:25 GMT",
"version": "v1"
}
] | 2015-05-13 | [
[
"Hagan",
"Michael F.",
""
]
] | The kinetics for the assembly of viral proteins into a population of capsids can be measured in vitro with size exclusion chromatography or dynamic light scattering, but extracting mechanistic information from these studies is challenging. For example, it is not straightforward to determine the critical nucleus size or the elongation time (the time required for a nucleated partial capsid to grow to completion). We show that, for two theoretical models of capsid assembly, the critical nucleus size can be determined from the concentration dependence of the assembly reaction half-life and the elongation time is revealed by the length of the lag phase. Furthermore, we find that the system becomes kinetically trapped when nucleation becomes fast compared to elongation. Implications of this constraint for determining elongation mechanisms from experimental assembly data are discussed. |
2005.13282 | \v{Z}iga Zaplotnik | Ziga Zaplotnik, Aleksandar Gavric, Luka Medic | Simulation of the COVID-19 pandemic on the social network of Slovenia:
estimating the intrinsic forecast uncertainty | null | null | 10.1371/journal.pone.0238090 | null | q-bio.PE cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this article, a virus transmission model is constructed on a simplified
social network. The social network consists of more than 2 million nodes, each
representing an inhabitant of Slovenia. The nodes are organised and
interconnected according to the real household and elderly-care center
distribution, while their connections outside these clusters are semi-randomly
distributed and fully-linked. The virus spread model is coupled to the disease
progression model. The ensemble approach with the perturbed transmission and
disease parameters is used to quantify the ensemble spread, a proxy for the
forecast uncertainty. The presented ongoing forecasts of COVID-19 epidemic in
Slovenia are compared with the collected Slovenian data. Results show that
infection is currently twice as likely to transmit within households/elderly
care centers as outside them. We use an ensemble of simulations (N = 1000) to
inversely obtain posterior distributions of model parameters and to estimate
the COVID-19 forecast uncertainty. We found that in the uncontrolled epidemic,
the intrinsic uncertainty mostly originates from the uncertainty of the virus
biology, i.e. its reproductive number. In the controlled epidemic with low
ratio of infected population, the randomness of the social network becomes the
major source of forecast uncertainty, particularly for the short-range
forecasts. Social-network-based models are thus essential for improving
epidemics forecasting.
| [
{
"created": "Wed, 27 May 2020 11:18:02 GMT",
"version": "v1"
}
] | 2021-01-27 | [
[
"Zaplotnik",
"Ziga",
""
],
[
"Gavric",
"Aleksandar",
""
],
[
"Medic",
"Luka",
""
]
] | In this article, a virus transmission model is constructed on a simplified social network. The social network consists of more than 2 million nodes, each representing an inhabitant of Slovenia. The nodes are organised and interconnected according to the real household and elderly-care center distribution, while their connections outside these clusters are semi-randomly distributed and fully-linked. The virus spread model is coupled to the disease progression model. The ensemble approach with the perturbed transmission and disease parameters is used to quantify the ensemble spread, a proxy for the forecast uncertainty. The presented ongoing forecasts of COVID-19 epidemic in Slovenia are compared with the collected Slovenian data. Results show that infection is currently twice as likely to transmit within households/elderly care centers as outside them. We use an ensemble of simulations (N = 1000) to inversely obtain posterior distributions of model parameters and to estimate the COVID-19 forecast uncertainty. We found that in the uncontrolled epidemic, the intrinsic uncertainty mostly originates from the uncertainty of the virus biology, i.e. its reproductive number. In the controlled epidemic with low ratio of infected population, the randomness of the social network becomes the major source of forecast uncertainty, particularly for the short-range forecasts. Social-network-based models are thus essential for improving epidemics forecasting. |
1910.12007 | Anastasios Matzavinos | Karen Larson, Sarah Olson, Anastasios Matzavinos | Bayesian uncertainty quantification for micro-swimmers with fully
resolved hydrodynamics | null | null | null | null | q-bio.QM cs.NA math.NA physics.bio-ph physics.flu-dyn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Due to the computational complexity of micro-swimmer models with fully
resolved hydrodynamics, parameter estimation has been prohibitively expensive.
Here, we describe a Bayesian uncertainty quantification framework that is
highly parallelizable, making parameter estimation for complex forward models
tractable. Using noisy in-silico data for swimmers, we demonstrate the
methodology's robustness in estimating the fluid and elastic swimmer
parameters. Our proposed methodology allows for analysis of real data and
demonstrates potential for parameter estimation for various types of
micro-swimmers. Better understanding the movement of elastic micro-structures
in a viscous fluid could aid in developing artificial micro-swimmers for
bio-medical applications as well as gain a fundamental understanding of the
range of parameters that allow for certain motility patterns.
| [
{
"created": "Sat, 26 Oct 2019 06:15:37 GMT",
"version": "v1"
}
] | 2019-10-29 | [
[
"Larson",
"Karen",
""
],
[
"Olson",
"Sarah",
""
],
[
"Matzavinos",
"Anastasios",
""
]
] | Due to the computational complexity of micro-swimmer models with fully resolved hydrodynamics, parameter estimation has been prohibitively expensive. Here, we describe a Bayesian uncertainty quantification framework that is highly parallelizable, making parameter estimation for complex forward models tractable. Using noisy in-silico data for swimmers, we demonstrate the methodology's robustness in estimating the fluid and elastic swimmer parameters. Our proposed methodology allows for analysis of real data and demonstrates potential for parameter estimation for various types of micro-swimmers. Better understanding the movement of elastic micro-structures in a viscous fluid could aid in developing artificial micro-swimmers for bio-medical applications as well as gain a fundamental understanding of the range of parameters that allow for certain motility patterns. |
1112.4952 | Tobias Pl\"oger | Tobias A. Pl\"oger and G\"unter von Kiedrowski | A Self-Replicating Peptide Nucleic Acid | 36 pages | null | null | null | q-bio.MN q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We report on a case of autocatalytic feedback in a template directed
synthesis of a self-complementary hexa-PNA from two trimeric building blocks.
The course of the reaction was monitored in the presence of increasing initial
concentrations of product by means of RP-HPLC. Kinetic modeling with the SimFit
program revealed parabolic growth according to the so-called square-root law.
The observed template effect, as well as the rate of the ligation, was
significantly influenced by factors like nucleophilic catalysts, pH value, and
uncharged co-solvents. Systematic optimization of the reaction conditions
allowed us to increase the autocatalytic efficiency of the system by two orders
of magnitude.
| [
{
"created": "Wed, 21 Dec 2011 08:50:52 GMT",
"version": "v1"
}
] | 2011-12-22 | [
[
"Plöger",
"Tobias A.",
""
],
[
"von Kiedrowski",
"Günter",
""
]
] | We report on a case of autocatalytic feedback in a template directed synthesis of a self-complementary hexa-PNA from two trimeric building blocks. The course of the reaction was monitored in the presence of increasing initial concentrations of product by means of RP-HPLC. Kinetic modeling with the SimFit program revealed parabolic growth according to the so-called square-root law. The observed template effect, as well as the rate of the ligation, was significantly influenced by factors like nucleophilic catalysts, pH value, and uncharged co-solvents. Systematic optimization of the reaction conditions allowed us to increase the autocatalytic efficiency of the system by two orders of magnitude. |
2006.01586 | Hisashi Kobayashi | Hisashi Kobayashi | Stochastic Modeling of an Infectious Disease, Part I: Understand the
Negative Binomial Distribution and Predict an Epidemic More Reliably | 28 pages, 14 figures | null | null | null | q-bio.PE stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Why are the epidemic patterns of COVID-19 so different among different cities
or countries which are similar in their populations, medical infrastructures,
and people's behavior? Why are forecasts or predictions made by so-called
experts often grossly wrong, concerning the numbers of people who get infected
or die? The purpose of this study is to better understand the stochastic nature
of an epidemic disease, and answer the above questions. Much of the work on
infectious diseases has been based on "SIR deterministic models," (Kermack and
McKendrick:1927.) We will explore stochastic models that can capture the
essence of the seemingly erratic behavior of an infectious disease. A
stochastic model, in its formulation, takes into account the random nature of
an infectious disease.
The stochastic model we study here is based on the "birth-and-death process
with immigration" (BDI for short), which was proposed in the study of
population growth or extinction of some biological species. The BDI process
model, however, has not been investigated by the epidemiology community. The
BDI process is one of a few birth-and-death processes, which we can solve
analytically. Its time-dependent probability distribution function is a
"negative binomial distribution" with its parameter $r$ less than $1$. The
"coefficient of variation" of the process is larger than $\sqrt{1/r} > 1$.
Furthermore, it has a long tail like the zeta distribution. These properties
explain why infection patterns exhibit enormously large variations. The number
of infected predicted by a deterministic model is much greater than the median
of the distribution. This explains why any forecast based on a deterministic
model will fail more often than not.
| [
{
"created": "Tue, 2 Jun 2020 13:25:36 GMT",
"version": "v1"
}
] | 2020-06-03 | [
[
"Kobayashi",
"Hisashi",
""
]
] | Why are the epidemic patterns of COVID-19 so different among different cities or countries which are similar in their populations, medical infrastructures, and people's behavior? Why are forecasts or predictions made by so-called experts often grossly wrong, concerning the numbers of people who get infected or die? The purpose of this study is to better understand the stochastic nature of an epidemic disease, and answer the above questions. Much of the work on infectious diseases has been based on "SIR deterministic models," (Kermack and McKendrick:1927.) We will explore stochastic models that can capture the essence of the seemingly erratic behavior of an infectious disease. A stochastic model, in its formulation, takes into account the random nature of an infectious disease. The stochastic model we study here is based on the "birth-and-death process with immigration" (BDI for short), which was proposed in the study of population growth or extinction of some biological species. The BDI process model, however, has not been investigated by the epidemiology community. The BDI process is one of a few birth-and-death processes, which we can solve analytically. Its time-dependent probability distribution function is a "negative binomial distribution" with its parameter $r$ less than $1$. The "coefficient of variation" of the process is larger than $\sqrt{1/r} > 1$. Furthermore, it has a long tail like the zeta distribution. These properties explain why infection patterns exhibit enormously large variations. The number of infected predicted by a deterministic model is much greater than the median of the distribution. This explains why any forecast based on a deterministic model will fail more often than not. |
q-bio/0605033 | Mathias Kuhnt | Mathias Kuhnt, Ingmar Glauche and Martin Greiner | Impact of observational incompleteness on the structural properties of
protein interaction networks | 17 pages, 7 figures, accepted by Physica A | null | 10.1016/j.physa.2006.04.120 | null | q-bio.MN | null | The observed structure of protein interaction networks is corrupted by many
false positive/negative links. This observational incompleteness is abstracted
as random link removal and a specific, experimentally motivated (spoke) link
rearrangement. Their impact on the structural properties of
gene-duplication-and-mutation network models is studied. For the degree
distribution a curve collapse is found, showing no sensitive dependence on the
link removal/rearrangement strengths and disallowing a quantitative extraction
of model parameters. The spoke link rearrangement process moves other
structural observables, like degree correlations, cluster coefficient and motif
frequencies, closer to their counterparts extracted from the yeast data. This
underlines the importance of taking precise modeling of the observational
incompleteness into account when network structure models are to be
quantitatively compared to data.
| [
{
"created": "Fri, 19 May 2006 20:24:26 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Kuhnt",
"Mathias",
""
],
[
"Glauche",
"Ingmar",
""
],
[
"Greiner",
"Martin",
""
]
] | The observed structure of protein interaction networks is corrupted by many false positive/negative links. This observational incompleteness is abstracted as random link removal and a specific, experimentally motivated (spoke) link rearrangement. Their impact on the structural properties of gene-duplication-and-mutation network models is studied. For the degree distribution a curve collapse is found, showing no sensitive dependence on the link removal/rearrangement strengths and disallowing a quantitative extraction of model parameters. The spoke link rearrangement process moves other structural observables, like degree correlations, cluster coefficient and motif frequencies, closer to their counterparts extracted from the yeast data. This underlines the importance of taking precise modeling of the observational incompleteness into account when network structure models are to be quantitatively compared to data. |
2012.05501 | Bruno. Cessac | Bruno Cessac | The Retina as a Dynamical System | Review paper, submitted as a chapter of the "Book in honour of Prof.
Miguel A.F. Sanju\'an on his 60th Birthday" | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Considering the retina as a high-dimensional, non-autonomous dynamical
system, layered and structured, with non-stationary and spatially inhomogeneous
entries (visual scenes), we present several examples where dynamical systems
theory, bifurcation theory, and ergodic theory provide useful insights into
retinal behaviour and dynamics.
| [
{
"created": "Thu, 10 Dec 2020 08:03:29 GMT",
"version": "v1"
}
] | 2020-12-11 | [
[
"Cessac",
"Bruno",
""
]
] | Considering the retina as a high-dimensional, non-autonomous dynamical system, layered and structured, with non-stationary and spatially inhomogeneous entries (visual scenes), we present several examples where dynamical systems theory, bifurcation theory, and ergodic theory provide useful insights into retinal behaviour and dynamics.
q-bio/0611064 | Yong Chen | L. C. Yu, Yong Chen, and Pan Zhang | Frequency and phase synchronization of two coupled neurons with channel
noise | 8 pages, 10 figures | Eur. Phys. J. B 59, 249-257(2007) | 10.1140/epjb/e2007-00278-0 | null | q-bio.NC q-bio.OT | null | We study the frequency and phase synchronization in two coupled identical and
nonidentical neurons with channel noise. The occupation number method is used
to model the neurons in the context of the stochastic Hodgkin-Huxley model, in
which the strength of channel noise is represented by the ion channel cluster
size of the initiation region of the neuron. It is shown that frequency
synchronization is achieved at an arbitrary value of coupling strength only as
long as the two neurons' channel cluster sizes are the same. We also show that
the relative phase of neurons can display profuse dynamic behavior under the
combined action of coupling and channel noise. Both qualitative and
quantitative descriptions are applied to describe the transitions between
those behaviors. The relevance of our findings to controlling neural
synchronization experimentally is discussed.
| [
{
"created": "Mon, 20 Nov 2006 09:08:40 GMT",
"version": "v1"
},
{
"created": "Wed, 7 Nov 2007 01:50:31 GMT",
"version": "v2"
}
] | 2007-11-07 | [
[
"Yu",
"L. C.",
""
],
[
"Chen",
"Yong",
""
],
[
"Zhang",
"Pan",
""
]
] | We study the frequency and phase synchronization in two coupled identical and nonidentical neurons with channel noise. The occupation number method is used to model the neurons in the context of the stochastic Hodgkin-Huxley model, in which the strength of channel noise is represented by the ion channel cluster size of the initiation region of the neuron. It is shown that frequency synchronization is achieved at an arbitrary value of coupling strength only as long as the two neurons' channel cluster sizes are the same. We also show that the relative phase of neurons can display profuse dynamic behavior under the combined action of coupling and channel noise. Both qualitative and quantitative descriptions are applied to describe the transitions between those behaviors. The relevance of our findings to controlling neural synchronization experimentally is discussed.
1401.1490 | Bahman Afsari | Bahman Afsari, Ulisses M. Braga-Neto, Donald Geman | Rank discriminants for predicting phenotypes from RNA expression | Published in at http://dx.doi.org/10.1214/14-AOAS738 the Annals of
Applied Statistics (http://www.imstat.org/aoas/) by the Institute of
Mathematical Statistics (http://www.imstat.org) | Annals of Applied Statistics 2014, Vol. 8, No. 3, 1469-1491 | 10.1214/14-AOAS738 | IMS-AOAS-AOAS738 | q-bio.GN stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Statistical methods for analyzing large-scale biomolecular data are
commonplace in computational biology. A notable example is phenotype prediction
from gene expression data, for instance, detecting human cancers,
differentiating subtypes and predicting clinical outcomes. Still, clinical
applications remain scarce. One reason is that the complexity of the decision
rules that emerge from standard statistical learning impedes biological
understanding, in particular, any mechanistic interpretation. Here we explore
decision rules for binary classification utilizing only the ordering of
expression among several genes; the basic building blocks are then two-gene
expression comparisons. The simplest example, just one comparison, is the TSP
classifier, which has appeared in a variety of cancer-related discovery
studies. Decision rules based on multiple comparisons can better accommodate
class heterogeneity, and thereby increase accuracy, and might provide a link
with biological mechanism. We consider a general framework ("rank-in-context")
for designing discriminant functions, including a data-driven selection of the
number and identity of the genes in the support ("context"). We then specialize
to two examples: voting among several pairs and comparing the median expression
in two groups of genes. Comprehensive experiments assess accuracy relative to
other, more complex, methods, and reinforce earlier observations that simple
classifiers are competitive.
| [
{
"created": "Tue, 7 Jan 2014 20:26:59 GMT",
"version": "v1"
},
{
"created": "Mon, 7 Apr 2014 22:58:29 GMT",
"version": "v2"
},
{
"created": "Fri, 21 Nov 2014 08:18:00 GMT",
"version": "v3"
}
] | 2014-11-24 | [
[
"Afsari",
"Bahman",
""
],
[
"Braga-Neto",
"Ulisses M.",
""
],
[
"Geman",
"Donald",
""
]
] | Statistical methods for analyzing large-scale biomolecular data are commonplace in computational biology. A notable example is phenotype prediction from gene expression data, for instance, detecting human cancers, differentiating subtypes and predicting clinical outcomes. Still, clinical applications remain scarce. One reason is that the complexity of the decision rules that emerge from standard statistical learning impedes biological understanding, in particular, any mechanistic interpretation. Here we explore decision rules for binary classification utilizing only the ordering of expression among several genes; the basic building blocks are then two-gene expression comparisons. The simplest example, just one comparison, is the TSP classifier, which has appeared in a variety of cancer-related discovery studies. Decision rules based on multiple comparisons can better accommodate class heterogeneity, and thereby increase accuracy, and might provide a link with biological mechanism. We consider a general framework ("rank-in-context") for designing discriminant functions, including a data-driven selection of the number and identity of the genes in the support ("context"). We then specialize to two examples: voting among several pairs and comparing the median expression in two groups of genes. Comprehensive experiments assess accuracy relative to other, more complex, methods, and reinforce earlier observations that simple classifiers are competitive. |
2102.08471 | Hrithwik Shalu | Joseph Stember, Parvathy Jayan, Hrithwik Shalu | Deep Neural Network Based Differential Equation Solver for HIV Enzyme
Kinetics | null | null | null | null | q-bio.QM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Purpose: We seek to use neural networks (NNs) to solve a well-known system of
differential equations describing the balance between T cells and HIV viral
burden.
Materials and Methods: In this paper, we employ a 3-input parallel NN to
approximate solutions for the system of first-order ordinary differential
equations describing the above biochemical relationship.
Results: The numerical results obtained by the NN are very similar to a host
of numerical approximations from the literature.
Conclusion: We have demonstrated the use of NN integration of a well-known and
medically important system of first order coupled ordinary differential
equations. Our trial-and-error approach counteracts the system's inherent scale
imbalance. However, it highlights the need to address scale imbalance more
substantively in future work. Doing so will allow more automated solutions to
larger systems of equations, which could describe increasingly complex and
biologically interesting systems.
| [
{
"created": "Tue, 16 Feb 2021 22:16:57 GMT",
"version": "v1"
}
] | 2021-02-18 | [
[
"Stember",
"Joseph",
""
],
[
"Jayan",
"Parvathy",
""
],
[
"Shalu",
"Hrithwik",
""
]
] | Purpose: We seek to use neural networks (NNs) to solve a well-known system of differential equations describing the balance between T cells and HIV viral burden. Materials and Methods: In this paper, we employ a 3-input parallel NN to approximate solutions for the system of first-order ordinary differential equations describing the above biochemical relationship. Results: The numerical results obtained by the NN are very similar to a host of numerical approximations from the literature. Conclusion: We have demonstrated the use of NN integration of a well-known and medically important system of first order coupled ordinary differential equations. Our trial-and-error approach counteracts the system's inherent scale imbalance. However, it highlights the need to address scale imbalance more substantively in future work. Doing so will allow more automated solutions to larger systems of equations, which could describe increasingly complex and biologically interesting systems.
1812.08569 | Louxin Zhang | Andreas DM Gunawan and Jeyaram Rathin and Louxin Zhang | Counting and Enumerating Galled Networks | 7 figures, 2 tables | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Galled trees are widely studied as a recombination model in population
genetics. This class of phylogenetic networks is generalized into galled
networks by relaxing a structural condition. In this work, a linear recurrence
formula is given for counting 1-galled networks, which are galled networks
satisfying the condition that each reticulate node has only one leaf
descendant. Since every galled network consists of a set of 1-galled networks
stacked one on top of the other, a method is also presented to count and
enumerate galled networks.
| [
{
"created": "Thu, 20 Dec 2018 14:00:04 GMT",
"version": "v1"
}
] | 2018-12-21 | [
[
"Gunawan",
"Andreas DM",
""
],
[
"Rathin",
"Jeyaram",
""
],
[
"Zhang",
"Louxin",
""
]
] | Galled trees are widely studied as a recombination model in population genetics. This class of phylogenetic networks is generalized into galled networks by relaxing a structural condition. In this work, a linear recurrence formula is given for counting 1-galled networks, which are galled networks satisfying the condition that each reticulate node has only one leaf descendant. Since every galled network consists of a set of 1-galled networks stacked one on top of the other, a method is also presented to count and enumerate galled networks. |
1610.09431 | Omar Claflin | Omar Claflin | Attention acts to suppress goal-based conflict under high competition | 25 pages, 3 figures, 3 tables | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is known that when multiple stimuli are present, top-down attention
selectively enhances the neural signal in the visual cortex for task-relevant
stimuli, but this has been tested only under conditions of minimal competition
of visual attention. Here we show that during high competition, that is, when
two stimuli in a shared receptive field possess opposing modulatory goals,
top-down attention suppresses both task-relevant and irrelevant neural signals
within 100 ms of stimulus onset. This non-selective engagement of top-down attentional
resources serves to reduce the feedforward signal representing irrelevant
stimuli.
| [
{
"created": "Sat, 29 Oct 2016 00:21:31 GMT",
"version": "v1"
}
] | 2016-11-01 | [
[
"Claflin",
"Omar",
""
]
] | It is known that when multiple stimuli are present, top-down attention selectively enhances the neural signal in the visual cortex for task-relevant stimuli, but this has been tested only under conditions of minimal competition of visual attention. Here we show that during high competition, that is, when two stimuli in a shared receptive field possess opposing modulatory goals, top-down attention suppresses both task-relevant and irrelevant neural signals within 100 ms of stimulus onset. This non-selective engagement of top-down attentional resources serves to reduce the feedforward signal representing irrelevant stimuli.
0807.3638 | Moritz Gerstung | Moritz Gerstung and Niko Beerenwinkel | Waiting time models of cancer progression | null | null | 10.1080/08898480.2010.490994 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cancer progression is an evolutionary process that is driven by mutation and
selection in a population of tumor cells. We discuss mathematical models of
cancer progression, starting from traditional multistage theory. Each stage is
associated with the occurrence of genetic alterations and their fixation in the
population. We describe the accumulation of mutations using conjunctive
Bayesian networks, an exponential family of waiting time models in which the
occurrence of mutations is constrained to a partial temporal order. Two
opposing limit cases arise if mutations either follow a linear order or occur
independently. We derive exact analytical expressions for the waiting time
until a specific number of mutations have accumulated in these limit cases as
well as for the general conjunctive Bayesian network. Finally, we analyze a
stochastic population genetics model that explicitly accounts for mutation and
selection. In this model, waves of clonal expansions sweep through the
population at equidistant intervals. We present an approximate analytical
expression for the waiting time in this model and compare it to the results
obtained for the conjunctive Bayesian networks.
| [
{
"created": "Wed, 23 Jul 2008 14:23:52 GMT",
"version": "v1"
},
{
"created": "Tue, 3 Feb 2009 19:07:23 GMT",
"version": "v2"
}
] | 2011-08-31 | [
[
"Gerstung",
"Moritz",
""
],
[
"Beerenwinkel",
"Niko",
""
]
] | Cancer progression is an evolutionary process that is driven by mutation and selection in a population of tumor cells. We discuss mathematical models of cancer progression, starting from traditional multistage theory. Each stage is associated with the occurrence of genetic alterations and their fixation in the population. We describe the accumulation of mutations using conjunctive Bayesian networks, an exponential family of waiting time models in which the occurrence of mutations is constrained to a partial temporal order. Two opposing limit cases arise if mutations either follow a linear order or occur independently. We derive exact analytical expressions for the waiting time until a specific number of mutations have accumulated in these limit cases as well as for the general conjunctive Bayesian network. Finally, we analyze a stochastic population genetics model that explicitly accounts for mutation and selection. In this model, waves of clonal expansions sweep through the population at equidistant intervals. We present an approximate analytical expression for the waiting time in this model and compare it to the results obtained for the conjunctive Bayesian networks. |
1202.1358 | Yi Fang | Yi Fang | Protein Folding: The Gibbs Free Energy | 18 pages, 2 figures | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The fundamental law for protein folding is the Thermodynamic Principle: the
amino acid sequence of a protein determines its native structure and the native
structure has the minimum Gibbs free energy. If all chemical problems can be
answered by quantum mechanics, there should be a quantum-mechanical derivation
of the Gibbs free energy formula G(X) for every possible conformation X of the
protein. We apply quantum statistics to derive such a formula. For simplicity,
only monomeric self-folding globular proteins are covered. We point out some
immediate applications of the formula. We show that the formula explains the
observed phenomena very well. It gives a unified explanation to both folding
and denaturation; it explains why the hydrophobic effect is the driving force
of protein folding and clarifies the role played by hydrogen bonding; it
explains the successes and deficiencies of various surface area models. The
formula also gives a clear kinetic force of the folding: F_i(X) =
-\nabla_{x_i} G(X). This also
gives a natural way to perform the ab initio prediction of protein structure,
minimizing G(X) by Newton's fastest descending method.
| [
{
"created": "Tue, 7 Feb 2012 07:11:31 GMT",
"version": "v1"
},
{
"created": "Sun, 8 Apr 2012 04:13:59 GMT",
"version": "v2"
}
] | 2012-04-10 | [
[
"Fang",
"Yi",
""
]
] | The fundamental law for protein folding is the Thermodynamic Principle: the amino acid sequence of a protein determines its native structure and the native structure has the minimum Gibbs free energy. If all chemical problems can be answered by quantum mechanics, there should be a quantum-mechanical derivation of the Gibbs free energy formula G(X) for every possible conformation X of the protein. We apply quantum statistics to derive such a formula. For simplicity, only monomeric self-folding globular proteins are covered. We point out some immediate applications of the formula. We show that the formula explains the observed phenomena very well. It gives a unified explanation to both folding and denaturation; it explains why the hydrophobic effect is the driving force of protein folding and clarifies the role played by hydrogen bonding; it explains the successes and deficiencies of various surface area models. The formula also gives a clear kinetic force of the folding: F_i(X) = -\nabla_{x_i} G(X). This also gives a natural way to perform the ab initio prediction of protein structure, minimizing G(X) by Newton's fastest descending method.
1301.2885 | Asha Gopinathan | Asha Gopinathan | Solving the Cable Equation Using a Compact Difference Scheme -- Passive
Soma Dendrite | 24 pages, 11 figures, 6 tables | null | null | null | q-bio.NC q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dendrites are extensions to the neuronal cell body in the brain which are
posited to serve several functions ranging from electrical and chemical
compartmentalization to coincidence detection. Dendrites vary across cell types
but one common feature they share is a branched structure. The cable equation
is a partial differential equation that describes the evolution of voltage in
the dendrite. A solution to this equation is normally found using finite
difference schemes. Spectral methods have also been used to solve this equation
with better accuracy. Here we report the solution to the cable equation using a
compact finite difference scheme which gives spectral-like resolution and can
be more easily used with modifications to the cable equation like nonlinearity,
branching and other morphological transforms. Widely used in the study of
turbulent flow and wave propagation, this is the first time it is being used to
study conduction in the brain. Here we discuss its usage in a passive
soma-dendrite construct. The superior resolving power of this scheme compared
to the
central difference scheme becomes apparent with increasing complexity of the
model.
| [
{
"created": "Mon, 14 Jan 2013 08:41:40 GMT",
"version": "v1"
}
] | 2013-01-15 | [
[
"Gopinathan",
"Asha",
""
]
] | Dendrites are extensions to the neuronal cell body in the brain which are posited to serve several functions ranging from electrical and chemical compartmentalization to coincidence detection. Dendrites vary across cell types but one common feature they share is a branched structure. The cable equation is a partial differential equation that describes the evolution of voltage in the dendrite. A solution to this equation is normally found using finite difference schemes. Spectral methods have also been used to solve this equation with better accuracy. Here we report the solution to the cable equation using a compact finite difference scheme which gives spectral-like resolution and can be more easily used with modifications to the cable equation like nonlinearity, branching and other morphological transforms. Widely used in the study of turbulent flow and wave propagation, this is the first time it is being used to study conduction in the brain. Here we discuss its usage in a passive soma-dendrite construct. The superior resolving power of this scheme compared to the central difference scheme becomes apparent with increasing complexity of the model.
2012.04077 | Molly Schumer | Benjamin M Moran, Cheyenne Payne, Quinn Langdon, Daniel L Powell,
Yaniv Brandvain, Molly Schumer | The genomic consequences of hybridization | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In the past decade, advances in genome sequencing have allowed researchers to
uncover the history of hybridization in diverse groups of species, including
our own. Although the field has made impressive progress in documenting the
extent of natural hybridization, both historical and recent, there are still
many unanswered questions about its genetic and evolutionary consequences.
Recent work has suggested that the outcomes of hybridization in the genome may
be in part predictable, but many open questions about the nature of selection
on hybrids and the biological variables that shape such selection have hampered
progress in this area. We discuss what is known about the mechanisms that drive
changes in ancestry in the genome after hybridization, highlight major
unresolved questions, and discuss their implications for the predictability of
genome evolution after hybridization.
| [
{
"created": "Mon, 7 Dec 2020 21:46:49 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Mar 2021 20:53:40 GMT",
"version": "v2"
}
] | 2021-03-23 | [
[
"Moran",
"Benjamin M",
""
],
[
"Payne",
"Cheyenne",
""
],
[
"Langdon",
"Quinn",
""
],
[
"Powell",
"Daniel L",
""
],
[
"Brandvain",
"Yaniv",
""
],
[
"Schumer",
"Molly",
""
]
] | In the past decade, advances in genome sequencing have allowed researchers to uncover the history of hybridization in diverse groups of species, including our own. Although the field has made impressive progress in documenting the extent of natural hybridization, both historical and recent, there are still many unanswered questions about its genetic and evolutionary consequences. Recent work has suggested that the outcomes of hybridization in the genome may be in part predictable, but many open questions about the nature of selection on hybrids and the biological variables that shape such selection have hampered progress in this area. We discuss what is known about the mechanisms that drive changes in ancestry in the genome after hybridization, highlight major unresolved questions, and discuss their implications for the predictability of genome evolution after hybridization. |
1406.2926 | Nadav M. Shnerb | Michael Kalyuzhny, Ronen Kadmon and Nadav M. Shnerb | A generalized neutral theory explains static and dynamic properties of
biotic communities | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding the forces shaping ecological communities is crucially
important to basic science and conservation. In recent years, considerable
progress was made in explaining communities using simple and general models,
with neutral theory as a prominent example. However, while successful in
explaining static patterns such as species abundance distributions, the neutral
theory was criticized for making unrealistic predictions of fundamental dynamic
patterns. Here we incorporate environmental stochasticity into the neutral
framework, and show that the resulting generalized neutral theory is capable of
predicting realistic patterns of both population and community dynamics.
Applying the theory to real data (the tropical forest of Barro-Colorado
Island), we find that it better fits the observed distribution of short-term
fluctuations, the temporal scaling of such fluctuations, and the decay of
compositional similarity with time, than the original theory, while retaining
its power to explain static patterns of species abundance. Importantly,
although the proposed theory is neutral (all species are functionally
equivalent) and stochastic, it is a niche-based theory in the sense that
species differ in their demographic responses to environmental variation. Our
results show that this integration of niche forces and stochasticity within a
minimalistic neutral framework is highly successful in explaining fundamental
static and dynamic characteristics of ecological communities.
| [
{
"created": "Wed, 11 Jun 2014 14:57:16 GMT",
"version": "v1"
}
] | 2014-06-12 | [
[
"Kalyuzhny",
"Michael",
""
],
[
"Kadmon",
"Ronen",
""
],
[
"Shnerb",
"Nadav M.",
""
]
] | Understanding the forces shaping ecological communities is crucially important to basic science and conservation. In recent years, considerable progress was made in explaining communities using simple and general models, with neutral theory as a prominent example. However, while successful in explaining static patterns such as species abundance distributions, the neutral theory was criticized for making unrealistic predictions of fundamental dynamic patterns. Here we incorporate environmental stochasticity into the neutral framework, and show that the resulting generalized neutral theory is capable of predicting realistic patterns of both population and community dynamics. Applying the theory to real data (the tropical forest of Barro-Colorado Island), we find that it better fits the observed distribution of short-term fluctuations, the temporal scaling of such fluctuations, and the decay of compositional similarity with time, than the original theory, while retaining its power to explain static patterns of species abundance. Importantly, although the proposed theory is neutral (all species are functionally equivalent) and stochastic, it is a niche-based theory in the sense that species differ in their demographic responses to environmental variation. Our results show that this integration of niche forces and stochasticity within a minimalistic neutral framework is highly successful in explaining fundamental static and dynamic characteristics of ecological communities. |
1911.06105 | Aaron Vose | Aaron D. Vose, Jacob Balma, Damon Farnsworth, Kaylie Anderson, and
Yuri K. Peterson | PharML.Bind: Pharmacologic Machine Learning for Protein-Ligand
Interactions | null | null | null | null | q-bio.BM cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Is it feasible to create an analysis paradigm that can analyze and then
accurately and quickly predict known drugs from experimental data? PharML.Bind
is a machine learning toolkit which is able to accomplish this feat. Utilizing
deep neural networks and big data, PharML.Bind correlates
experimentally-derived drug affinities and protein-ligand X-ray structures to
create novel predictions. The utility of PharML.Bind is in its application as a
rapid, accurate, and robust prediction platform for discovery and personalized
medicine. This paper demonstrates that graph neural networks (GNNs) can be
trained to screen hundreds of thousands of compounds against thousands of
targets in minutes, a vastly shorter time than previous approaches. This
manuscript presents results from training and testing using the entirety of
BindingDB after cleaning; this includes a test set with 19,708 X-ray structures
and 247,633 drugs, leading to 2,708,151 unique protein-ligand pairings.
PharML.Bind achieves a prodigious 98.3% accuracy on this test set in under 25
minutes. PharML.Bind is premised on the following key principles: 1) speed and
a high enrichment factor per unit compute time, provided by high-quality
training data combined with a novel GNN architecture and use of
high-performance computing resources, 2) the ability to generalize to proteins
and drugs outside of the training set, including those with unknown active
sites, through the use of an active-site-agnostic GNN mapping, and 3) the
ability to be easily integrated as a component of increasingly-complex
prediction and analysis pipelines. PharML.Bind represents a timely and
practical approach to leverage the power of machine learning to efficiently
analyze and predict drug action on any practical scale and will provide utility
in a variety of discovery and medical applications.
| [
{
"created": "Wed, 23 Oct 2019 23:32:44 GMT",
"version": "v1"
}
] | 2019-11-15 | [
[
"Vose",
"Aaron D.",
""
],
[
"Balma",
"Jacob",
""
],
[
"Farnsworth",
"Damon",
""
],
[
"Anderson",
"Kaylie",
""
],
[
"Peterson",
"Yuri K.",
""
]
] | Is it feasible to create an analysis paradigm that can analyze and then accurately and quickly predict known drugs from experimental data? PharML.Bind is a machine learning toolkit which is able to accomplish this feat. Utilizing deep neural networks and big data, PharML.Bind correlates experimentally-derived drug affinities and protein-ligand X-ray structures to create novel predictions. The utility of PharML.Bind is in its application as a rapid, accurate, and robust prediction platform for discovery and personalized medicine. This paper demonstrates that graph neural networks (GNNs) can be trained to screen hundreds of thousands of compounds against thousands of targets in minutes, a vastly shorter time than previous approaches. This manuscript presents results from training and testing using the entirety of BindingDB after cleaning; this includes a test set with 19,708 X-ray structures and 247,633 drugs, leading to 2,708,151 unique protein-ligand pairings. PharML.Bind achieves a prodigious 98.3% accuracy on this test set in under 25 minutes. PharML.Bind is premised on the following key principles: 1) speed and a high enrichment factor per unit compute time, provided by high-quality training data combined with a novel GNN architecture and use of high-performance computing resources, 2) the ability to generalize to proteins and drugs outside of the training set, including those with unknown active sites, through the use of an active-site-agnostic GNN mapping, and 3) the ability to be easily integrated as a component of increasingly-complex prediction and analysis pipelines. PharML.Bind represents a timely and practical approach to leverage the power of machine learning to efficiently analyze and predict drug action on any practical scale and will provide utility in a variety of discovery and medical applications. |
2005.14669 | Guo-Wei Wei | Jiahui Chen, Rui Wang, Menglun Wang, and Guo-Wei Wei | Mutations strengthened SARS-CoV-2 infectivity | 24 pages, 2 tables and 19 figures | null | null | null | q-bio.BM q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infectivity is a
major concern in coronavirus disease 2019 (COVID-19) prevention and economic
reopening. However, rigorous determination of SARS-CoV-2 infectivity is
essentially impossible owing to its continuous evolution with over 13,752
single nucleotide polymorphism (SNP) variants in six different subtypes. We
develop an advanced machine learning algorithm based on algebraic topology to
quantitatively evaluate the binding affinity changes of SARS-CoV-2 spike
glycoprotein (S protein) and host angiotensin-converting enzyme 2 (ACE2)
receptor following the mutations. Based on mutation-induced binding affinity
changes, we reveal that five out of six SARS-CoV-2 subtypes have become either
moderately or slightly more infectious, while one subtype has weakened its
infectivity. We find that SARS-CoV-2 is slightly more infectious than SARS-CoV
according to computed S protein-ACE2 binding affinity changes. Based on a
systematic evaluation of all possible 3686 future mutations on the S protein
receptor-binding domain (RBD), we show that most likely future mutations will
make SARS-CoV-2 more infectious. Combining sequence alignment, probability
analysis, and binding affinity calculation, we predict that a few residues on
the receptor-binding motif (RBM), i.e., 452, 489, 500, 501, and 505, have very
high chances to mutate into significantly more infectious COVID-19 strains.
| [
{
"created": "Wed, 27 May 2020 20:55:24 GMT",
"version": "v1"
}
] | 2020-06-01 | [
[
"Chen",
"Jiahui",
""
],
[
"Wang",
"Rui",
""
],
[
"Wang",
"Menglun",
""
],
[
"Wei",
"Guo-Wei",
""
]
] | Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infectivity is a major concern in coronavirus disease 2019 (COVID-19) prevention and economic reopening. However, rigorous determination of SARS-CoV-2 infectivity is essentially impossible owing to its continuous evolution with over 13752 single nucleotide polymorphism (SNP) variants in six different subtypes. We develop an advanced machine learning algorithm based on algebraic topology to quantitatively evaluate the binding affinity changes of SARS-CoV-2 spike glycoprotein (S protein) and host angiotensin-converting enzyme 2 (ACE2) receptor following the mutations. Based on mutation-induced binding affinity changes, we reveal that five out of six SARS-CoV-2 subtypes have become either moderately or slightly more infectious, while one subtype has weakened its infectivity. We find that SARS-CoV-2 is slightly more infectious than SARS-CoV according to computed S protein-ACE2 binding affinity changes. Based on a systematic evaluation of all possible 3686 future mutations on the S protein receptor-binding domain (RBD), we show that most likely future mutations will make SARS-CoV-2 more infectious. Combining sequence alignment, probability analysis, and binding affinity calculation, we predict that a few residues on the receptor-binding motif (RBM), i.e., 452, 489, 500, 501, and 505, have very high chances to mutate into significantly more infectious COVID-19 strains. |
2106.05180 | Jiajia Li | JiaJia Li, Peihua Feng, Liang Zhao, Junying Chen, Mengmeng Du,
Yangyang Yu, Jian Song, Ying Wu | Transition behavior of the seizure dynamics modulated by the astrocyte
inositol triphosphate noise | 26 pages, 8 figures | null | 10.1063/5.0124123 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Epilepsy is a neurological disorder with recurrent seizures of complexity and
randomness. Until now, the mechanism of epileptic randomness has not been fully
elucidated. Inspired by the recent finding that astrocyte GTPase-activating
protein (G-protein)-coupled receptors could be involved in stochastic epileptic
seizures, we proposed a neuron-astrocyte network model, incorporating the noise
of the astrocytic second messenger, inositol triphosphate (IP3), which is
modulated by G-protein-coupled receptor activation. Based on this model,
we have statistically analysed the transitions of epileptic seizures by
performing tens of simulation trials. Our simulation results show that the
increase of the IP3 noise intensity induces the depolarization-block epileptic
seizures together with an increase in neuronal firing frequency. Meanwhile, a
bistable state of neuronal firing emerges under certain noise intensity, during
which the neuronal firing pattern switches between regular sparse spiking and
epileptic seizure states. This random occurrence of epileptic seizures
disappears when the noise intensity continues to increase, accompanied by an
increase in the epileptic depolarization-block duration. The simulation results also
shed light on the fact that calcium signals in astrocytes play significant
roles in the pattern formations of the epileptic seizure. Our results provide a
potential pathway for understanding the epileptic randomness.
| [
{
"created": "Wed, 26 May 2021 15:42:06 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Jun 2022 12:58:42 GMT",
"version": "v2"
},
{
"created": "Thu, 11 Aug 2022 16:54:33 GMT",
"version": "v3"
},
{
"created": "Mon, 31 Oct 2022 09:23:57 GMT",
"version": "v4"
}
] | 2024-06-19 | [
[
"Li",
"JiaJia",
""
],
[
"Feng",
"Peihua",
""
],
[
"Zhao",
"Liang",
""
],
[
"Chen",
"Junying",
""
],
[
"Du",
"Mengmeng",
""
],
[
"Yu",
"Yangyang",
""
],
[
"Song",
"Jian",
""
],
[
"Wu",
"Ying",
""
]
] | Epilepsy is a neurological disorder with recurrent seizures of complexity and randomness. Until now, the mechanism of epileptic randomness has not been fully elucidated. Inspired by the recent finding that astrocyte GTPase-activating protein (G-protein)-coupled receptors could be involved in stochastic epileptic seizures, we proposed a neuron-astrocyte network model, incorporating the noise of the astrocytic second messenger, inositol triphosphate (IP3), which is modulated by G-protein-coupled receptor activation. Based on this model, we have statistically analysed the transitions of epileptic seizures by performing tens of simulation trials. Our simulation results show that the increase of the IP3 noise intensity induces the depolarization-block epileptic seizures together with an increase in neuronal firing frequency. Meanwhile, a bistable state of neuronal firing emerges under certain noise intensity, during which the neuronal firing pattern switches between regular sparse spiking and epileptic seizure states. This random occurrence of epileptic seizures disappears when the noise intensity continues to increase, accompanied by an increase in the epileptic depolarization-block duration. The simulation results also shed light on the fact that calcium signals in astrocytes play significant roles in the pattern formations of the epileptic seizure. Our results provide a potential pathway for understanding the epileptic randomness. |
2110.12286 | Kevin Lin | Zhuo-Cheng Xiao, Kevin K. Lin, Lai-Sang Young | A data-informed mean-field approach to mapping of cortical parameter
landscapes | null | PLoS Computat. Biol. 17 (2021) | 10.1371/journal.pcbi.1009718 | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Constraining the many biological parameters that govern cortical dynamics is
computationally and conceptually difficult because of the curse of
dimensionality. This paper addresses these challenges by proposing (1) a novel
data-informed mean-field (MF) approach to efficiently map the parameter space
of network models; and (2) an organizing principle for studying parameter space
that enables the extraction of biologically meaningful relations from this
high-dimensional data. We illustrate these ideas using a large-scale network
model of the Macaque primary visual cortex. Of the 10-20 model parameters, we
identify 7 that are especially poorly constrained, and use the MF algorithm in
(1) to discover the firing rate contours in this 7D parameter cube. Defining a
"biologically plausible" region to consist of parameters that exhibit
spontaneous Excitatory and Inhibitory firing rates compatible with experimental
values, we find that this region is a slightly thickened codimension-1
submanifold. An implication of this finding is that while plausible regimes
depend sensitively on parameters, they are also robust and flexible provided
one compensates appropriately when parameters are varied. Our organizing
principle for conceptualizing parameter dependence is to focus on certain 2D
parameter planes that govern lateral inhibition: Intersecting these planes with
the biologically plausible region leads to very simple geometric structures
which, when suitably scaled, have a universal character independent of where
the intersections are taken. In addition to elucidating the geometry of the
plausible region, this invariance suggests useful approximate scaling
relations. Our study offers, for the first time, a complete characterization of
the set of all biologically plausible parameters for a detailed cortical model,
which has been out of reach due to the high dimensionality of parameter space.
| [
{
"created": "Sat, 23 Oct 2021 19:42:14 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Nov 2021 02:25:42 GMT",
"version": "v2"
}
] | 2021-12-30 | [
[
"Xiao",
"Zhuo-Cheng",
""
],
[
"Lin",
"Kevin K.",
""
],
[
"Young",
"Lai-Sang",
""
]
] | Constraining the many biological parameters that govern cortical dynamics is computationally and conceptually difficult because of the curse of dimensionality. This paper addresses these challenges by proposing (1) a novel data-informed mean-field (MF) approach to efficiently map the parameter space of network models; and (2) an organizing principle for studying parameter space that enables the extraction of biologically meaningful relations from this high-dimensional data. We illustrate these ideas using a large-scale network model of the Macaque primary visual cortex. Of the 10-20 model parameters, we identify 7 that are especially poorly constrained, and use the MF algorithm in (1) to discover the firing rate contours in this 7D parameter cube. Defining a "biologically plausible" region to consist of parameters that exhibit spontaneous Excitatory and Inhibitory firing rates compatible with experimental values, we find that this region is a slightly thickened codimension-1 submanifold. An implication of this finding is that while plausible regimes depend sensitively on parameters, they are also robust and flexible provided one compensates appropriately when parameters are varied. Our organizing principle for conceptualizing parameter dependence is to focus on certain 2D parameter planes that govern lateral inhibition: Intersecting these planes with the biologically plausible region leads to very simple geometric structures which, when suitably scaled, have a universal character independent of where the intersections are taken. In addition to elucidating the geometry of the plausible region, this invariance suggests useful approximate scaling relations. Our study offers, for the first time, a complete characterization of the set of all biologically plausible parameters for a detailed cortical model, which has been out of reach due to the high dimensionality of parameter space. |
2004.09912 | V. K. Jindal | V. K. Jindal | COVID 19, a realistic model for saturation, growth and decay of the
India specific disease | 10 pages, 7 figures, 15 references, 1 table. COVID-19 by the Numbers,
Modes, Big Data, and Reality April 24th, 2020 | vvoip-theoretical-geography-debates.org COVID-19 by the Numbers,
Modes, Big Data, and Reality April 24th, 2020 | null | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work presents a simple and realistic approach to handle the available
data of COVID-19 patients in India and to forecast the scenario. The model
proposed is based on available facts such as the onset of lockdown (as
announced by the Government on the 25th day, {\tau}0) and the recovery pattern
dictated by a mean life recovery time of {\tau}1 (normally said to be around
14 days). The data of infected COVID-19 patients from March 2 to April 16,
2020 has been used to fit the evolution of infected, recovery and death counts.
A slow-rising exponential growth, with R0 close to 1/6, is found to represent
the infected counts, indicating an almost linear rise. The rest of the growth,
saturation and decay of the data is comprehensively modelled by incorporating a
lockdown-time-controlled R0, which behaves like a normal error function,
decaying to zero in some time frame {\tau}2. The recovery mean lifetime
{\tau}1 dictates the peak and decay. The results predicted for the coming days
are interesting and optimistic. The introduced time constants, based on
experimental data, for both the recovery rate and for determining the time span
of activity of R0 after the lockdown are a subject of debate and offer the
possibility of introducing trigger factors to adjust them to better suit the
model. The model can be extended to other communities with their own R0 and
recovery-time parameters.
| [
{
"created": "Tue, 21 Apr 2020 11:29:15 GMT",
"version": "v1"
}
] | 2020-05-07 | [
[
"Jindal",
"V. K.",
""
]
] | This work presents a simple and realistic approach to handle the available data of COVID-19 patients in India and to forecast the scenario. The model proposed is based on available facts such as the onset of lockdown (as announced by the Government on the 25th day, {\tau}0) and the recovery pattern dictated by a mean life recovery time of {\tau}1 (normally said to be around 14 days). The data of infected COVID-19 patients from March 2 to April 16, 2020 has been used to fit the evolution of infected, recovery and death counts. A slow-rising exponential growth, with R0 close to 1/6, is found to represent the infected counts, indicating an almost linear rise. The rest of the growth, saturation and decay of the data is comprehensively modelled by incorporating a lockdown-time-controlled R0, which behaves like a normal error function, decaying to zero in some time frame {\tau}2. The recovery mean lifetime {\tau}1 dictates the peak and decay. The results predicted for the coming days are interesting and optimistic. The introduced time constants, based on experimental data, for both the recovery rate and for determining the time span of activity of R0 after the lockdown are a subject of debate and offer the possibility of introducing trigger factors to adjust them to better suit the model. The model can be extended to other communities with their own R0 and recovery-time parameters. |
2308.14300 | Bryan Briney | Sarah M. Burbach and Bryan Briney | Improving antibody language models with native pairing | null | null | null | null | q-bio.BM | http://creativecommons.org/licenses/by-sa/4.0/ | Current antibody language models are limited by their use of unpaired
antibody sequence data and the biases in publicly available antibody sequence
datasets, which are skewed toward antibodies against a relatively small number
of pathogens. A recently published dataset (by Jaffe et al.) of approximately
1.6 x 10^6 natively paired human antibody sequences from healthy donors
represents by far the largest dataset of its kind and offers a unique
opportunity to evaluate how antibody language models can be improved by
training with natively paired antibody sequence data. We trained two Baseline
Antibody Language Models (BALM), using natively paired (BALM-paired) or
unpaired (BALM-unpaired) sequences from the Jaffe dataset. We provide evidence
that training with natively paired sequences substantially improves model
performance and that this improvement results from the model learning
immunologically relevant features that span the light and heavy chains. We also
show that ESM-2, a state-of-the-art general protein language model, learns
similar cross-chain features when fine-tuned with natively paired antibody
sequence data.
| [
{
"created": "Mon, 28 Aug 2023 04:35:14 GMT",
"version": "v1"
},
{
"created": "Tue, 7 Nov 2023 18:36:11 GMT",
"version": "v2"
}
] | 2023-11-08 | [
[
"Burbach",
"Sarah M.",
""
],
[
"Briney",
"Bryan",
""
]
] | Current antibody language models are limited by their use of unpaired antibody sequence data and the biases in publicly available antibody sequence datasets, which are skewed toward antibodies against a relatively small number of pathogens. A recently published dataset (by Jaffe et al.) of approximately 1.6 x 10^6 natively paired human antibody sequences from healthy donors represents by far the largest dataset of its kind and offers a unique opportunity to evaluate how antibody language models can be improved by training with natively paired antibody sequence data. We trained two Baseline Antibody Language Models (BALM), using natively paired (BALM-paired) or unpaired (BALM-unpaired) sequences from the Jaffe dataset. We provide evidence that training with natively paired sequences substantially improves model performance and that this improvement results from the model learning immunologically relevant features that span the light and heavy chains. We also show that ESM-2, a state-of-the-art general protein language model, learns similar cross-chain features when fine-tuned with natively paired antibody sequence data. |
2211.08618 | Nikola Mili\'cevi\'c | Nikola Mili\'cevi\'c and Vladimir Itskov | The combinatorial code and the graph rules of Dale networks | 24 pages, changes that improved the presentation of results based on
referee comments | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe the combinatorics of equilibria and steady states of neurons in
threshold-linear networks that satisfy Dale's law. The combinatorial code of a
Dale network is characterized in terms of two conditions: (i) a condition on
the network connectivity graph, and (ii) a spectral condition on the synaptic
matrix. We find that in the weak coupling regime the combinatorial code depends
only on the connectivity graph, and not on the particulars of the synaptic
strengths. Moreover, we prove that the combinatorial code of a weakly coupled
network is a sublattice, and we provide a learning rule for encoding a
sublattice in a weakly coupled excitatory network. In the strong coupling
regime we prove that the combinatorial code of a generic Dale network is
intersection-complete and is therefore a convex code, as is common in some
sensory systems in the brain.
| [
{
"created": "Wed, 16 Nov 2022 02:08:59 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Sep 2023 00:43:02 GMT",
"version": "v2"
},
{
"created": "Wed, 5 Jun 2024 19:04:16 GMT",
"version": "v3"
}
] | 2024-06-07 | [
[
"Milićević",
"Nikola",
""
],
[
"Itskov",
"Vladimir",
""
]
] | We describe the combinatorics of equilibria and steady states of neurons in threshold-linear networks that satisfy Dale's law. The combinatorial code of a Dale network is characterized in terms of two conditions: (i) a condition on the network connectivity graph, and (ii) a spectral condition on the synaptic matrix. We find that in the weak coupling regime the combinatorial code depends only on the connectivity graph, and not on the particulars of the synaptic strengths. Moreover, we prove that the combinatorial code of a weakly coupled network is a sublattice, and we provide a learning rule for encoding a sublattice in a weakly coupled excitatory network. In the strong coupling regime we prove that the combinatorial code of a generic Dale network is intersection-complete and is therefore a convex code, as is common in some sensory systems in the brain. |
2107.13247 | Sigve Nakken | Sigve Nakken (1 and 2 and 3), Sveinung Gundersen (3), Fabian L. M.
Bernal (4), Dimitris Polychronopoulos (5), Eivind Hovig (1 and 3), J{\o}rgen
Wesche (1 and 2) ((1) Department of Tumor Biology, Institute for Cancer
Research, Oslo University Hospital, Norway, (2) Centre for Cancer Cell
Reprogramming, Institute of Clinical Medicine, Faculty of Medicine,
University of Oslo, Norway, (3) Centre for Bioinformatics, Department of
Informatics, University of Oslo, Norway, (4) University Center for
Information Technology, University of Oslo, Norway, (5) Early Data Science
and Computational Oncology, Research and Early Development, Oncology R&D,
Cambridge, UK) | OncoEnrichR: cancer-dedicated gene set interpretation | null | null | null | null | q-bio.GN | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Genome-scale screening experiments in cancer produce long lists of candidate
genes that require extensive interpretation for biological insight and
prioritization for follow-up studies. Interrogation of gene lists frequently
represents a significant and time-consuming undertaking, in which experimental
biologists typically combine results from a variety of bioinformatics resources
in an attempt to portray and understand cancer relevance. As a means to
simplify and strengthen the support for this endeavor, we have developed
oncoEnrichR, a flexible bioinformatics tool that allows cancer researchers to
comprehensively interrogate a given gene list along multiple facets of cancer
relevance. oncoEnrichR differs from general gene set analysis frameworks
through the integration of an extensive set of prior knowledge specifically
relevant for cancer, including ranked gene-tumor type associations,
literature-supported proto-oncogene and tumor suppressor gene annotations,
target druggability data, regulatory interactions, synthetic lethality
predictions, as well as prognostic associations, gene aberrations, and
co-expression patterns across tumor types. The software produces a structured
and user-friendly analysis report as its main output, where versions of all
underlying data resources are explicitly logged, the latter being a critical
component for reproducible science. We demonstrate the usefulness of
oncoEnrichR through interrogation of two candidate lists from proteomic and
CRISPR screens. oncoEnrichR is freely available as a web-based workflow hosted
by the Galaxy platform (https://oncotools.elixir.no), and can also be accessed
as a stand-alone R package (https://github.com/sigven/oncoEnrichR).
| [
{
"created": "Wed, 28 Jul 2021 10:07:51 GMT",
"version": "v1"
},
{
"created": "Mon, 9 Aug 2021 08:40:00 GMT",
"version": "v2"
},
{
"created": "Tue, 27 Sep 2022 21:14:23 GMT",
"version": "v3"
}
] | 2022-09-29 | [
[
"Nakken",
"Sigve",
"",
"1 and 2 and 3"
],
[
"Gundersen",
"Sveinung",
"",
"1 and 3"
],
[
"Bernal",
"Fabian L. M.",
"",
"1 and 3"
],
[
"Polychronopoulos",
"Dimitris",
"",
"1 and 3"
],
[
"Hovig",
"Eivind",
"",
"1 and 3"
],
[
"Wesche",
"Jørgen",
"",
"1 and 2"
]
] | Genome-scale screening experiments in cancer produce long lists of candidate genes that require extensive interpretation for biological insight and prioritization for follow-up studies. Interrogation of gene lists frequently represents a significant and time-consuming undertaking, in which experimental biologists typically combine results from a variety of bioinformatics resources in an attempt to portray and understand cancer relevance. As a means to simplify and strengthen the support for this endeavor, we have developed oncoEnrichR, a flexible bioinformatics tool that allows cancer researchers to comprehensively interrogate a given gene list along multiple facets of cancer relevance. oncoEnrichR differs from general gene set analysis frameworks through the integration of an extensive set of prior knowledge specifically relevant for cancer, including ranked gene-tumor type associations, literature-supported proto-oncogene and tumor suppressor gene annotations, target druggability data, regulatory interactions, synthetic lethality predictions, as well as prognostic associations, gene aberrations, and co-expression patterns across tumor types. The software produces a structured and user-friendly analysis report as its main output, where versions of all underlying data resources are explicitly logged, the latter being a critical component for reproducible science. We demonstrate the usefulness of oncoEnrichR through interrogation of two candidate lists from proteomic and CRISPR screens. oncoEnrichR is freely available as a web-based workflow hosted by the Galaxy platform (https://oncotools.elixir.no), and can also be accessed as a stand-alone R package (https://github.com/sigven/oncoEnrichR). |
2010.11765 | Aran Nayebi | Aran Nayebi, Sanjana Srivastava, Surya Ganguli, Daniel L.K. Yamins | Identifying Learning Rules From Neural Network Observables | NeurIPS 2020 Camera Ready Version, 21 pages including supplementary
information, 13 figures | null | null | null | q-bio.NC cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The brain modifies its synaptic strengths during learning in order to better
adapt to its environment. However, the underlying plasticity rules that govern
learning are unknown. Many proposals have been suggested, including Hebbian
mechanisms, explicit error backpropagation, and a variety of alternatives. It
is an open question as to what specific experimental measurements would need to
be made to determine whether any given learning rule is operative in a real
biological system. In this work, we take a "virtual experimental" approach to
this problem. Simulating idealized neuroscience experiments with artificial
neural networks, we generate a large-scale dataset of learning trajectories of
aggregate statistics measured in a variety of neural network architectures,
loss functions, learning rule hyperparameters, and parameter initializations.
We then take a discriminative approach, training linear and simple non-linear
classifiers to identify learning rules from features based on these
observables. We show that different classes of learning rules can be separated
solely on the basis of aggregate statistics of the weights, activations, or
instantaneous layer-wise activity changes, and that these results generalize to
limited access to the trajectory and held-out architectures and learning
curricula. We identify the statistics of each observable that are most relevant
for rule identification, finding that statistics from network activities across
training are more robust to unit undersampling and measurement noise than those
obtained from the synaptic strengths. Our results suggest that activation
patterns, available from electrophysiological recordings of post-synaptic
activities on the order of several hundred units, frequently measured at wider
intervals over the course of learning, may provide a good basis on which to
identify learning rules.
| [
{
"created": "Thu, 22 Oct 2020 14:36:54 GMT",
"version": "v1"
},
{
"created": "Tue, 8 Dec 2020 18:48:02 GMT",
"version": "v2"
}
] | 2020-12-09 | [
[
"Nayebi",
"Aran",
""
],
[
"Srivastava",
"Sanjana",
""
],
[
"Ganguli",
"Surya",
""
],
[
"Yamins",
"Daniel L. K.",
""
]
] | The brain modifies its synaptic strengths during learning in order to better adapt to its environment. However, the underlying plasticity rules that govern learning are unknown. Many proposals have been suggested, including Hebbian mechanisms, explicit error backpropagation, and a variety of alternatives. It is an open question as to what specific experimental measurements would need to be made to determine whether any given learning rule is operative in a real biological system. In this work, we take a "virtual experimental" approach to this problem. Simulating idealized neuroscience experiments with artificial neural networks, we generate a large-scale dataset of learning trajectories of aggregate statistics measured in a variety of neural network architectures, loss functions, learning rule hyperparameters, and parameter initializations. We then take a discriminative approach, training linear and simple non-linear classifiers to identify learning rules from features based on these observables. We show that different classes of learning rules can be separated solely on the basis of aggregate statistics of the weights, activations, or instantaneous layer-wise activity changes, and that these results generalize to limited access to the trajectory and held-out architectures and learning curricula. We identify the statistics of each observable that are most relevant for rule identification, finding that statistics from network activities across training are more robust to unit undersampling and measurement noise than those obtained from the synaptic strengths. Our results suggest that activation patterns, available from electrophysiological recordings of post-synaptic activities on the order of several hundred units, frequently measured at wider intervals over the course of learning, may provide a good basis on which to identify learning rules. |
2003.13901 | HongGuang Sun | Yong Zhang, Xiangnan Yu, HongGuang Sun, Geoffrey R. Tick, Wei Wei, Bin
Jin | COVID-19 infection and recovery in various countries: Modeling the
dynamics and evaluating the non-pharmaceutical mitigation scenarios | 29 pages, 8 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The coronavirus disease 2019 (COVID-19) pandemic radically impacts our lives,
while the transmission/infection and recovery dynamics of COVID-19 remain
obscure. A time-dependent Susceptible, Exposed, Infectious, and Recovered
(SEIR) model was proposed and applied to fit and then predict the time series
of COVID-19 evolution observed in the last three months (till 3/22/2020) in
various provinces and metropolises in China. The model results revealed the
space dependent transmission/infection rate and the significant spatiotemporal
variation in the recovery rate, likely due to the continuous improvement of
screening techniques and public hospital systems, as well as full city
lockdowns in China. The validated SEIR model was then applied to predict
COVID-19 evolution in the United States, Italy, Japan, and South Korea, which have
responded differently to monitoring and mitigating COVID-19 so far, although
these predictions contain high uncertainty due to the intrinsic change of the
maximum infected population and the infection/recovery rates within the
different countries. In addition, a stochastic model based on the random walk
particle tracking scheme, analogous to a mixing-limited bimolecular reaction
model, was developed to evaluate non-pharmaceutical strategies to mitigate
COVID-19 spread. Preliminary tests using the stochastic model showed that
self-quarantine may not be as efficient as strict social distancing in slowing
COVID-19 spread, if not all of the infected people can be promptly diagnosed
and quarantined.
| [
{
"created": "Tue, 31 Mar 2020 01:30:55 GMT",
"version": "v1"
}
] | 2020-04-01 | [
[
"Zhang",
"Yong",
""
],
[
"Yu",
"Xiangnan",
""
],
[
"Sun",
"HongGuang",
""
],
[
"Tick",
"Geoffrey R.",
""
],
[
"Wei",
"Wei",
""
],
[
"Jin",
"Bin",
""
]
] | The coronavirus disease 2019 (COVID-19) pandemic radically impacts our lives, while the transmission/infection and recovery dynamics of COVID-19 remain obscure. A time-dependent Susceptible, Exposed, Infectious, and Recovered (SEIR) model was proposed and applied to fit and then predict the time series of COVID-19 evolution observed in the last three months (till 3/22/2020) in various provinces and metropolises in China. The model results revealed the space dependent transmission/infection rate and the significant spatiotemporal variation in the recovery rate, likely due to the continuous improvement of screening techniques and public hospital systems, as well as full city lockdowns in China. The validated SEIR model was then applied to predict COVID-19 evolution in the United States, Italy, Japan, and South Korea, which have responded differently to monitoring and mitigating COVID-19 so far, although these predictions contain high uncertainty due to the intrinsic change of the maximum infected population and the infection/recovery rates within the different countries. In addition, a stochastic model based on the random walk particle tracking scheme, analogous to a mixing-limited bimolecular reaction model, was developed to evaluate non-pharmaceutical strategies to mitigate COVID-19 spread. Preliminary tests using the stochastic model showed that self-quarantine may not be as efficient as strict social distancing in slowing COVID-19 spread, if not all of the infected people can be promptly diagnosed and quarantined. |
1908.05517 | Tom Britton | Tom Britton | Epidemic models on social networks -- with inference | 20 pages, 4 figures | null | null | null | q-bio.PE math.PR stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Consider stochastic models for the spread of an infection in a structured
community, where this structured community is itself described by a random
network model. Some common network models and transmission models are defined
and their large-population properties are presented. Focus is then shifted to
statistical methodology: what can be estimated and how, depending on the
underlying network, transmission model and the available data? This survey
paper discusses several different scenarios, also giving references to
publications where more details can be found.
| [
{
"created": "Thu, 15 Aug 2019 12:44:37 GMT",
"version": "v1"
}
] | 2019-08-16 | [
[
"Britton",
"Tom",
""
]
] | Consider stochastic models for the spread of an infection in a structured community, where this structured community is itself described by a random network model. Some common network models and transmission models are defined and large population properties of them are presented. Focus is then shifted to statistical methodology: what can be estimated and how, depending on the underlying network, transmission model and the available data? This survey paper discusses several different scenarios, also giving references to publications where more details can be found. |
q-bio/0402040 | Dorjsuren Battogtokh | Dorjsuren Battogtokh and John J. Tyson | Turbulence near cyclic fold bifurcations in birhythmic media | 14 pages 10 figures | null | 10.1103/PhysRevE.70.026212 | null | q-bio.SC | null | We show that at the onset of a cyclic fold bifurcation, a birhythmic medium
composed of glycolytic oscillators displays turbulent dynamics. By computing
the largest Lyapunov exponent, the spatial correlation function, and the
average transient lifetime, we classify it as a weak turbulence with transient
nature. Virtual heterogeneities generating unstable fast oscillations are the
mechanism of the transient turbulence. In the presence of wavenumber
instability, unstable oscillations can be reinjected leading to stationary
turbulence. We also find similar turbulence in a cell cycle model. These
findings suggest that weak turbulence may be universal in biochemical
birhythmic media exhibiting cyclic fold bifurcations.
| [
{
"created": "Tue, 24 Feb 2004 21:04:46 GMT",
"version": "v1"
}
] | 2009-11-10 | [
[
"Battogtokh",
"Dorjsuren",
""
],
[
"Tyson",
"John J.",
""
]
] | We show that at the onset of a cyclic fold bifurcation, a birhythmic medium composed of glycolytic oscillators displays turbulent dynamics. By computing the largest Lyapunov exponent, the spatial correlation function, and the average transient lifetime, we classify it as a weak turbulence with transient nature. Virtual heterogeneities generating unstable fast oscillations are the mechanism of the transient turbulence. In the presence of wavenumber instability, unstable oscillations can be reinjected leading to stationary turbulence. We also find similar turbulence in a cell cycle model. These findings suggest that weak turbulence may be universal in biochemical birhythmic media exhibiting cyclic fold bifurcations. |
1109.4085 | Ovidiu Radulescu | Vincent Noel, Dima Grigoriev, Sergei Vakulenko and Ovidiu Radulescu | Tropical geometries and dynamics of biochemical networks. Application to
hybrid cell cycle models | conference SASB 2011, to be published in Electronic Notes in
Theoretical Computer Science | null | null | null | q-bio.MN math.AG | http://creativecommons.org/licenses/publicdomain/ | We use the Litvinov-Maslov correspondence principle to reduce and hybridize
networks of biochemical reactions. We apply this method to a cell cycle
oscillator model. The reduced and hybridized model can be used as a hybrid
model for the cell cycle. We also propose a practical recipe for detecting
quasi-equilibrium QE reactions and quasi-steady state QSS species in
biochemical models with rational rate functions and use this recipe for model
reduction. Interestingly, the QE/QSS invariant manifold of the smooth model and
the reduced dynamics along this manifold can be put into correspondence to the
tropical variety of the hybridization and to sliding modes along this variety,
respectively.
| [
{
"created": "Mon, 19 Sep 2011 16:41:36 GMT",
"version": "v1"
},
{
"created": "Tue, 20 Sep 2011 05:45:49 GMT",
"version": "v2"
}
] | 2011-09-21 | [
[
"Noel",
"Vincent",
""
],
[
"Grigoriev",
"Dima",
""
],
[
"Vakulenko",
"Sergei",
""
],
[
"Radulescu",
"Ovidiu",
""
]
] | We use the Litvinov-Maslov correspondence principle to reduce and hybridize networks of biochemical reactions. We apply this method to a cell cycle oscillator model. The reduced and hybridized model can be used as a hybrid model for the cell cycle. We also propose a practical recipe for detecting quasi-equilibrium QE reactions and quasi-steady state QSS species in biochemical models with rational rate functions and use this recipe for model reduction. Interestingly, the QE/QSS invariant manifold of the smooth model and the reduced dynamics along this manifold can be put into correspondence to the tropical variety of the hybridization and to sliding modes along this variety, respectively. |
1308.4102 | Inti Pedroso | Inti Pedroso | Gene and Gene-Set Analysis for Genome-Wide Association Studies | null | null | null | null | q-bio.GN q-bio.MN q-bio.NC q-bio.PE | http://creativecommons.org/licenses/by/3.0/ | Genome-wide association studies (GWAS) have identified hundreds of loci at
very stringent levels of statistical significance across many different human
traits. However, it is now clear that very large samples (n~10^4-10^5) are
needed to find the majority of genetic variants underlying risk for most human
diseases. Therefore, the field has engaged itself in a race to increase study
sample sizes with some studies yielding very successful results but also
studies which provide little or no new insights. This project started early on
in this new wave of studies and I decided to use an alternative approach that
uses prior biological knowledge to improve both interpretation and power of
GWAS. The project aimed to a) implement and develop new gene-based methods to
derive gene-level statistics to use GWAS in well established system biology
tools; b) use of these gene-level statistics in networks and gene-set analyses
of GWAS data; c) mine GWAS of neuropsychiatric disorders using gene, gene-sets
and integrative biology analyses with gene-expression studies; and d) explore
the ability of these methods to improve the analysis of GWAS on disease
sub-phenotypes, which usually suffer from very small sample sizes.
| [
{
"created": "Mon, 19 Aug 2013 19:20:12 GMT",
"version": "v1"
}
] | 2013-08-20 | [
[
"Pedroso",
"Inti",
""
]
] | Genome-wide association studies (GWAS) have identified hundreds of loci at very stringent levels of statistical significance across many different human traits. However, it is now clear that very large samples (n~10^4-10^5) are needed to find the majority of genetic variants underlying risk for most human diseases. Therefore, the field has engaged itself in a race to increase study sample sizes with some studies yielding very successful results but also studies which provide little or no new insights. This project started early on in this new wave of studies and I decided to use an alternative approach that uses prior biological knowledge to improve both interpretation and power of GWAS. The project aimed to a) implement and develop new gene-based methods to derive gene-level statistics to use GWAS in well established system biology tools; b) use of these gene-level statistics in networks and gene-set analyses of GWAS data; c) mine GWAS of neuropsychiatric disorders using gene, gene-sets and integrative biology analyses with gene-expression studies; and d) explore the ability of these methods to improve the analysis of GWAS on disease sub-phenotypes, which usually suffer from very small sample sizes. |
1610.04123 | Bob Eisenberg | Bob Eisenberg | Ionic channels in biological membranes. Electrostatic analysis of a
natural nanotube | Spelling errors corrected | Contemporary Physics (1998) 39(6):447- 466 | 10.1080/001075198181775 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A simple theory of ion permeation through a channel is presented, in which
diffusion occurs according to Fick's law and drift according to Ohm's law, in
the electric field determined by all the charges present. This theory accounts
for permeation in the channels studied to date in a wide range of solutions.
Interestingly, the theory works because the shape of the electric field is a
sensitive function of experimental conditions, e.g., ion concentration. Rate
constants for flux are sensitive functions of ionic concentration because the
fixed charge of the channel protein is shielded by the ions in and near it. Such
shielding effects are not included in traditional theories of ionic channels,
or other proteins, for that matter.
| [
{
"created": "Thu, 13 Oct 2016 15:18:26 GMT",
"version": "v1"
},
{
"created": "Fri, 14 Oct 2016 08:43:51 GMT",
"version": "v2"
}
] | 2016-10-17 | [
[
"Eisenberg",
"Bob",
""
]
] | A simple theory of ion permeation through a channel is presented, in which diffusion occurs according to Fick's law and drift according to Ohm's law, in the electric field determined by all the charges present. This theory accounts for permeation in the channels studied to date in a wide range of solutions. Interestingly, the theory works because the shape of the electric field is a sensitive function of experimental conditions, e.g., ion concentration. Rate constants for flux are sensitive functions of ionic concentration because the fixed charge of the channel protein is shielded by the ions in and near it. Such shielding effects are not included in traditional theories of ionic channels, or other proteins, for that matter. |
2201.05669 | Nan Xi | Nan Miles Xi and Dalong Patrick Huang | Prediction of Drug-Induced TdP Risks Using Machine Learning and Rabbit
Ventricular Wedge Assay | arXiv admin note: text overlap with arXiv:2108.00543 | null | null | null | q-bio.QM cs.LG | http://creativecommons.org/licenses/by/4.0/ | The evaluation of drug-induced Torsades de pointes (TdP) risks is crucial in
drug safety assessment. In this study, we discuss machine learning approaches
in the prediction of drug-induced TdP risks using preclinical data.
Specifically, the random forest model was trained on the dataset generated by
the rabbit ventricular wedge assay. The model prediction performance was
measured on 28 drugs from the Comprehensive In Vitro Proarrhythmia Assay
initiative. Leave-one-drug-out cross-validation provided an unbiased estimation
of model performance. Stratified bootstrap revealed the uncertainty in the
asymptotic model prediction. Our study validated the utility of machine
learning approaches in predicting drug-induced TdP risks from preclinical data.
Our methods can be extended to other preclinical protocols and serve as a
supplementary evaluation in drug safety assessment.
| [
{
"created": "Fri, 14 Jan 2022 21:03:20 GMT",
"version": "v1"
}
] | 2022-01-19 | [
[
"Xi",
"Nan Miles",
""
],
[
"Huang",
"Dalong Patrick",
""
]
] | The evaluation of drug-induced Torsades de pointes (TdP) risks is crucial in drug safety assessment. In this study, we discuss machine learning approaches in the prediction of drug-induced TdP risks using preclinical data. Specifically, the random forest model was trained on the dataset generated by the rabbit ventricular wedge assay. The model prediction performance was measured on 28 drugs from the Comprehensive In Vitro Proarrhythmia Assay initiative. Leave-one-drug-out cross-validation provided an unbiased estimation of model performance. Stratified bootstrap revealed the uncertainty in the asymptotic model prediction. Our study validated the utility of machine learning approaches in predicting drug-induced TdP risks from preclinical data. Our methods can be extended to other preclinical protocols and serve as a supplementary evaluation in drug safety assessment. |
1608.01912 | Toan T. Nguyen | Nguyen Viet Duc, Toan T. Nguyen, and Paolo Carloni | DNA like$-$charge attraction and overcharging by divalent counterions in
the presence of divalent co$-$ions | 10 pages, 6 figures | J. Biol. Phys. (2017) | 10.1007/s10867-017-9443-x | null | q-bio.BM cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Strongly correlated electrostatics of DNA systems has drawn the interest of
many groups, especially the condensation and overcharging of DNA by multivalent
counterions. By adding counterions of different valencies and shapes, one can
enhance or reduce DNA overcharging. In this paper, we focus on the effect of
multivalent co-ions, specifically divalent co-ions such as SO$_4^{2-}$. A
computational experiment of DNA condensation using Monte$-$Carlo simulation in
grand canonical ensemble is carried out where DNA system is in equilibrium with
a bulk solution containing a mixture of salt of different valency of co-ions.
Compared to system with purely monovalent co-ions, the influence of divalent
co-ions shows up in multiple aspects. Divalent co-ions lead to an increase of
monovalent salt in the DNA condensate. Because monovalent salts mostly
participate in linear screening of electrostatic interactions in the system,
more monovalent salt molecules entering the condensate leads to screening out of
short-range DNA$-$DNA like charge attraction and weaker DNA condensation free
energy. The overcharging of DNA by multivalent counterions is also reduced in
the presence of divalent co$-$ions. Strong repulsions between DNA and divalent
co-ions and among divalent co-ions themselves lead to a {\em depletion} of
negative ions near DNA surface as compared to the case without divalent
co-ions. At large distance, the DNA$-$DNA repulsive interaction is stronger in
the presence of divalent co$-$ions, suggesting that the divalent co$-$ions' role is
not only that of simple stronger linear screening.
| [
{
"created": "Fri, 5 Aug 2016 15:17:43 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Aug 2016 04:40:44 GMT",
"version": "v2"
},
{
"created": "Wed, 24 May 2017 16:20:57 GMT",
"version": "v3"
}
] | 2017-05-25 | [
[
"Duc",
"Nguyen Viet",
""
],
[
"Nguyen",
"Toan T.",
""
],
[
"Carloni",
"Paolo",
""
]
] | Strongly correlated electrostatics of DNA systems has drawn the interest of many groups, especially the condensation and overcharging of DNA by multivalent counterions. By adding counterions of different valencies and shapes, one can enhance or reduce DNA overcharging. In this paper, we focus on the effect of multivalent co-ions, specifically divalent co-ions such as SO$_4^{2-}$. A computational experiment of DNA condensation using Monte$-$Carlo simulation in grand canonical ensemble is carried out where DNA system is in equilibrium with a bulk solution containing a mixture of salt of different valency of co-ions. Compared to system with purely monovalent co-ions, the influence of divalent co-ions shows up in multiple aspects. Divalent co-ions lead to an increase of monovalent salt in the DNA condensate. Because monovalent salts mostly participate in linear screening of electrostatic interactions in the system, more monovalent salt molecules entering the condensate leads to screening out of short-range DNA$-$DNA like charge attraction and weaker DNA condensation free energy. The overcharging of DNA by multivalent counterions is also reduced in the presence of divalent co$-$ions. Strong repulsions between DNA and divalent co-ions and among divalent co-ions themselves lead to a {\em depletion} of negative ions near DNA surface as compared to the case without divalent co-ions. At large distance, the DNA$-$DNA repulsive interaction is stronger in the presence of divalent co$-$ions, suggesting that the divalent co$-$ions' role is not only that of simple stronger linear screening. |
q-bio/0508005 | Antonia Kropfinger | Marie-Anne F\'elix (IJM) | An inversion in the wiring of an intercellular signal: evolution of Wnt
signaling in the nematode vulva | null | BioEssays 27 (2005) 765-769 | 10.1002/bies.20275 | null | q-bio.PE q-bio.MN | null | Signal transduction pathways are largely conserved throughout the animal
kingdom. The repertoire of pathways is limited and each pathway is used in
different intercellular signaling events during the development of a given
animal. For example, Wnt signaling is recruited, sometimes redundantly with
other molecular pathways, in four cell specification events during
Caenorhabditis elegans vulva development, including the activation of vulval
differentiation. Strikingly, a recent study finds that Wnts act to repress
vulval differentiation in the nematode Pristionchus pacificus, demonstrating
evolutionary flexibility in the use of intercellular signaling pathways.
| [
{
"created": "Tue, 2 Aug 2005 07:24:56 GMT",
"version": "v1"
}
] | 2016-08-16 | [
[
"Félix",
"Marie-Anne",
"",
"IJM"
]
] | Signal transduction pathways are largely conserved throughout the animal kingdom. The repertoire of pathways is limited and each pathway is used in different intercellular signaling events during the development of a given animal. For example, Wnt signaling is recruited, sometimes redundantly with other molecular pathways, in four cell specification events during Caenorhabditis elegans vulva development, including the activation of vulval differentiation. Strikingly, a recent study finds that Wnts act to repress vulval differentiation in the nematode Pristionchus pacificus, demonstrating evolutionary flexibility in the use of intercellular signaling pathways. |
1104.2252 | Antti Niemi | Andrei Krokhotine and Antti J. Niemi | Solitons and Physics of the Lysogenic to Lytic Transition in
Enterobacteria Lambda Phage | 5 pages 3 figures | null | null | null | q-bio.BM cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The lambda phage is a paradigm temperate bacteriophage. Its lysogenic and
lytic life cycles echo competition between the DNA binding CI and CRO proteins.
Here we address the Physics of this transition in terms of an energy function
that portrays the backbone as a multi-soliton configuration. The precision of
the individual solitons far exceeds the B-factor accuracy of the experimentally
determined protein conformations giving us confidence to conclude that three of
the four loops are each composites of two closely located solitons. The only
exception is the repressive DNA binding turn, it is the sole single soliton
configuration of the backbone. When we compare the solitons with the Protein
Data Bank we find that the one preceding the DNA recognition helix is unique to
the CI protein, prompting us to conclude that the lysogenic to lytic transition
is due to a saddle-node bifurcation involving a soliton-antisoliton
annihilation that removes the first loop.
| [
{
"created": "Tue, 12 Apr 2011 15:53:13 GMT",
"version": "v1"
}
] | 2011-04-13 | [
[
"Krokhotine",
"Andrei",
""
],
[
"Niemi",
"Antti J.",
""
]
] | The lambda phage is a paradigm temperate bacteriophage. Its lysogenic and lytic life cycles echo competition between the DNA binding CI and CRO proteins. Here we address the Physics of this transition in terms of an energy function that portrays the backbone as a multi-soliton configuration. The precision of the individual solitons far exceeds the B-factor accuracy of the experimentally determined protein conformations giving us confidence to conclude that three of the four loops are each composites of two closely located solitons. The only exception is the repressive DNA binding turn, it is the sole single soliton configuration of the backbone. When we compare the solitons with the Protein Data Bank we find that the one preceding the DNA recognition helix is unique to the CI protein, prompting us to conclude that the lysogenic to lytic transition is due to a saddle-node bifurcation involving a soliton-antisoliton annihilation that removes the first loop. |
q-bio/0603012 | Richard Yamada | Yujiro Richard Yamada, Charles S. Peskin | A Chemical Kinetic Model of Transcriptional Elongation | 9 pages, 3 figures, 1 table. Corrected typos; condensed size of
manuscript to meet PRL submission guidelines. Details in previous edition | null | null | null | q-bio.BM q-bio.SC | null | A chemical kinetic model of the elongation dynamics of RNA polymerase along a
DNA sequence is introduced. The proposed model governs the discrete movement of
the RNA polymerase along a DNA template, with no consideration given to elastic
effects. The model's novel concept is a ``look-ahead'' feature, in which
nucleotides bind reversibly to the DNA prior to being incorporated covalently
into the nascent RNA chain. Results are presented for specific DNA sequences
that have been used in single-molecule experiments of the random walk of RNA
polymerase along DNA. By replicating the data analysis algorithm from the
experimental procedure, the model produces velocity histograms, enabling direct
comparison with these published results.
| [
{
"created": "Sun, 12 Mar 2006 02:50:18 GMT",
"version": "v1"
},
{
"created": "Tue, 23 May 2006 20:39:49 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Yamada",
"Yujiro Richard",
""
],
[
"Peskin",
"Charles S.",
""
]
] | A chemical kinetic model of the elongation dynamics of RNA polymerase along a DNA sequence is introduced. The proposed model governs the discrete movement of the RNA polymerase along a DNA template, with no consideration given to elastic effects. The model's novel concept is a ``look-ahead'' feature, in which nucleotides bind reversibly to the DNA prior to being incorporated covalently into the nascent RNA chain. Results are presented for specific DNA sequences that have been used in single-molecule experiments of the random walk of RNA polymerase along DNA. By replicating the data analysis algorithm from the experimental procedure, the model produces velocity histograms, enabling direct comparison with these published results. |
2002.07810 | Kritika Singhal | Sarah Tymochko, Kritika Singhal, and Giseon Heo | Classifying sleep states using persistent homology and Markov chain: a
Pilot Study | null | null | null | null | q-bio.QM eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Obstructive sleep apnea (OSA) is a form of sleep disordered breathing
characterized by frequent episodes of upper airway collapse during sleep.
Pediatric OSA occurs in 1-5% of children and can related to other serious
health conditions such as high blood pressure, behavioral issues, or altered
growth. OSA is often diagnosed by studying the patient's sleep cycle, the
pattern with which they progress through various sleep states such as
wakefulness, rapid eye-movement, and non-rapid eye-movement. The sleep state
data is obtained using an overnight polysomnography test that the patient
undergoes at a hospital or sleep clinic, where a technician manually labels
each 30 second time interval, also called an "epoch", with the current sleep
state. This process is laborious and prone to human error. We seek an automatic
method of classifying the sleep state, as well as a method to analyze the sleep
cycles. This article is a pilot study in sleep state classification using two
approaches: first, we'll use methods from the field of topological data
analysis to classify the sleep state and second, we'll model sleep states as a
Markov chain and visually analyze the sleep patterns. In the future, we will
continue to build on this work to improve our methods.
| [
{
"created": "Mon, 17 Feb 2020 22:24:14 GMT",
"version": "v1"
}
] | 2020-02-20 | [
[
"Tymochko",
"Sarah",
""
],
[
"Singhal",
"Kritika",
""
],
[
"Heo",
"Giseon",
""
]
] | Obstructive sleep apnea (OSA) is a form of sleep disordered breathing characterized by frequent episodes of upper airway collapse during sleep. Pediatric OSA occurs in 1-5% of children and can be related to other serious health conditions such as high blood pressure, behavioral issues, or altered growth. OSA is often diagnosed by studying the patient's sleep cycle, the pattern with which they progress through various sleep states such as wakefulness, rapid eye-movement, and non-rapid eye-movement. The sleep state data is obtained using an overnight polysomnography test that the patient undergoes at a hospital or sleep clinic, where a technician manually labels each 30 second time interval, also called an "epoch", with the current sleep state. This process is laborious and prone to human error. We seek an automatic method of classifying the sleep state, as well as a method to analyze the sleep cycles. This article is a pilot study in sleep state classification using two approaches: first, we'll use methods from the field of topological data analysis to classify the sleep state and second, we'll model sleep states as a Markov chain and visually analyze the sleep patterns. In the future, we will continue to build on this work to improve our methods. |
1504.02751 | Michael Famulare | Michael Famulare | Has wild poliovirus been eliminated from Nigeria? | Added model validation section and updated cVDPV2 forecast in
response to new case data; expanded material on surveillance sensitivity;
additional minor edits; and references added. 24 pages, 4 figures | null | 10.1371/journal.pone.0135765 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Wild poliovirus type 3 (WPV3) has not been seen anywhere since the last case
of WPV3-associated paralysis in Nigeria in November 2012. At the time of
writing, the most recent case of wild poliovirus type 1 (WPV1) in Nigeria
occurred in July 2014, and WPV1 has not been seen in Africa since a case in
Somalia in August 2014. No cases associated with circulating vaccine-derived
type 2 poliovirus (cVDPV2) have been detected in Nigeria since November 2014.
Has WPV1 been eliminated from Africa? Has WPV3 been eradicated globally? Has
Nigeria interrupted cVDPV2 transmission? These questions are difficult because
polio surveillance is based on paralysis and paralysis only occurs in a small
fraction of infections.
This report provides estimates for the probabilities of poliovirus
elimination in Nigeria given available data as of March 31, 2015. It is based
on a model of disease transmission that is built from historical polio
incidence rates and is designed to represent the uncertainties in transmission
dynamics and poliovirus detection that are fundamental to interpreting long
time periods without cases.
The model estimates that, as of March 31, 2015, the probability of WPV1
elimination in Nigeria is 84%, and that if WPV1 has not been eliminated, a new
case will be detected with 99% probability by the end of 2015. The probability
of WPV3 elimination (and thus global eradication) is >99%. However, it is
unlikely that the ongoing transmission of cVDPV2 has been interrupted; the
probability of cVDPV2 elimination rises to 83% if no new cases are detected by
April 2016.
Added July 10, 2015: On June 26, a cVDPV2 case was confirmed by the Global
Polio Laboratory Network. The date of paralysis was May 16. The case provides
new information about cVDPV2 prevalence that is useful for assessing the
accuracy of previous predictions and informing an updated forecast for the time
to cVDPV2 elimination.
| [
{
"created": "Fri, 10 Apr 2015 18:12:55 GMT",
"version": "v1"
},
{
"created": "Fri, 10 Jul 2015 21:42:47 GMT",
"version": "v2"
}
] | 2016-02-17 | [
[
"Famulare",
"Michael",
""
]
] | Wild poliovirus type 3 (WPV3) has not been seen anywhere since the last case of WPV3-associated paralysis in Nigeria in November 2012. At the time of writing, the most recent case of wild poliovirus type 1 (WPV1) in Nigeria occurred in July 2014, and WPV1 has not been seen in Africa since a case in Somalia in August 2014. No cases associated with circulating vaccine-derived type 2 poliovirus (cVDPV2) have been detected in Nigeria since November 2014. Has WPV1 been eliminated from Africa? Has WPV3 been eradicated globally? Has Nigeria interrupted cVDPV2 transmission? These questions are difficult because polio surveillance is based on paralysis and paralysis only occurs in a small fraction of infections. This report provides estimates for the probabilities of poliovirus elimination in Nigeria given available data as of March 31, 2015. It is based on a model of disease transmission that is built from historical polio incidence rates and is designed to represent the uncertainties in transmission dynamics and poliovirus detection that are fundamental to interpreting long time periods without cases. The model estimates that, as of March 31, 2015, the probability of WPV1 elimination in Nigeria is 84%, and that if WPV1 has not been eliminated, a new case will be detected with 99% probability by the end of 2015. The probability of WPV3 elimination (and thus global eradication) is >99%. However, it is unlikely that the ongoing transmission of cVDPV2 has been interrupted; the probability of cVDPV2 elimination rises to 83% if no new cases are detected by April 2016. Added July 10, 2015: On June 26, a cVDPV2 case was confirmed by the Global Polio Laboratory Network. The date of paralysis was May 16. The case provides new information about cVDPV2 prevalence that is useful for assessing the accuracy of previous predictions and informing an updated forecast for the time to cVDPV2 elimination. |
1807.01491 | David Gall | David Gall and Genevi\`eve Dupont | Tonic activation of extrasynaptic NMDA receptors decreases intrinsic
excitability and promotes bistability in a model of neuronal activity | 20 pages, 5 figures | Int J Mol Sci. 2019 Dec 27;21(1). pii: E206 | 10.3390/ijms21010206 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | NMDA receptors (NMDA-R) typically contribute to excitatory synaptic
transmission in the central nervous system. While calcium influx through NMDA-R
plays a critical role in synaptic plasticity, experimental evidence indicates
that NMDAR-mediated calcium influx also modifies neuronal excitability through
the activation of calcium-activated potassium channels. This mechanism has not
yet been studied theoretically. Our theoretical model provides a simple
description of neuronal electrical activity that takes into account the tonic
activity of extrasynaptic NMDA receptors and a cytosolic calcium compartment.
We show that calcium influx mediated by the tonic activity of NMDA-R can be
coupled directly to the activation of calcium-activated potassium channels,
resulting in an overall inhibitory effect on neuronal excitability.
Furthermore, the presence of tonic NMDA-R activity promotes bistability in
electrical activity by dramatically increasing the stimulus interval where both
a stable steady state and repetitive firing can coexist. These results could
provide an intrinsic mechanism for the constitution of memory traces in
neuronal circuits. They also shed light on the way by which $\beta$-amyloids
can alter neuronal activity when interfering with NMDA-R in Alzheimer's disease
and cerebral ischemia.
| [
{
"created": "Wed, 4 Jul 2018 09:17:50 GMT",
"version": "v1"
},
{
"created": "Fri, 31 Jan 2020 10:21:54 GMT",
"version": "v2"
}
] | 2020-02-03 | [
[
"Gall",
"David",
""
],
[
"Dupont",
"Geneviève",
""
]
] | NMDA receptors (NMDA-R) typically contribute to excitatory synaptic transmission in the central nervous system. While calcium influx through NMDA-R plays a critical role in synaptic plasticity, experimental evidence indicates that NMDAR-mediated calcium influx also modifies neuronal excitability through the activation of calcium-activated potassium channels. This mechanism has not yet been studied theoretically. Our theoretical model provides a simple description of neuronal electrical activity that takes into account the tonic activity of extrasynaptic NMDA receptors and a cytosolic calcium compartment. We show that calcium influx mediated by the tonic activity of NMDA-R can be coupled directly to the activation of calcium-activated potassium channels, resulting in an overall inhibitory effect on neuronal excitability. Furthermore, the presence of tonic NMDA-R activity promotes bistability in electrical activity by dramatically increasing the stimulus interval where both a stable steady state and repetitive firing can coexist. These results could provide an intrinsic mechanism for the constitution of memory traces in neuronal circuits. They also shed light on the way by which $\beta$-amyloids can alter neuronal activity when interfering with NMDA-R in Alzheimer's disease and cerebral ischemia. |
2111.03565 | Cl\'audia Arag\~ao | Rita Teod\'osio, Cl\'audia Arag\~ao, Rita Colen, Raquel Carrilho,
Jorge Dias, Sofia Engrola | A nutritional strategy to promote gilthead seabream performance under
low temperatures | null | Aquaculture 537: 736494 (2021) | 10.1016/j.aquaculture.2021.736494 | null | q-bio.TO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Gilthead seabream (Sparus aurata) is vulnerable to low water temperature,
which may occur in Southern Europe and the Mediterranean region during winter.
Fish are poikilothermic animals, so feed intake, digestion, metabolism and
ultimately growth are affected by water temperature. This study aimed to
evaluate growth performance, feed utilisation, nutrient apparent digestibility,
and N losses to the environment in seabream juveniles reared under low
temperature (13 degrees Celsius). Three isolipid and isoenergetic diets were
formulated: a commercial-like diet (COM) with 44% crude protein and 27.5%
fishmeal; and 2 diets with 42% CP (ECO and ECOSup), reduced FM inclusion, and
15% poultry meal. ECOSup diet was supplemented with a mix of feed additives
intended to promote fish growth and feed intake. The ECO diets presented lower
production costs than the COM diet and included more sustainable ingredients.
Seabream juveniles (154.5 g) were randomly assigned to triplicate tanks and fed
the diets for 84 days. Fish fed the ECOSup and COM diets attained a similar
final body weight. ECOSup fed fish presented significantly higher HSI than COM
fed fish, probably due to higher hepatic glycogen reserves. The VSI of ECOSup
fed fish was significantly lower compared to COM fed fish, which is a positive
achievement from a consumer point of view. Nutrient digestibility was similar
in ECOSup and COM diets. Feeding fish with the ECO diets resulted in lower
faecal N losses when compared to COM fed fish. Feeding seabream with an
eco-friendly diet with a mix of feed additives promoted growth, improved fish
nutritional status and minimised N losses to the environment whilst lowering
production costs. Nutritional strategies that ultimately promote feed intake
and diet utilisation are valuable tools that may help condition fish to
sustain growth even under adverse conditions.
| [
{
"created": "Fri, 5 Nov 2021 15:41:59 GMT",
"version": "v1"
},
{
"created": "Wed, 10 Nov 2021 15:26:59 GMT",
"version": "v2"
}
] | 2021-11-11 | [
[
"Teodósio",
"Rita",
""
],
[
"Aragão",
"Cláudia",
""
],
[
"Colen",
"Rita",
""
],
[
"Carrilho",
"Raquel",
""
],
[
"Dias",
"Jorge",
""
],
[
"Engrola",
"Sofia",
""
]
] | Gilthead seabream (Sparus aurata) is vulnerable to low water temperature, which may occur in Southern Europe and the Mediterranean region during winter. Fish are poikilothermic animals, so feed intake, digestion, metabolism and ultimately growth are affected by water temperature. This study aimed to evaluate growth performance, feed utilisation, nutrient apparent digestibility, and N losses to the environment in seabream juveniles reared under low temperature (13 degrees Celsius). Three isolipid and isoenergetic diets were formulated: a commercial-like diet (COM) with 44% crude protein and 27.5% fishmeal; and 2 diets with 42% CP (ECO and ECOSup), reduced FM inclusion, and 15% poultry meal. ECOSup diet was supplemented with a mix of feed additives intended to promote fish growth and feed intake. The ECO diets presented lower production costs than the COM diet and included more sustainable ingredients. Seabream juveniles (154.5 g) were randomly assigned to triplicate tanks and fed the diets for 84 days. Fish fed the ECOSup and COM diets attained a similar final body weight. ECOSup fed fish presented significantly higher HSI than COM fed fish, probably due to higher hepatic glycogen reserves. The VSI of ECOSup fed fish was significantly lower compared to COM fed fish, which is a positive achievement from a consumer point of view. Nutrient digestibility was similar in ECOSup and COM diets. Feeding fish with the ECO diets resulted in lower faecal N losses when compared to COM fed fish. Feeding seabream with an eco-friendly diet with a mix of feed additives promoted growth, improved fish nutritional status and minimised N losses to the environment whilst lowering production costs. Nutritional strategies that ultimately promote feed intake and diet utilisation are valuable tools that may help condition fish to sustain growth even under adverse conditions. |
1506.06717 | Dante Chialvo | Dante R. Chialvo, Ana Maria Gonzalez Torrado, Ewa Gudowska-Nowak,
Jeremi K. Ochab, Pedro Montoya, Maciej A. Nowak and Enzo Tagliazucchi | How we move is universal: scaling in the average shape of human activity | Communicated to the Granada Seminar, "Physics Meets the Social
Sciences: Emergent cooperative phenomena, from bacterial to human group
behavior". June 14-19, 2015. La Herradura, Spain | Papers in Physics, vol. 7, art. 070017 (2015) | 10.4279/PIP.070017 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human motor activity is constrained by the rhythmicity of the 24 hours
circadian cycle, including the usual 12-15 hours sleep-wake cycle. However,
activity fluctuations also appear over a wide range of temporal scales, from
days to a few seconds, resulting from the concatenation of a myriad of
individual smaller motor events. Furthermore, individuals present different
propensity to wakefulness and thus to motor activity throughout the circadian
cycle. Are activity fluctuations across temporal scales intrinsically
different, or is there a universal description encompassing them? Is this
description also universal across individuals, considering the aforementioned
variability? Here we establish the presence of universality in motor activity
fluctuations based on the empirical study of a month of continuous wristwatch
accelerometer recordings. We study the scaling of average fluctuations across
temporal scales and determine a universal law characterized by critical
exponents $\alpha$, $\tau$ and $1/{\mu}$. Results are highly reminiscent of the
universality described for the average shape of avalanches in systems
exhibiting crackling noise. Beyond its theoretical relevance, the present
results can be important for developing objective markers of healthy as well as
pathological human motor behavior.
| [
{
"created": "Mon, 22 Jun 2015 18:58:01 GMT",
"version": "v1"
},
{
"created": "Sun, 22 Nov 2015 11:50:07 GMT",
"version": "v2"
}
] | 2015-11-24 | [
[
"Chialvo",
"Dante R.",
""
],
[
"Torrado",
"Ana Maria Gonzalez",
""
],
[
"Gudowska-Nowak",
"Ewa",
""
],
[
"Ochab",
"Jeremi K.",
""
],
[
"Montoya",
"Pedro",
""
],
[
"Nowak",
"Maciej A.",
""
],
[
"Tagliazucchi",
"Enzo",
""
]
] | Human motor activity is constrained by the rhythmicity of the 24 hours circadian cycle, including the usual 12-15 hours sleep-wake cycle. However, activity fluctuations also appear over a wide range of temporal scales, from days to a few seconds, resulting from the concatenation of a myriad of individual smaller motor events. Furthermore, individuals present different propensity to wakefulness and thus to motor activity throughout the circadian cycle. Are activity fluctuations across temporal scales intrinsically different, or is there a universal description encompassing them? Is this description also universal across individuals, considering the aforementioned variability? Here we establish the presence of universality in motor activity fluctuations based on the empirical study of a month of continuous wristwatch accelerometer recordings. We study the scaling of average fluctuations across temporal scales and determine a universal law characterized by critical exponents $\alpha$, $\tau$ and $1/{\mu}$. Results are highly reminiscent of the universality described for the average shape of avalanches in systems exhibiting crackling noise. Beyond its theoretical relevance, the present results can be important for developing objective markers of healthy as well as pathological human motor behavior. |
2307.00858 | Zijian Dong | Zijian Dong, Yilei Wu, Yu Xiao, Joanna Su Xian Chong, Yueming Jin,
Juan Helen Zhou | Beyond the Snapshot: Brain Tokenized Graph Transformer for Longitudinal
Brain Functional Connectome Embedding | MICCAI 2023 | null | null | null | q-bio.NC cs.LG eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Under the framework of network-based neurodegeneration, brain functional
connectome (FC)-based Graph Neural Networks (GNN) have emerged as a valuable
tool for the diagnosis and prognosis of neurodegenerative diseases such as
Alzheimer's disease (AD). However, these models are tailored for brain FC at a
single time point instead of characterizing FC trajectory. Discerning how FC
evolves with disease progression, particularly at the predementia stages such
as cognitively normal individuals with amyloid deposition or individuals with
mild cognitive impairment (MCI), is crucial for delineating disease spreading
patterns and developing effective strategies to slow down or even halt disease
advancement. In this work, we proposed the first interpretable framework for
brain FC trajectory embedding with application to neurodegenerative disease
diagnosis and prognosis, namely Brain Tokenized Graph Transformer (Brain
TokenGT). It consists of two modules: 1) Graph Invariant and Variant Embedding
(GIVE) for generation of node and spatio-temporal edge embeddings, which were
tokenized for downstream processing; 2) Brain Informed Graph Transformer
Readout (BIGTR) which augments previous tokens with trainable type identifiers
and non-trainable node identifiers and feeds them into a standard transformer
encoder to readout. We conducted extensive experiments on two public
longitudinal fMRI datasets of the AD continuum for three tasks, including
differentiating MCI from controls, predicting dementia conversion in MCI, and
classification of amyloid positive or negative cognitively normal individuals.
Based on brain FC trajectory, the proposed Brain TokenGT approach outperformed
all the other benchmark models and at the same time provided excellent
interpretability. The code is available at
https://github.com/ZijianD/Brain-TokenGT.git
| [
{
"created": "Mon, 3 Jul 2023 08:57:30 GMT",
"version": "v1"
},
{
"created": "Thu, 13 Jul 2023 03:29:05 GMT",
"version": "v2"
}
] | 2023-07-14 | [
[
"Dong",
"Zijian",
""
],
[
"Wu",
"Yilei",
""
],
[
"Xiao",
"Yu",
""
],
[
"Chong",
"Joanna Su Xian",
""
],
[
"Jin",
"Yueming",
""
],
[
"Zhou",
"Juan Helen",
""
]
] | Under the framework of network-based neurodegeneration, brain functional connectome (FC)-based Graph Neural Networks (GNN) have emerged as a valuable tool for the diagnosis and prognosis of neurodegenerative diseases such as Alzheimer's disease (AD). However, these models are tailored for brain FC at a single time point instead of characterizing FC trajectory. Discerning how FC evolves with disease progression, particularly at the predementia stages such as cognitively normal individuals with amyloid deposition or individuals with mild cognitive impairment (MCI), is crucial for delineating disease spreading patterns and developing effective strategies to slow down or even halt disease advancement. In this work, we proposed the first interpretable framework for brain FC trajectory embedding with application to neurodegenerative disease diagnosis and prognosis, namely Brain Tokenized Graph Transformer (Brain TokenGT). It consists of two modules: 1) Graph Invariant and Variant Embedding (GIVE) for generation of node and spatio-temporal edge embeddings, which were tokenized for downstream processing; 2) Brain Informed Graph Transformer Readout (BIGTR) which augments previous tokens with trainable type identifiers and non-trainable node identifiers and feeds them into a standard transformer encoder to readout. We conducted extensive experiments on two public longitudinal fMRI datasets of the AD continuum for three tasks, including differentiating MCI from controls, predicting dementia conversion in MCI, and classification of amyloid positive or negative cognitively normal individuals. Based on brain FC trajectory, the proposed Brain TokenGT approach outperformed all the other benchmark models and at the same time provided excellent interpretability. The code is available at https://github.com/ZijianD/Brain-TokenGT.git |
1901.05051 | Pierre-Philippe Dechant | Pierre-Philippe Dechant and Yang-Hui He | Machine-learning a virus assembly fitness landscape | 13 pages, 4 figures | PLoS ONE 16(5): e0250227 (2021) | 10.1371/journal.pone.0250227 | null | q-bio.BM cs.LG q-bio.QM stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Realistic evolutionary fitness landscapes are notoriously difficult to
construct. A recent cutting-edge model of virus assembly consists of a
dodecahedral capsid with $12$ corresponding packaging signals in three affinity
bands. This whole genome/phenotype space consisting of $3^{12}$ genomes has
been explored via computationally expensive stochastic assembly models, giving
a fitness landscape in terms of the assembly efficiency. Using the latest
machine-learning techniques by establishing a neural network, we show that the
intensive computation can be short-circuited in a matter of minutes to
astounding accuracy.
| [
{
"created": "Sun, 13 Jan 2019 12:17:02 GMT",
"version": "v1"
}
] | 2021-06-22 | [
[
"Dechant",
"Pierre-Philippe",
""
],
[
"He",
"Yang-Hui",
""
]
] | Realistic evolutionary fitness landscapes are notoriously difficult to construct. A recent cutting-edge model of virus assembly consists of a dodecahedral capsid with $12$ corresponding packaging signals in three affinity bands. This whole genome/phenotype space consisting of $3^{12}$ genomes has been explored via computationally expensive stochastic assembly models, giving a fitness landscape in terms of the assembly efficiency. Using the latest machine-learning techniques by establishing a neural network, we show that the intensive computation can be short-circuited in a matter of minutes to astounding accuracy. |
2103.06578 | Weifeng Li | Yanmei Yang, Yunju Zhang, Yuanyuan Qu, Xuewei Liu, Mingwen Zhao,
Yuguang Mu, Weifeng Li | Quantitative Interpretations of Energetic Features and Key Residues at
SARS Coronavirus Spike Receptor-Binding Domain and ACE2 Receptor Interface | 12 pages, 4 figures, 1 table | Nanoscale, 2021 | 10.1039/D1NR01672E | null | q-bio.BM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The wide spread of coronavirus disease 2019 (COVID-19) has led to the declaration of a global
health emergency. As one of the most important targets for antibody and drug
developments, Spike RBD-ACE2 interface has received extensive attention. Here,
using molecular dynamics simulations, we explicitly evaluated the binding
energetic features of the RBD-ACE2 complex of both SARS-CoV and SARS-CoV-2 to
find the key residues. Although the overall ACE2-binding mode of the SARS-CoV-2
RBD is nearly identical to that of the SARS-CoV RBD, the difference in binding
affinity is as large as -16.35 kcal/mol. Energy decomposition analyses
identified three binding patches in the SARS-CoV-2 RBD and eleven key residues
(Phe486, Tyr505, Asn501, Tyr489, Gln493, Leu455, etc.) which are believed to
be the main targets for drug development. The dominating forces are from van
der Waals attractions and dehydration of these residues. It is also worth
mentioning that we found seven mutational sites (Lys417, Leu455, Ala475, Gly476,
Glu484, Gln498 and Val503) on SARS-CoV-2 which unexpectedly weakened the
RBD-ACE2 binding. Very interestingly, the most repulsive residue at the
RBD-ACE2 interface (E484) is found to be mutated in the latest UK variant,
B1.1.7, causing complete virus neutralization escape from highly neutralizing
COVID-19 convalescent plasma. Our present results indicate that at least from
the energetic point of view such E484 mutation may have beneficial effects on
ACE2 binding. The present study provides a systematic understanding, from the
energetic point of view, of the binding features of SARS-CoV-2 RBD with ACE2
acceptor. We hope that the present findings of three binding patches, key
attracting residues and unexpected mutational sites can provide insights to the
design of SARS-CoV-2 drugs and identification of cross-active antibodies.
| [
{
"created": "Thu, 11 Mar 2021 10:06:34 GMT",
"version": "v1"
}
] | 2021-05-14 | [
[
"Yang",
"Yanmei",
""
],
[
"Zhang",
"Yunju",
""
],
[
"Qu",
"Yuanyuan",
""
],
[
"Liu",
"Xuewei",
""
],
[
"Zhao",
"Mingwen",
""
],
[
"Mu",
"Yuguang",
""
],
[
"Li",
"Weifeng",
""
]
] | The wide spread of coronavirus disease 2019 (COVID-19) has led to the declaration of a global health emergency. As one of the most important targets for antibody and drug developments, Spike RBD-ACE2 interface has received extensive attention. Here, using molecular dynamics simulations, we explicitly evaluated the binding energetic features of the RBD-ACE2 complex of both SARS-CoV and SARS-CoV-2 to find the key residues. Although the overall ACE2-binding mode of the SARS-CoV-2 RBD is nearly identical to that of the SARS-CoV RBD, the difference in binding affinity is as large as -16.35 kcal/mol. Energy decomposition analyses identified three binding patches in the SARS-CoV-2 RBD and eleven key residues (Phe486, Tyr505, Asn501, Tyr489, Gln493, Leu455, etc.) which are believed to be the main targets for drug development. The dominating forces are from van der Waals attractions and dehydration of these residues. It is also worth mentioning that we found seven mutational sites (Lys417, Leu455, Ala475, Gly476, Glu484, Gln498 and Val503) on SARS-CoV-2 which unexpectedly weakened the RBD-ACE2 binding. Very interestingly, the most repulsive residue at the RBD-ACE2 interface (E484) is found to be mutated in the latest UK variant, B1.1.7, causing complete virus neutralization escape from highly neutralizing COVID-19 convalescent plasma. Our present results indicate that at least from the energetic point of view such E484 mutation may have beneficial effects on ACE2 binding. The present study provides a systematic understanding, from the energetic point of view, of the binding features of SARS-CoV-2 RBD with ACE2 acceptor. We hope that the present findings of three binding patches, key attracting residues and unexpected mutational sites can provide insights to the design of SARS-CoV-2 drugs and identification of cross-active antibodies. |
2308.05131 | Rosalia Ferraro | Rosalia Ferraro, Sergio Caserta, Stefano Guido | A low-cost, user-friendly rheo-optical compression assay to measure
mechanical properties of cell spheroids in standard cell culture plates | 32 pages, 7 figures | null | null | null | q-bio.QM physics.bio-ph q-bio.CB | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The mechanical characterization of cell spheroids, one of the most widely
used 3D biology models in vitro, is a hotspot of current research on the role
played by the mechanical response of cells and tissues. The techniques proposed
so far in the literature, while providing important scientific insights,
require a specialized equipment and technical skills which are not usually
available in cell biology facilities. Here, we present an innovative
rheo-optical compression assay based on microscopy glass coverslips as the load
applied to cell spheroids in standard cell culture plates and on image
acquisition with an optical microscope or even a smartphone equipped with
adequate magnification lenses. Mechanical properties can be simply obtained by
correlating the applied load to the deformation of cell spheroids measured by
image analysis. The low-cost, user-friendly features of the proposed technique
can boost mechanobiology research making it easily affordable to any biomedical
lab equipped with cell culture facilities.
| [
{
"created": "Wed, 9 Aug 2023 12:40:58 GMT",
"version": "v1"
}
] | 2023-08-11 | [
[
"Ferraro",
"Rosalia",
""
],
[
"Caserta",
"Sergio",
""
],
[
"Guido",
"Stefano",
""
]
] | The mechanical characterization of cell spheroids, one of the most widely used 3D biology models in vitro, is a hotspot of current research on the role played by the mechanical response of cells and tissues. The techniques proposed so far in the literature, while providing important scientific insights, require a specialized equipment and technical skills which are not usually available in cell biology facilities. Here, we present an innovative rheo-optical compression assay based on microscopy glass coverslips as the load applied to cell spheroids in standard cell culture plates and on image acquisition with an optical microscope or even a smartphone equipped with adequate magnification lenses. Mechanical properties can be simply obtained by correlating the applied load to the deformation of cell spheroids measured by image analysis. The low-cost, user-friendly features of the proposed technique can boost mechanobiology research making it easily affordable to any biomedical lab equipped with cell culture facilities. |
2308.03532 | Ioannis Karafyllidis G. | G. D. Varsamis, I. G. Karafyllidis, K. M. Gilkes, U. Arranz, R.
Martin-Cuevas, G. Calleja, J. Wong, H. C. Jessen, P. Dimitrakis, P. Kolovos,
R. Sandaltzopoulos | Quantum algorithm for de novo DNA sequence assembly based on quantum
walks on graphs | 13 pages, 7 figures | null | null | null | q-bio.BM quant-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | De novo DNA sequence assembly is based on finding paths in overlap graphs,
which is an NP-hard problem. We developed a quantum algorithm for de novo
assembly based on quantum walks in graphs. The overlap graph is partitioned
repeatedly to smaller graphs that form a hierarchical structure. We use quantum
walks to find paths in low rank graphs and a quantum algorithm that finds
Hamiltonian paths in high hierarchical rank. We tested the partitioning quantum
algorithm, as well as the quantum algorithm that finds Hamiltonian paths in
high hierarchical rank and confirmed its correct operation using Qiskit. We
developed a custom simulation for quantum walks to search for paths in low rank
graphs. The approach described in this paper may serve as a basis for the
development of efficient quantum algorithms that solve the de novo DNA assembly
problem.
| [
{
"created": "Mon, 7 Aug 2023 12:30:25 GMT",
"version": "v1"
}
] | 2023-08-08 | [
[
"Varsamis",
"G. D.",
""
],
[
"Karafyllidis",
"I. G.",
""
],
[
"Gilkes",
"K. M.",
""
],
[
"Arranz",
"U.",
""
],
[
"Martin-Cuevas",
"R.",
""
],
[
"Calleja",
"G.",
""
],
[
"Wong",
"J.",
""
],
[
"Jessen",
"H. C.",
""
],
[
"Dimitrakis",
"P.",
""
],
[
"Kolovos",
"P.",
""
],
[
"Sandaltzopoulos",
"R.",
""
]
] | De novo DNA sequence assembly is based on finding paths in overlap graphs, which is an NP-hard problem. We developed a quantum algorithm for de novo assembly based on quantum walks in graphs. The overlap graph is partitioned repeatedly to smaller graphs that form a hierarchical structure. We use quantum walks to find paths in low rank graphs and a quantum algorithm that finds Hamiltonian paths in high hierarchical rank. We tested the partitioning quantum algorithm, as well as the quantum algorithm that finds Hamiltonian paths in high hierarchical rank and confirmed its correct operation using Qiskit. We developed a custom simulation for quantum walks to search for paths in low rank graphs. The approach described in this paper may serve as a basis for the development of efficient quantum algorithms that solve the de novo DNA assembly problem. |
0811.3538 | Arne Traulsen | Arne Traulsen and Christoph Hauert | Stochastic evolutionary game dynamics | To appear in "Reviews of Nonlinear Dynamics and Complexity" Vol. II,
Wiley-VCH, 2009, edited by H.-G. Schuster | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this review, we summarize recent developments in stochastic evolutionary
game dynamics of finite populations.
| [
{
"created": "Fri, 21 Nov 2008 13:57:52 GMT",
"version": "v1"
}
] | 2008-11-24 | [
[
"Traulsen",
"Arne",
""
],
[
"Hauert",
"Christoph",
""
]
] | In this review, we summarize recent developments in stochastic evolutionary game dynamics of finite populations. |
2308.14135 | Ilya Timofeyev | Yingxue Su, Brett Geiger, Ilya Timofeyev, Andreas Mang, Robert
Azencott | Rare Events Analysis and Computation for Stochastic Evolution of
Bacterial Populations | null | null | null | null | q-bio.PE math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we develop a computational approach for computing most likely
trajectories describing rare events that correspond to the emergence of
non-dominant genotypes. This work is based on the large deviations approach for
discrete Markov chains describing the genetic evolution of large bacterial
populations. We demonstrate that a gradient descent algorithm developed in this
paper results in the fast and accurate computation of most-likely trajectories
for a large number of bacterial genotypes. We supplement our analysis with
extensive numerical simulations demonstrating the computational advantage of
the designed gradient descent algorithm over other, more simplified,
approaches.
| [
{
"created": "Sun, 27 Aug 2023 15:29:30 GMT",
"version": "v1"
}
] | 2023-08-29 | [
[
"Su",
"Yingxue",
""
],
[
"Geiger",
"Brett",
""
],
[
"Timofeyev",
"Ilya",
""
],
[
"Mang",
"Andreas",
""
],
[
"Azencott",
"Robert",
""
]
] | In this paper, we develop a computational approach for computing most likely trajectories describing rare events that correspond to the emergence of non-dominant genotypes. This work is based on the large deviations approach for discrete Markov chains describing the genetic evolution of large bacterial populations. We demonstrate that a gradient descent algorithm developed in this paper results in the fast and accurate computation of most-likely trajectories for a large number of bacterial genotypes. We supplement our analysis with extensive numerical simulations demonstrating the computational advantage of the designed gradient descent algorithm over other, more simplified, approaches. |
2304.01975 | Paterne Gahungu | Steve Sibomanaa, Kelly Joelle Gatore Sinigirira, Paterne Gahungu,
David Niyukuri | Mathematical Model for Transmission Dynamics of Tuberculosis in Burundi | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Tuberculosis (TB) is among the main public health challenges in Burundi. The
literature lacks mathematical models for key parameter estimates of TB
transmission dynamics in Burundi. In this paper, the
susceptible-exposed-infected-recovered (SEIR) model is used to investigate the
transmission dynamics of tuberculosis in Burundi. Using the next generation
method, we calculated the basic reproduction number, R0. The model is
demonstrated to have a disease-free equilibrium (DFE) that is locally and
globally asymptotically stable. When the corresponding reproduction threshold
quantity approaches unity, the model enters an endemic equilibrium (EE). That
means, the disease can be controlled through different interventions in
Burundi. A sensitivity analysis of the model parameters was also investigated.
It shows that the progression rate from latent to becoming infectious had the
highest positive sensitivity, which means that R0 increases and decreases
proportionally with an increase and a decrease of that progression rate.
| [
{
"created": "Tue, 4 Apr 2023 17:32:01 GMT",
"version": "v1"
}
] | 2023-04-05 | [
[
"Sibomanaa",
"Steve",
""
],
[
"Sinigirira",
"Kelly Joelle Gatore",
""
],
[
"Gahungu",
"Paterne",
""
],
[
"Niyukuri",
"David",
""
]
] | Tuberculosis (TB) is among the main public health challenges in Burundi. The literature lacks mathematical models for key parameter estimates of TB transmission dynamics in Burundi. In this paper, the susceptible-exposed-infected-recovered (SEIR) model is used to investigate the transmission dynamics of tuberculosis in Burundi. Using the next generation method, we calculated the basic reproduction number, R0. The model is demonstrated to have a disease-free equilibrium (DFE) that is locally and globally asymptotically stable. When the corresponding reproduction threshold quantity approaches unity, the model enters an endemic equilibrium (EE). That means, the disease can be controlled through different interventions in Burundi. A sensitivity analysis of the model parameters was also investigated. It shows that the progression rate from latent to becoming infectious had the highest positive sensitivity, which means that R0 increases and decreases proportionally with an increase and a decrease of that progression rate. |
0705.3759 | Alain Destexhe | Claude Bedard and Alain Destexhe | A modified cable formalism for modeling neuronal membranes at high
frequencies | To appear in Biophysical Journal; Submitted on May 25, 2007; accepted
on Sept 11th, 2007 | Biophysical Journal 2008 Feb 15;94(4):1133-43. Epub 2007 Oct 5 | 10.1529/biophysj.107.113571 | null | q-bio.NC | null | Intracellular recordings of cortical neurons in vivo display intense
subthreshold membrane potential (Vm) activity. The power spectral density (PSD)
of the Vm displays a power-law structure at high frequencies (>50 Hz) with a
slope of about -2.5. This type of frequency scaling cannot be accounted for by
traditional models, as either single-compartment models or models based on
reconstructed cell morphologies display a frequency scaling with a slope close
to -4. This slope is due to the fact that the membrane resistance is
"short-circuited" by the capacitance for high frequencies, a situation which
may not be realistic. Here, we integrate non-ideal capacitors in cable
equations to reflect the fact that the capacitance cannot be charged
instantaneously. We show that the resulting "non-ideal" cable model can be
solved analytically using Fourier transforms. Numerical simulations using a
ball-and-stick model yield membrane potential activity with similar frequency
scaling as in the experiments. We also discuss the consequences of using
non-ideal capacitors on other cellular properties such as the transmission of
high frequencies, which is boosted in non-ideal cables, or voltage attenuation
in dendrites. These results suggest that cable equations based on non-ideal
capacitors should be used to capture the behavior of neuronal membranes at high
frequencies.
| [
{
"created": "Fri, 25 May 2007 12:34:43 GMT",
"version": "v1"
},
{
"created": "Mon, 20 Aug 2007 08:30:58 GMT",
"version": "v2"
},
{
"created": "Tue, 18 Sep 2007 18:57:14 GMT",
"version": "v3"
}
] | 2009-11-13 | [
[
"Bedard",
"Claude",
""
],
[
"Destexhe",
"Alain",
""
]
] | Intracellular recordings of cortical neurons in vivo display intense subthreshold membrane potential (Vm) activity. The power spectral density (PSD) of the Vm displays a power-law structure at high frequencies (>50 Hz) with a slope of about -2.5. This type of frequency scaling cannot be accounted for by traditional models, as either single-compartment models or models based on reconstructed cell morphologies display a frequency scaling with a slope close to -4. This slope is due to the fact that the membrane resistance is "short-circuited" by the capacitance for high frequencies, a situation which may not be realistic. Here, we integrate non-ideal capacitors in cable equations to reflect the fact that the capacitance cannot be charged instantaneously. We show that the resulting "non-ideal" cable model can be solved analytically using Fourier transforms. Numerical simulations using a ball-and-stick model yield membrane potential activity with similar frequency scaling as in the experiments. We also discuss the consequences of using non-ideal capacitors on other cellular properties such as the transmission of high frequencies, which is boosted in non-ideal cables, or voltage attenuation in dendrites. These results suggest that cable equations based on non-ideal capacitors should be used to capture the behavior of neuronal membranes at high frequencies. |
1105.2823 | Dhagash Mehta | Esteban A. Hernandez-Vargas, Dhagash Mehta, Richard H. Middleton | Towards Modeling HIV Long Term Behavior | Accepted in proceedings of 18th IFAC World Congress, Milan, August 28
- September 2, 2011 (Invited Contribution: Modeling methods and clinical
applications in medical and biological systems I). 6 pages, 7 figures | null | null | null | q-bio.CB math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The precise mechanism that causes HIV infection to progress to AIDS is still
unknown. This paper presents a mathematical model which is able to predict the
entire trajectory of the HIV/AIDS dynamics, then a possible explanation for
this progression is examined. A dynamical analysis of this model reveals a set
of parameters which may produce two real equilibria in the model. One
equilibrium is stable and represents those individuals who have been living
with HIV for at least 7 to 9 years, and do not develop AIDS. The other one is
unstable and represents those patients who developed AIDS in an average period
of 10 years. However, further work is needed since the proposed model is
sensitive to parameter variations.
| [
{
"created": "Fri, 13 May 2011 20:00:11 GMT",
"version": "v1"
}
] | 2015-03-19 | [
[
"Hernandez-Vargas",
"Esteban A.",
""
],
[
"Mehta",
"Dhagash",
""
],
[
"Middleton",
"Richard H.",
""
]
] | The precise mechanism that causes HIV infection to progress to AIDS is still unknown. This paper presents a mathematical model which is able to predict the entire trajectory of the HIV/AIDS dynamics, then a possible explanation for this progression is examined. A dynamical analysis of this model reveals a set of parameters which may produce two real equilibria in the model. One equilibrium is stable and represents those individuals who have been living with HIV for at least 7 to 9 years, and do not develop AIDS. The other one is unstable and represents those patients who developed AIDS in an average period of 10 years. However, further work is needed since the proposed model is sensitive to parameter variations. |
2101.03147 | Enrico Borriello Dr. | William Bains, Enrico Borriello, Dirk Schulze-Makuch | Evolution of default genetic control mechanisms | 25 pages, 8 figures, 1 table | null | 10.1371/journal.pone.0251568 | null | q-bio.GN | http://creativecommons.org/licenses/by/4.0/ | We present a model of the evolution of control systems in a genome under
environmental constraints. The model conceptually follows the Jacob and Monod
model of gene control. Genes contain control elements which respond to the
internal state of the cell as well as the environment to control expression of
a coding region. Control and coding regions evolve to maximize a fitness
function between expressed coding sequences and the environment. 118 runs of
the model run to an average of 1.4 x 10^6 `generations' each with a range of
starting parameters probed the conditions under which genomes evolved a
`default style' of control. Unexpectedly, the control logic that evolved was
not significantly correlated to the complexity of the environment. Genetic
logic was strongly correlated with genome complexity and with the fraction of
genes active in the cell at any one time. More complex genomes correlated with
the evolution of genetic controls in which genes were active (`default on'),
and a low fraction of genes being expressed correlated with a genetic logic in
which genes were biased to being inactive unless positively activated (`default
off' logic). We discuss how this might relate to the evolution of the complex
eukaryotic genome, which operates in a `default off' mode.
| [
{
"created": "Fri, 8 Jan 2021 18:23:13 GMT",
"version": "v1"
}
] | 2021-06-09 | [
[
"Bains",
"William",
""
],
[
"Borriello",
"Enrico",
""
],
[
"Schulze-Makuch",
"Dirk",
""
]
] | We present a model of the evolution of control systems in a genome under environmental constraints. The model conceptually follows the Jacob and Monod model of gene control. Genes contain control elements which respond to the internal state of the cell as well as the environment to control expression of a coding region. Control and coding regions evolve to maximize a fitness function between expressed coding sequences and the environment. 118 runs of the model run to an average of 1.4 x 10^6 `generations' each with a range of starting parameters probed the conditions under which genomes evolved a `default style' of control. Unexpectedly, the control logic that evolved was not significantly correlated to the complexity of the environment. Genetic logic was strongly correlated with genome complexity and with the fraction of genes active in the cell at any one time. More complex genomes correlated with the evolution of genetic controls in which genes were active (`default on'), and a low fraction of genes being expressed correlated with a genetic logic in which genes were biased to being inactive unless positively activated (`default off' logic). We discuss how this might relate to the evolution of the complex eukaryotic genome, which operates in a `default off' mode. |