| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
q-bio/0409019
|
Joachim Krug
|
Joachim Krug and Kavita Jain
|
Breaking records in the evolutionary race
|
Proceedings of 8th ICCMSP/Marrakech, to be published in Physica A
|
Physica A 358, 1 (2005)
|
10.1016/j.physa.2005.06.002
| null |
q-bio.PE cond-mat.dis-nn
| null |
We explore some aspects of the relationship between biological evolution
processes and the mathematical theory of records. For Eigen's quasispecies
model with an uncorrelated fitness landscape, we show that the evolutionary
trajectories traced out by a population initially localized at a randomly
chosen point in sequence space can be described in close analogy to record
dynamics, with two complications. First, the increasing number of genotypes
that become available with increasing distance from the starting point implies
that fitness records are more frequent than for the standard case of
independent, identically distributed random variables. Second, fitness records
can be bypassed, which strongly reduces the number of genotypes that take part
in an evolutionary trajectory. For exponential and Gaussian fitness
distributions, this number scales with sequence length $N$ as $\sqrt{N}$, and
it is of order unity for distributions with a power law tail. This is in strong
contrast to the number of records, which is of order $N$ for any fitness
distribution.
|
[
{
"created": "Thu, 16 Sep 2004 15:31:25 GMT",
"version": "v1"
}
] |
2009-11-10
|
[
[
"Krug",
"Joachim",
""
],
[
"Jain",
"Kavita",
""
]
] |
We explore some aspects of the relationship between biological evolution processes and the mathematical theory of records. For Eigen's quasispecies model with an uncorrelated fitness landscape, we show that the evolutionary trajectories traced out by a population initially localized at a randomly chosen point in sequence space can be described in close analogy to record dynamics, with two complications. First, the increasing number of genotypes that become available with increasing distance from the starting point implies that fitness records are more frequent than for the standard case of independent, identically distributed random variables. Second, fitness records can be bypassed, which strongly reduces the number of genotypes that take part in an evolutionary trajectory. For exponential and Gaussian fitness distributions, this number scales with sequence length $N$ as $\sqrt{N}$, and it is of order unity for distributions with a power law tail. This is in strong contrast to the number of records, which is of order $N$ for any fitness distribution.
|
1511.06754
|
Lauren Assour
|
Lauren A. Assour, Nicholas LaRosa, Scott J. Emrich
|
Hot RAD: A Tool for Analysis of Next-Gen RAD Tag Data
| null | null | null | null |
q-bio.GN q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Restriction site Associated DNA (RAD) tagging (also known as RAD-seq, etc.)
is an emerging method for analyzing an organism's genome without completely
sequencing it. This can be applied to a non-model organism without a reference
genome, though this creates the problem of how to begin data analysis on
unmapped and unannotated reads. Our program, Hot RAD, presents a
straightforward and easy-to-use method to take raw Illumina data that has been
RAD tagged and produce consensus contigs or sequence stacks using a distributed
framework, creating a basis on which to begin analyzing an organism's DNA. The
GUI (graphical user interface) element of our tool makes it easy for those not
familiar with the command line to take raw sequence files and produce usable
data in a timely manner.
|
[
{
"created": "Fri, 20 Nov 2015 20:56:45 GMT",
"version": "v1"
}
] |
2015-11-23
|
[
[
"Assour",
"Lauren A.",
""
],
[
"LaRosa",
"Nicholas",
""
],
[
"Emrich",
"Scott J.",
""
]
] |
Restriction site Associated DNA (RAD) tagging (also known as RAD-seq, etc.) is an emerging method for analyzing an organism's genome without completely sequencing it. This can be applied to a non-model organism without a reference genome, though this creates the problem of how to begin data analysis on unmapped and unannotated reads. Our program, Hot RAD, presents a straightforward and easy-to-use method to take raw Illumina data that has been RAD tagged and produce consensus contigs or sequence stacks using a distributed framework, creating a basis on which to begin analyzing an organism's DNA. The GUI (graphical user interface) element of our tool makes it easy for those not familiar with the command line to take raw sequence files and produce usable data in a timely manner.
|
1703.00698
|
Alain Destexhe
|
Yann Zerlaut and Alain Destexhe
|
A mean-field model for conductance-based networks of adaptive
exponential integrate-and-fire neurons
|
21 pages, 7 figures
| null | null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Voltage-sensitive dye imaging (VSDi) has revealed fundamental properties of
neocortical processing at mesoscopic scales. Since VSDi signals report the
average membrane potential, it seems natural to use a mean-field formalism to
model such signals. Here, we investigate a mean-field model of networks of
Adaptive Exponential (AdEx) integrate-and-fire neurons, with conductance-based
synaptic interactions. The AdEx model can capture the spiking response of
different cell types, such as regular-spiking (RS) excitatory neurons and
fast-spiking (FS) inhibitory neurons. We use a Master Equation formalism,
together with a semi-analytic approach to the transfer function of AdEx
neurons. We compare the predictions of this mean-field model to simulated
networks of RS-FS cells, first at the level of the spontaneous activity of the
network, which is well predicted by the mean-field model. Second, we
investigate the response of the network to time-varying external input, and
show that the mean-field model accurately predicts the response time course of
the population. One notable exception was that the "tail" of the response at
long times was not well predicted, because the mean-field does not include
adaptation mechanisms. We conclude that the Master Equation formalism can yield
mean-field models that predict well the behavior of nonlinear networks with
conductance-based interactions and various electrophysiological properties, and
should be a good candidate to model VSDi signals where both excitatory and
inhibitory neurons contribute.
|
[
{
"created": "Thu, 2 Mar 2017 10:19:17 GMT",
"version": "v1"
}
] |
2017-03-03
|
[
[
"Zerlaut",
"Yann",
""
],
[
"Destexhe",
"Alain",
""
]
] |
Voltage-sensitive dye imaging (VSDi) has revealed fundamental properties of neocortical processing at mesoscopic scales. Since VSDi signals report the average membrane potential, it seems natural to use a mean-field formalism to model such signals. Here, we investigate a mean-field model of networks of Adaptive Exponential (AdEx) integrate-and-fire neurons, with conductance-based synaptic interactions. The AdEx model can capture the spiking response of different cell types, such as regular-spiking (RS) excitatory neurons and fast-spiking (FS) inhibitory neurons. We use a Master Equation formalism, together with a semi-analytic approach to the transfer function of AdEx neurons. We compare the predictions of this mean-field model to simulated networks of RS-FS cells, first at the level of the spontaneous activity of the network, which is well predicted by the mean-field model. Second, we investigate the response of the network to time-varying external input, and show that the mean-field model accurately predicts the response time course of the population. One notable exception was that the "tail" of the response at long times was not well predicted, because the mean-field does not include adaptation mechanisms. We conclude that the Master Equation formalism can yield mean-field models that predict well the behavior of nonlinear networks with conductance-based interactions and various electrophysiological properties, and should be a good candidate to model VSDi signals where both excitatory and inhibitory neurons contribute.
|
1710.04400
|
Olha Shchur
|
A. Vidybida, O. Shchur
|
Information reduction in a reverberatory neuronal network through
convergence to complex oscillatory firing patterns
|
15 pages, 8 figures, 4 tables, manuscript accepted by Biosystems.
Content of this work was presented at the Twelfth International Neural Coding
Workshop in Cologne, Germany
|
BioSystems (2017) 161, pp. 24-30
|
10.1016/j.biosystems.2017.07.008
| null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study dynamics of a reverberating neural net by means of computer
simulation. The net, which is composed of 9 leaky integrate-and-fire (LIF)
neurons arranged in a square lattice, is fully connected with interneuronal
communication delay proportional to the corresponding distance. The network is
initially stimulated with different stimuli and then evolves freely. For each
stimulus, in the course of free evolution, activity either dies out completely
or the network converges to a periodic trajectory, which may be different for
different stimuli. The latter is observed for a set of 285,290 initial stimuli
which constitutes 83% of all stimuli applied. By applying each stimulus from
the set, we found 102 different periodic end-states. By analyzing the
trajectories, we conclude that neuronal firing is the necessary prerequisite
for merging different trajectories into a single one, which eventually
transforms into a periodic regime. Observed phenomena of self-organization in
the time domain are discussed as a possible model for processes taking place
during perception. The repetitive firing in the periodic regimes could underpin
memory formation.
|
[
{
"created": "Thu, 12 Oct 2017 08:07:14 GMT",
"version": "v1"
}
] |
2023-06-16
|
[
[
"Vidybida",
"A.",
""
],
[
"Shchur",
"O.",
""
]
] |
We study dynamics of a reverberating neural net by means of computer simulation. The net, which is composed of 9 leaky integrate-and-fire (LIF) neurons arranged in a square lattice, is fully connected with interneuronal communication delay proportional to the corresponding distance. The network is initially stimulated with different stimuli and then evolves freely. For each stimulus, in the course of free evolution, activity either dies out completely or the network converges to a periodic trajectory, which may be different for different stimuli. The latter is observed for a set of 285,290 initial stimuli which constitutes 83% of all stimuli applied. By applying each stimulus from the set, we found 102 different periodic end-states. By analyzing the trajectories, we conclude that neuronal firing is the necessary prerequisite for merging different trajectories into a single one, which eventually transforms into a periodic regime. Observed phenomena of self-organization in the time domain are discussed as a possible model for processes taking place during perception. The repetitive firing in the periodic regimes could underpin memory formation.
|
1905.03972
|
Zhongqi Tian
|
Zhong-Qi Kyle Tian, Douglas Zhou, David Cai
|
Digital System Reconstruction by Pairwise Transfer Entropy
|
10 pages, 4 figures
| null | null | null |
q-bio.QM q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transfer entropy (TE) is an attractive model-free method to detect causality
and infer the structural connectivity of general digital systems. However, it
relies on the high dimensionality of its definition to clearly remove memory
effects and distinguish direct causality from indirect causality, which makes
it almost inoperable in practice. In this work, we use a low-order, pairwise
TE framework with binary data suitably filtered from the recorded signals to
avoid the high-dimensional problem. Under this setting, we find and explain
that the TE values of connected and unconnected pairs differ significantly in
magnitude and can therefore be easily separated by clustering methods. This
phenomenon holds robustly over a wide range of systems and dynamical regimes.
In addition, we find that the TE value is quadratically related to the
coupling strength, which allows us to establish a quantitative mapping between
causal and structural connectivity.
|
[
{
"created": "Fri, 10 May 2019 07:08:33 GMT",
"version": "v1"
}
] |
2019-05-13
|
[
[
"Tian",
"Zhong-Qi Kyle",
""
],
[
"Zhou",
"Douglas",
""
],
[
"Cai",
"David",
""
]
] |
Transfer entropy (TE) is an attractive model-free method to detect causality and infer the structural connectivity of general digital systems. However, it relies on the high dimensionality of its definition to clearly remove memory effects and distinguish direct causality from indirect causality, which makes it almost inoperable in practice. In this work, we use a low-order, pairwise TE framework with binary data suitably filtered from the recorded signals to avoid the high-dimensional problem. Under this setting, we find and explain that the TE values of connected and unconnected pairs differ significantly in magnitude and can therefore be easily separated by clustering methods. This phenomenon holds robustly over a wide range of systems and dynamical regimes. In addition, we find that the TE value is quadratically related to the coupling strength, which allows us to establish a quantitative mapping between causal and structural connectivity.
|
q-bio/0603032
|
Olaf Wolkenhauer
|
Peter Wellstead, Rick Middleton, Olaf Wolkenhauer
|
Feedback Medicine: Control Systems Concepts in Personalised, Predictive
Medicine and Combinatorial Intervention
|
22 pages, 10 figures
| null | null |
27-03-06
|
q-bio.TO q-bio.QM
| null |
In its broadest definition, systems biology is the application of a `systems'
way of thinking about and doing cell biology. By implication, this also invites
us to consider a systems approach in the context of medicine and the treatment
of disease. In particular, the idea that systems biology can form the basis of
a personalised, predictive medicine will require that much closer attention is
paid to the analytic properties of the feedback loops which will be set up by a
personalised approach to healthcare. To emphasize the role that feedback theory
will play in understanding personalised medicine, we use the term feedback
medicine to describe the issues outlined. In these notes we consider feedback
and control systems concepts applied to two important themes in medical systems
biology - personalised medicine and combinatorial intervention. In particular,
we formulate a feedback control interpretation for the administration of
medicine, and relate them to various forms of medical treatment.
|
[
{
"created": "Mon, 27 Mar 2006 10:25:17 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Wellstead",
"Peter",
""
],
[
"Middleton",
"Rick",
""
],
[
"Wolkenhauer",
"Olaf",
""
]
] |
In its broadest definition, systems biology is the application of a `systems' way of thinking about and doing cell biology. By implication, this also invites us to consider a systems approach in the context of medicine and the treatment of disease. In particular, the idea that systems biology can form the basis of a personalised, predictive medicine will require that much closer attention is paid to the analytic properties of the feedback loops which will be set up by a personalised approach to healthcare. To emphasize the role that feedback theory will play in understanding personalised medicine, we use the term feedback medicine to describe the issues outlined. In these notes we consider feedback and control systems concepts applied to two important themes in medical systems biology - personalised medicine and combinatorial intervention. In particular, we formulate a feedback control interpretation for the administration of medicine, and relate them to various forms of medical treatment.
|
1802.09381
|
Jean-Philippe Vert
|
Beyrem Khalfaoui (CBIO), Jean-Philippe Vert (CBIO, DMA)
|
DropLasso: A robust variant of Lasso for single cell RNA-seq data
| null | null | null | null |
q-bio.QM cs.CV q-bio.GN stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Single-cell RNA sequencing (scRNA-seq) is a fast growing approach to measure
the genome-wide transcriptome of many individual cells in parallel, but results
in noisy data with many dropout events. Existing methods to learn molecular
signatures from bulk transcriptomic data may therefore not be adapted to
scRNA-seq data, in order to automatically classify individual cells into
predefined classes. We propose a new method called DropLasso to learn a
molecular signature from scRNA-seq data. DropLasso extends the dropout
regularisation technique, popular in neural network training, to estimate
sparse linear models. It is well adapted to data corrupted by dropout noise,
such as scRNA-seq data, and we clarify how it relates to elastic net
regularisation. We provide promising results on simulated and real scRNA-seq
data, suggesting that DropLasso may be better adapted than standard
regularisations to infer molecular signatures from scRNA-seq data.
|
[
{
"created": "Mon, 26 Feb 2018 15:10:44 GMT",
"version": "v1"
}
] |
2018-02-27
|
[
[
"Khalfaoui",
"Beyrem",
"",
"CBIO"
],
[
"Vert",
"Jean-Philippe",
"",
"CBIO, DMA"
]
] |
Single-cell RNA sequencing (scRNA-seq) is a fast growing approach to measure the genome-wide transcriptome of many individual cells in parallel, but results in noisy data with many dropout events. Existing methods to learn molecular signatures from bulk transcriptomic data may therefore not be adapted to scRNA-seq data, in order to automatically classify individual cells into predefined classes. We propose a new method called DropLasso to learn a molecular signature from scRNA-seq data. DropLasso extends the dropout regularisation technique, popular in neural network training, to estimate sparse linear models. It is well adapted to data corrupted by dropout noise, such as scRNA-seq data, and we clarify how it relates to elastic net regularisation. We provide promising results on simulated and real scRNA-seq data, suggesting that DropLasso may be better adapted than standard regularisations to infer molecular signatures from scRNA-seq data.
|
2009.08491
|
Qiong Wang
|
Qiong Wang, Tengteng Tang, David Cooper, Felipe Eltit, Peter Fratzl,
Pierre Guy and Rizhi Wang
|
Globular structure of the hypermineralized tissue in human femoral neck
| null |
Journal of Structural Biology,Volume 212, Issue 2, 1 November
2020, 107606
|
10.1016/j.jsb.2020.107606
| null |
q-bio.TO cond-mat.mtrl-sci
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bone becomes more fragile with ageing. Among many structural changes, a thin
layer of highly mineralized and brittle tissue covers part of the external
surface of the thin femoral neck cortex in older people and has been proposed
to increase hip fragility. However, there have been very limited reports on
this hypermineralized tissue in the femoral neck, especially on its
ultrastructure. Such information is critical to understanding both the
mineralization process and its contributions to hip fracture. Here, we use
multiple advanced techniques to characterize the ultrastructure of the
hypermineralized tissue in the neck across various length scales. Synchrotron
radiation micro-CT found larger but less densely distributed cellular lacunae
in hypermineralized tissue than in lamellar bone. When examined under FIB-SEM,
the hypermineralized tissue was mainly composed of mineral globules with sizes
varying from submicron to a few microns. Nano-sized channels were present
within the mineral globules and oriented with the surrounding organic matrix.
Transmission electron microscopy showed that the apatite inside the globules
was poorly crystalline, while the apatite at the boundaries between the
globules had a well-defined lattice structure with crystallinity similar to
the apatite mineral in lamellar bone. No preferred mineral orientation was
observed either inside each globule or at the boundaries. Collectively, we
conclude based on
these new observations that the hypermineralized tissue is non-lamellar and has
less organized mineral, which may contribute to the high brittleness of the
tissue.
|
[
{
"created": "Thu, 17 Sep 2020 18:26:17 GMT",
"version": "v1"
}
] |
2020-09-21
|
[
[
"Wang",
"Qiong",
""
],
[
"Tang",
"Tengteng",
""
],
[
"Cooper",
"David",
""
],
[
"Eltit",
"Felipe",
""
],
[
"Fratzl",
"Peter",
""
],
[
"Guy",
"Pierre",
""
],
[
"Wang",
"Rizhi",
""
]
] |
Bone becomes more fragile with ageing. Among many structural changes, a thin layer of highly mineralized and brittle tissue covers part of the external surface of the thin femoral neck cortex in older people and has been proposed to increase hip fragility. However, there have been very limited reports on this hypermineralized tissue in the femoral neck, especially on its ultrastructure. Such information is critical to understanding both the mineralization process and its contributions to hip fracture. Here, we use multiple advanced techniques to characterize the ultrastructure of the hypermineralized tissue in the neck across various length scales. Synchrotron radiation micro-CT found larger but less densely distributed cellular lacunae in hypermineralized tissue than in lamellar bone. When examined under FIB-SEM, the hypermineralized tissue was mainly composed of mineral globules with sizes varying from submicron to a few microns. Nano-sized channels were present within the mineral globules and oriented with the surrounding organic matrix. Transmission electron microscopy showed that the apatite inside the globules was poorly crystalline, while the apatite at the boundaries between the globules had a well-defined lattice structure with crystallinity similar to the apatite mineral in lamellar bone. No preferred mineral orientation was observed either inside each globule or at the boundaries. Collectively, we conclude based on these new observations that the hypermineralized tissue is non-lamellar and has less organized mineral, which may contribute to the high brittleness of the tissue.
|
q-bio/0411027
|
Jose Vilar
|
Jose M. G. Vilar and Stanislas Leibler
|
DNA looping and physical constraints on transcription regulation
| null |
J. Mol. Biol. 331, 981-989 (2003)
| null | null |
q-bio.SC cond-mat.soft cond-mat.stat-mech physics.bio-ph
| null |
DNA looping participates in transcriptional regulation, for instance, by
allowing distal binding sites to act synergistically. Here we study this
process and compare different regulatory mechanisms based on repression with
and without looping. Within a simple mathematical model for the lac operon, we
show that regulation based on DNA looping, in addition to increasing the
repression level, can reduce the fluctuations of transcription and, at the same
time, decrease the sensitivity to changes in the number of regulatory proteins.
Looping is thus able to circumvent some of the constraints inherent to
mechanisms based solely on binding to a single operator site and provides a
mechanism to regulate not only the average properties of transcription but also
its fluctuations.
|
[
{
"created": "Thu, 11 Nov 2004 23:50:28 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Vilar",
"Jose M. G.",
""
],
[
"Leibler",
"Stanislas",
""
]
] |
DNA looping participates in transcriptional regulation, for instance, by allowing distal binding sites to act synergistically. Here we study this process and compare different regulatory mechanisms based on repression with and without looping. Within a simple mathematical model for the lac operon, we show that regulation based on DNA looping, in addition to increasing the repression level, can reduce the fluctuations of transcription and, at the same time, decrease the sensitivity to changes in the number of regulatory proteins. Looping is thus able to circumvent some of the constraints inherent to mechanisms based solely on binding to a single operator site and provides a mechanism to regulate not only the average properties of transcription but also its fluctuations.
|
1707.09908
|
Mareike Fischer
|
Kristina Wicke and Mareike Fischer
|
On the Shapley value of unrooted phylogenetic trees
| null | null | null | null |
q-bio.PE math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Shapley value, a solution concept from cooperative game theory, has
recently been considered for both unrooted and rooted phylogenetic trees. Here,
we focus on the Shapley value of unrooted trees and first revisit the so-called
split counts of a phylogenetic tree and the Shapley transformation matrix that
allows for the calculation of the Shapley value from the edge lengths of a
tree. We show that non-isomorphic trees may have permutation-equivalent Shapley
transformation matrices and permutation-equivalent null spaces. This implies
that estimating the split counts associated with a tree or the Shapley values
of its leaves does not suffice to reconstruct the correct tree topology. We
then turn to the use of the Shapley value as a prioritization criterion in
biodiversity conservation and compare it to a greedy solution concept. Here, we
show that for certain phylogenetic trees, the Shapley value may fail as a
prioritization criterion, meaning that the diversity spanned by the top $k$
species (ranked by their Shapley values) cannot approximate the total diversity
of all $n$ species.
|
[
{
"created": "Mon, 31 Jul 2017 15:04:38 GMT",
"version": "v1"
},
{
"created": "Mon, 5 Feb 2018 21:45:33 GMT",
"version": "v2"
}
] |
2018-02-07
|
[
[
"Wicke",
"Kristina",
""
],
[
"Fischer",
"Mareike",
""
]
] |
The Shapley value, a solution concept from cooperative game theory, has recently been considered for both unrooted and rooted phylogenetic trees. Here, we focus on the Shapley value of unrooted trees and first revisit the so-called split counts of a phylogenetic tree and the Shapley transformation matrix that allows for the calculation of the Shapley value from the edge lengths of a tree. We show that non-isomorphic trees may have permutation-equivalent Shapley transformation matrices and permutation-equivalent null spaces. This implies that estimating the split counts associated with a tree or the Shapley values of its leaves does not suffice to reconstruct the correct tree topology. We then turn to the use of the Shapley value as a prioritization criterion in biodiversity conservation and compare it to a greedy solution concept. Here, we show that for certain phylogenetic trees, the Shapley value may fail as a prioritization criterion, meaning that the diversity spanned by the top $k$ species (ranked by their Shapley values) cannot approximate the total diversity of all $n$ species.
|
0908.4408
|
Rui Vilela-Mendes
|
R. Vilela Mendes and Carlos Aguirre
|
Cooperation, punishment, emergence of government and the tragedy of
authorities
|
13 pages, 4 figures
|
Complex Systems 20 (2012) 363-374
| null | null |
q-bio.PE q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Under the conditions prevalent in the late Pleistocene (small hunter-gatherer
groups and frequent inter-group conflicts), coevolution of gene-related
behavior and culturally transmitted group-level institutions provides a
plausible explanation for the parochial altruistic and reciprocator traits of
most modern humans. When, with the agricultural revolution, societies became
larger and more complex, the collective nature of the monitoring and punishment
of norm violators was no longer effective. This led to the emergence of new
institutions of governance and social hierarchies. Likely, the smooth
acceptance of the new institutions was possible only because, in the majority
of the population, the reciprocator trait had become an internalized norm.
However, the new ruling class has its own dynamics, which in turn may lead to
new social crises. Using a simple model, inspired by previous work by Bowles
and Gintis, these effects are studied here.
|
[
{
"created": "Sun, 30 Aug 2009 17:55:09 GMT",
"version": "v1"
}
] |
2012-11-27
|
[
[
"Mendes",
"R. Vilela",
""
],
[
"Aguirre",
"Carlos",
""
]
] |
Under the conditions prevalent in the late Pleistocene (small hunter-gatherer groups and frequent inter-group conflicts), coevolution of gene-related behavior and culturally transmitted group-level institutions provides a plausible explanation for the parochial altruistic and reciprocator traits of most modern humans. When, with the agricultural revolution, societies became larger and more complex, the collective nature of the monitoring and punishment of norm violators was no longer effective. This led to the emergence of new institutions of governance and social hierarchies. Likely, the smooth acceptance of the new institutions was possible only because, in the majority of the population, the reciprocator trait had become an internalized norm. However, the new ruling class has its own dynamics, which in turn may lead to new social crises. Using a simple model, inspired by previous work by Bowles and Gintis, these effects are studied here.
|
1505.06915
|
Jean-Philippe Vert
|
K\'evin Vervier (CBIO), Pierre Mah\'e, Maud Tournoud, Jean-Baptiste
Veyrieras, Jean-Philippe Vert (CBIO)
|
Large-scale Machine Learning for Metagenomics Sequence Classification
| null | null | null | null |
q-bio.QM cs.CE cs.LG q-bio.GN stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Metagenomics characterizes the taxonomic diversity of microbial communities
by sequencing DNA directly from an environmental sample. One of the main
challenges in metagenomics data analysis is the binning step, where each
sequenced read is assigned to a taxonomic clade. Due to the large volume of
metagenomics datasets, binning methods need fast and accurate algorithms that
can operate with reasonable computing requirements. While standard
alignment-based methods provide state-of-the-art performance, compositional
approaches that assign a taxonomic class to a DNA read based on the k-mers it
contains have the potential to provide faster solutions. In this work, we
investigate the potential of modern, large-scale machine learning
implementations for taxonomic assignment of next-generation sequencing reads
based on their k-mer profiles. We show that machine learning-based
compositional approaches benefit from increasing the number of fragments
sampled from the reference genomes to tune their parameters, up to a coverage
of about 10, and from increasing the k-mer size to about 12. Tuning these
models involves training a machine learning model on about $10^8$ samples in
$10^7$ dimensions, which is out of reach of standard software but can be done
efficiently with modern implementations for large-scale machine learning. The
resulting models are competitive in terms of accuracy with well-established
alignment tools for problems involving a small to moderate number of candidate
species, and for reasonable amounts of sequencing errors. We show, however,
that compositional approaches are still limited in their ability to deal with
problems involving a greater number of species, and are more sensitive to
sequencing errors. We finally confirm that compositional approaches achieve
faster prediction times, with a gain of 3 to 15 times with respect to the
BWA-MEM short-read mapper, depending on the number of candidate species and
the level of sequencing noise.
|
[
{
"created": "Tue, 26 May 2015 12:02:04 GMT",
"version": "v1"
}
] |
2015-05-27
|
[
[
"Vervier",
"Kévin",
"",
"CBIO"
],
[
"Mahé",
"Pierre",
"",
"CBIO"
],
[
"Tournoud",
"Maud",
"",
"CBIO"
],
[
"Veyrieras",
"Jean-Baptiste",
"",
"CBIO"
],
[
"Vert",
"Jean-Philippe",
"",
"CBIO"
]
] |
Metagenomics characterizes the taxonomic diversity of microbial communities by sequencing DNA directly from an environmental sample. One of the main challenges in metagenomics data analysis is the binning step, where each sequenced read is assigned to a taxonomic clade. Due to the large volume of metagenomics datasets, binning methods need fast and accurate algorithms that can operate with reasonable computing requirements. While standard alignment-based methods provide state-of-the-art performance, compositional approaches that assign a taxonomic class to a DNA read based on the k-mers it contains have the potential to provide faster solutions. In this work, we investigate the potential of modern, large-scale machine learning implementations for taxonomic assignment of next-generation sequencing reads based on their k-mer profile. We show that machine learning-based compositional approaches benefit from increasing the number of fragments sampled from reference genomes to tune their parameters, up to a coverage of about 10, and from increasing the k-mer size to about 12. Tuning these models involves training a machine learning model on about 10^8 samples in 10^7 dimensions, which is out of reach of standard software but can be done efficiently with modern implementations for large-scale machine learning. The resulting models are competitive in terms of accuracy with well-established alignment tools for problems involving a small to moderate number of candidate species, and for reasonable amounts of sequencing errors. We show, however, that compositional approaches are still limited in their ability to deal with problems involving a greater number of species, and more sensitive to sequencing errors. We finally confirm that compositional approaches achieve faster prediction times, with a gain of 3 to 15 times with respect to the BWA-MEM short read mapper, depending on the number of candidate species and the level of sequencing noise.
|
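The compositional k-mer representation described in the abstract above can be sketched as follows. This is a minimal illustration under assumed conventions (the toy read and the choice k = 2 are hypothetical), not the authors' implementation; with k = 12 as in the abstract, the profile would live in 4^12 ≈ 1.7 x 10^7 dimensions, matching the problem size quoted.

```python
from itertools import product

def kmer_profile(read, k=4):
    """Count vector over all 4**k DNA k-mers: the compositional
    feature representation a read is classified from."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    counts = [0] * len(kmers)
    for i in range(len(read) - k + 1):
        km = read[i:i + k]
        if km in index:          # skip windows containing N, etc.
            counts[index[km]] += 1
    return counts

# a read of length L contributes at most L - k + 1 counted k-mers
profile = kmer_profile("ACGTACGTAC", k=2)
```

In practice the resulting sparse count vectors would be fed to a large-scale linear classifier, one dimension per possible k-mer.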
2006.00652
|
Areejit Samal
|
R.P. Vivek-Ananth, Abhijit Rana, Nithin Rajan, Himansu S. Biswal,
Areejit Samal
|
In silico identification of potential natural product inhibitors of
human proteases key to SARS-CoV-2 infection
|
51 pages, 7 Figures, 2 Tables, 4 SI Figures, SI Tables available upon
request from authors
|
Molecules 2020, 25(17), 3822
|
10.3390/molecules25173822
| null |
q-bio.BM q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Presently, there are no approved drugs or vaccines to treat COVID-19 which
has spread to over 200 countries and is responsible for over 365,000 deaths
worldwide. Recent studies have shown that two human proteases, TMPRSS2 and
cathepsin L, play a key role in host cell entry of SARS-CoV-2. Importantly,
inhibitors of these proteases were shown to block SARS-CoV-2 infection. Here,
we perform virtual screening of 14010 phytochemicals produced by Indian
medicinal plants to identify natural product inhibitors of TMPRSS2 and
cathepsin L. We built a homology model of TMPRSS2 as an experimentally
determined structure is not available. AutoDock Vina was used to perform
molecular docking of phytochemicals against TMPRSS2 model structure and
cathepsin L crystal structure. Potential phytochemical inhibitors were filtered
by comparing their docked binding energies with those of known inhibitors of
TMPRSS2 and cathepsin L. Further, the ligand binding site residues and
non-covalent protein-ligand interactions were used as an additional filter to
identify phytochemical inhibitors that either bind to or form interactions with
residues important for the specificity of the target proteases. We have
identified 96 inhibitors of TMPRSS2 and 9 inhibitors of cathepsin L among
phytochemicals of Indian medicinal plants. The top inhibitors of TMPRSS2 are
Edgeworoside C, Adlumidine and Qingdainone, and of cathepsin L is Ararobinol.
Interestingly, several herbal sources of identified phytochemical inhibitors
have antiviral or anti-inflammatory use in traditional medicine. Further in
vitro and in vivo testing is needed before clinical trials of the promising
phytochemical inhibitors identified here.
|
[
{
"created": "Mon, 1 Jun 2020 00:37:13 GMT",
"version": "v1"
}
] |
2020-09-02
|
[
[
"Vivek-Ananth",
"R. P.",
""
],
[
"Rana",
"Abhijit",
""
],
[
"Rajan",
"Nithin",
""
],
[
"Biswal",
"Himansu S.",
""
],
[
"Samal",
"Areejit",
""
]
] |
Presently, there are no approved drugs or vaccines to treat COVID-19 which has spread to over 200 countries and is responsible for over 365,000 deaths worldwide. Recent studies have shown that two human proteases, TMPRSS2 and cathepsin L, play a key role in host cell entry of SARS-CoV-2. Importantly, inhibitors of these proteases were shown to block SARS-CoV-2 infection. Here, we perform virtual screening of 14010 phytochemicals produced by Indian medicinal plants to identify natural product inhibitors of TMPRSS2 and cathepsin L. We built a homology model of TMPRSS2 as an experimentally determined structure is not available. AutoDock Vina was used to perform molecular docking of phytochemicals against TMPRSS2 model structure and cathepsin L crystal structure. Potential phytochemical inhibitors were filtered by comparing their docked binding energies with those of known inhibitors of TMPRSS2 and cathepsin L. Further, the ligand binding site residues and non-covalent protein-ligand interactions were used as an additional filter to identify phytochemical inhibitors that either bind to or form interactions with residues important for the specificity of the target proteases. We have identified 96 inhibitors of TMPRSS2 and 9 inhibitors of cathepsin L among phytochemicals of Indian medicinal plants. The top inhibitors of TMPRSS2 are Edgeworoside C, Adlumidine and Qingdainone, and of cathepsin L is Ararobinol. Interestingly, several herbal sources of identified phytochemical inhibitors have antiviral or anti-inflammatory use in traditional medicine. Further in vitro and in vivo testing is needed before clinical trials of the promising phytochemical inhibitors identified here.
|
2111.08436
|
Gholamreza Jafari
|
Nastaran Allahyari, Amir Kargaran, Ali Hosseiny, G. R. Jafari
|
The structure of gene-gene networks beyond pairwise interactions
|
16 pages, 5 figures, 4 tables
| null |
10.1371/journal.pone.0258596
| null |
q-bio.MN physics.data-an
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite its high and direct impact on nearly all biological processes, the
underlying structure of gene-gene interaction networks has so far been
investigated mainly through pairwise connections. To address this, we explore
the gene interaction
networks of the yeast Saccharomyces cerevisiae beyond pairwise interaction
using the structural balance theory (SBT). Specifically, we ask whether
essential and nonessential gene interaction networks are structurally balanced.
We study triadic interactions in the weighted signed undirected gene networks
and observe that balanced and unbalanced triads are over- and underrepresented
in both networks, respectively, in line with the strong notion of balance.
Moreover, we note that the energy distribution of triads is significantly
different in both essential and nonessential networks compared with the
shuffled networks. Yet, this difference is greater in the essential network
regarding the frequency as well as the energy of triads. Additionally, results
demonstrate that triads in the essential gene network are more interconnected
through sharing common links, while in the nonessential network they tend to be
isolated. Last but not least, we investigate the contribution of all-length
signed walks and its impact on the degree of balance. Our findings reveal that
interestingly when considering longer cycles the nonessential gene network is
more balanced compared to the essential network.
|
[
{
"created": "Thu, 14 Oct 2021 14:51:09 GMT",
"version": "v1"
}
] |
2022-05-04
|
[
[
"Allahyari",
"Nastaran",
""
],
[
"Kargaran",
"Amir",
""
],
[
"Hosseiny",
"Ali",
""
],
[
"Jafari",
"G. R.",
""
]
] |
Despite its high and direct impact on nearly all biological processes, the underlying structure of gene-gene interaction networks has so far been investigated mainly through pairwise connections. To address this, we explore the gene interaction networks of the yeast Saccharomyces cerevisiae beyond pairwise interaction using the structural balance theory (SBT). Specifically, we ask whether essential and nonessential gene interaction networks are structurally balanced. We study triadic interactions in the weighted signed undirected gene networks and observe that balanced and unbalanced triads are over- and underrepresented in both networks, respectively, in line with the strong notion of balance. Moreover, we note that the energy distribution of triads is significantly different in both essential and nonessential networks compared with the shuffled networks. Yet, this difference is greater in the essential network regarding the frequency as well as the energy of triads. Additionally, results demonstrate that triads in the essential gene network are more interconnected through sharing common links, while in the nonessential network they tend to be isolated. Last but not least, we investigate the contribution of all-length signed walks and its impact on the degree of balance. Our findings reveal that interestingly when considering longer cycles the nonessential gene network is more balanced compared to the essential network.
|
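The triad energy that structural balance theory assigns to a signed triangle, which the abstract above applies to gene networks, reduces for unit edge signs to a simple sign product. The sketch below uses pure signs, whereas the paper works with weighted signed edges; it illustrates the convention, not the authors' code.

```python
def triad_energy(s_ij, s_jk, s_ki):
    """Structural-balance energy of a signed triad,
    E = -s_ij * s_jk * s_ki: balanced triads (an even number of
    negative links) have E = -1, unbalanced ones have E = +1."""
    return -s_ij * s_jk * s_ki

# the four sign patterns, up to permutation of the edges
assert triad_energy(+1, +1, +1) == -1   # +++  balanced
assert triad_energy(+1, -1, -1) == -1   # +--  balanced
assert triad_energy(+1, +1, -1) == +1   # ++-  unbalanced
assert triad_energy(-1, -1, -1) == +1   # ---  unbalanced
```

Over- and underrepresentation is then judged by comparing triad-energy counts against sign-shuffled null networks, as the abstract describes.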
0809.1479
|
Jeferson J. Arenzon
|
Estrella A. Sicardi, Hugo Fort, Mendeli H. Vainstein, Jeferson J.
Arenzon
|
Random mobility and spatial structure often enhance cooperation
|
Submitted to J. Theor. Biol
|
J. Theor. Biol. 256 (2009) 240
|
10.1016/j.jtbi.2008.09.022
| null |
q-bio.PE physics.bio-ph physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The effects of an unconditional move rule in the spatial Prisoner's Dilemma,
Snowdrift and Stag Hunt games are studied. Spatial structure by itself is known
to modify the outcome of many games when compared with a randomly mixed
population, sometimes promoting, sometimes inhibiting cooperation. Here we show
that random dilution and mobility may suppress the inhibiting factors of the
spatial structure in the Snowdrift game, while enhancing the already larger
cooperation found in the Prisoner's dilemma and Stag Hunt games.
|
[
{
"created": "Tue, 9 Sep 2008 12:53:37 GMT",
"version": "v1"
}
] |
2009-02-25
|
[
[
"Sicardi",
"Estrella A.",
""
],
[
"Fort",
"Hugo",
""
],
[
"Vainstein",
"Mendeli H.",
""
],
[
"Arenzon",
"Jeferson J.",
""
]
] |
The effects of an unconditional move rule in the spatial Prisoner's Dilemma, Snowdrift and Stag Hunt games are studied. Spatial structure by itself is known to modify the outcome of many games when compared with a randomly mixed population, sometimes promoting, sometimes inhibiting cooperation. Here we show that random dilution and mobility may suppress the inhibiting factors of the spatial structure in the Snowdrift game, while enhancing the already larger cooperation found in the Prisoner's dilemma and Stag Hunt games.
|
2006.01171
|
Austin Clyde
|
Austin Clyde, Xiaotian Duan, Rick Stevens
|
Regression Enrichment Surfaces: a Simple Analysis Technique for Virtual
Drug Screening Models
| null | null | null | null |
q-bio.QM cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a new method for understanding the performance of a model in
virtual drug screening tasks. While most virtual screening problems present as
a mix between ranking and classification, the models are typically trained as
regression models presenting a problem requiring either a choice of a cutoff or
ranking measure. Our method, regression enrichment surfaces (RES), is based on
the goal of virtual screening: to detect as many of the top-performing
treatments as possible. We outline the history of virtual screening performance
measures and the idea behind RES. We offer a Python package and details on how
to implement and interpret the results.
|
[
{
"created": "Mon, 1 Jun 2020 18:03:25 GMT",
"version": "v1"
}
] |
2020-06-03
|
[
[
"Clyde",
"Austin",
""
],
[
"Duan",
"Xiaotian",
""
],
[
"Stevens",
"Rick",
""
]
] |
We present a new method for understanding the performance of a model in virtual drug screening tasks. While most virtual screening problems present as a mix between ranking and classification, the models are typically trained as regression models presenting a problem requiring either a choice of a cutoff or ranking measure. Our method, regression enrichment surfaces (RES), is based on the goal of virtual screening: to detect as many of the top-performing treatments as possible. We outline the history of virtual screening performance measures and the idea behind RES. We offer a Python package and details on how to implement and interpret the results.
|
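A recall-style enrichment quantity of the kind an RES surface is built from can be sketched as below: how many of the true top-performing items are recovered when one screens only the model's top-ranked predictions. The function name and cutoff fractions are hypothetical illustrations; the actual RES definition lives in the authors' package.

```python
import numpy as np

def enrichment(y_true, y_pred, top_frac_true=0.1, top_frac_pred=0.1):
    """Fraction of the true top-performing items recovered when
    screening only the model's top-ranked items (higher is better).
    Sweeping both cutoff fractions traces out a surface."""
    n = len(y_true)
    k = max(1, int(top_frac_true * n))   # true actives of interest
    m = max(1, int(top_frac_pred * n))   # screening budget
    true_top = set(np.argsort(y_true)[-k:])
    pred_top = set(np.argsort(y_pred)[-m:])
    return len(true_top & pred_top) / k

scores = list(range(100))
perfect = enrichment(scores, scores)        # perfect ranking -> 1.0
worst = enrichment(scores, scores[::-1])    # inverted ranking -> 0.0
```

Evaluating this on a grid of (top_frac_true, top_frac_pred) pairs yields a two-dimensional surface directly aligned with the screening goal, rather than a single cutoff-dependent score.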
2403.16933
|
Paul Haider
|
Benjamin Ellenberger, Paul Haider, Jakob Jordan, Kevin Max, Ismael
Jaras, Laura Kriener, Federico Benitez, Mihai A. Petrovici
|
Backpropagation through space, time, and the brain
|
First authorship shared by Benjamin Ellenberger and Paul Haider
| null | null | null |
q-bio.NC cs.AI cs.LG cs.NE eess.SP
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
How physical networks of neurons, bound by spatio-temporal locality
constraints, can perform efficient credit assignment, remains, to a large
extent, an open question. In machine learning, the answer is almost universally
given by the error backpropagation algorithm, through both space and time.
However, this algorithm is well-known to rely on biologically implausible
assumptions, in particular with respect to spatio-temporal (non-)locality.
Alternative forward-propagation models such as real-time recurrent learning
solve the locality problem only partially, and at the cost of poor scaling due
to prohibitive storage requirements.
We introduce Generalized Latent Equilibrium (GLE), a computational framework
for fully local spatio-temporal credit assignment in physical, dynamical
networks of neurons. We start by defining an energy based on neuron-local
mismatches, from which we derive both neuronal dynamics via stationarity and
parameter dynamics via gradient descent. The resulting dynamics can be
interpreted as a real-time, biologically plausible approximation of
backpropagation through space and time in deep cortical networks with
continuous-time neuronal dynamics and continuously active, local synaptic
plasticity. In particular, GLE exploits the morphology of dendritic trees to
enable more complex information storage and processing in single neurons, as
well as the ability of biological neurons to phase-shift their output rate with
respect to their membrane potential, which is essential in both directions of
information propagation. For the forward computation, it enables the mapping of
time-continuous inputs to neuronal space, effectively performing a
spatio-temporal convolution. For the backward computation, it permits the
temporal inversion of feedback signals, which consequently approximate the
adjoint variables necessary for useful parameter updates.
|
[
{
"created": "Mon, 25 Mar 2024 16:57:02 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Jul 2024 17:37:05 GMT",
"version": "v2"
}
] |
2024-07-17
|
[
[
"Ellenberger",
"Benjamin",
""
],
[
"Haider",
"Paul",
""
],
[
"Jordan",
"Jakob",
""
],
[
"Max",
"Kevin",
""
],
[
"Jaras",
"Ismael",
""
],
[
"Kriener",
"Laura",
""
],
[
"Benitez",
"Federico",
""
],
[
"Petrovici",
"Mihai A.",
""
]
] |
How physical networks of neurons, bound by spatio-temporal locality constraints, can perform efficient credit assignment, remains, to a large extent, an open question. In machine learning, the answer is almost universally given by the error backpropagation algorithm, through both space and time. However, this algorithm is well-known to rely on biologically implausible assumptions, in particular with respect to spatio-temporal (non-)locality. Alternative forward-propagation models such as real-time recurrent learning solve the locality problem only partially, and at the cost of poor scaling due to prohibitive storage requirements. We introduce Generalized Latent Equilibrium (GLE), a computational framework for fully local spatio-temporal credit assignment in physical, dynamical networks of neurons. We start by defining an energy based on neuron-local mismatches, from which we derive both neuronal dynamics via stationarity and parameter dynamics via gradient descent. The resulting dynamics can be interpreted as a real-time, biologically plausible approximation of backpropagation through space and time in deep cortical networks with continuous-time neuronal dynamics and continuously active, local synaptic plasticity. In particular, GLE exploits the morphology of dendritic trees to enable more complex information storage and processing in single neurons, as well as the ability of biological neurons to phase-shift their output rate with respect to their membrane potential, which is essential in both directions of information propagation. For the forward computation, it enables the mapping of time-continuous inputs to neuronal space, effectively performing a spatio-temporal convolution. For the backward computation, it permits the temporal inversion of feedback signals, which consequently approximate the adjoint variables necessary for useful parameter updates.
|
1902.06122
|
Daniel Baker
|
Daniel H. Baker, Greta Vilidaite, Freya A. Lygo, Anika K. Smith, Tessa
R. Flack, Andre D. Gouws and Timothy J. Andrews
|
Power contours: optimising sample size and precision in experimental
psychology and human neuroscience
| null |
Psychological Methods (2021), 26(3): 295-314
|
10.1037/met0000337
| null |
q-bio.NC stat.ME
|
http://creativecommons.org/licenses/by/4.0/
|
When designing experimental studies with human participants, experimenters
must decide how many trials each participant will complete, as well as how many
participants to test. Most discussion of statistical power (the ability of a
study design to detect an effect) has focussed on sample size, and assumed
sufficient trials. Here we explore the influence of both factors on statistical
power, represented as a two-dimensional plot on which iso-power contours can be
visualised. We demonstrate the conditions under which the number of trials is
particularly important, i.e. when the within-participant variance is large
relative to the between-participants variance. We then derive power contour
plots using existing data sets for eight experimental paradigms and
methodologies (including reaction times, sensory thresholds, fMRI, MEG, and
EEG), and provide example code to calculate estimates of the within- and
between-participant variance for each method. In all cases, the
within-participant variance was larger than the between-participants variance,
meaning that the number of trials has a meaningful influence on statistical
power in commonly used paradigms. An online tool is provided
(https://shiny.york.ac.uk/powercontours/) for generating power contours, from
which the optimal combination of trials and participants can be calculated when
designing future studies.
|
[
{
"created": "Sat, 16 Feb 2019 16:30:59 GMT",
"version": "v1"
},
{
"created": "Fri, 22 Feb 2019 06:43:51 GMT",
"version": "v2"
},
{
"created": "Wed, 3 Jul 2019 15:24:19 GMT",
"version": "v3"
},
{
"created": "Sat, 2 Nov 2019 10:02:27 GMT",
"version": "v4"
},
{
"created": "Tue, 4 Feb 2020 05:57:53 GMT",
"version": "v5"
}
] |
2021-08-30
|
[
[
"Baker",
"Daniel H.",
""
],
[
"Vilidaite",
"Greta",
""
],
[
"Lygo",
"Freya A.",
""
],
[
"Smith",
"Anika K.",
""
],
[
"Flack",
"Tessa R.",
""
],
[
"Gouws",
"Andre D.",
""
],
[
"Andrews",
"Timothy J.",
""
]
] |
When designing experimental studies with human participants, experimenters must decide how many trials each participant will complete, as well as how many participants to test. Most discussion of statistical power (the ability of a study design to detect an effect) has focussed on sample size, and assumed sufficient trials. Here we explore the influence of both factors on statistical power, represented as a two-dimensional plot on which iso-power contours can be visualised. We demonstrate the conditions under which the number of trials is particularly important, i.e. when the within-participant variance is large relative to the between-participants variance. We then derive power contour plots using existing data sets for eight experimental paradigms and methodologies (including reaction times, sensory thresholds, fMRI, MEG, and EEG), and provide example code to calculate estimates of the within- and between-participant variance for each method. In all cases, the within-participant variance was larger than the between-participants variance, meaning that the number of trials has a meaningful influence on statistical power in commonly used paradigms. An online tool is provided (https://shiny.york.ac.uk/powercontours/) for generating power contours, from which the optimal combination of trials and participants can be calculated when designing future studies.
|
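The core trade-off in the abstract above, that trials dilute within-participant variance while participants dilute both variance components, can be sketched with a normal-approximation power calculation. This is an illustrative sketch, not the authors' code, and the effect size and variance values below are hypothetical.

```python
import math

def approx_power(effect, sigma_b, sigma_w, n_participants, n_trials,
                 z_crit=1.96):
    """Approximate power of a one-sample test when each participant
    mean is estimated from n_trials noisy trials (normal approx.)."""
    # variance of the group mean: between-participant variance plus
    # within-participant variance diluted by the number of trials
    se = math.sqrt((sigma_b ** 2 + sigma_w ** 2 / n_trials)
                   / n_participants)
    z = effect / se
    # P(Z > z_crit - z) under the alternative, via the normal CDF
    return 0.5 * (1.0 + math.erf((z - z_crit) / math.sqrt(2.0)))

# when sigma_w >> sigma_b, adding trials buys substantial power
low = approx_power(0.5, sigma_b=0.5, sigma_w=2.0,
                   n_participants=20, n_trials=5)
high = approx_power(0.5, sigma_b=0.5, sigma_w=2.0,
                    n_participants=20, n_trials=50)
```

Holding power fixed and sweeping (n_participants, n_trials) in such a formula traces out exactly the iso-power contours the paper visualises.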
2208.11233
|
Carlos Hernandez-Suarez M
|
Carlos Hernandez-Suarez and Osval Montesinos Lopez
|
Revised calculation of the coefficient of parentage in plant breeding
|
The pdf version is 23 pages long, with 6 figures and 5 tables
|
Plant breeding, 2022
|
10.1111/pbr.13071
| null |
q-bio.PE q-bio.QM
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The Coefficient of Parentage (COP) between two individuals is the expected
inbreeding of their offspring. Originally exploited by animal breeders, it is
now a routine calculation among plant breeders as part of crop improvement
programs. Here we show that the COP between strains requires a different
calculation than the one used to calculate the COP between individuals. Failure
to do so may result in an overestimation of the amount of inbreeding. Here we
provide a simple methodology to calculate the correct coefficient of parentage
between strains.
|
[
{
"created": "Tue, 23 Aug 2022 23:42:19 GMT",
"version": "v1"
}
] |
2022-12-01
|
[
[
"Hernandez-Suarez",
"Carlos",
""
],
[
"Lopez",
"Osval Montesinos",
""
]
] |
The Coefficient of Parentage (COP) between two individuals is the expected inbreeding of their offspring. Originally exploited by animal breeders, it is now a routine calculation among plant breeders as part of crop improvement programs. Here we show that the COP between strains requires a different calculation than the one used to calculate the COP between individuals. Failure to do so may result in an overestimation of the amount of inbreeding. Here we provide a simple methodology to calculate the correct coefficient of parentage between strains.
|
q-bio/0702005
|
Mikl\'os Cs\H{u}r\"os
|
Mikl\'os Cs\H{u}r\"os, J. Andrew Holey, Igor B. Rogozin
|
In search of lost introns
| null | null | null | null |
q-bio.PE q-bio.GN
| null |
Many fundamental questions concerning the emergence and subsequent evolution
of eukaryotic exon-intron organization are still unsettled. Genome-scale
comparative studies, which can shed light on crucial aspects of eukaryotic
evolution, require adequate computational tools.
We describe novel computational methods for studying spliceosomal intron
evolution. Our goal is to give a reliable characterization of the dynamics of
intron evolution. Our algorithmic innovations address the identification of
orthologous introns, and the likelihood-based analysis of intron data. We
discuss a compression method for the evaluation of the likelihood function,
which is noteworthy for phylogenetic likelihood problems in general. We prove
that after $O(nL)$ preprocessing time, subsequent evaluations take $O(nL/\log
L)$ time almost surely in the Yule-Harding random model of $n$-taxon
phylogenies, where $L$ is the input sequence length.
We illustrate the practicality of our methods by compiling and analyzing a
data set involving 18 eukaryotes, more than in any other study to date. The
study yields the surprising result that ancestral eukaryotes were fairly
intron-rich. For example, the bilaterian ancestor is estimated to have had more
than 90% as many introns as vertebrates do now.
|
[
{
"created": "Sat, 3 Feb 2007 21:26:02 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Csűrös",
"Miklós",
""
],
[
"Holey",
"J. Andrew",
""
],
[
"Rogozin",
"Igor B.",
""
]
] |
Many fundamental questions concerning the emergence and subsequent evolution of eukaryotic exon-intron organization are still unsettled. Genome-scale comparative studies, which can shed light on crucial aspects of eukaryotic evolution, require adequate computational tools. We describe novel computational methods for studying spliceosomal intron evolution. Our goal is to give a reliable characterization of the dynamics of intron evolution. Our algorithmic innovations address the identification of orthologous introns, and the likelihood-based analysis of intron data. We discuss a compression method for the evaluation of the likelihood function, which is noteworthy for phylogenetic likelihood problems in general. We prove that after $O(nL)$ preprocessing time, subsequent evaluations take $O(nL/\log L)$ time almost surely in the Yule-Harding random model of $n$-taxon phylogenies, where $L$ is the input sequence length. We illustrate the practicality of our methods by compiling and analyzing a data set involving 18 eukaryotes, more than in any other study to date. The study yields the surprising result that ancestral eukaryotes were fairly intron-rich. For example, the bilaterian ancestor is estimated to have had more than 90% as many introns as vertebrates do now.
|
2210.13323
|
Samuel W.K. Wong
|
Yuxuan Zhao, Samuel W.K. Wong
|
A Comparative Study of Compartmental Models for COVID-19 Transmission in
Ontario, Canada
|
26 pages, 8 figures
| null | null | null |
q-bio.PE stat.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The number of confirmed COVID-19 cases reached over 1.3 million in Ontario,
Canada by June 4, 2022. The continued spread of the virus underlying COVID-19
has been spurred by the emergence of variants since the initial outbreak in
December, 2019. Much attention has thus been devoted to tracking and modelling
the transmission of COVID-19. Compartmental models are commonly used to mimic
epidemic transmission mechanisms and are easy to understand. Their performance
in real-world settings, however, needs to be more thoroughly assessed. In this
comparative study, we examine five compartmental models -- four existing ones
and an extended model that we propose -- and analyze their ability to describe
COVID-19 transmission in Ontario from January 2022 to June 2022.
|
[
{
"created": "Mon, 24 Oct 2022 15:24:54 GMT",
"version": "v1"
}
] |
2022-10-25
|
[
[
"Zhao",
"Yuxuan",
""
],
[
"Wong",
"Samuel W. K.",
""
]
] |
The number of confirmed COVID-19 cases reached over 1.3 million in Ontario, Canada by June 4, 2022. The continued spread of the virus underlying COVID-19 has been spurred by the emergence of variants since the initial outbreak in December, 2019. Much attention has thus been devoted to tracking and modelling the transmission of COVID-19. Compartmental models are commonly used to mimic epidemic transmission mechanisms and are easy to understand. Their performance in real-world settings, however, needs to be more thoroughly assessed. In this comparative study, we examine five compartmental models -- four existing ones and an extended model that we propose -- and analyze their ability to describe COVID-19 transmission in Ontario from January 2022 to June 2022.
|
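The simplest member of the compartmental family compared in the abstract above is the classic SIR model; a minimal Euler-integration sketch is given below. The transmission and recovery rates are illustrative placeholders, not Ontario estimates, and the paper's models extend this basic structure.

```python
def sir_step(S, I, R, beta, gamma, dt):
    """One Euler step of the basic SIR compartmental model:
    S' = -beta*S*I,  I' = beta*S*I - gamma*I,  R' = gamma*I."""
    new_inf = beta * S * I * dt   # susceptibles becoming infectious
    new_rec = gamma * I * dt      # infectious individuals recovering
    return S - new_inf, I + new_inf - new_rec, R + new_rec

# fractions of the population; R0 = beta/gamma = 2 here
S, I, R = 0.99, 0.01, 0.0
for _ in range(1000):             # integrate to t = 100 (dt = 0.1)
    S, I, R = sir_step(S, I, R, beta=0.4, gamma=0.2, dt=0.1)
```

Because the flows balance exactly, S + I + R is conserved at every step, which is a useful sanity check when extending the model with extra compartments.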
1605.00748
|
Hemachander Subramanian
|
Hemachander Subramanian, Robert A. Gatenby
|
Evolutionary advantage of a broken symmetry in autocatalytic polymers
explains fundamental properties of DNA
|
41 pages, 7 figures
| null | null | null |
q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The macromolecules that encode and translate information in living systems,
DNA and RNA, exhibit distinctive structural asymmetries, including
homochirality or mirror image asymmetry and $3' - 5'$ directionality, that are
invariant across all life forms. The evolutionary advantages of these broken
symmetries remain unknown. Here we utilize a very simple model of hypothetical
self-replicating polymers to show that asymmetric autocatalytic polymers are
more successful in self-replication compared to their symmetric counterparts in
the Darwinian competition for space and common substrates. This broken-symmetry
property, called asymmetric cooperativity, arises with the maximization of a
replication potential, where the catalytic influence of inter-strand bonds on
their left and right neighbors is unequal. Asymmetric cooperativity also leads
to tentative, qualitative and simple evolution-based explanations for a number
of other properties of DNA that include four nucleotide alphabet, three
nucleotide codons, circular genomes, helicity, anti-parallel double-strand
orientation, heteromolecular base-pairing, asymmetric base compositions, and
palindromic instability, apart from the structural asymmetries mentioned above.
Our model results and tentative explanations are consistent with multiple lines
of experimental evidence, which include evidence for the presence of asymmetric
cooperativity in DNA.
|
[
{
"created": "Tue, 3 May 2016 03:34:36 GMT",
"version": "v1"
},
{
"created": "Fri, 14 Oct 2016 16:41:17 GMT",
"version": "v2"
},
{
"created": "Thu, 9 Mar 2017 03:27:46 GMT",
"version": "v3"
}
] |
2017-03-10
|
[
[
"Subramanian",
"Hemachander",
""
],
[
"Gatenby",
"Robert A.",
""
]
] |
The macromolecules that encode and translate information in living systems, DNA and RNA, exhibit distinctive structural asymmetries, including homochirality or mirror image asymmetry and $3' - 5'$ directionality, that are invariant across all life forms. The evolutionary advantages of these broken symmetries remain unknown. Here we utilize a very simple model of hypothetical self-replicating polymers to show that asymmetric autocatalytic polymers are more successful in self-replication compared to their symmetric counterparts in the Darwinian competition for space and common substrates. This broken-symmetry property, called asymmetric cooperativity, arises with the maximization of a replication potential, where the catalytic influence of inter-strand bonds on their left and right neighbors is unequal. Asymmetric cooperativity also leads to tentative, qualitative and simple evolution-based explanations for a number of other properties of DNA that include four nucleotide alphabet, three nucleotide codons, circular genomes, helicity, anti-parallel double-strand orientation, heteromolecular base-pairing, asymmetric base compositions, and palindromic instability, apart from the structural asymmetries mentioned above. Our model results and tentative explanations are consistent with multiple lines of experimental evidence, which include evidence for the presence of asymmetric cooperativity in DNA.
|
0705.2092
|
Erik Volz
|
Erik Volz
|
SIR dynamics in random networks with heterogeneous connectivity
|
25 pages, 6 figures. Greatly revised version of arXiv:physics/0508160
| null | null | null |
q-bio.PE q-bio.QM
| null |
Random networks with specified degree distributions have been proposed as
realistic models of population structure, yet the problem of dynamically
modeling SIR-type epidemics in random networks remains complex. I resolve this
dilemma by showing how the SIR dynamics can be modeled with a system of three
nonlinear ODE's. The method makes use of the probability generating function
(PGF) formalism for representing the degree distribution of a random network
and makes use of network-centric quantities such as the number of edges in a
well-defined category rather than node-centric quantities such as the number of
infecteds or susceptibles. The PGF provides a simple means of translating
between network and node-centric variables and determining the epidemic
incidence at any time. The theory also provides a simple means of tracking the
evolution of the degree distribution among susceptibles or infecteds. The
equations are used to demonstrate the dramatic effect that the degree
distribution has on the final size of an epidemic as well as the speed with
which it spreads through the population. Power law degree distributions are
observed to generate an almost immediate expansion phase yet have a smaller
final size compared to homogeneous degree distributions such as the Poisson.
The equations are compared to stochastic simulations, which show good agreement
with the theory. Finally, the dynamic equations provide an alternative way of
determining the epidemic threshold where large-scale epidemics are expected to
occur, and below which epidemic behavior is limited to finite-sized outbreaks.
|
[
{
"created": "Tue, 15 May 2007 08:16:56 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Volz",
"Erik",
""
]
] |
Random networks with specified degree distributions have been proposed as realistic models of population structure, yet the problem of dynamically modeling SIR-type epidemics in random networks remains complex. I resolve this dilemma by showing how the SIR dynamics can be modeled with a system of three nonlinear ODEs. The method makes use of the probability generating function (PGF) formalism for representing the degree distribution of a random network and makes use of network-centric quantities such as the number of edges in a well-defined category rather than node-centric quantities such as the number of infecteds or susceptibles. The PGF provides a simple means of translating between network and node-centric variables and determining the epidemic incidence at any time. The theory also provides a simple means of tracking the evolution of the degree distribution among susceptibles or infecteds. The equations are used to demonstrate the dramatic effect that the degree distribution has on the final size of an epidemic as well as the speed with which it spreads through the population. Power law degree distributions are observed to generate an almost immediate expansion phase yet have a smaller final size compared to homogeneous degree distributions such as the Poisson. The equations are compared to stochastic simulations, which show good agreement with the theory. Finally, the dynamic equations provide an alternative way of determining the epidemic threshold where large-scale epidemics are expected to occur, and below which epidemic behavior is limited to finite-sized outbreaks.
|
1005.4301
|
Amit Lakhanpal
|
Amit Lakhanpal, David Sprinzak, Michael B. Elowitz
|
Mutual inactivation of Notch and Delta permits a simple mechanism for
lateral inhibition patterning
| null | null | null | null |
q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Lateral inhibition patterns mediated by the Notch-Delta signaling system
occur in diverse developmental contexts. These systems are based on an
intercellular feedback loop in which Notch activation leads to down-regulation
of Delta. However, even in relatively well-characterized systems, the pathway
leading from Notch activation to Delta repression often remains elusive. Recent
work has shown that cis-interactions between Notch and Delta lead to mutual
inactivation of both proteins. Here we show that this type of cis-interaction
enables a simpler and more direct mechanism for lateral inhibition feedback
than those proposed previously. In this mechanism, Notch signaling directly
up-regulates Notch expression, thereby inactivating Delta through the mutual
inactivation of Notch and Delta proteins. This mechanism, which we term
Simplest Lateral Inhibition by Mutual Inactivation (SLIMI), can implement
patterning without requiring any additional genes or regulatory interactions.
Moreover, the key interaction of Notch expression in response to Notch
signaling has been observed in some systems. Stability analysis and simulation
of SLIMI mathematical models show that this lateral inhibition circuit is
capable of pattern formation across a broad range of parameter values. These
results provide a simple and plausible explanation for lateral inhibition
pattern formation during development.
|
[
{
"created": "Mon, 24 May 2010 10:40:38 GMT",
"version": "v1"
}
] |
2010-05-25
|
[
[
"Lakhanpal",
"Amit",
""
],
[
"Sprinzak",
"David",
""
],
[
"Elowitz",
"Michael B.",
""
]
] |
Lateral inhibition patterns mediated by the Notch-Delta signaling system occur in diverse developmental contexts. These systems are based on an intercellular feedback loop in which Notch activation leads to down-regulation of Delta. However, even in relatively well-characterized systems, the pathway leading from Notch activation to Delta repression often remains elusive. Recent work has shown that cis-interactions between Notch and Delta lead to mutual inactivation of both proteins. Here we show that this type of cis-interaction enables a simpler and more direct mechanism for lateral inhibition feedback than those proposed previously. In this mechanism, Notch signaling directly up-regulates Notch expression, thereby inactivating Delta through the mutual inactivation of Notch and Delta proteins. This mechanism, which we term Simplest Lateral Inhibition by Mutual Inactivation (SLIMI), can implement patterning without requiring any additional genes or regulatory interactions. Moreover, the key interaction of Notch expression in response to Notch signaling has been observed in some systems. Stability analysis and simulation of SLIMI mathematical models show that this lateral inhibition circuit is capable of pattern formation across a broad range of parameter values. These results provide a simple and plausible explanation for lateral inhibition pattern formation during development.
|
q-bio/0601021
|
Le Zhang
|
Thomas S. Deisboeck, Caterina Guiot, Pier Paolo Delsanto, Nicola Pugno
|
Does Cancer Growth Depend on Surface Extension?
|
11 pages, 1 figure
| null | null | null |
q-bio.TO
| null |
We argue that volumetric growth dynamics of a solid cancer depend on the
tumor system's overall surface extension. While this at first may seem evident,
to our knowledge, so far no theoretical argument has been presented explaining
this relationship explicitly. Here, we therefore develop a conceptual
framework based on the universal scaling law and then support our conjecture
through evaluation with experimental data.
|
[
{
"created": "Sat, 14 Jan 2006 16:28:51 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Deisboeck",
"Thomas S.",
""
],
[
"Guiot",
"Caterina",
""
],
[
"Delsanto",
"Pier Paolo",
""
],
[
"Pugno",
"Nicola",
""
]
] |
We argue that volumetric growth dynamics of a solid cancer depend on the tumor system's overall surface extension. While this at first may seem evident, to our knowledge, so far no theoretical argument has been presented explaining this relationship explicitly. Here, we therefore develop a conceptual framework based on the universal scaling law and then support our conjecture through evaluation with experimental data.
|
1906.04590
|
Gerardo Chowell
|
G. Chowell, A. Tariq, M. Kiskowski
|
Vaccination strategies to control Ebola epidemics in the context of
variable household inaccessibility levels
|
36 pages; 9 figures
| null | null | null |
q-bio.QM q-bio.PE
|
http://creativecommons.org/licenses/by/4.0/
|
In the context of the ongoing Ebola epidemic in DRC, active conflict and
community distrust are undermining control efforts, including vaccination
strategies. In this paper, we employed an individual-level stochastic
structured transmission model to assess the impact of vaccination strategies on
epidemic control in the context of variable levels of household
inaccessibility. We found that a ring vaccination strategy of close contacts
would not be effective for containing the epidemic in the context of
significant delays to vaccinating contacts, even for low levels of household
inaccessibility, and we evaluate the impact of a supplemental community
vaccination strategy. For lower levels of inaccessibility, the probability of epidemic
containment increases over time. For higher levels of inaccessibility, even the
combined ring and community vaccination strategies are not expected to contain
the epidemic even though they help lower incidence levels, which saves lives,
makes the epidemic easier to contain and reduces spread to other communities.
We found that ring vaccination is effective for containing an outbreak until
the level of inaccessibility exceeds approximately 10%, while a combined ring
and community vaccination strategy is effective until the level of
inaccessibility exceeds approximately 50%. Our findings underscore the need to
enhance community engagement with public health interventions.
|
[
{
"created": "Tue, 11 Jun 2019 13:42:32 GMT",
"version": "v1"
},
{
"created": "Fri, 21 Jun 2019 18:46:16 GMT",
"version": "v2"
},
{
"created": "Thu, 22 Aug 2019 23:35:33 GMT",
"version": "v3"
}
] |
2019-08-26
|
[
[
"Chowell",
"G.",
""
],
[
"Tariq",
"A.",
""
],
[
"Kiskowski",
"M.",
""
]
] |
In the context of the ongoing Ebola epidemic in DRC, active conflict and community distrust are undermining control efforts, including vaccination strategies. In this paper, we employed an individual-level stochastic structured transmission model to assess the impact of vaccination strategies on epidemic control in the context of variable levels of household inaccessibility. We found that a ring vaccination strategy of close contacts would not be effective for containing the epidemic in the context of significant delays to vaccinating contacts, even for low levels of household inaccessibility, and we evaluate the impact of a supplemental community vaccination strategy. For lower levels of inaccessibility, the probability of epidemic containment increases over time. For higher levels of inaccessibility, even the combined ring and community vaccination strategies are not expected to contain the epidemic even though they help lower incidence levels, which saves lives, makes the epidemic easier to contain and reduces spread to other communities. We found that ring vaccination is effective for containing an outbreak until the level of inaccessibility exceeds approximately 10%, while a combined ring and community vaccination strategy is effective until the level of inaccessibility exceeds approximately 50%. Our findings underscore the need to enhance community engagement with public health interventions.
|
1204.6231
|
Carl Boettiger
|
Carl Boettiger and Alan Hastings
|
Quantifying Limits to Detection of Early Warning for Critical
Transitions
|
Accepted to Journal of the Royal Society Interface. 29 pages, 8
figures
| null | null | null |
q-bio.OT physics.data-an q-bio.PE
|
http://creativecommons.org/licenses/by/3.0/
|
Catastrophic regime shifts in complex natural systems may be averted through
advanced detection. Recent work has provided a proof-of-principle that many
systems approaching a catastrophic transition may be identified through the
lens of early warning indicators such as rising variance or increased return
times. Despite widespread appreciation of the difficulties and uncertainty
involved in such forecasts, proposed methods hardly ever characterize their
expected error rates. Without the benefits of replicates, controls, or
hindsight, applications of these approaches must quantify how reliable
different indicators are in avoiding false alarms, and how sensitive they are
to missing subtle warning signs. We propose a model based approach in order to
quantify this trade-off between reliability and sensitivity and allow
comparisons between different indicators. We show these error rates can be
quite severe for common indicators even under favorable assumptions, and also
illustrate how a model-based indicator can improve this performance. We
demonstrate how the performance of an early warning indicator varies in
different data sets, and suggest that uncertainty quantification become a more
central part of early warning predictions.
|
[
{
"created": "Thu, 26 Apr 2012 19:22:31 GMT",
"version": "v1"
}
] |
2012-04-30
|
[
[
"Boettiger",
"Carl",
""
],
[
"Hastings",
"Alan",
""
]
] |
Catastrophic regime shifts in complex natural systems may be averted through advanced detection. Recent work has provided a proof-of-principle that many systems approaching a catastrophic transition may be identified through the lens of early warning indicators such as rising variance or increased return times. Despite widespread appreciation of the difficulties and uncertainty involved in such forecasts, proposed methods hardly ever characterize their expected error rates. Without the benefits of replicates, controls, or hindsight, applications of these approaches must quantify how reliable different indicators are in avoiding false alarms, and how sensitive they are to missing subtle warning signs. We propose a model based approach in order to quantify this trade-off between reliability and sensitivity and allow comparisons between different indicators. We show these error rates can be quite severe for common indicators even under favorable assumptions, and also illustrate how a model-based indicator can improve this performance. We demonstrate how the performance of an early warning indicator varies in different data sets, and suggest that uncertainty quantification become a more central part of early warning predictions.
|
1311.2789
|
Stian Soiland-Reyes
|
Kristina M. Hettne, Harish Dharuri, Jun Zhao, Katherine Wolstencroft,
Khalid Belhajjame, Stian Soiland-Reyes, Eleni Mina, Mark Thompson, Don
Cruickshank, Lourdes Verdes-Montenegro, Julian Garrido, David de Roure, Oscar
Corcho, Graham Klyne, Reinout van Schouwen, Peter A. C. 't Hoen, Sean
Bechhofer, Carole Goble, Marco Roos
|
Structuring research methods and data with the Research Object model:
genomics workflows as a case study
|
35 pages, 10 figures, 1 table. Submitted to Journal of Biomedical
Semantics on 2013-05-13, resubmitted after reviews 2013-11-09, 2014-06-27.
Accepted in principle 2014-07-29. Published: 2014-09-18
http://www.jbiomedsem.com/content/5/1/41. Research Object homepage:
http://www.researchobject.org/
| null |
10.1186/2041-1480-5-41
|
uk-ac-man-scw:212837
|
q-bio.GN cs.DL
|
http://creativecommons.org/licenses/by/3.0/
|
One of the main challenges for biomedical research lies in the
computer-assisted integrative study of large and increasingly complex
combinations of data in order to understand molecular mechanisms. The
preservation of the materials and methods of such computational experiments
with clear annotations is essential for understanding an experiment, and this
is increasingly recognized in the bioinformatics community. Our assumption is
that offering means of digital, structured aggregation and annotation of the
objects of an experiment will provide necessary meta-data for a scientist to
understand and recreate the results of an experiment. To support this we
explored a model for the semantic description of a workflow-centric Research
Object (RO), where an RO is defined as a resource that aggregates other
resources, e.g., datasets, software, spreadsheets, text, etc. We applied this
model to a case study where we analysed human metabolite variation by
workflows.
|
[
{
"created": "Tue, 12 Nov 2013 14:23:33 GMT",
"version": "v1"
},
{
"created": "Mon, 18 Aug 2014 13:28:07 GMT",
"version": "v2"
},
{
"created": "Fri, 19 Sep 2014 10:37:56 GMT",
"version": "v3"
}
] |
2014-09-22
|
[
[
"Hettne",
"Kristina M.",
""
],
[
"Dharuri",
"Harish",
""
],
[
"Zhao",
"Jun",
""
],
[
"Wolstencroft",
"Katherine",
""
],
[
"Belhajjame",
"Khalid",
""
],
[
"Soiland-Reyes",
"Stian",
""
],
[
"Mina",
"Eleni",
""
],
[
"Thompson",
"Mark",
""
],
[
"Cruickshank",
"Don",
""
],
[
"Verdes-Montenegro",
"Lourdes",
""
],
[
"Garrido",
"Julian",
""
],
[
"de Roure",
"David",
""
],
[
"Corcho",
"Oscar",
""
],
[
"Klyne",
"Graham",
""
],
[
"van Schouwen",
"Reinout",
""
],
[
"Hoen",
"Peter A. C. 't",
""
],
[
"Bechhofer",
"Sean",
""
],
[
"Goble",
"Carole",
""
],
[
"Roos",
"Marco",
""
]
] |
One of the main challenges for biomedical research lies in the computer-assisted integrative study of large and increasingly complex combinations of data in order to understand molecular mechanisms. The preservation of the materials and methods of such computational experiments with clear annotations is essential for understanding an experiment, and this is increasingly recognized in the bioinformatics community. Our assumption is that offering means of digital, structured aggregation and annotation of the objects of an experiment will provide necessary meta-data for a scientist to understand and recreate the results of an experiment. To support this we explored a model for the semantic description of a workflow-centric Research Object (RO), where an RO is defined as a resource that aggregates other resources, e.g., datasets, software, spreadsheets, text, etc. We applied this model to a case study where we analysed human metabolite variation by workflows.
|
1508.02408
|
Michael Margaliot
|
Alon Raveh and Michael Margaliot and Eduardo D. Sontag and Tamir
Tuller
|
A Model for Competition for Ribosomes in the Cell
| null | null | null | null |
q-bio.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large-scale simultaneous mRNA translation and the resulting competition for
the available ribosomes have important implications for the cell's functioning
and evolution. Developing a better understanding of the intricate correlations
between these simultaneous processes, rather than focusing on the translation
of a single isolated transcript, should help in gaining a better understanding
of mRNA translation regulation and the way elongation rates affect organismal
fitness. A model of simultaneous translation is specifically important when
dealing with highly expressed genes, as these consume more resources. In
addition, such a model can lead to more accurate predictions that are needed in
the interconnection of translational modules in synthetic biology. We develop
and analyze a general model for large-scale simultaneous mRNA translation and
competition for ribosomes. This is based on combining several ribosome flow
models (RFMs) interconnected via a pool of free ribosomes. We prove that the
compound system always converges to a steady-state and that it always entrains
or phase locks to periodically time-varying transition rates in any of the mRNA
molecules. We use this model to explore the interactions between the various
mRNA molecules and ribosomes at steady-state. We show that increasing the
length of an mRNA molecule decreases the production rate of all the mRNAs.
Increasing any of the codon translation rates in a specific mRNA molecule
yields a local effect: an increase in the translation rate of this mRNA, and
also a global effect: the translation rates in the other mRNA molecules all
increase or all decrease. These results suggest that the effect of codon
decoding rates of endogenous and heterologous mRNAs on protein production is
more complicated than previously thought.
|
[
{
"created": "Mon, 10 Aug 2015 20:11:06 GMT",
"version": "v1"
}
] |
2015-08-12
|
[
[
"Raveh",
"Alon",
""
],
[
"Margaliot",
"Michael",
""
],
[
"Sontag",
"Eduardo D.",
""
],
[
"Tuller",
"Tamir",
""
]
] |
Large-scale simultaneous mRNA translation and the resulting competition for the available ribosomes have important implications for the cell's functioning and evolution. Developing a better understanding of the intricate correlations between these simultaneous processes, rather than focusing on the translation of a single isolated transcript, should help in gaining a better understanding of mRNA translation regulation and the way elongation rates affect organismal fitness. A model of simultaneous translation is specifically important when dealing with highly expressed genes, as these consume more resources. In addition, such a model can lead to more accurate predictions that are needed in the interconnection of translational modules in synthetic biology. We develop and analyze a general model for large-scale simultaneous mRNA translation and competition for ribosomes. This is based on combining several ribosome flow models (RFMs) interconnected via a pool of free ribosomes. We prove that the compound system always converges to a steady-state and that it always entrains or phase locks to periodically time-varying transition rates in any of the mRNA molecules. We use this model to explore the interactions between the various mRNA molecules and ribosomes at steady-state. We show that increasing the length of an mRNA molecule decreases the production rate of all the mRNAs. Increasing any of the codon translation rates in a specific mRNA molecule yields a local effect: an increase in the translation rate of this mRNA, and also a global effect: the translation rates in the other mRNA molecules all increase or all decrease. These results suggest that the effect of codon decoding rates of endogenous and heterologous mRNAs on protein production is more complicated than previously thought.
|
2104.13146
|
Zheng Zhao
|
Zheng Zhao and Philip E. Bourne
|
Using the structural kinome to systematize kinase drug discovery
|
22 pages, 2 figures, 3 tables
| null | null | null |
q-bio.BM q-bio.MN
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Kinase-targeted drug design is challenging. It requires designing inhibitors
that can bind to specific kinases when all kinase catalytic domains share a
common folding scaffold that binds ATP. Thus, obtaining the desired
selectivity, given the whole human kinome, is a fundamental task during
early-stage drug discovery. This begins with deciphering the kinase-ligand
characteristics, analyzing the structure-activity relationships, and
prioritizing the desired drug molecules across the whole kinome. Currently,
there are more than 300 kinases with released PDB structures, which provides a
substantial structural basis to gain these necessary insights. Here, we review
in silico structure-based methods - notably, a function-site interaction
fingerprint approach used in exploring the complete human kinome. In silico
methods can be explored synergistically with multiple cell-based or
protein-based assay platforms such as KINOMEscan. We conclude with new drug
discovery opportunities associated with kinase signaling networks and using
machine/deep learning techniques broadly referred to as structural biomedical
data science.
|
[
{
"created": "Tue, 27 Apr 2021 12:47:33 GMT",
"version": "v1"
}
] |
2021-04-28
|
[
[
"Zhao",
"Zheng",
""
],
[
"Bourne",
"Philip E.",
""
]
] |
Kinase-targeted drug design is challenging. It requires designing inhibitors that can bind to specific kinases when all kinase catalytic domains share a common folding scaffold that binds ATP. Thus, obtaining the desired selectivity, given the whole human kinome, is a fundamental task during early-stage drug discovery. This begins with deciphering the kinase-ligand characteristics, analyzing the structure-activity relationships, and prioritizing the desired drug molecules across the whole kinome. Currently, there are more than 300 kinases with released PDB structures, which provides a substantial structural basis to gain these necessary insights. Here, we review in silico structure-based methods - notably, a function-site interaction fingerprint approach used in exploring the complete human kinome. In silico methods can be explored synergistically with multiple cell-based or protein-based assay platforms such as KINOMEscan. We conclude with new drug discovery opportunities associated with kinase signaling networks and using machine/deep learning techniques broadly referred to as structural biomedical data science.
|
q-bio/0403032
|
Tzipe Govezensky
|
Jose A Garcia, Samantha Alvarez, Alejandro Flores, Tzipe Govezensky,
Juan R. Bobadilla, Marco V. Jose
|
Statistical analysis of the distribution of amino acids in Borrelia
burgdorferi genome under different genetic codes
|
7 pages,1 figure
| null |
10.1016/j.physa.2004.04.090
| null |
q-bio.GN
| null |
The genetic code is considered to be universal. In order to test if some
statistical properties of the coding bacterial genome were due to inherent
properties of the genetic code, we compared the autocorrelation function, the
scaling properties and the maximum entropy of the distribution of distances of
amino acids in sequences obtained by translating protein-coding regions from
the genome of Borrelia burgdorferi, under different genetic codes. Overall our
results indicate that these properties are very stable to perturbations made by
altering the genetic code. We also discuss the likely evolutionary implications
of the present results.
|
[
{
"created": "Mon, 22 Mar 2004 22:52:01 GMT",
"version": "v1"
}
] |
2009-11-10
|
[
[
"Garcia",
"Jose A",
""
],
[
"Alvarez",
"Samantha",
""
],
[
"Flores",
"Alejandro",
""
],
[
"Govezensky",
"Tzipe",
""
],
[
"Bobadilla",
"Juan R.",
""
],
[
"Jose",
"Marco V.",
""
]
] |
The genetic code is considered to be universal. In order to test if some statistical properties of the coding bacterial genome were due to inherent properties of the genetic code, we compared the autocorrelation function, the scaling properties and the maximum entropy of the distribution of distances of amino acids in sequences obtained by translating protein-coding regions from the genome of Borrelia burgdorferi, under different genetic codes. Overall our results indicate that these properties are very stable to perturbations made by altering the genetic code. We also discuss the likely evolutionary implications of the present results.
|
1812.06186
|
Felix Meister
|
Felix Meister, Tiziano Passerini, Viorel Mihalef, Ahmet Tuysuzoglu,
Andreas Maier and Tommaso Mansi
|
Towards Fast Biomechanical Modeling of Soft Tissue Using Neural Networks
|
Accepted in Medical Imaging meets NeurIPS Workshop, NeurIPS 2018
| null | null | null |
q-bio.QM physics.med-ph q-bio.TO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To date, the simulation of organ deformations for applications like therapy
planning or image-guided interventions is calculated by solving the
elastodynamics equations. While efficient solvers have been proposed for fast
simulations, methods that are both real-time and accurate are still an open
challenge. An ideal, interactive solver would be able to provide physically and
numerically accurate results at high frame rate, which requires efficient force
computation and time integration. Towards this goal, we explore in this paper
for the first time the use of neural networks to directly learn the underlying
biomechanics. Given a 3D mesh of a soft tissue segmented from medical images,
we train a neural network to predict vertex-wise accelerations for a large time
step based on the current state of the system. The model is trained using the
deformation of a bar under torsion, and evaluated on different motions,
geometries, and hyperelastic material models. For predictions of ten times the
original time step we observed a mean error of 0.017mm $\pm$ 0.014 (0.032) at a
mesh size of 50mm x 50mm x 100mm. Predictions at 20dt yield an error of 2.10mm
$\pm$ 1.73 (4.37) and by further increasing the prediction time step the
maximum error rises to 38.3mm due to an artificial stiffening. In all
experiments our proposed method stayed stable, while the reference solver fails
to converge. Our experiments suggest that it is possible to directly learn the
mechanical simulation and open further investigations for the direct
application of machine learning to speed up biophysics solvers.
|
[
{
"created": "Thu, 13 Dec 2018 18:57:53 GMT",
"version": "v1"
}
] |
2018-12-18
|
[
[
"Meister",
"Felix",
""
],
[
"Passerini",
"Tiziano",
""
],
[
"Mihalef",
"Viorel",
""
],
[
"Tuysuzoglu",
"Ahmet",
""
],
[
"Maier",
"Andreas",
""
],
[
"Mansi",
"Tommaso",
""
]
] |
To date, the simulation of organ deformations for applications like therapy planning or image-guided interventions is calculated by solving the elastodynamics equations. While efficient solvers have been proposed for fast simulations, methods that are both real-time and accurate are still an open challenge. An ideal, interactive solver would be able to provide physically and numerically accurate results at high frame rate, which requires efficient force computation and time integration. Towards this goal, we explore in this paper for the first time the use of neural networks to directly learn the underlying biomechanics. Given a 3D mesh of a soft tissue segmented from medical images, we train a neural network to predict vertex-wise accelerations for a large time step based on the current state of the system. The model is trained using the deformation of a bar under torsion, and evaluated on different motions, geometries, and hyperelastic material models. For predictions of ten times the original time step we observed a mean error of 0.017mm $\pm$ 0.014 (0.032) at a mesh size of 50mm x 50mm x 100mm. Predictions at 20dt yield an error of 2.10mm $\pm$ 1.73 (4.37) and by further increasing the prediction time step the maximum error rises to 38.3mm due to an artificial stiffening. In all experiments our proposed method stayed stable, while the reference solver fails to converge. Our experiments suggest that it is possible to directly learn the mechanical simulation and open further investigations for the direct application of machine learning to speed up biophysics solvers.
|
1511.03963
|
Alexandre Ferreira Ramos
|
Guilherme N. Prata, Jos\'e Eduardo M. Hornos, Alexandre F. Ramos
|
A stochastic model for gene transcription on Drosophila melanogaster
embryos
|
27 pages, 6 figures in Phys. Rev. E 2015
|
Phys. Rev. E 93, 022403 (2016)
|
10.1103/PhysRevE.93.022403
| null |
q-bio.SC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We examine immunostaining experimental data for the formation of stripe 2
of $even-skipped$ ($eve$) transcripts on $D.$ $melanogaster$ embryos. An
estimate of the factor converting immunofluorescence intensity units into
molecular numbers is given. The analysis of the $eve$ mRNA's dynamics at the
region of the stripe 2 suggests that the promoter site of the gene has two
distinct regimes: an earlier phase when it is predominantly activated until a
critical time when it becomes mainly repressed. That suggests proposing a
stochastic binary model for gene transcription on $D.$ $melanogaster$ embryos.
Our model has two random variables: the transcripts number and the state of the
source of mRNAs given as active or repressed. We are able to reproduce
available experimental data for the average number of transcripts. An analysis
of the random fluctuations on the number of $eve$ mRNA's and their consequences
on the spatial precision of the stripe 2 is presented. We show that the
positions of the anterior/posterior borders fluctuate around their average
position by $\sim 1 \%$ of the embryo length which is similar to what is found
experimentally. The fitting of data by such a simple model suggests that it can
be useful to understand the functions of randomness during developmental
processes.
|
[
{
"created": "Thu, 12 Nov 2015 16:55:53 GMT",
"version": "v1"
}
] |
2016-02-10
|
[
[
"Prata",
"Guilherme N.",
""
],
[
"Hornos",
"José Eduardo M.",
""
],
[
"Ramos",
"Alexandre F.",
""
]
] |
We examine immunostaining experimental data for the formation of stripe 2 of $even-skipped$ ($eve$) transcripts on $D.$ $melanogaster$ embryos. An estimate of the factor converting immunofluorescence intensity units into molecular numbers is given. The analysis of the $eve$ mRNA's dynamics at the region of the stripe 2 suggests that the promoter site of the gene has two distinct regimes: an earlier phase when it is predominantly activated until a critical time when it becomes mainly repressed. That suggests proposing a stochastic binary model for gene transcription on $D.$ $melanogaster$ embryos. Our model has two random variables: the transcripts number and the state of the source of mRNAs given as active or repressed. We are able to reproduce available experimental data for the average number of transcripts. An analysis of the random fluctuations on the number of $eve$ mRNA's and their consequences on the spatial precision of the stripe 2 is presented. We show that the positions of the anterior/posterior borders fluctuate around their average position by $\sim 1 \%$ of the embryo length which is similar to what is found experimentally. The fitting of data by such a simple model suggests that it can be useful to understand the functions of randomness during developmental processes.
|
1610.04962
|
Omid Sadat Rezai
|
Omid Rezai, Pinar Boyraz Jentsch, Bryan Tripp
|
A Rich Source of Labels for Deep Network Models of the Primate Dorsal
Visual Stream
| null | null | null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep convolutional neural networks (CNNs) have structures that are loosely
related to that of the primate visual cortex. Surprisingly, when these networks
are trained for object classification, the activity of their early,
intermediate, and later layers becomes closely related to activity patterns in
corresponding parts of the primate ventral visual stream. The activity
statistics are far from identical, but perhaps remaining differences can be
minimized in order to produce artificial networks with highly brain-like
activity and performance, which would provide a rich source of insight into
primate vision. One way to align CNN activity more closely with neural activity
is to add cost functions that directly drive deep layers to approximate neural
recordings. However, suitably large datasets are particularly difficult to
obtain for deep structures, such as the primate middle temporal area (MT). To
work around this barrier, we have developed a rich empirical model of activity
in MT. The model is pixel-computable, so it can provide an arbitrarily large
(but approximate) set of labels to better guide learning in the corresponding
layers of deep networks. Our model approximates a number of MT phenomena more
closely than previous models. Furthermore, our model approximates population
statistics in detail through fourteen parameter distributions that we estimated
from the electrophysiology literature. In general, deep networks with internal
representations that closely approximate those of the brain may help to clarify
the mechanisms that produce these representations, and the roles of various
properties of these representations in performance of vision tasks. Although
our empirical model inevitably differs from real neural activity, it allows
tuning properties to be modulated independently, which may allow very detailed
exploration of the origins and functional roles of these properties.
|
[
{
"created": "Mon, 17 Oct 2016 03:16:58 GMT",
"version": "v1"
}
] |
2016-10-18
|
[
[
"Rezai",
"Omid",
""
],
[
"Jentsch",
"Pinar Boyraz",
""
],
[
"Tripp",
"Bryan",
""
]
] |
Deep convolutional neural networks (CNNs) have structures that are loosely related to that of the primate visual cortex. Surprisingly, when these networks are trained for object classification, the activity of their early, intermediate, and later layers becomes closely related to activity patterns in corresponding parts of the primate ventral visual stream. The activity statistics are far from identical, but perhaps remaining differences can be minimized in order to produce artificial networks with highly brain-like activity and performance, which would provide a rich source of insight into primate vision. One way to align CNN activity more closely with neural activity is to add cost functions that directly drive deep layers to approximate neural recordings. However, suitably large datasets are particularly difficult to obtain for deep structures, such as the primate middle temporal area (MT). To work around this barrier, we have developed a rich empirical model of activity in MT. The model is pixel-computable, so it can provide an arbitrarily large (but approximate) set of labels to better guide learning in the corresponding layers of deep networks. Our model approximates a number of MT phenomena more closely than previous models. Furthermore, our model approximates population statistics in detail through fourteen parameter distributions that we estimated from the electrophysiology literature. In general, deep networks with internal representations that closely approximate those of the brain may help to clarify the mechanisms that produce these representations, and the roles of various properties of these representations in performance of vision tasks. Although our empirical model inevitably differs from real neural activity, it allows tuning properties to be modulated independently, which may allow very detailed exploration of the origins and functional roles of these properties.
|
0905.4092
|
Pankaj Mehta
|
Pankaj Mehta, Sidhartha Goyal, Tao Long, Bonnie Bassler, Ned S.
Wingreen
|
Information processing and signal integration in bacterial quorum
sensing
|
Supporting information is in appendix
| null | null | null |
q-bio.MN q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bacteria communicate using secreted chemical signaling molecules called
autoinducers in a process known as quorum sensing. The quorum-sensing network
of the marine bacterium {\it Vibrio harveyi} employs three autoinducers, each
known to encode distinct ecological information. Yet how cells integrate and
interpret the information contained within the three autoinducer signals
remains a mystery. Here, we develop a new framework for analyzing signal
integration based on Information Theory and use it to analyze quorum sensing in
{\it V. harveyi}. We quantify how much the cells can learn about individual
autoinducers and explain the experimentally observed input-output relation of
the {\it V. harveyi} quorum-sensing circuit. Our results suggest that the need
to limit interference between input signals places strong constraints on the
architecture of bacterial signal-integration networks, and that bacteria likely
have evolved active strategies for minimizing this interference. Here we
analyze two such strategies: manipulation of autoinducer production and
feedback on receptor number ratios.
|
[
{
"created": "Mon, 25 May 2009 22:50:20 GMT",
"version": "v1"
}
] |
2009-05-27
|
[
[
"Mehta",
"Pankaj",
""
],
[
"Goyal",
"Sidhartha",
""
],
[
"Long",
"Tao",
""
],
[
"Bassler",
"Bonnie",
""
],
[
"Wingreen",
"Ned S.",
""
]
] |
Bacteria communicate using secreted chemical signaling molecules called autoinducers in a process known as quorum sensing. The quorum-sensing network of the marine bacterium {\it Vibrio harveyi} employs three autoinducers, each known to encode distinct ecological information. Yet how cells integrate and interpret the information contained within the three autoinducer signals remains a mystery. Here, we develop a new framework for analyzing signal integration based on Information Theory and use it to analyze quorum sensing in {\it V. harveyi}. We quantify how much the cells can learn about individual autoinducers and explain the experimentally observed input-output relation of the {\it V. harveyi} quorum-sensing circuit. Our results suggest that the need to limit interference between input signals places strong constraints on the architecture of bacterial signal-integration networks, and that bacteria likely have evolved active strategies for minimizing this interference. Here we analyze two such strategies: manipulation of autoinducer production and feedback on receptor number ratios.
|
2209.09971
|
Luciano Stucchi
|
Luciano Stucchi, Javier Galeano, Juan Manuel Pastor, Jos\'e Mar\'ia
Iriondo, Jos\'e A. Cuesta
|
Prevalence of mutualism in a simple model of microbial co-evolution
|
13 pages, 11 figures, 2 tables, includes Supplementary Material
|
Physical Review E 106, 054401 (2022)
|
10.1103/PhysRevE.106.054401
| null |
q-bio.PE
|
http://creativecommons.org/licenses/by/4.0/
|
Evolutionary transitions among ecological interactions are widely known,
although their detailed dynamics remain absent for most population models.
Adaptive dynamics has been used to illustrate how the parameters of population
models might shift through evolution, but within an ecological regime. Here we
use adaptive dynamics combined with a generalised logistic model of population
dynamics to show that transitions of ecological interactions might appear as a
consequence of evolution. To this purpose we introduce a two-microbial toy
model in which population parameters are determined by a bookkeeping of
resources taken from (and excreted to) the environment, as well as from the
byproducts of the other species. Despite its simplicity, this model exhibits
all kinds of potential ecological transitions, some of which resemble those
found in nature. Overall, the model shows a clear trend toward the emergence of
mutualism.
|
[
{
"created": "Tue, 20 Sep 2022 20:00:34 GMT",
"version": "v1"
},
{
"created": "Thu, 22 Sep 2022 04:41:55 GMT",
"version": "v2"
}
] |
2022-11-14
|
[
[
"Stucchi",
"Luciano",
""
],
[
"Galeano",
"Javier",
""
],
[
"Pastor",
"Juan Manuel",
""
],
[
"Iriondo",
"José María",
""
],
[
"Cuesta",
"José A.",
""
]
] |
Evolutionary transitions among ecological interactions are widely known, although their detailed dynamics remain absent for most population models. Adaptive dynamics has been used to illustrate how the parameters of population models might shift through evolution, but within an ecological regime. Here we use adaptive dynamics combined with a generalised logistic model of population dynamics to show that transitions of ecological interactions might appear as a consequence of evolution. To this purpose we introduce a two-microbial toy model in which population parameters are determined by a bookkeeping of resources taken from (and excreted to) the environment, as well as from the byproducts of the other species. Despite its simplicity, this model exhibits all kinds of potential ecological transitions, some of which resemble those found in nature. Overall, the model shows a clear trend toward the emergence of mutualism.
|
q-bio/0410028
|
Binder Hans
|
Hans Binder and Stephan Preibisch
|
Specific and non specific hybridization of oligonucleotide probes on
microarrays
| null | null |
10.1529/biophysj.104.055343
| null |
q-bio.BM
| null |
Gene expression analysis by means of microarrays is based on the sequence
specific binding of mRNA to DNA oligonucleotide probes and its measurement
using fluorescent labels. The binding of RNA fragments involving other
sequences than the intended target is problematic because it adds a "chemical
background" to the signal, which is not related to the expression degree of the
target gene. The paper presents a molecular signature of specific and non
specific hybridization with potential consequences for gene expression
analysis. We analyzed the signal intensities of perfect match (PM) and mismatch
(MM) probes of GeneChip microarrays to specify the effect of specific and non
specific hybridization. We found that these events give rise to different
relations between the PM and MM intensities as a function of the middle base of
the PMs, namely a triplet- (C>G=T>A>0) and a duplet-like (C=T>0>G=A) pattern of
the PM-MM log-intensity difference upon binding of specific and non specific
RNA fragments, respectively. The systematic behaviour of the intensity
difference can be rationalized on the level of base pairings of DNA/RNA
oligonucleotide duplexes in the middle of the probe sequence. Non-specific
binding is characterized by the reversal of the central Watson Crick (WC)
pairing for each PM/MM probe pair, whereas specific binding refers to the
combination of a WC and a self complementary (SC) pairing in PM and MM probes,
respectively. The intensity of complementary MM introduces a systematic source
of variation which decreases the precision of expression measures based on the
MM intensities.
|
[
{
"created": "Mon, 25 Oct 2004 09:15:36 GMT",
"version": "v1"
},
{
"created": "Mon, 11 Apr 2005 15:57:06 GMT",
"version": "v2"
}
] |
2009-11-10
|
[
[
"Binder",
"Hans",
""
],
[
"Preibisch",
"Stephan",
""
]
] |
Gene expression analysis by means of microarrays is based on the sequence specific binding of mRNA to DNA oligonucleotide probes and its measurement using fluorescent labels. The binding of RNA fragments involving other sequences than the intended target is problematic because it adds a "chemical background" to the signal, which is not related to the expression degree of the target gene. The paper presents a molecular signature of specific and non specific hybridization with potential consequences for gene expression analysis. We analyzed the signal intensities of perfect match (PM) and mismatch (MM) probes of GeneChip microarrays to specify the effect of specific and non specific hybridization. We found that these events give rise to different relations between the PM and MM intensities as function of the middle base of the PMs, namely a triplet- (C>G=T>A>0) and a duplet-like (C=T>0>G=A) pattern of the PM-MM log-intensity difference upon binding of specific and non specific RNA fragments, respectively. The systematic behaviour of the intensity difference can be rationalized on the level of base pairings of DNA/RNA oligonucleotide duplexes in the middle of the probe sequence. Non-specific binding is characterized by the reversal of the central Watson Crick (WC) pairing for each PM/MM probe pair, whereas specific binding refers to the combination of a WC and a self complementary (SC) pairing in PM and MM probes, respectively. The intensity of complementary MM introduces a systematic source of variation which decreases the precision of expression measures based on the MM intensities.
|
1703.05428
|
Yu Takagi
|
Yu Takagi, Yuki Sakai, Giuseppe Lisi, Noriaki Yahata, Yoshinari Abe,
Seiji Nishida, Takashi Nakamae, Jun Morimoto, Mitsuo Kawato, Jin Narumoto and
Saori C. Tanaka
|
A neural marker of obsessive-compulsive disorder from whole-brain
functional connectivity
|
47 pages, 3 figures
|
Sci Rep. 2017 Aug 8;7(1):7538
|
10.1038/s41598-017-07792-7
| null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Obsessive-compulsive disorder (OCD) is a common psychiatric disorder with a
lifetime prevalence of 2-3 percent. Recently, brain activity in the resting
state is gathering attention as a new means of exploring altered functional
connectivity in psychiatric disorders. Although previous resting-state
functional magnetic resonance imaging studies investigated neurobiological
abnormalities of patients with OCD, there are concerns that should be
addressed. One concern is the validity of the hypothesis employed. Most studies
used seed-based analysis of the fronto-striatal circuit, despite the potential
for abnormalities in other regions. A hypothesis-free study is a promising
approach in such a case, while it requires researchers to handle a dataset with
large dimensions. Another concern is the reliability of biomarkers derived from
a single dataset, which may be influenced by cohort-specific features. Here, by
employing a recently developed machine-learning algorithm to avoid these
concerns, we identified the first OCD biomarker that is generalized to an
external dataset. We also demonstrated that the functional connectivities that
contributed to the classification were widely distributed rather than locally
constrained. Our generalizable classifier has the potential not only to deepen
our understanding of the abnormal neural substrates of OCD but also to find use
in clinical applications.
|
[
{
"created": "Wed, 15 Mar 2017 23:54:37 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Sep 2017 16:53:09 GMT",
"version": "v2"
}
] |
2017-09-07
|
[
[
"Takagi",
"Yu",
""
],
[
"Sakai",
"Yuki",
""
],
[
"Lisi",
"Giuseppe",
""
],
[
"Yahata",
"Noriaki",
""
],
[
"Abe",
"Yoshinari",
""
],
[
"Nishida",
"Seiji",
""
],
[
"Nakamae",
"Takashi",
""
],
[
"Morimoto",
"Jun",
""
],
[
"Kawato",
"Mitsuo",
""
],
[
"Narumoto",
"Jin",
""
],
[
"Tanaka",
"Saori C.",
""
]
] |
Obsessive-compulsive disorder (OCD) is a common psychiatric disorder with a lifetime prevalence of 2-3 percent. Recently, brain activity in the resting state is gathering attention as a new means of exploring altered functional connectivity in psychiatric disorders. Although previous resting-state functional magnetic resonance imaging studies investigated neurobiological abnormalities of patients with OCD, there are concerns that should be addressed. One concern is the validity of the hypothesis employed. Most studies used seed-based analysis of the fronto-striatal circuit, despite the potential for abnormalities in other regions. A hypothesis-free study is a promising approach in such a case, while it requires researchers to handle a dataset with large dimensions. Another concern is the reliability of biomarkers derived from a single dataset, which may be influenced by cohort-specific features. Here, by employing a recently developed machine-learning algorithm to avoid these concerns, we identified the first OCD biomarker that is generalized to an external dataset. We also demonstrated that the functional connectivities that contributed to the classification were widely distributed rather than locally constrained. Our generalizable classifier has the potential not only to deepen our understanding of the abnormal neural substrates of OCD but also to find use in clinical applications.
|
2302.12381
|
Md Nurul Anwar
|
Md Nurul Anwar, Roslyn I. Hickson, Somya Mehra, David J. Price, James
M. McCaw, Mark B. Flegg, and Jennifer A. Flegg
|
Optimal interruption of P. vivax malaria transmission using mass drug
administration
|
39 pages, 14 figures
| null |
10.1007/s11538-023-01153-4
| null |
q-bio.PE
|
http://creativecommons.org/licenses/by/4.0/
|
\textit{Plasmodium vivax} is the most geographically widespread
malaria-causing parasite resulting in significant associated global morbidity
and mortality. One of the factors driving this widespread phenomenon is the
ability of the parasites to remain dormant in the liver. Known as hypnozoites,
they reside in the liver following an initial exposure, before activating later
to cause further infections, referred to as relapses. As around 79-96$\%$ of
infections are attributed to relapses, we expect it will be highly impactful to
apply treatment to target the hypnozoite reservoir to eliminate \textit{P.
vivax}. Treatment with a radical cure to target the hypnozoite reservoir is a
potential tool to control or eliminate \textit{P. vivax}. We have developed a
multiscale mathematical model as a system of integro-differential equations
that captures the complex dynamics of \textit{P. vivax} hypnozoites and the
effect of hypnozoite relapse on disease transmission. Here, we use our model to
study the anticipated effect of radical cure treatment administered via a mass
drug administration (MDA) program. We implement multiple rounds of MDA with a
fixed interval between rounds, starting from different steady-state disease
prevalences. We then construct an optimisation model to obtain the optimal MDA
interval. We also incorporate mosquito seasonality in our model to study its
effect on the optimal treatment regime. We find that the effect of MDA
interventions is temporary and depends on the pre-intervention disease
prevalence (and choice of model parameters) as well as the number of MDA rounds
under consideration. We find radical cure alone may not be enough to lead to
\textit{P. vivax} elimination under our mathematical model (and choice of model
parameters) since the prevalence of infection eventually returns to pre-MDA
levels.
|
[
{
"created": "Fri, 24 Feb 2023 00:59:14 GMT",
"version": "v1"
}
] |
2023-09-18
|
[
[
"Anwar",
"Md Nurul",
""
],
[
"Hickson",
"Roslyn I.",
""
],
[
"Mehra",
"Somya",
""
],
[
"Price",
"David J.",
""
],
[
"McCaw",
"James M.",
""
],
[
"Flegg",
"Mark B.",
""
],
[
"Flegg",
"Jennifer A.",
""
]
] |
\textit{Plasmodium vivax} is the most geographically widespread malaria-causing parasite resulting in significant associated global morbidity and mortality. One of the factors driving this widespread phenomenon is the ability of the parasites to remain dormant in the liver. Known as hypnozoites, they reside in the liver following an initial exposure, before activating later to cause further infections, referred to as relapses. As around 79-96$\%$ of infections are attributed to relapses, we expect it will be highly impactful to apply treatment to target the hypnozoite reservoir to eliminate \textit{P. vivax}. Treatment with a radical cure to target the hypnozoite reservoir is a potential tool to control or eliminate \textit{P. vivax}. We have developed a multiscale mathematical model as a system of integro-differential equations that captures the complex dynamics of \textit{P. vivax} hypnozoites and the effect of hypnozoite relapse on disease transmission. Here, we use our model to study the anticipated effect of radical cure treatment administered via a mass drug administration (MDA) program. We implement multiple rounds of MDA with a fixed interval between rounds, starting from different steady-state disease prevalences. We then construct an optimisation model to obtain the optimal MDA interval. We also incorporate mosquito seasonality in our model to study its effect on the optimal treatment regime. We find that the effect of MDA interventions is temporary and depends on the pre-intervention disease prevalence (and choice of model parameters) as well as the number of MDA rounds under consideration. We find radical cure alone may not be enough to lead to \textit{P. vivax} elimination under our mathematical model (and choice of model parameters) since the prevalence of infection eventually returns to pre-MDA levels.
|
2007.13028
|
Aubain Nzokem PhD
|
Aubain Nzokem and Neal Madras
|
epidemic dynamics and adaptive vaccination strategy: renewal equation
approach
| null |
Bull Math Biol 82, 122 (2020)
|
10.1007/s11538-020-00802-2
| null |
q-bio.PE math.CA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We use analytical methods to investigate the effects of a continuous
vaccination strategy on the infectious disease dynamics in a closed population and a
demographically open population. The methodology and key assumptions are based
on Breda et al (2012). We show that the cumulative force of infection for the
closed population and the endemic force of infection in the demographically
open population can be reduced significantly by combining two factors: the
vaccine effectiveness and the vaccination rate. The impact of these factors on
the force of infection can transform an endemic steady state into a
disease-free state. Keywords: Force of infection, Cumulative force of
infection, Scalar-renewal equation, Per capita death rate, Lambert function,
adaptive vaccination strategy
|
[
{
"created": "Sat, 25 Jul 2020 22:56:42 GMT",
"version": "v1"
}
] |
2020-09-24
|
[
[
"Nzokem",
"Aubain",
""
],
[
"Madras",
"Neal",
""
]
] |
We use analytical methods to investigate the effects of a continuous vaccination strategy on the infectious disease dynamics in a closed population and a demographically open population. The methodology and key assumptions are based on Breda et al (2012). We show that the cumulative force of infection for the closed population and the endemic force of infection in the demographically open population can be reduced significantly by combining two factors: the vaccine effectiveness and the vaccination rate. The impact of these factors on the force of infection can transform an endemic steady state into a disease-free state. Keywords: Force of infection, Cumulative force of infection, Scalar-renewal equation, Per capita death rate, Lambert function, adaptive vaccination strategy
|
1005.3244
|
Valmir Barbosa
|
Valmir C. Barbosa, Raul Donangelo, Sergio R. Souza
|
Early appraisal of the fixation probability in directed networks
| null |
Physical Review E 82 (2010), 046114
|
10.1103/PhysRevE.82.046114
| null |
q-bio.PE cond-mat.stat-mech
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In evolutionary dynamics, the probability that a mutation spreads through the
whole population, having arisen in a single individual, is known as the
fixation probability. In general, it is not possible to find the fixation
probability analytically given the mutant's fitness and the topological
constraints that govern the spread of the mutation, so one resorts to
simulations instead. Depending on the topology in use, a great number of
evolutionary steps may be needed in each of the simulation events, particularly
in those that end with the population containing mutants only. We introduce two
techniques to accelerate the determination of the fixation probability. The
first one skips all evolutionary steps in which the number of mutants does not
change and thereby reduces the number of steps per simulation event
considerably. This technique is computationally advantageous for some of the
so-called layered networks. The second technique, which is not restricted to
layered networks, consists of aborting any simulation event in which the number
of mutants has grown beyond a certain threshold value, and counting that event
as having led to a total spread of the mutation. For large populations, and
regardless of the network's topology, we demonstrate, both analytically and by
means of simulations, that using a threshold of about 100 mutants leads to an
estimate of the fixation probability that deviates in no significant way from
that obtained from the full-fledged simulations. We have observed speedups of
two orders of magnitude for layered networks with 10000 nodes.
|
[
{
"created": "Tue, 18 May 2010 16:20:30 GMT",
"version": "v1"
}
] |
2010-10-26
|
[
[
"Barbosa",
"Valmir C.",
""
],
[
"Donangelo",
"Raul",
""
],
[
"Souza",
"Sergio R.",
""
]
] |
In evolutionary dynamics, the probability that a mutation spreads through the whole population, having arisen in a single individual, is known as the fixation probability. In general, it is not possible to find the fixation probability analytically given the mutant's fitness and the topological constraints that govern the spread of the mutation, so one resorts to simulations instead. Depending on the topology in use, a great number of evolutionary steps may be needed in each of the simulation events, particularly in those that end with the population containing mutants only. We introduce two techniques to accelerate the determination of the fixation probability. The first one skips all evolutionary steps in which the number of mutants does not change and thereby reduces the number of steps per simulation event considerably. This technique is computationally advantageous for some of the so-called layered networks. The second technique, which is not restricted to layered networks, consists of aborting any simulation event in which the number of mutants has grown beyond a certain threshold value, and counting that event as having led to a total spread of the mutation. For large populations, and regardless of the network's topology, we demonstrate, both analytically and by means of simulations, that using a threshold of about 100 mutants leads to an estimate of the fixation probability that deviates in no significant way from that obtained from the full-fledged simulations. We have observed speedups of two orders of magnitude for layered networks with 10000 nodes.
|
0710.3784
|
Carlos Gershenson
|
Carlos Gershenson and Tom Lenaerts
|
Evolution of Complexity
|
Introduction to Special Issue
|
Artificial Life 14(3): 241-243. 2008
|
10.1162/artl.2008.14.3.14300
| null |
q-bio.PE
| null |
The evolution of complexity has been a central theme for Biology [2] and
Artificial Life research [1]. It is generally agreed that complexity has
increased in our universe, giving way to life, multi-cellularity, societies,
and systems of higher complexities. However, the mechanisms behind the
complexification and its relation to evolution are not well understood.
Moreover complexification can be used to mean different things in different
contexts. For example, complexification has been interpreted as a process of
diversification between evolving units [2] or as a scaling process related to
the idea of transitions between different levels of complexity [7].
Understanding the difference or overlap between the mechanisms involved in both
situations is mandatory to create acceptable synthetic models of the process,
as is required in Artificial Life research. (...)
|
[
{
"created": "Fri, 19 Oct 2007 20:46:09 GMT",
"version": "v1"
}
] |
2011-09-06
|
[
[
"Gershenson",
"Carlos",
""
],
[
"Lenaerts",
"Tom",
""
]
] |
The evolution of complexity has been a central theme for Biology [2] and Artificial Life research [1]. It is generally agreed that complexity has increased in our universe, giving way to life, multi-cellularity, societies, and systems of higher complexities. However, the mechanisms behind the complexification and its relation to evolution are not well understood. Moreover complexification can be used to mean different things in different contexts. For example, complexification has been interpreted as a process of diversification between evolving units [2] or as a scaling process related to the idea of transitions between different levels of complexity [7]. Understanding the difference or overlap between the mechanisms involved in both situations is mandatory to create acceptable synthetic models of the process, as is required in Artificial Life research. (...)
|
2306.03162
|
A. David Redish
|
Ugurcan Mugan, Seiichiro Amemiya, Paul S. Regier, A. David Redish
|
Navigation through the complex world -- the neurophysiology of
decision-making processes
| null | null | null | null |
q-bio.NC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Current theories suggest that adaptive decision-making necessitates the
interaction between multiple decision-making systems. The computational
definitions of different models of decision-making suggest interactions with
task demands and complexity. We review these computational theories and derive
experimental predictions that will shed light on the underlying neurobiological
mechanisms. We use a well-established multi-strategy task and novel
neurophysiological analyses from hippocampus and striatum as a case study in
the interaction between task structure and navigational complexity. This
approach reveals how task structure and navigational complexity interact with
each other to identify differences between habitual and planned action choices.
|
[
{
"created": "Mon, 5 Jun 2023 18:16:49 GMT",
"version": "v1"
},
{
"created": "Thu, 27 Jul 2023 13:38:16 GMT",
"version": "v2"
}
] |
2023-07-28
|
[
[
"Mugan",
"Ugurcan",
""
],
[
"Amemiya",
"Seiichiro",
""
],
[
"Regier",
"Paul S.",
""
],
[
"Redish",
"A. David",
""
]
] |
Current theories suggest that adaptive decision-making necessitates the interaction between multiple decision-making systems. The computational definitions of different models of decision-making suggest interactions with task demands and complexity. We review these computational theories and derive experimental predictions that will shed light on the underlying neurobiological mechanisms. We use a well-established multi-strategy task and novel neurophysiological analyses from hippocampus and striatum as a case study in the interaction between task structure and navigational complexity. This approach reveals how task structure and navigational complexity interact with each other to identify differences between habitual and planned action choices.
|
2212.06995
|
Maximilian Nguyen
|
Sinan A. Ozbay, Bjarke F. Nielsen, Maximilian M. Nguyen
|
Bifurcations in the Herd Immunity Threshold for Discrete-Time Models of
Epidemic Spread
| null | null | null | null |
q-bio.PE cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We performed a thorough sensitivity analysis of the herd immunity threshold
for discrete-time SIR compartmental models with a static network structure. We
find unexpectedly that these models violate classical intuition which holds
that the herd immunity threshold should monotonically increase with the
transmission parameter. We find the existence of bifurcations in the herd
immunity threshold in the high transmission probability regime. The extent of
these bifurcations is modulated by the graph heterogeneity, the recovery
parameter, and the network size. In the limit of large, well-mixed networks,
the behavior approaches that of difference equation models, suggesting this
behavior is a universal feature of all discrete-time SIR models. These results
suggest careful attention is needed in both selecting the assumptions on how to
model time and heterogeneity in epidemiological models and the subsequent
conclusions that can be drawn.
|
[
{
"created": "Wed, 14 Dec 2022 03:04:46 GMT",
"version": "v1"
},
{
"created": "Wed, 21 Dec 2022 00:25:35 GMT",
"version": "v2"
},
{
"created": "Fri, 24 Feb 2023 15:47:21 GMT",
"version": "v3"
}
] |
2023-02-27
|
[
[
"Ozbay",
"Sinan A.",
""
],
[
"Nielsen",
"Bjarke F.",
""
],
[
"Nguyen",
"Maximilian M.",
""
]
] |
We performed a thorough sensitivity analysis of the herd immunity threshold for discrete-time SIR compartmental models with a static network structure. We find unexpectedly that these models violate classical intuition which holds that the herd immunity threshold should monotonically increase with the transmission parameter. We find the existence of bifurcations in the herd immunity threshold in the high transmission probability regime. The extent of these bifurcations is modulated by the graph heterogeneity, the recovery parameter, and the network size. In the limit of large, well-mixed networks, the behavior approaches that of difference equation models, suggesting this behavior is a universal feature of all discrete-time SIR models. These results suggest careful attention is needed in both selecting the assumptions on how to model time and heterogeneity in epidemiological models and the subsequent conclusions that can be drawn.
|
1009.3697
|
Changbong Hyeon
|
Changbong Hyeon
|
Exploring the energy landscape of biopolymers using single molecule
force spectroscopy and molecular simulations
|
32 pages, 4 figures, Book Chapter in "Simulations in
Nanobiotechnology"
| null | null | null |
q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, single molecule force techniques have opened a new avenue to
decipher the folding landscapes of biopolymers by allowing us to watch and
manipulate the dynamics of individual proteins and nucleic acids. In single
molecule force experiments, quantitative analyses of measurements employing
sound theoretical models and molecular simulations play a central role more
than in any other field. With a brief description of basic theories for force mechanics
and molecular simulation technique using self-organized polymer (SOP) model,
this chapter will discuss various issues in single molecule force spectroscopy
(SMFS) experiments, which include pulling speed dependent unfolding pathway,
measurement of energy landscape roughness, the influence of molecular handles in
optical tweezers on measurement and molecular motion, and folding dynamics of
biopolymers under force quench condition.
|
[
{
"created": "Mon, 20 Sep 2010 05:50:06 GMT",
"version": "v1"
}
] |
2016-11-25
|
[
[
"Hyeon",
"Changbong",
""
]
] |
In recent years, single molecule force techniques have opened a new avenue to decipher the folding landscapes of biopolymers by allowing us to watch and manipulate the dynamics of individual proteins and nucleic acids. In single molecule force experiments, quantitative analyses of measurements employing sound theoretical models and molecular simulations play a central role more than in any other field. With a brief description of basic theories for force mechanics and molecular simulation technique using self-organized polymer (SOP) model, this chapter will discuss various issues in single molecule force spectroscopy (SMFS) experiments, which include pulling speed dependent unfolding pathway, measurement of energy landscape roughness, the influence of molecular handles in optical tweezers on measurement and molecular motion, and folding dynamics of biopolymers under force quench condition.
|
0904.1637
|
Timothy Saunders
|
Timothy E Saunders and Martin Howard
|
Morphogen Profiles Can Be Optimised to Buffer Against Noise
|
5 pages, 3 figures
|
Physical Review E: 80, 041902 (October 2009)
|
10.1103/PhysRevE.80.041902
| null |
q-bio.MN cond-mat.soft cond-mat.stat-mech physics.bio-ph q-bio.CB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Morphogen profiles play a vital role in biology by specifying position in
embryonic development. However, the factors that influence the shape of a
morphogen profile remain poorly understood. Since morphogens should provide
precise positional information, one significant factor is the robustness of the
profile to noise. We compare three classes of morphogen profiles (linear,
exponential, algebraic) to see which is most precise when subject to both
external embryo-to-embryo fluctuations and internal fluctuations due to
intrinsically random processes such as diffusion. We find that both the kinetic
parameters and the overall gradient shape (e.g. exponential versus algebraic)
can be optimised to generate maximally precise positional information.
|
[
{
"created": "Fri, 10 Apr 2009 13:26:56 GMT",
"version": "v1"
}
] |
2009-11-27
|
[
[
"Saunders",
"Timothy E",
""
],
[
"Howard",
"Martin",
""
]
] |
Morphogen profiles play a vital role in biology by specifying position in embryonic development. However, the factors that influence the shape of a morphogen profile remain poorly understood. Since morphogens should provide precise positional information, one significant factor is the robustness of the profile to noise. We compare three classes of morphogen profiles (linear, exponential, algebraic) to see which is most precise when subject to both external embryo-to-embryo fluctuations and internal fluctuations due to intrinsically random processes such as diffusion. We find that both the kinetic parameters and the overall gradient shape (e.g. exponential versus algebraic) can be optimised to generate maximally precise positional information.
|
2001.11589
|
Gurcan Comert
|
Nurullah Arslan, Kazim Besirli, Gurcan Comert, Omer F Beyca
|
Computational Fluid Dynamic Simulations In a Model of a Carotid
Bifurcation Under Steady Flow Conditions
|
4 Pages, 7 figures, 4th International Advanced Technologies Symposium
September 28 30, 2005, Konya, Turkey
| null | null | null |
q-bio.TO physics.flu-dyn
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Strokes are still one of the leading causes of death after heart disease
and cancer all over the world. Most strokes happen because an artery that
carries blood uphill from the heart to the head gets clogged. Most of the time,
as with heart attacks, the problem is atherosclerosis, hardening of the
arteries, calcified buildup of fatty deposits on the vessel wall. The primary
troublemaker is the carotid artery, one on each side of the neck, the main
thoroughfare for blood to the brain. Only within the last 25 years, though,
have researchers been able to put their finger on why the carotid is especially
susceptible to atherosclerosis. In this study, the fluid dynamic simulations
were done computationally in a carotid bifurcation under steady flow
conditions. In vivo geometry and boundary conditions were obtained from a
diseased patient who has stenosis on both sides of his carotid artery.
The location of critical flow fields such as low wall shear stress (WSS),
stagnation regions and separation regions were detected. Low WSS was found at
the downstream of the bifurcation.
|
[
{
"created": "Thu, 16 Jan 2020 15:00:06 GMT",
"version": "v1"
}
] |
2020-02-03
|
[
[
"Arslan",
"Nurullah",
""
],
[
"Besirli",
"Kazim",
""
],
[
"Comert",
"Gurcan",
""
],
[
"Beyca",
"Omer F",
""
]
] |
Strokes are still one of the leading causes of death after heart disease and cancer all over the world. Most strokes happen because an artery that carries blood uphill from the heart to the head gets clogged. Most of the time, as with heart attacks, the problem is atherosclerosis, hardening of the arteries, calcified buildup of fatty deposits on the vessel wall. The primary troublemaker is the carotid artery, one on each side of the neck, the main thoroughfare for blood to the brain. Only within the last 25 years, though, have researchers been able to put their finger on why the carotid is especially susceptible to atherosclerosis. In this study, the fluid dynamic simulations were done computationally in a carotid bifurcation under steady flow conditions. In vivo geometry and boundary conditions were obtained from a diseased patient who has stenosis on both sides of his carotid artery. The location of critical flow fields such as low wall shear stress (WSS), stagnation regions and separation regions were detected. Low WSS was found at the downstream of the bifurcation.
|
2311.13000
|
Reza Bozorgpour
|
Reza Bozorgpour
|
Computational Explorations in Biomedicine: Unraveling Molecular Dynamics
for Cancer, Drug Delivery, and Biomolecular Insights using LAMMPS Simulations
|
16 pages- 11 figures
| null | null | null |
q-bio.BM
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
With the rapid advancement of computational techniques, Molecular Dynamics
(MD) simulations have emerged as powerful tools in biomedical research,
enabling in-depth investigations of biological systems at the atomic level.
Among the diverse range of simulation software available, LAMMPS (Large-scale
Atomic/Molecular Massively Parallel Simulator) has gained significant
recognition for its versatility, scalability, and extensive range of
functionalities. This literature review aims to provide a comprehensive
overview of the utilization of LAMMPS in the field of biomedical applications.
This review begins by outlining the fundamental principles of MD simulations
and highlighting the unique features of LAMMPS that make it suitable for
biomedical research. Subsequently, a survey of the literature is conducted to
identify key studies that have employed LAMMPS in various biomedical contexts,
such as protein folding, drug design, biomaterials, and cellular processes. The
reviewed studies demonstrate the remarkable contributions of LAMMPS in
understanding the behavior of biological macromolecules, investigating
drug-protein interactions, elucidating the mechanical properties of
biomaterials, and studying cellular processes at the molecular level.
Additionally, this review explores the integration of LAMMPS with other
computational tools and experimental techniques, showcasing its potential for
synergistic investigations that bridge the gap between theory and experiment.
Moreover, this review discusses the challenges and limitations associated with
using LAMMPS in biomedical simulations, including the parameterization of force
fields, system size limitations, and computational efficiency. Strategies
employed by researchers to mitigate these challenges are presented, along with
potential future directions for enhancing LAMMPS capabilities in the biomedical
field.
|
[
{
"created": "Tue, 21 Nov 2023 21:21:23 GMT",
"version": "v1"
},
{
"created": "Sun, 30 Jun 2024 20:46:02 GMT",
"version": "v2"
},
{
"created": "Tue, 9 Jul 2024 05:30:42 GMT",
"version": "v3"
}
] |
2024-07-10
|
[
[
"Bozorgpour",
"Reza",
""
]
] |
With the rapid advancement of computational techniques, Molecular Dynamics (MD) simulations have emerged as powerful tools in biomedical research, enabling in-depth investigations of biological systems at the atomic level. Among the diverse range of simulation software available, LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) has gained significant recognition for its versatility, scalability, and extensive range of functionalities. This literature review aims to provide a comprehensive overview of the utilization of LAMMPS in the field of biomedical applications. This review begins by outlining the fundamental principles of MD simulations and highlighting the unique features of LAMMPS that make it suitable for biomedical research. Subsequently, a survey of the literature is conducted to identify key studies that have employed LAMMPS in various biomedical contexts, such as protein folding, drug design, biomaterials, and cellular processes. The reviewed studies demonstrate the remarkable contributions of LAMMPS in understanding the behavior of biological macromolecules, investigating drug-protein interactions, elucidating the mechanical properties of biomaterials, and studying cellular processes at the molecular level. Additionally, this review explores the integration of LAMMPS with other computational tools and experimental techniques, showcasing its potential for synergistic investigations that bridge the gap between theory and experiment. Moreover, this review discusses the challenges and limitations associated with using LAMMPS in biomedical simulations, including the parameterization of force fields, system size limitations, and computational efficiency. Strategies employed by researchers to mitigate these challenges are presented, along with potential future directions for enhancing LAMMPS capabilities in the biomedical field.
|
2110.01806
|
Teresa Head-Gordon
|
Jie Li, Oufan Zhang, Yingze Wang, Kunyang Sun, Xingyi Guan, Dorian
Bagni, Mojtaba Haghighatlari, Fiona L. Kearns, Conor Parks, Rommie E. Amaro,
Teresa Head-Gordon
|
Mining for Potent Inhibitors through Artificial Intelligence and
Physics: A Unified Methodology for Ligand Based and Structure Based Drug
Design
| null | null | null | null |
q-bio.BM physics.bio-ph physics.chem-ph physics.data-an
|
http://creativecommons.org/licenses/by/4.0/
|
The viability of a new drug molecule is a time and resource intensive task
that makes computer-aided assessments a vital approach to rapid drug discovery.
Here we develop a machine learning algorithm, iMiner, that generates novel
inhibitor molecules for target proteins by combining deep reinforcement
learning with real-time 3D molecular docking using AutoDock Vina, thereby
simultaneously creating chemical novelty while constraining molecules for shape
and molecular compatibility with target active sites. Moreover, through the use
of various types of reward functions, we can generate new molecules that are
chemically similar to a target ligand, which can be grown from known protein
bound fragments, as well as to create molecules that enforce interactions with
target residues in the protein active site. The iMiner algorithm is embedded in
a composite workflow that filters out Pan-assay interference compounds,
Lipinski rule violations, and poor synthetic accessibility, with options for
cross-validation against other docking scoring functions and automation of a
molecular dynamics simulation to measure pose stability. Because our approach
only relies on the structure of the target protein, iMiner can be easily
adapted for future development of other inhibitors or small molecule
therapeutics of any target protein.
|
[
{
"created": "Tue, 5 Oct 2021 03:45:15 GMT",
"version": "v1"
},
{
"created": "Wed, 10 Jan 2024 18:14:37 GMT",
"version": "v2"
}
] |
2024-01-11
|
[
[
"Li",
"Jie",
""
],
[
"Zhang",
"Oufan",
""
],
[
"Wang",
"Yingze",
""
],
[
"Sun",
"Kunyang",
""
],
[
"Guan",
"Xingyi",
""
],
[
"Bagni",
"Dorian",
""
],
[
"Haghighatlari",
"Mojtaba",
""
],
[
"Kearns",
"Fiona L.",
""
],
[
"Parks",
"Conor",
""
],
[
"Amaro",
"Rommie E.",
""
],
[
"Head-Gordon",
"Teresa",
""
]
] |
The viability of a new drug molecule is a time and resource intensive task that makes computer-aided assessments a vital approach to rapid drug discovery. Here we develop a machine learning algorithm, iMiner, that generates novel inhibitor molecules for target proteins by combining deep reinforcement learning with real-time 3D molecular docking using AutoDock Vina, thereby simultaneously creating chemical novelty while constraining molecules for shape and molecular compatibility with target active sites. Moreover, through the use of various types of reward functions, we can generate new molecules that are chemically similar to a target ligand, which can be grown from known protein bound fragments, as well as to create molecules that enforce interactions with target residues in the protein active site. The iMiner algorithm is embedded in a composite workflow that filters out Pan-assay interference compounds, Lipinski rule violations, and poor synthetic accessibility, with options for cross-validation against other docking scoring functions and automation of a molecular dynamics simulation to measure pose stability. Because our approach only relies on the structure of the target protein, iMiner can be easily adapted for future development of other inhibitors or small molecule therapeutics of any target protein.
|
2405.10035
|
Alessio Perinelli
|
Alessio Perinelli, Leonardo Ricci
|
A quality control analysis of the resting state hypothesis via
permutation entropy on EEG recordings
|
17 pages, 6 figures. Submitted to: Chaos
| null | null | null |
q-bio.NC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The analysis of electrophysiological recordings of the human brain in resting
state is a key experimental technique in neuroscience. Resting state is indeed
the default condition to characterize brain dynamics. Its successful
implementation relies both on the capacity of subjects to comply with the
requirement of staying awake while not performing any cognitive task, and on
the capacity of the experimenter to validate that compliance. Here we propose a
novel approach, based on permutation entropy, to provide a quality control of
the resting state condition by evaluating its stability during a recording. We
combine the calculation of permutation entropy with a method for the estimation
of its uncertainty out of a single time series, thus enabling a statistically
robust assessment of resting state stationarity. The approach is showcased on
electroencephalographic data recorded from young and elderly subjects and
considering both eyes-closed and eyes-opened resting state conditions. Besides
showing the reliability of the approach, the results showed higher instability
in elderly subjects that hint at a qualitative difference between the two age
groups with regard to the distribution of unstable activity within the brain.
The method is therefore a tool that provides insights on the issue of resting
state stability of interest for neuroscience experiments. The method can be
applied to other kinds of electrophysiological data like, for example,
magnetoencephalographic recordings. In addition, provided that suitable
hardware and software processing units are used, its implementation, which
consists here of a posteriori analysis, can be translated into a real time one.
|
[
{
"created": "Thu, 16 May 2024 12:16:03 GMT",
"version": "v1"
}
] |
2024-05-17
|
[
[
"Perinelli",
"Alessio",
""
],
[
"Ricci",
"Leonardo",
""
]
] |
The analysis of electrophysiological recordings of the human brain in resting state is a key experimental technique in neuroscience. Resting state is indeed the default condition to characterize brain dynamics. Its successful implementation relies both on the capacity of subjects to comply with the requirement of staying awake while not performing any cognitive task, and on the capacity of the experimenter to validate that compliance. Here we propose a novel approach, based on permutation entropy, to provide a quality control of the resting state condition by evaluating its stability during a recording. We combine the calculation of permutation entropy with a method for the estimation of its uncertainty out of a single time series, thus enabling a statistically robust assessment of resting state stationarity. The approach is showcased on electroencephalographic data recorded from young and elderly subjects and considering both eyes-closed and eyes-opened resting state conditions. Besides showing the reliability of the approach, the results showed higher instability in elderly subjects that hint at a qualitative difference between the two age groups with regard to the distribution of unstable activity within the brain. The method is therefore a tool that provides insights on the issue of resting state stability of interest for neuroscience experiments. The method can be applied to other kinds of electrophysiological data like, for example, magnetoencephalographic recordings. In addition, provided that suitable hardware and software processing units are used, its implementation, which consists here of a posteriori analysis, can be translated into a real time one.
|
2406.02623
|
Sabrina Toro
|
Kathleen R. Mullen (1), Imke Tammen (2), Nicolas A. Matentzoglu (3),
Marius Mather (2), Christopher J. Mungall (4), Melissa A. Haendel (5), Frank
W. Nicholas (2), Sabrina Toro (5), the Vertebrate Breed Ontology Consortium
((1) University of Colorado Anschutz Medical Campus, (2) University of
Sydney, (3) Semanticly Ltd, (4) Lawrence Berkeley National Laboratory, (5)
University of North Carolina at Chapel Hill)
|
The Vertebrate Breed Ontology: Towards Effective Breed Data
Standardization
| null | null | null | null |
q-bio.OT cs.DL
|
http://creativecommons.org/licenses/by/4.0/
|
Background: Limited universally adopted data standards in veterinary science
hinder data interoperability and therefore integration and comparison; this
ultimately impedes application of existing information-based tools to support
advancement in veterinary diagnostics, treatments, and precision medicine.
Objectives: Creation of a Vertebrate Breed Ontology (VBO) as a single,
coherent logic-based standard for documenting breed names in animal health,
production and research-related records will improve data use capabilities in
veterinary and comparative medicine.
Animals: No live animals were used in this study.
Methods: A list of breed names and related information was compiled from
relevant sources, organizations, communities, and experts using manual and
computational approaches to create VBO. Each breed is represented by a VBO term
that includes all provenance and the breed's related information as metadata.
VBO terms are classified using description logic to allow computational
applications and Artificial Intelligence-readiness.
Results: VBO is an open, community-driven ontology representing over 19,000
livestock and companion animal breeds covering 41 species. Breeds are
classified based on community and expert conventions (e.g., horse breed, cattle
breed). This classification is supported by relations to the breeds' genus and
species indicated by NCBI Taxonomy terms. Relationships between VBO terms, e.g.
relating breeds to their foundation stock, provide additional context to
support advanced data analytics. VBO term metadata includes common names and
synonyms, breed identifiers or codes, and attributed cross-references to other
databases.
Conclusion and clinical importance: Veterinary data interoperability and
computability can be enhanced by the adoption of VBO as a source of standard
breed names in databases and veterinary electronic health records.
|
[
{
"created": "Mon, 3 Jun 2024 20:06:41 GMT",
"version": "v1"
}
] |
2024-06-06
|
[
[
"Mullen",
"Kathleen R.",
""
],
[
"Tammen",
"Imke",
""
],
[
"Matentzoglu",
"Nicolas A.",
""
],
[
"Mather",
"Marius",
""
],
[
"Mungall",
"Christopher J.",
""
],
[
"Haendel",
"Melissa A.",
""
],
[
"Nicholas",
"Frank W.",
""
],
[
"Toro",
"Sabrina",
""
],
[
"Consortium",
"the Vertebrate Breed Ontology",
""
]
] |
Background: Limited universally adopted data standards in veterinary science hinder data interoperability and therefore integration and comparison; this ultimately impedes application of existing information-based tools to support advancement in veterinary diagnostics, treatments, and precision medicine. Objectives: Creation of a Vertebrate Breed Ontology (VBO) as a single, coherent logic-based standard for documenting breed names in animal health, production and research-related records will improve data use capabilities in veterinary and comparative medicine. Animals: No live animals were used in this study. Methods: A list of breed names and related information was compiled from relevant sources, organizations, communities, and experts using manual and computational approaches to create VBO. Each breed is represented by a VBO term that includes all provenance and the breed's related information as metadata. VBO terms are classified using description logic to allow computational applications and Artificial Intelligence-readiness. Results: VBO is an open, community-driven ontology representing over 19,000 livestock and companion animal breeds covering 41 species. Breeds are classified based on community and expert conventions (e.g., horse breed, cattle breed). This classification is supported by relations to the breeds' genus and species indicated by NCBI Taxonomy terms. Relationships between VBO terms, e.g. relating breeds to their foundation stock, provide additional context to support advanced data analytics. VBO term metadata includes common names and synonyms, breed identifiers or codes, and attributed cross-references to other databases. Conclusion and clinical importance: Veterinary data interoperability and computability can be enhanced by the adoption of VBO as a source of standard breed names in databases and veterinary electronic health records.
|
1605.07789
|
Mark Leake
|
Helen Miller, Adam J. M. Wollman, Mark C. Leake
|
Designing a single-molecule biophysics tool for characterizing DNA
damage for techniques that kill infectious pathogens through DNA damage
effects
| null | null | null | null |
q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Antibiotics such as the quinolones and fluoroquinolones kill bacterial
pathogens ultimately through DNA damage. They target the essential type IIA
topoisomerases in bacteria by stabilising the normally transient double strand
break state which is created to modify the supercoiling state of the DNA. Here
we discuss the development of these antibiotics and their method of action.
Existing methods for DNA damage visualisation, such as the comet assay and
immunofluorescence imaging can often only be analysed qualitatively and this
analysis is subjective. We describe a putative single-molecule fluorescence
technique for quantifying DNA damage via the total fluorescence intensity of a
DNA origami tile fully saturated with an intercalating dye, along with the
optical requirements for how to implement these into a light microscopy imaging
system capable of single-molecule millisecond timescale imaging. This system
promises significant improvements in reproducibility of the quantification of
DNA damage over traditional techniques.
|
[
{
"created": "Wed, 25 May 2016 09:17:38 GMT",
"version": "v1"
}
] |
2016-05-26
|
[
[
"Miller",
"Helen",
""
],
[
"Wollman",
"Adam J. M.",
""
],
[
"Leake",
"Mark C.",
""
]
] |
Antibiotics such as the quinolones and fluoroquinolones kill bacterial pathogens ultimately through DNA damage. They target the essential type IIA topoisomerases in bacteria by stabilising the normally transient double strand break state which is created to modify the supercoiling state of the DNA. Here we discuss the development of these antibiotics and their method of action. Existing methods for DNA damage visualisation, such as the comet assay and immunofluorescence imaging can often only be analysed qualitatively and this analysis is subjective. We describe a putative single-molecule fluorescence technique for quantifying DNA damage via the total fluorescence intensity of a DNA origami tile fully saturated with an intercalating dye, along with the optical requirements for how to implement these into a light microscopy imaging system capable of single-molecule millisecond timescale imaging. This system promises significant improvements in reproducibility of the quantification of DNA damage over traditional techniques.
|
2108.02315
|
Scott Greenhalgh
|
Alisha Kumari, Elijah Reece, Kursad Tosun, Scott Greenhalgh
|
On the origin of zombies: a modeling approach
| null | null | null | null |
q-bio.PE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
A zombie apocalypse is one pandemic that would likely be worse than anything
humanity has ever seen. However, despite the mechanisms for zombie uprisings in
pop culture, it is unknown whether zombies, from an evolutionary point of view,
can actually rise from the dead. To provide insight into this unknown, we
created a mathematical model that predicts the trajectory of human and zombie
populations during a zombie apocalypse. We parameterized our model according to
the demographics of the US, the zombie literature, and then conducted an
evolutionary invasion analysis to determine conditions that permit the
evolution of zombies. Our results indicate a zombie invasion is theoretically
possible, provided there is a sufficiently large ratio of transmission rate to
the zombie death rate. While achieving this ratio is uncommon in nature, the
existence of zombie ant fungus illustrates it is possible and thereby suggests
that a zombie apocalypse among humans could occur.
|
[
{
"created": "Wed, 4 Aug 2021 23:36:06 GMT",
"version": "v1"
}
] |
2021-08-06
|
[
[
"Kumari",
"Alisha",
""
],
[
"Reece",
"Elijah",
""
],
[
"Tosun",
"Kursad",
""
],
[
"Greenhalgh",
"Scott",
""
]
] |
A zombie apocalypse is one pandemic that would likely be worse than anything humanity has ever seen. However, despite the mechanisms for zombie uprisings in pop culture, it is unknown whether zombies, from an evolutionary point of view, can actually rise from the dead. To provide insight into this unknown, we created a mathematical model that predicts the trajectory of human and zombie populations during a zombie apocalypse. We parameterized our model according to the demographics of the US, the zombie literature, and then conducted an evolutionary invasion analysis to determine conditions that permit the evolution of zombies. Our results indicate a zombie invasion is theoretically possible, provided there is a sufficiently large ratio of transmission rate to the zombie death rate. While achieving this ratio is uncommon in nature, the existence of zombie ant fungus illustrates it is possible and thereby suggests that a zombie apocalypse among humans could occur.
|
1512.00844
|
Soumya Banerjee
|
Soumya Banerjee
|
Optimal Strategies for Virus Propagation
|
10 pages, 3 figures
| null | null | null |
q-bio.CB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper explores a number of questions regarding optimal strategies
evolved by viruses upon entry into a vertebrate host. The infected cell life
cycle consists of a non-productively infected stage in which it is producing
virions but not releasing them and of a productively infected stage in which it
is just releasing virions. The study explores why the infected cell cycle
should be so delineated, something which is akin to a classic bang-bang control
or all-or-none principle. The times spent in each of these stages represent a
viral strategy to optimize peak viral load. Increasing the time spent in the
non-productively infected phase ({\tau}1) would lead to a concomitant increase
in peak viremia. However increasing this time would also invite a more vigorous
response from Cytotoxic T-Lymphocytes (CTLs). Simultaneously, if there is a
vigorous antibody response, then we might expect {\tau}1 to be high, in order
that the virus builds up its population and conversely if there is a weak
antibody response, {\tau}1 might be small. These tradeoffs are explored using a
mathematical model of virus propagation using Ordinary Differential Equations
(ODEs). The study raises questions about whether common viruses have actually
settled into an optimum, the role for reliability and whether experimental
infections of hosts with non-endemic strains could help elicit answers about
viral progression.
|
[
{
"created": "Wed, 2 Dec 2015 10:25:33 GMT",
"version": "v1"
},
{
"created": "Sat, 6 Feb 2016 23:05:55 GMT",
"version": "v2"
}
] |
2016-02-09
|
[
[
"Banerjee",
"Soumya",
""
]
] |
This paper explores a number of questions regarding optimal strategies evolved by viruses upon entry into a vertebrate host. The infected cell life cycle consists of a non-productively infected stage in which it is producing virions but not releasing them and of a productively infected stage in which it is just releasing virions. The study explores why the infected cell cycle should be so delineated, something which is akin to a classic bang-bang control or all-or-none principle. The times spent in each of these stages represent a viral strategy to optimize peak viral load. Increasing the time spent in the non-productively infected phase ({\tau}1) would lead to a concomitant increase in peak viremia. However increasing this time would also invite a more vigorous response from Cytotoxic T-Lymphocytes (CTLs). Simultaneously, if there is a vigorous antibody response, then we might expect {\tau}1 to be high, in order that the virus builds up its population and conversely if there is a weak antibody response, {\tau}1 might be small. These tradeoffs are explored using a mathematical model of virus propagation using Ordinary Differential Equations (ODEs). The study raises questions about whether common viruses have actually settled into an optimum, the role for reliability and whether experimental infections of hosts with non-endemic strains could help elicit answers about viral progression.
|
2311.03421
|
Arnau Marin-Llobet
|
Arnau Marin-Llobet and Arnau Manasanch and Maria V. Sanchez-Vives
|
Hopfield-Enhanced Deep Neural Networks for Artifact-Resilient Brain
State Decoding
| null | null | null | null |
q-bio.NC cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The study of brain states, ranging from highly synchronous to asynchronous
neuronal patterns like the sleep-wake cycle, is fundamental for assessing the
brain's spatiotemporal dynamics and their close connection to behavior.
However, the development of new techniques to accurately identify them still
remains a challenge, as these are often compromised by the presence of noise,
artifacts, and suboptimal recording quality. In this study, we propose a
two-stage computational framework combining Hopfield Networks for artifact data
preprocessing with Convolutional Neural Networks (CNNs) for classification of
brain states in rat neural recordings under different levels of anesthesia. To
evaluate the robustness of our framework, we deliberately introduced noise
artifacts into the neural recordings. We evaluated our hybrid Hopfield-CNN
pipeline by benchmarking it against two comparative models: a standalone CNN
handling the same noisy inputs, and another CNN trained and tested on
artifact-free data. Performance across various levels of data compression and
noise intensities showed that our framework can effectively mitigate artifacts,
allowing the model to reach parity with the clean-data CNN at lower noise
levels. Although this study mainly benefits small-scale experiments, the
findings highlight the necessity for advanced deep learning and Hopfield
Network models to improve scalability and robustness in diverse real-world
settings.
|
[
{
"created": "Mon, 6 Nov 2023 15:08:13 GMT",
"version": "v1"
},
{
"created": "Thu, 9 Nov 2023 17:39:47 GMT",
"version": "v2"
},
{
"created": "Fri, 10 Nov 2023 16:52:26 GMT",
"version": "v3"
}
] |
2023-11-13
|
[
[
"Marin-Llobet",
"Arnau",
""
],
[
"Manasanch",
"Arnau",
""
],
[
"Sanchez-Vives",
"Maria V.",
""
]
] |
The study of brain states, ranging from highly synchronous to asynchronous neuronal patterns like the sleep-wake cycle, is fundamental for assessing the brain's spatiotemporal dynamics and their close connection to behavior. However, the development of new techniques to accurately identify them still remains a challenge, as these are often compromised by the presence of noise, artifacts, and suboptimal recording quality. In this study, we propose a two-stage computational framework combining Hopfield Networks for artifact data preprocessing with Convolutional Neural Networks (CNNs) for classification of brain states in rat neural recordings under different levels of anesthesia. To evaluate the robustness of our framework, we deliberately introduced noise artifacts into the neural recordings. We evaluated our hybrid Hopfield-CNN pipeline by benchmarking it against two comparative models: a standalone CNN handling the same noisy inputs, and another CNN trained and tested on artifact-free data. Performance across various levels of data compression and noise intensities showed that our framework can effectively mitigate artifacts, allowing the model to reach parity with the clean-data CNN at lower noise levels. Although this study mainly benefits small-scale experiments, the findings highlight the necessity for advanced deep learning and Hopfield Network models to improve scalability and robustness in diverse real-world settings.
|
2310.01743
|
Rebekah Rogers
|
James E. Titus-McQuillan, Brandon A. Turner, Rebekah L. Rogers
|
Sex-specific ultraviolet radiation tolerance across Drosophila
|
18 pages text. 5 figures. 4 tables
| null | null | null |
q-bio.PE q-bio.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The genetic basis of phenotypic differences between species is among the most
longstanding questions in evolutionary biology. How new genes form and how
selection acts to produce differences across species are fundamental to
understanding how species persist and evolve in an ever-changing environment.
Adaptation and genetic innovation arise in the genome by a variety of sources.
Functional genomics requires both intrinsic genetic discoveries, as well as
empirical testing to observe adaptation between lineages. Here we explore two
species of Drosophila on the island of Sao Tome and mainland Africa, D.
santomea and D. yakuba. These two species both inhabit the island, but occupy
differing species distributions based on elevation, with D. yakuba also having
populations on mainland Africa. Intrinsic evidence shows genes between species
may have a role in adaptation to higher UV tolerance with DNA repair mechanisms
(PARP) and resistance to humoral stress lethal effects (Victoria). We conducted
empirical assays between island D. santomea, D. yakuba, and mainland D. yakuba.
Flies were shocked with UVB radiation (@ 302 nm) at 1650-1990 mW/cm2 for 30
minutes on a transilluminator apparatus. Custom 5-wall acrylic enclosures were
constructed for viewing and containment of flies. All assays were filmed.
Island groups showed significant differences in fall-time under UV stress and
in recovery time after the UV stress test across regions and sexes. This
study shows evidence that mainland flies are less resistant to UV radiation
than their island counterparts. Further work exploring the genetic basis for UV
tolerance will be conducted from empirical assays. Understanding the mechanisms
and processes that promote adaptation and testing extrinsic traits within the
context of the genome is crucially important to understand evolutionary
machinery.
|
[
{
"created": "Tue, 3 Oct 2023 02:10:08 GMT",
"version": "v1"
}
] |
2023-10-04
|
[
[
"Titus-McQuillan",
"James E.",
""
],
[
"Turner",
"Brandon A.",
""
],
[
"Rogers",
"Rebekah L.",
""
]
] |
The genetic basis of phenotypic differences between species is among the most longstanding questions in evolutionary biology. How new genes form and how selection acts to produce differences across species are fundamental to understanding how species persist and evolve in an ever-changing environment. Adaptation and genetic innovation arise in the genome by a variety of sources. Functional genomics requires both intrinsic genetic discoveries, as well as empirical testing to observe adaptation between lineages. Here we explore two species of Drosophila on the island of Sao Tome and mainland Africa, D. santomea and D. yakuba. These two species both inhabit the island, but occupy differing species distributions based on elevation, with D. yakuba also having populations on mainland Africa. Intrinsic evidence shows genes between species may have a role in adaptation to higher UV tolerance with DNA repair mechanisms (PARP) and resistance to humoral stress lethal effects (Victoria). We conducted empirical assays between island D. santomea, D. yakuba, and mainland D. yakuba. Flies were shocked with UVB radiation (@ 302 nm) at 1650-1990 mW/cm2 for 30 minutes on a transilluminator apparatus. Custom 5-wall acrylic enclosures were constructed for viewing and containment of flies. All assays were filmed. Island groups showed significant differences in fall-time under UV stress and in recovery time after the UV stress test across regions and sexes. This study shows evidence that mainland flies are less resistant to UV radiation than their island counterparts. Further work exploring the genetic basis for UV tolerance will be conducted from empirical assays. Understanding the mechanisms and processes that promote adaptation and testing extrinsic traits within the context of the genome is crucially important to understand evolutionary machinery.
|
2303.14448
|
Trevor McCourt
|
Trevor McCourt, Ila R. Fiete, Isaac L. Chuang
|
Noisy dynamical systems evolve error correcting codes and modularity
|
7 pages 4 figures
| null | null | null |
q-bio.PE nlin.AO q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Noise is a ubiquitous feature of the physical world. As a result, the first
prerequisite of life is fault tolerance: maintaining integrity of state despite
external bombardment. Recent experimental advances have revealed that
biological systems achieve fault tolerance by implementing mathematically
intricate error-correcting codes and by organizing in a modular fashion that
physically separates functionally distinct subsystems. These elaborate
structures represent a vanishing volume in the massive genetic configuration
space. How is it possible that the primitive process of evolution, by which all
biological systems evolved, achieved such unusual results? In this work,
through experiments in Boolean networks, we show that the simultaneous presence
of error correction and modularity in biological systems is no coincidence.
Rather, it is a typical co-occurrence in noisy dynamic systems undergoing
evolution. From this, we deduce the principle of error correction enhanced
evolvability: systems possessing error-correcting codes are more effectively
improved by evolution than those without.
|
[
{
"created": "Sat, 25 Mar 2023 11:54:18 GMT",
"version": "v1"
},
{
"created": "Wed, 12 Apr 2023 22:16:35 GMT",
"version": "v2"
}
] |
2023-04-14
|
[
[
"McCourt",
"Trevor",
""
],
[
"Fiete",
"Ila R.",
""
],
[
"Chuang",
"Isaac L.",
""
]
] |
Noise is a ubiquitous feature of the physical world. As a result, the first prerequisite of life is fault tolerance: maintaining integrity of state despite external bombardment. Recent experimental advances have revealed that biological systems achieve fault tolerance by implementing mathematically intricate error-correcting codes and by organizing in a modular fashion that physically separates functionally distinct subsystems. These elaborate structures represent a vanishing volume in the massive genetic configuration space. How is it possible that the primitive process of evolution, by which all biological systems evolved, achieved such unusual results? In this work, through experiments in Boolean networks, we show that the simultaneous presence of error correction and modularity in biological systems is no coincidence. Rather, it is a typical co-occurrence in noisy dynamic systems undergoing evolution. From this, we deduce the principle of error correction enhanced evolvability: systems possessing error-correcting codes are more effectively improved by evolution than those without.
|
1512.03094
|
Pedro Jeferson Miranda
|
Pedro Jeferson Miranda, Sandro Ely de Souza Pinto, Murilo da Silva
Baptista and Giuliano Gadioli La Guardia
|
Theoretical knock-outs on biological networks
| null | null | null | null |
q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work we formalize a method to compute the degree of importance of
biological agents that participate in the dynamics of a biological phenomenon
built upon a complex network. We call this new procedure theoretical knock-out
(KO). To devise this method, we take two approaches: algebraic and
algorithmic. In both cases we compute a vector on an asymptotic state, called
the flux vector. The flux is given by a random walk on a directed graph that
represents a biological phenomenon. This vector gives us information about the
relative flux of walkers on a vertex, which represents a biological agent.
With two vectors of this kind, we can calculate the relative mean error
between them by averaging over their coefficients. This quantity allows us to
assess the degree of importance of each vertex of a complex network that
evolves in time and has an experimental background. We find that this
procedure can be applied to any sort of biological phenomenon in which we know
the role and interrelationships of its agents. These results also allow
experimental biologists to predict the order of importance of biological
agents on a mounted complex network.
|
[
{
"created": "Tue, 1 Dec 2015 04:01:01 GMT",
"version": "v1"
}
] |
2015-12-11
|
[
[
"Miranda",
"Pedro Jeferson",
""
],
[
"Pinto",
"Sandro Ely de Souza",
""
],
[
"Baptista",
"Murilo da Silva",
""
],
[
"La Guardia",
"Giuliano Gadioli",
""
]
] |
In this work we formalize a method to compute the degree of importance of biological agents that participate in the dynamics of a biological phenomenon built upon a complex network. We call this new procedure theoretical knock-out (KO). To devise this method, we take two approaches: algebraic and algorithmic. In both cases we compute a vector on an asymptotic state, called the flux vector. The flux is given by a random walk on a directed graph that represents a biological phenomenon. This vector gives us information about the relative flux of walkers on a vertex, which represents a biological agent. With two vectors of this kind, we can calculate the relative mean error between them by averaging over their coefficients. This quantity allows us to assess the degree of importance of each vertex of a complex network that evolves in time and has an experimental background. We find that this procedure can be applied to any sort of biological phenomenon in which we know the role and interrelationships of its agents. These results also allow experimental biologists to predict the order of importance of biological agents on a mounted complex network.
|
2203.11402
|
Eleanor Dunlop Mrs
|
Eleanor Dunlop, Jette Jakobsen, Marie Bagge Jensen, Jayashree Arcot,
Liang Qiao, Judy Cunningham and Lucinda J Black
|
Vitamin K content of cheese, yoghurt and meat products in Australia
|
23 pages, 2 tables
| null | null | null |
q-bio.OT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vitamin K is vital for normal blood coagulation, and may influence bone,
neurological and vascular health. Data on the vitamin K content of Australian
foods are limited, preventing estimation of vitamin K intakes in the Australian
population. We measured phylloquinone (PK) and menaquinone (MK) -4 to -10 in
cheese, yoghurt and meat products (48 composite samples from 288 primary
samples) by liquid chromatography with electrospray ionisation-tandem mass
spectrometry. At least one K vitamer was found in every sample. The greatest
mean concentrations of PK, MK-4 and MK-9 were found in lamb liver, chicken leg
meat and Cheddar cheese, respectively. Cheddar cheese and cream cheese
contained MK-5. MK-8 was found in Cheddar cheese only. As the K vitamer profile
and concentrations appear to vary considerably by geographical location,
Australia needs a vitamin K food composition dataset that is representative of
foods consumed in Australia.
|
[
{
"created": "Tue, 22 Mar 2022 00:50:11 GMT",
"version": "v1"
}
] |
2022-03-23
|
[
[
"Dunlop",
"Eleanor",
""
],
[
"Jakobsen",
"Jette",
""
],
[
"Jensen",
"Marie Bagge",
""
],
[
"Arcot",
"Jayashree",
""
],
[
"Qiao",
"Liang",
""
],
[
"Cunningham",
"Judy",
""
],
[
"Black",
"Lucinda J",
""
]
] |
Vitamin K is vital for normal blood coagulation, and may influence bone, neurological and vascular health. Data on the vitamin K content of Australian foods are limited, preventing estimation of vitamin K intakes in the Australian population. We measured phylloquinone (PK) and menaquinone (MK) -4 to -10 in cheese, yoghurt and meat products (48 composite samples from 288 primary samples) by liquid chromatography with electrospray ionisation-tandem mass spectrometry. At least one K vitamer was found in every sample. The greatest mean concentrations of PK, MK-4 and MK-9 were found in lamb liver, chicken leg meat and Cheddar cheese, respectively. Cheddar cheese and cream cheese contained MK-5. MK-8 was found in Cheddar cheese only. As the K vitamer profile and concentrations appear to vary considerably by geographical location, Australia needs a vitamin K food composition dataset that is representative of foods consumed in Australia.
|
2309.00694
|
William Cuello
|
William S. Cuello, Marcio Gameiro, Juan A. Bonachela, and Konstantin
Mischaikow
|
Inferring Long-term Dynamics of Ecological Communities Using
Combinatorics
|
25 pages, 9 figures
| null | null | null |
q-bio.PE math.CO math.DS q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In an increasingly changing world, predicting the fate of species across the
globe has become a major concern. Understanding how the population dynamics of
various species and communities will unfold requires predictive tools that
experimental data alone can not capture. Here, we introduce our combinatorial
framework, Widespread Ecological Networks and their Dynamical Signatures
(WENDyS) which, using data on the relative strengths of interactions and growth
rates within a community of species, predicts all possible long-term outcomes of
the community. To this end, WENDyS partitions the multidimensional parameter
space (formed by the strengths of interactions and growth rates) into a finite
number of regions, each corresponding to a unique set of coarse population
dynamics. Thus, WENDyS ultimately creates a library of all possible outcomes
for the community. On the one hand, our framework avoids the typical
``parameter sweeps'' that have become ubiquitous across other forms of
mathematical modeling, which can be computationally expensive for ecologically
realistic models and examples. On the other hand, WENDyS opens the opportunity
for interdisciplinary teams to use standard experimental data (i.e., strengths
of interactions and growth rates) to filter down the possible end states of a
community. To demonstrate the latter, here we present a case study from the
Indonesian Coral Reef. We analyze how different interactions between anemone
and anemonefish species lead to alternative stable states for the coral reef
community, and how competition can increase the chance of exclusion for one or
more species. WENDyS, thus, can be used to anticipate ecological outcomes and
test the effectiveness of management (e.g., conservation) strategies.
|
[
{
"created": "Fri, 1 Sep 2023 18:27:54 GMT",
"version": "v1"
}
] |
2023-09-06
|
[
[
"Cuello",
"William S.",
""
],
[
"Gameiro",
"Marcio",
""
],
[
"Bonachela",
"Juan A.",
""
],
[
"Mischaikow",
"Konstantin",
""
]
] |
In an increasingly changing world, predicting the fate of species across the globe has become a major concern. Understanding how the population dynamics of various species and communities will unfold requires predictive tools that experimental data alone can not capture. Here, we introduce our combinatorial framework, Widespread Ecological Networks and their Dynamical Signatures (WENDyS) which, using data on the relative strengths of interactions and growth rates within a community of species predicts all possible long-term outcomes of the community. To this end, WENDyS partitions the multidimensional parameter space (formed by the strengths of interactions and growth rates) into a finite number of regions, each corresponding to a unique set of coarse population dynamics. Thus, WENDyS ultimately creates a library of all possible outcomes for the community. On the one hand, our framework avoids the typical ``parameter sweeps'' that have become ubiquitous across other forms of mathematical modeling, which can be computationally expensive for ecologically realistic models and examples. On the other hand, WENDyS opens the opportunity for interdisciplinary teams to use standard experimental data (i.e., strengths of interactions and growth rates) to filter down the possible end states of a community. To demonstrate the latter, here we present a case study from the Indonesian Coral Reef. We analyze how different interactions between anemone and anemonefish species lead to alternative stable states for the coral reef community, and how competition can increase the chance of exclusion for one or more species. WENDyS, thus, can be used to anticipate ecological outcomes and test the effectiveness of management (e.g., conservation) strategies.
|
1503.07518
|
Sadie Ryan
|
Sadie J. Ryan, Tal Ben-Horin, Leah R. Johnson
|
Malaria control and senescence: the importance of accounting for the
pace and shape of aging in wild mosquitoes
|
35 + 214 pages, 4 figures, 1 table, 1 supplemental figure, 1
supplemental table, +1 appendix [Ecological Archives]
|
Ecosphere 6(9):170 (2015)
|
10.1890/ES15-00094.1
| null |
q-bio.PE
|
http://creativecommons.org/licenses/by/4.0/
|
The assumption that vector mortality remains constant with age is used widely
to assess malaria transmission risk and predict the public health consequences
of vector control strategies. However, laboratory studies commonly demonstrate
clear evidence of senescence, or a decrease in physiological function and
increase in vector mortality rate with age. We developed methods to integrate
available field data to understand mortality in wild Anopheles gambiae, the
most important vector of malaria in sub-Saharan Africa. We found evidence for an
increase in rates of mortality with age, a component of senescence. As
expected, we also found that overall mortality is far greater in wild cohorts
than commonly observed under protected laboratory conditions. The magnitude of
senescence increases with An. gambiae lifespan, implying that wild mosquitoes
die long before cohorts can exhibit strong senescence. We reviewed available
published mortality studies of Anopheles spp. to confirm this fundamental
prediction of aging in wild populations. Senescence becomes most apparent in
long-living mosquito cohorts, and cohorts with low extrinsic mortality, such as
those raised under protected laboratory conditions, suffer a relatively high
proportion of senescent deaths. Imprecision in estimates of vector mortality
and changes in mortality with age will severely bias models of vector borne
disease transmission risk, such as malaria, and the sensitivity of transmission
to bias increases as the extrinsic incubation period of the parasite decreases.
While we focus here on malaria, we caution that future models for
anti-vectorial interventions must therefore incorporate both realistic
mortality rates and age-dependent changes in vector mortality.
|
[
{
"created": "Wed, 25 Mar 2015 17:44:54 GMT",
"version": "v1"
},
{
"created": "Wed, 27 May 2015 20:03:28 GMT",
"version": "v2"
},
{
"created": "Wed, 30 Sep 2015 01:16:55 GMT",
"version": "v3"
}
] |
2015-10-01
|
[
[
"Ryan",
"Sadie J.",
""
],
[
"Ben-Horin",
"Tal",
""
],
[
"Johnson",
"Leah R.",
""
]
] |
The assumption that vector mortality remains constant with age is used widely to assess malaria transmission risk and predict the public health consequences of vector control strategies. However, laboratory studies commonly demonstrate clear evidence of senescence, or a decrease in physiological function and increase in vector mortality rate with age. We developed methods to integrate available field data to understand mortality in wild Anopheles gambiae, the most important vector of malaria in sub-Saharan Africa. We found evidence for an increase in rates of mortality with age, a component of senescence. As expected, we also found that overall mortality is far greater in wild cohorts than commonly observed under protected laboratory conditions. The magnitude of senescence increases with An. gambiae lifespan, implying that wild mosquitoes die long before cohorts can exhibit strong senescence. We reviewed available published mortality studies of Anopheles spp. to confirm this fundamental prediction of aging in wild populations. Senescence becomes most apparent in long-living mosquito cohorts, and cohorts with low extrinsic mortality, such as those raised under protected laboratory conditions, suffer a relatively high proportion of senescent deaths. Imprecision in estimates of vector mortality and changes in mortality with age will severely bias models of vector borne disease transmission risk, such as malaria, and the sensitivity of transmission to bias increases as the extrinsic incubation period of the parasite decreases. While we focus here on malaria, we caution that future models for anti-vectorial interventions must therefore incorporate both realistic mortality rates and age-dependent changes in vector mortality.
|
2202.07290
|
Pierre Orhan
|
Pierre Orhan, Yves Boubenec, Jean-R\'emi King
|
Don't stop the training: continuously-updating self-supervised
algorithms best account for auditory responses in the cortex
| null | null | null | null |
q-bio.NC cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Over the last decade, numerous studies have shown that deep neural networks
exhibit sensory representations similar to those of the mammalian brain, in
that their activations linearly map onto cortical responses to the same sensory
inputs. However, it remains unknown whether these artificial networks also
learn like the brain. To address this issue, we analyze the brain responses of
two ferret auditory cortices recorded with functional UltraSound imaging (fUS),
while the animals were presented with 320 10\,s sounds. We compare these brain
responses to the activations of Wav2vec 2.0, a self-supervised neural network
pretrained with 960\,h of speech, and input with the same 320 sounds.
Critically, we evaluate Wav2vec 2.0 under two distinct modes: (i) "Pretrained",
where the same model is used for all sounds, and (ii) "Continuous Update",
where the weights of the pretrained model are modified with back-propagation
after every sound, presented in the same order as the ferrets. Our results show
that the Continuous-Update mode leads Wav2Vec 2.0 to generate activations that
are more similar to the brain than a Pretrained Wav2Vec 2.0 or than other
control models using different training modes. These results suggest that the
trial-by-trial modifications of self-supervised algorithms induced by
back-propagation align with the corresponding fluctuations of cortical
responses to sounds. Our finding thus provides empirical evidence of a common
learning mechanism between self-supervised models and the mammalian cortex
during sound processing.
|
[
{
"created": "Tue, 15 Feb 2022 10:12:56 GMT",
"version": "v1"
}
] |
2022-02-16
|
[
[
"Orhan",
"Pierre",
""
],
[
"Boubenec",
"Yves",
""
],
[
"King",
"Jean-Rémi",
""
]
] |
Over the last decade, numerous studies have shown that deep neural networks exhibit sensory representations similar to those of the mammalian brain, in that their activations linearly map onto cortical responses to the same sensory inputs. However, it remains unknown whether these artificial networks also learn like the brain. To address this issue, we analyze the brain responses of two ferret auditory cortices recorded with functional UltraSound imaging (fUS), while the animals were presented with 320 10\,s sounds. We compare these brain responses to the activations of Wav2vec 2.0, a self-supervised neural network pretrained with 960\,h of speech, and input with the same 320 sounds. Critically, we evaluate Wav2vec 2.0 under two distinct modes: (i) "Pretrained", where the same model is used for all sounds, and (ii) "Continuous Update", where the weights of the pretrained model are modified with back-propagation after every sound, presented in the same order as the ferrets. Our results show that the Continuous-Update mode leads Wav2Vec 2.0 to generate activations that are more similar to the brain than a Pretrained Wav2Vec 2.0 or than other control models using different training modes. These results suggest that the trial-by-trial modifications of self-supervised algorithms induced by back-propagation aligns with the corresponding fluctuations of cortical responses to sounds. Our finding thus provides empirical evidence of a common learning mechanism between self-supervised models and the mammalian cortex during sound processing.
|
1204.3126
|
Mustafa Barasa
|
Barasa Mustafa, Gicheru Muita MichaeL, Kagasi Ambogo Esther, Ozwara
Suba Hastings
|
Characterisation of placental malaria in olive baboons (Papio anubis)
infected with Plasmodium knowlesi H strain
|
Five pages, four figures: This research was supported by the research
capability strengthening World Health Organisation (WHO Grant Number: A
50075) for malaria research in Africa under the Multilateral Initiative on
Malaria / Special Programme for Research and Training in Tropical Diseases
(WHO-MIM/TDR). The Institute of Primate Research in Nairobi (Kenya) provided
the research facilities
|
Int. J. Integ. Biol: Year 2012 Volume 9, Issue No. 2: 54 - 58
| null | null |
q-bio.CB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pregnant women have increased susceptibility to malaria infection. In these
women, malaria parasites are frequently found sequestered in the placental
intervillous spaces, a condition referred to as placental malaria (PM).
Placental malaria threatens the health of the mother and the child's life by
causing still births and reduction in gestational age. An estimated 24 million
pregnant women in Sub-Saharan Africa are at risk. Mechanisms responsible for
increased susceptibility in pregnant women are not fully understood. Pregnancy
malaria studies have been limited by the lack of a suitable animal model. This
research aimed to develop a baboon (Papio anubis) model for studying PM. The
pregnancies of three adult female baboons were synchronized and their
gestational levels confirmed by ultrasonography. On the 150th day of gestation
the pregnant baboons were infected with Plasmodium knowlesi H strain parasites
together with four nulligravid control baboons. Parasitaemia was monitored from
two days post inoculation until the 159th day of gestation when caesarean
section was done on one baboon in order to obtain the placenta. Two baboons
aborted their conceptus. Smears prepared from placental blood demonstrated the
presence of Plasmodium knowlesi parasites in all the three sampled placentas.
These new findings show that P. knowlesi sequesters in the baboon placenta. In
addition, this study has characterized haemoglobin, eosinophil, Immunoglobulin
G and Immunoglobulin M profiles in this model. Thus a non human primate
(baboon) model for studying PM has been established. The established baboon -
P. knowlesi model for studying human placental/pregnancy malaria now offers an
opportunity for circumventing the obstacles experienced during human studies
like having inadequate tissue for analysis, inaccurate estimation of
gestational age, moral, ethical and financial limitations.
|
[
{
"created": "Sat, 14 Apr 2012 00:41:42 GMT",
"version": "v1"
},
{
"created": "Tue, 17 Apr 2012 08:10:37 GMT",
"version": "v2"
}
] |
2012-04-18
|
[
[
"Mustafa",
"Barasa",
""
],
[
"MichaeL",
"Gicheru Muita",
""
],
[
"Esther",
"Kagasi Ambogo",
""
],
[
"Hastings",
"Ozwara Suba",
""
]
] |
Pregnant women have increased susceptibility to malaria infection. In these women, malaria parasites are frequently found sequestered in the placental intervillous spaces, a condition referred to as placental malaria (PM). Placental malaria threatens the health of the mother and the child's life by causing still births and reduction in gestational age. An estimated 24 million pregnant women in Sub-Saharan Africa are at risk. Mechanisms responsible for increased susceptibility in pregnant women are not fully understood. Pregnancy malaria studies have been limited by the lack of a suitable animal model. This research aimed to develop a baboon (Papio anubis) model for studying PM. The pregnancies of three adult female baboons were synchronized and their gestational levels confirmed by ultrasonography. On the 150th day of gestation the pregnant baboons were infected with Plasmodium knowlesi H strain parasites together with four nulligravid control baboons. Parasitaemia was monitored from two days post inoculation until the 159th day of gestation when caesarean section was done on one baboon in order to obtain the placenta. Two baboons aborted their conceptus. Smears prepared from placental blood demonstrated the presence of Plasmodium knowlesi parasites in all the three sampled placentas. These new findings show that P. knowlesi sequesters in the baboon placenta. In addition, this study has characterized haemoglobin, eosinophil, Immunoglobulin G and Immunoglobulin M profiles in this model. Thus a non human primate (baboon) model for studying PM has been established. The established baboon - P. knowlesi model for studying human placental/pregnancy malaria now offers an opportunity for circumventing the obstacles experienced during human studies like having inadequate tissue for analysis, inaccurate estimation of gestational age, moral, ethical and financial limitations.
|
1703.01818
|
Daniele De Martino
|
Daniele De Martino, Anna MC Andersson, Tobias Bergmiller, C\u{a}lin C
Guet, Ga\v{s}per Tka\v{c}ik
|
Statistical mechanics for metabolic networks during steady-state growth
|
12 pages, 4 figures
| null |
10.1038/s41467-018-05417-9
| null |
q-bio.MN cond-mat.stat-mech physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Which properties of metabolic networks can be derived solely from
stoichiometric information about the network's constituent reactions?
Predictive results have been obtained by Flux Balance Analysis (FBA), by
postulating that cells set metabolic fluxes within the allowed stoichiometry so
as to maximize their growth. Here, we generalize this framework to single cell
level using maximum entropy models from statistical physics. We define and
compute, for the core metabolism of Escherichia coli, a joint distribution over
all fluxes that yields the experimentally observed growth rate. This solution,
containing FBA as a limiting case, provides a better match to the measured
fluxes in the wild type and several mutants. We find that E. coli metabolism is
close to, but not at, the optimality assumed by FBA. Moreover, our model makes
a wide range of predictions: (i) on flux variability, its regulation, and flux
correlations across individual cells; (ii) on the relative importance of
stoichiometric constraints vs. growth rate optimization; (iii) on quantitative
scaling relations for single-cell growth rate distributions. We validate these
scaling predictions using data from individual bacterial cells grown in a
microfluidic device at different sub-inhibitory antibiotic concentrations.
Under mild dynamical assumptions, fluctuation-response relations further
predict the autocorrelation timescale in growth data and growth rate adaptation
times following an environmental perturbation.
|
[
{
"created": "Mon, 6 Mar 2017 11:34:45 GMT",
"version": "v1"
}
] |
2018-09-05
|
[
[
"De Martino",
"Daniele",
""
],
[
"Andersson",
"Anna MC",
""
],
[
"Bergmiller",
"Tobias",
""
],
[
"Guet",
"Călin C",
""
],
[
"Tkačik",
"Gašper",
""
]
] |
Which properties of metabolic networks can be derived solely from stoichiometric information about the network's constituent reactions? Predictive results have been obtained by Flux Balance Analysis (FBA), by postulating that cells set metabolic fluxes within the allowed stoichiometry so as to maximize their growth. Here, we generalize this framework to single cell level using maximum entropy models from statistical physics. We define and compute, for the core metabolism of Escherichia coli, a joint distribution over all fluxes that yields the experimentally observed growth rate. This solution, containing FBA as a limiting case, provides a better match to the measured fluxes in the wild type and several mutants. We find that E. coli metabolism is close to, but not at, the optimality assumed by FBA. Moreover, our model makes a wide range of predictions: (i) on flux variability, its regulation, and flux correlations across individual cells; (ii) on the relative importance of stoichiometric constraints vs. growth rate optimization; (iii) on quantitative scaling relations for single-cell growth rate distributions. We validate these scaling predictions using data from individual bacterial cells grown in a microfluidic device at different sub-inhibitory antibiotic concentrations. Under mild dynamical assumptions, fluctuation-response relations further predict the autocorrelation timescale in growth data and growth rate adaptation times following an environmental perturbation.
|
2312.11338
|
Matthias Schott
|
Lucas Heger, Kerem Akdogan, Matthias Schott
|
On the Impact of School Closures on COVID-19 Transmission in Germany
using an agent-based Simulation
|
5 pages, 3 figures
| null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The effect of school closures on the spread of COVID-19 has been discussed
among experts and the general public since those measures were taken only a
few months after the start of the pandemic in 2020. Within this study, the
JuneGermany framework is used to quantify the impact of school closures in the
German state of Rhineland-Palatinate using an agent-based simulation approach.
It was found that the simulation predicts a reduction in the number of
infections, hospitalizations and deaths by a factor of 2.5, compared to
scenarios where no school closures are enforced, during the second wave
between October 2020 and February 2021.
|
[
{
"created": "Mon, 18 Dec 2023 16:41:34 GMT",
"version": "v1"
}
] |
2023-12-19
|
[
[
"Heger",
"Lucas",
""
],
[
"Akdogan",
"Kerem",
""
],
[
"Schott",
"Matthias",
""
]
] |
The effect of school closures on the spread of COVID-19 has been discussed among experts and the general public since those measures were taken only a few months after the start of the pandemic in 2020. Within this study, the JuneGermany framework is used to quantify the impact of school closures in the German state of Rhineland-Palatinate using an agent-based simulation approach. It was found that the simulation predicts a reduction in the number of infections, hospitalizations and deaths by a factor of 2.5, compared to scenarios where no school closures are enforced, during the second wave between October 2020 and February 2021.
|
1804.06507
|
Michael Plank
|
Michael J Plank
|
How should fishing mortality be distributed under balanced harvesting?
| null |
Fisheries Research, Volume 207, November 2018, Pages 171-174
|
10.1016/j.fishres.2018.06.003
| null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Zhou and Smith (2017) investigate different multi-species harvesting
scenarios using a simple Holling-Tanner model. Among these scenarios are two
methods for implementing balanced harvesting, where fishing is distributed
across trophic levels in accordance with their productivity. This note examines
the effects of a different quantitative implementation of balanced harvesting,
where the fishing mortality rate is proportional to the total production rate
of each trophic level. The results show that setting fishing mortality rate to
be proportional to total production rate, rather than to productivity per unit
biomass, better preserves trophic structure and provides a crucial safeguard
for rare and threatened ecological groups. This is a key ingredient of balanced
harvesting if it is to meet its objective of preserving biodiversity.
|
[
{
"created": "Tue, 17 Apr 2018 23:55:29 GMT",
"version": "v1"
},
{
"created": "Tue, 22 Oct 2019 02:42:24 GMT",
"version": "v2"
}
] |
2019-10-23
|
[
[
"Plank",
"Michael J",
""
]
] |
Zhou and Smith (2017) investigate different multi-species harvesting scenarios using a simple Holling-Tanner model. Among these scenarios are two methods for implementing balanced harvesting, where fishing is distributed across trophic levels in accordance with their productivity. This note examines the effects of a different quantitative implementation of balanced harvesting, where the fishing mortality rate is proportional to the total production rate of each trophic level. The results show that setting fishing mortality rate to be proportional to total production rate, rather than to productivity per unit biomass, better preserves trophic structure and provides a crucial safeguard for rare and threatened ecological groups. This is a key ingredient of balanced harvesting if it is to meet its objective of preserving biodiversity.
|
1308.2278
|
Simon Childs
|
S. J. Childs
|
A Model of Teneral Dehydration in Glossina
|
28 pages, 9 figures, 3 tables
|
Acta Tropica, 131: 79-91, 2014
| null | null |
q-bio.OT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The results of a long-established investigation into teneral transpiration
are used as a rudimentary data set. These data are not complete in that all are
at 25 $^\circ\mathrm{C}$ and the temperature-dependence cannot, therefore, be
resolved. An allowance is, nonetheless, made for the outstanding
temperature-dependent data. The data are generalised to all humidities, levels
of activity and, in theory, temperatures, by invoking the property of
multiplicative separability. In this way a formulation, which is a very simple,
first order, ordinary differential equation, is devised. The model is extended
to include a variety of Glossina species by resorting to their relative,
resting water loss rates in dry air. The calculated, total water loss is
converted to the relevant humidity, at 24 $^\circ\mathrm{C}$, that which
produced an equivalent water loss in the pupa, in order to exploit an adaptation
of an established survival relationship. The resulting computational model
calculates total, teneral water loss, consequent mortality and adult
recruitment. Surprisingly, the postulated race against time, to feed, applies
more to the mesophilic and xerophilic species, in that increasing order. So
much so that it is reasonable to conclude that, should Glossina brevipalpis
survive the pupal phase, it will almost certainly survive to locate a host,
without there being any significant prospect of death from dehydration. With
the conclusion of this work comes the revelation that the classification of
species as hygrophilic, mesophilic and xerophilic is largely true only in so
much as their third and fourth instars are and, possibly, the hours shortly
before eclosion.
|
[
{
"created": "Sat, 10 Aug 2013 05:14:49 GMT",
"version": "v1"
},
{
"created": "Fri, 13 Jun 2014 12:53:01 GMT",
"version": "v2"
},
{
"created": "Wed, 28 Oct 2015 09:04:17 GMT",
"version": "v3"
}
] |
2015-10-29
|
[
[
"Childs",
"S. J.",
""
]
] |
The results of a long-established investigation into teneral transpiration are used as a rudimentary data set. These data are not complete in that all are at 25 $^\circ\mathrm{C}$ and the temperature-dependence cannot, therefore, be resolved. An allowance is, nonetheless, made for the outstanding temperature-dependent data. The data are generalised to all humidities, levels of activity and, in theory, temperatures, by invoking the property of multiplicative separability. In this way a formulation, which is a very simple, first order, ordinary differential equation, is devised. The model is extended to include a variety of Glossina species by resorting to their relative, resting water loss rates in dry air. The calculated, total water loss is converted to the relevant humidity, at 24 $^\circ\mathrm{C}$, that which produced an equivalent water loss in the pupa, in order to exploit an adaptation of an established survival relationship. The resulting computational model calculates total, teneral water loss, consequent mortality and adult recruitment. Surprisingly, the postulated race against time, to feed, applies more to the mesophilic and xerophilic species, in that increasing order. So much so that it is reasonable to conclude that, should Glossina brevipalpis survive the pupal phase, it will almost certainly survive to locate a host, without there being any significant prospect of death from dehydration. With the conclusion of this work comes the revelation that the classification of species as hygrophilic, mesophilic and xerophilic is largely true only in so much as their third and fourth instars are and, possibly, the hours shortly before eclosion.
|
q-bio/0508021
|
Eli Eisenberg
|
Shai Carmi, Erez Y. Levanon, Shlomo Havlin, Eli Eisenberg
|
Connectivity and expression in protein networks: Proteins in a complex
are uniformly expressed
|
revised version; accepted for publication in PRE
|
Phys. Rev. E 73, 031909 (2006)
|
10.1103/PhysRevE.73.031909
| null |
q-bio.MN cond-mat.other q-bio.OT
| null |
We explore the interplay between the protein-protein interactions network and
the expression of the interacting proteins. It is shown that interacting
proteins are expressed in significantly more similar cellular concentrations.
This is largely due to interacting pairs which are part of protein complexes.
We solve a generic model of complex formation and show explicitly that
complexes form most efficiently when their members have roughly the same
concentrations. Therefore, the observed similarity in interacting protein
concentrations could be attributed to optimization for efficiency of complex
formation.
|
[
{
"created": "Wed, 17 Aug 2005 14:35:43 GMT",
"version": "v1"
},
{
"created": "Thu, 26 Jan 2006 17:17:40 GMT",
"version": "v2"
}
] |
2007-05-23
|
[
[
"Carmi",
"Shai",
""
],
[
"Levanon",
"Erez Y.",
""
],
[
"Havlin",
"Shlomo",
""
],
[
"Eisenberg",
"Eli",
""
]
] |
We explore the interplay between the protein-protein interactions network and the expression of the interacting proteins. It is shown that interacting proteins are expressed in significantly more similar cellular concentrations. This is largely due to interacting pairs which are part of protein complexes. We solve a generic model of complex formation and show explicitly that complexes form most efficiently when their members have roughly the same concentrations. Therefore, the observed similarity in interacting protein concentrations could be attributed to optimization for efficiency of complex formation.
|
1508.02980
|
Susan Khor
|
Susan Khor
|
Comparing local search paths with global search paths on protein residue
networks: allosteric communication
|
32 pages
|
J Complex Netw (2017) 5 (3): 409-432
|
10.1093/comnet/cnw020
| null |
q-bio.MN q-bio.BM
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Although proteins have been recognized as small-world networks and their
small-world network properties of clustering and short paths have been
exploited computationally to produce biologically relevant information, they
have not been truly explored as such, i.e. as navigable small-world networks in
the original spirit of Milgram's work. This research seeks to fill this gap by
exploring local search on a network representation of proteins and to probe the
source of navigability in proteins. Previously, we confirmed that proteins are
navigable small-world networks and observed that local search paths exhibit
different characteristics from global search paths. In this paper, we
investigate the biological relevance of the differences in path characteristics
on a type III receptor tyrosine kinase (KIT). A chief difference that works in
favour of local search paths as intra-protein communication pathways is their
weaker proclivity, compared to global search paths, for traversing long-range
edges. Long-range edges tend to be less stable and their inclusion tends to
decrease the communication propensity of a path. The source of protein
navigability is traced to clustering provided by short-range edges. The
majority of a protein's short-range edges reside within structures deemed
important for long-range energy transport and modulation of allosteric
communication in proteins. Therefore, the disruption of intra-protein
communication as a result of the destruction of these structures via random
rewiring is expected. A local search perspective leads us to this expected
conclusion while a global search perspective does not. These findings initiate
the compilation of a list of path properties that are characteristic of
intra-protein pathways and could suggest fresh avenues for evolving and
regulating navigable (small-world) networks.
|
[
{
"created": "Wed, 12 Aug 2015 16:29:54 GMT",
"version": "v1"
},
{
"created": "Sun, 25 Oct 2015 02:46:43 GMT",
"version": "v2"
},
{
"created": "Tue, 19 Jan 2016 18:52:41 GMT",
"version": "v3"
},
{
"created": "Thu, 12 May 2016 21:52:09 GMT",
"version": "v4"
},
{
"created": "Fri, 29 Jul 2016 16:47:48 GMT",
"version": "v5"
}
] |
2017-06-20
|
[
[
"Khor",
"Susan",
""
]
] |
Although proteins have been recognized as small-world networks and their small-world network properties of clustering and short paths have been exploited computationally to produce biologically relevant information, they have not been truly explored as such, i.e. as navigable small-world networks in the original spirit of Milgram's work. This research seeks to fill this gap by exploring local search on a network representation of proteins and to probe the source of navigability in proteins. Previously, we confirmed that proteins are navigable small-world networks and observed that local search paths exhibit different characteristics from global search paths. In this paper, we investigate the biological relevance of the differences in path characteristics on a type III receptor tyrosine kinase (KIT). A chief difference that works in favour of local search paths as intra-protein communication pathways is their weaker proclivity, compared to global search paths, for traversing long-range edges. Long-range edges tend to be less stable and their inclusion tends to decrease the communication propensity of a path. The source of protein navigability is traced to clustering provided by short-range edges. The majority of a protein's short-range edges reside within structures deemed important for long-range energy transport and modulation of allosteric communication in proteins. Therefore, the disruption of intra-protein communication as a result of the destruction of these structures via random rewiring is expected. A local search perspective leads us to this expected conclusion while a global search perspective does not. These findings initiate the compilation of a list of path properties that are characteristic of intra-protein pathways and could suggest fresh avenues for evolving and regulating navigable (small-world) networks.
|
1012.3607
|
Jose Vilar
|
Jose M. G. Vilar
|
Accurate prediction of gene expression by integration of DNA sequence
statistics with detailed modeling of transcription regulation
|
15 pages, 5 figures
|
Biophys. J. 99, 2408-2413 (2010)
|
10.1016/j.bpj.2010.08.006
| null |
q-bio.MN cond-mat.stat-mech cs.CE physics.bio-ph q-bio.SC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Gene regulation involves a hierarchy of events that extend from specific
protein-DNA interactions to the combinatorial assembly of nucleoprotein
complexes. The effects of DNA sequence on these processes have typically been
studied based either on its quantitative connection with single-domain binding
free energies or on empirical rules that combine different DNA motifs to
predict gene expression trends on a genomic scale. The middle-point approach
that quantitatively bridges these two extremes, however, remains largely
unexplored. Here, we provide an integrated approach to accurately predict gene
expression from statistical sequence information in combination with detailed
biophysical modeling of transcription regulation by multidomain binding on
multiple DNA sites. For the regulation of the prototypical lac operon, this
approach predicts within 0.3-fold accuracy transcriptional activity over a
10,000-fold range from DNA sequence statistics for different intracellular
conditions.
|
[
{
"created": "Thu, 16 Dec 2010 14:06:26 GMT",
"version": "v1"
}
] |
2015-05-20
|
[
[
"Vilar",
"Jose M. G.",
""
]
] |
Gene regulation involves a hierarchy of events that extend from specific protein-DNA interactions to the combinatorial assembly of nucleoprotein complexes. The effects of DNA sequence on these processes have typically been studied based either on its quantitative connection with single-domain binding free energies or on empirical rules that combine different DNA motifs to predict gene expression trends on a genomic scale. The middle-point approach that quantitatively bridges these two extremes, however, remains largely unexplored. Here, we provide an integrated approach to accurately predict gene expression from statistical sequence information in combination with detailed biophysical modeling of transcription regulation by multidomain binding on multiple DNA sites. For the regulation of the prototypical lac operon, this approach predicts within 0.3-fold accuracy transcriptional activity over a 10,000-fold range from DNA sequence statistics for different intracellular conditions.
|
1505.06898
|
Mareike Fischer
|
Christopher Bryant and Mareike Fischer and Simone Linz and Charles
Semple
|
On the Quirks of Maximum Parsimony and Likelihood on Phylogenetic
Networks
|
28 pages, 5 figures
| null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Maximum parsimony is one of the most frequently-discussed tree reconstruction
methods in phylogenetic estimation. However, in recent years it has become more
and more apparent that phylogenetic trees are often not sufficient to describe
evolution accurately. For instance, processes like hybridization or lateral
gene transfer that are commonplace in many groups of organisms and result in
mosaic patterns of relationships cannot be represented by a single phylogenetic
tree. This is why phylogenetic networks, which can display such events, are
becoming of more and more interest in phylogenetic research. It is therefore
necessary to extend concepts like maximum parsimony from phylogenetic trees to
networks. Several suggestions for possible extensions can be found in recent
literature, for instance the softwired and the hardwired parsimony concepts. In
this paper, we analyze the so-called big parsimony problem under these two
concepts, i.e. we investigate maximum parsimonious networks and analyze their
properties. In particular, we show that finding a softwired maximum parsimony
network is possible in polynomial time. We also show that the set of maximum
parsimony networks for the hardwired definition always contains at least one
phylogenetic tree. Lastly, we investigate some parallels of parsimony to
different likelihood concepts on phylogenetic networks.
|
[
{
"created": "Tue, 26 May 2015 11:03:32 GMT",
"version": "v1"
},
{
"created": "Mon, 1 Jun 2015 20:45:38 GMT",
"version": "v2"
},
{
"created": "Tue, 10 Nov 2015 14:50:41 GMT",
"version": "v3"
},
{
"created": "Thu, 15 Sep 2016 19:33:40 GMT",
"version": "v4"
},
{
"created": "Mon, 24 Oct 2016 18:41:47 GMT",
"version": "v5"
}
] |
2016-10-25
|
[
[
"Bryant",
"Christopher",
""
],
[
"Fischer",
"Mareike",
""
],
[
"Linz",
"Simone",
""
],
[
"Semple",
"Charles",
""
]
] |
Maximum parsimony is one of the most frequently-discussed tree reconstruction methods in phylogenetic estimation. However, in recent years it has become more and more apparent that phylogenetic trees are often not sufficient to describe evolution accurately. For instance, processes like hybridization or lateral gene transfer that are commonplace in many groups of organisms and result in mosaic patterns of relationships cannot be represented by a single phylogenetic tree. This is why phylogenetic networks, which can display such events, are becoming of more and more interest in phylogenetic research. It is therefore necessary to extend concepts like maximum parsimony from phylogenetic trees to networks. Several suggestions for possible extensions can be found in recent literature, for instance the softwired and the hardwired parsimony concepts. In this paper, we analyze the so-called big parsimony problem under these two concepts, i.e. we investigate maximum parsimonious networks and analyze their properties. In particular, we show that finding a softwired maximum parsimony network is possible in polynomial time. We also show that the set of maximum parsimony networks for the hardwired definition always contains at least one phylogenetic tree. Lastly, we investigate some parallels of parsimony to different likelihood concepts on phylogenetic networks.
|
0909.2676
|
Serguey Mayburov
|
S.Mayburov
|
Coherent and Noncoherent Photonic Communications in Biological Systems
|
9 pages, talk given at PIERS-2009 conference, Moscow, august 2009, to
appear in Proceedings
| null | null | null |
q-bio.OT physics.bio-ph q-bio.SC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The possible mechanisms of communication between distant bio-systems by
means of optical and UV photons are studied. It is argued that their main
production mechanism is owed to the biochemical reactions occurring during
cell division. In the proposed model the bio-systems perform such
communications by radiating the photons in the form of short periodic bursts,
which were observed experimentally for fish and frog eggs [1]. For
experimentally measured photon rates the communication algorithm is supposedly
similar to the exchange of binary encoded data in a computer network via
optical channels.
|
[
{
"created": "Mon, 14 Sep 2009 10:10:13 GMT",
"version": "v1"
}
] |
2009-11-17
|
[
[
"Mayburov",
"S.",
""
]
] |
The possible mechanisms of communication between distant bio-systems by means of optical and UV photons are studied. It is argued that their main production mechanism is owed to the biochemical reactions occurring during cell division. In the proposed model the bio-systems perform such communications by radiating the photons in the form of short periodic bursts, which were observed experimentally for fish and frog eggs [1]. For experimentally measured photon rates the communication algorithm is supposedly similar to the exchange of binary encoded data in a computer network via optical channels.
|
1607.01706
|
Shi Gu
|
Shi Gu, Richard F. Betzel, Matthew Cieslak, Philip R. Delio, Scott T.
Grafton, Fabio Pasqualetti, Danielle S. Bassett
|
Optimal Trajectories of Brain State Transitions
| null | null | null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The complexity of neural dynamics stems in part from the complexity of the
underlying anatomy. Yet how the organization of white matter architecture
constrains how the brain transitions from one cognitive state to another
remains unknown. Here we address this question from a computational perspective
by defining a brain state as a pattern of activity across brain regions.
Drawing on recent advances in network control theory, we model the underlying
mechanisms of brain state transitions as elicited by the collective control of
region sets. Specifically, we examine how the brain moves from a specified
initial state (characterized by high activity in the default mode) to a
specified target state (characterized by high activity in primary sensorimotor
cortex) in finite time. Across all state transitions, we observe that the
supramarginal gyrus and the inferior parietal lobule consistently acted as
efficient, low energy control hubs, consistent with their strong anatomical
connections to key input areas of sensorimotor cortex. Importantly, both these
and other regions in the fronto-parietal, cingulo-opercular, and attention
systems are poised to affect a broad array of state transitions that cannot
easily be classified by traditional notions of control common in the
engineering literature. This theoretical versatility comes with a vulnerability
to injury. In patients with mild traumatic brain injury, we observe a loss of
specificity in putative control processes, suggesting greater susceptibility to
damage-induced noise in neurophysiological activity. These results offer
fundamentally new insights into the mechanisms driving brain state transitions
in healthy cognition and their alteration following injury.
|
[
{
"created": "Wed, 6 Jul 2016 16:57:46 GMT",
"version": "v1"
},
{
"created": "Mon, 9 Jan 2017 02:33:35 GMT",
"version": "v2"
},
{
"created": "Tue, 10 Jan 2017 04:18:23 GMT",
"version": "v3"
}
] |
2017-01-11
|
[
[
"Gu",
"Shi",
""
],
[
"Betzel",
"Richard F.",
""
],
[
"Cieslak",
"Matthew",
""
],
[
"Delio",
"Philip R.",
""
],
[
"Grafton",
"Scott T.",
""
],
[
"Pasqualetti",
"Fabio",
""
],
[
"Bassett",
"Danielle S.",
""
]
] |
The complexity of neural dynamics stems in part from the complexity of the underlying anatomy. Yet how the organization of white matter architecture constrains how the brain transitions from one cognitive state to another remains unknown. Here we address this question from a computational perspective by defining a brain state as a pattern of activity across brain regions. Drawing on recent advances in network control theory, we model the underlying mechanisms of brain state transitions as elicited by the collective control of region sets. Specifically, we examine how the brain moves from a specified initial state (characterized by high activity in the default mode) to a specified target state (characterized by high activity in primary sensorimotor cortex) in finite time. Across all state transitions, we observe that the supramarginal gyrus and the inferior parietal lobule consistently acted as efficient, low energy control hubs, consistent with their strong anatomical connections to key input areas of sensorimotor cortex. Importantly, both these and other regions in the fronto-parietal, cingulo-opercular, and attention systems are poised to affect a broad array of state transitions that cannot easily be classified by traditional notions of control common in the engineering literature. This theoretical versatility comes with a vulnerability to injury. In patients with mild traumatic brain injury, we observe a loss of specificity in putative control processes, suggesting greater susceptibility to damage-induced noise in neurophysiological activity. These results offer fundamentally new insights into the mechanisms driving brain state transitions in healthy cognition and their alteration following injury.
|
2407.19349
|
Tengyao Tu
|
Tengyao Tu, Wei Zeng, Kun Zhao, Zhenyu Zhang
|
Predicting T-Cell Receptor Specificity
| null | null | null | null |
q-bio.QM cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Researching the specificity of TCR contributes to the development of
immunotherapy and provides new opportunities and strategies for personalized
cancer immunotherapy. Therefore, we established a TCR generative specificity
detection framework consisting of an antigen selector and a TCR classifier
based on the Random Forest algorithm, aiming to efficiently screen out TCRs and
target antigens and achieve TCR specificity prediction. Furthermore, we used
the k-fold validation method to compare the performance of our model with
ordinary deep learning methods. The result proves that adding a classifier to
the model based on the random forest algorithm is very effective, and our model
generally outperforms ordinary deep learning methods. Moreover, we put forward
feasible optimization suggestions for the shortcomings and challenges of our
model found during model implementation.
|
[
{
"created": "Sat, 27 Jul 2024 23:21:07 GMT",
"version": "v1"
}
] |
2024-07-30
|
[
[
"Tu",
"Tengyao",
""
],
[
"Zeng",
"Wei",
""
],
[
"Zhao",
"Kun",
""
],
[
"Zhang",
"Zhenyu",
""
]
] |
Researching the specificity of TCR contributes to the development of immunotherapy and provides new opportunities and strategies for personalized cancer immunotherapy. Therefore, we established a TCR generative specificity detection framework consisting of an antigen selector and a TCR classifier based on the Random Forest algorithm, aiming to efficiently screen out TCRs and target antigens and achieve TCR specificity prediction. Furthermore, we used the k-fold validation method to compare the performance of our model with ordinary deep learning methods. The result proves that adding a classifier to the model based on the random forest algorithm is very effective, and our model generally outperforms ordinary deep learning methods. Moreover, we put forward feasible optimization suggestions for the shortcomings and challenges of our model found during model implementation.
|
1610.01507
|
Areejit Samal
|
R.P. Sreejith, J\"urgen Jost, Emil Saucan, Areejit Samal
|
Systematic evaluation of a new combinatorial curvature for complex
networks
|
29 pages, 14 figures, 1 table. Accepted for publication in Chaos,
Solitons and Fractals
|
Chaos, Solitons & Fractals, 101:50-67 (2017)
|
10.1016/j.chaos.2017.05.021
| null |
q-bio.MN physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We have recently introduced Forman's discretization of Ricci curvature to the
realm of complex networks. Forman curvature is an edge-based measure whose
mathematical definition elegantly encapsulates the weights of nodes and edges
in a complex network. In this contribution, we perform a comparative analysis
of Forman curvature with other edge-based measures such as edge betweenness,
embeddedness and dispersion in diverse model and real networks. We find that
Forman curvature in comparison to embeddedness or dispersion is a better
indicator of the importance of an edge for the large-scale connectivity of
complex networks. Based on the definition of the Forman curvature of edges,
there are two natural ways to define the Forman curvature of nodes in a
network. In this contribution, we also examine these two possible definitions
of Forman curvature of nodes in diverse model and real networks. Based on our
empirical analysis, we find that in practice the unnormalized definition of the
Forman curvature of nodes with the choice of combinatorial node weights is a
better indicator of the importance of nodes in complex networks.
|
[
{
"created": "Wed, 5 Oct 2016 16:27:00 GMT",
"version": "v1"
},
{
"created": "Mon, 15 May 2017 21:55:11 GMT",
"version": "v2"
}
] |
2017-06-01
|
[
[
"Sreejith",
"R. P.",
""
],
[
"Jost",
"Jürgen",
""
],
[
"Saucan",
"Emil",
""
],
[
"Samal",
"Areejit",
""
]
] |
We have recently introduced Forman's discretization of Ricci curvature to the realm of complex networks. Forman curvature is an edge-based measure whose mathematical definition elegantly encapsulates the weights of nodes and edges in a complex network. In this contribution, we perform a comparative analysis of Forman curvature with other edge-based measures such as edge betweenness, embeddedness and dispersion in diverse model and real networks. We find that Forman curvature in comparison to embeddedness or dispersion is a better indicator of the importance of an edge for the large-scale connectivity of complex networks. Based on the definition of the Forman curvature of edges, there are two natural ways to define the Forman curvature of nodes in a network. In this contribution, we also examine these two possible definitions of Forman curvature of nodes in diverse model and real networks. Based on our empirical analysis, we find that in practice the unnormalized definition of the Forman curvature of nodes with the choice of combinatorial node weights is a better indicator of the importance of nodes in complex networks.
|
1111.6493
|
Taiki Takahashi
|
Taiki Takahashi (1), Hidemi Oono (2), Takeshi Inoue (3), Shuken Boku
(3), Yuki Kako (3), Yuji Kitaichi (3), Ichiro Kusumi (3), Takuya Masui (3),
Shin Nakagawa (3), Katsuji Suzuki (3), Teruaki Tanaka (3), Tsukasa Koyama
(3), and Mark H. B. Radford (4) ((1) Direct all correspondence to Taiki
Takahashi, Unit of Cognitive and Behavioral Sciences Department of Life
Sciences, School of Arts and Sciences, The University of Tokyo, Komaba,
(taikitakahashi@gmail.com), (2) Department of Behavioral Science, Hokkaido
University, Sapporo, Japan, (3) Department of Psychiatry, Graduate School of
Medicine, Hokkaido University, Sapporo, (4) Symbiosis Group Limited, Milton,
Australia, and Department of Behavioral Science, Hokkaido University,
Sapporo, Japan)
|
Depressive patients are more impulsive and inconsistent in intertemporal
choice behavior for monetary gain and loss than healthy subjects- an analysis
based on Tsallis' statistics
| null |
Neuro Endocrinol Lett. 2008, 29(3):351-358
| null | null |
q-bio.NC q-bio.OT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Depression has been associated with impaired neural processing of reward and
punishment. However, to date, little is known regarding the relationship
between depression and intertemporal choice for gain and loss. We compared
impulsivity and inconsistency in intertemporal choice for monetary gain and
loss (quantified with parameters in the q-exponential discount function based
on Tsallis' statistics) between depressive patients and healthy control
subjects. This examination is potentially important for advances in
neuroeconomics of intertemporal choice, because depression is associated with
reduced serotonergic activities in the brain. We observed that depressive
patients were more impulsive and time-inconsistent in intertemporal choice
action for gain and loss, in comparison to healthy controls. The usefulness of
the q-exponential discount function for assessing the impaired decision-making
by depressive patients was demonstrated. Furthermore, biophysical mechanisms
underlying the altered intertemporal choice by depressive patients are
discussed in relation to impaired serotonergic neural systems.
Keywords: Depression, Discounting, Neuroeconomics, Impulsivity,
Inconsistency, Tsallis' statistics
|
[
{
"created": "Tue, 22 Nov 2011 15:38:02 GMT",
"version": "v1"
}
] |
2012-12-04
|
[
[
"Takahashi",
"Taiki",
""
],
[
"Oono",
"Hidemi",
""
],
[
"Inoue",
"Takeshi",
""
],
[
"Boku",
"Shuken",
""
],
[
"Kako",
"Yuki",
""
],
[
"Kitaichi",
"Yuji",
""
],
[
"Kusumi",
"Ichiro",
""
],
[
"Masui",
"Takuya",
""
],
[
"Nakagawa",
"Shin",
""
],
[
"Suzuki",
"Katsuji",
""
],
[
"Tanaka",
"Teruaki",
""
],
[
"Koyama",
"Tsukasa",
""
],
[
"Radford",
"Mark H. B.",
""
]
] |
Depression has been associated with impaired neural processing of reward and punishment. However, to date, little is known regarding the relationship between depression and intertemporal choice for gain and loss. We compared impulsivity and inconsistency in intertemporal choice for monetary gain and loss (quantified with parameters in the q-exponential discount function based on Tsallis' statistics) between depressive patients and healthy control subjects. This examination is potentially important for advances in neuroeconomics of intertemporal choice, because depression is associated with reduced serotonergic activities in the brain. We observed that depressive patients were more impulsive and time-inconsistent in intertemporal choice action for gain and loss, in comparison to healthy controls. The usefulness of the q-exponential discount function for assessing the impaired decision-making by depressive patients was demonstrated. Furthermore, biophysical mechanisms underlying the altered intertemporal choice by depressive patients are discussed in relation to impaired serotonergic neural systems. Keywords: Depression, Discounting, Neuroeconomics, Impulsivity, Inconsistency, Tsallis' statistics
|
1907.00070
|
Rostislav Serota
|
M. Dashti Moghaddam, Jiong Liu, John G. Holden and R. A. Serota
|
Modeling Response Time Distributions with Generalized Beta Prime
|
15 pages, 11 figure, 5 tables
|
Discontinuity, Nonlinearity, and Complexity, 9 (3), 477 - 488
(2020)
|
10.5890/DNC.2020.09.009
| null |
q-bio.NC stat.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We use Generalized Beta Prime distribution, also known as GB2, for fitting
response time distributions. This distribution, characterized by one scale and
three shape parameters, is incredibly flexible in that it can mimic behavior of
many other distributions. GB2 exhibits power-law behavior at both front and
tail ends and is a steady-state distribution of a simple stochastic
differential equation. We apply GB2 in contrast studies between two distinct
groups -- in this case children with dyslexia and a control group -- and show
that it provides superior fitting. We compare aggregate response time
distributions of the two groups for scale and shape differences (including
several scale-independent measures of variability, such as Hoover index), which
may in turn reflect on cognitive dynamics differences. In this approach,
response time distribution of an individual can be considered as a random
variate of that individual's group distribution.
|
[
{
"created": "Fri, 28 Jun 2019 20:46:21 GMT",
"version": "v1"
}
] |
2024-04-15
|
[
[
"Moghaddam",
"M. Dashti",
""
],
[
"Liu",
"Jiong",
""
],
[
"Holden",
"John G.",
""
],
[
"Serota",
"R. A.",
""
]
] |
We use Generalized Beta Prime distribution, also known as GB2, for fitting response time distributions. This distribution, characterized by one scale and three shape parameters, is incredibly flexible in that it can mimic behavior of many other distributions. GB2 exhibits power-law behavior at both front and tail ends and is a steady-state distribution of a simple stochastic differential equation. We apply GB2 in contrast studies between two distinct groups -- in this case children with dyslexia and a control group -- and show that it provides superior fitting. We compare aggregate response time distributions of the two groups for scale and shape differences (including several scale-independent measures of variability, such as Hoover index), which may in turn reflect on cognitive dynamics differences. In this approach, response time distribution of an individual can be considered as a random variate of that individual's group distribution.
|
1609.02956
|
Steven Frank
|
Steven A. Frank
|
Puzzles in modern biology. I. Male sterility, failure reveals design
| null |
F1000Research 5:2088 (2016)
|
10.12688/f1000research.9567.1
| null |
q-bio.PE
|
http://creativecommons.org/licenses/by/4.0/
|
Many human males produce dysfunctional sperm. Various plants frequently abort
pollen. Hybrid matings often produce sterile males. Widespread male sterility
is puzzling. Natural selection prunes reproductive failure. Puzzling failure
implies something that we do not understand about how organisms are designed.
Solving the puzzle reveals the hidden processes of design.
|
[
{
"created": "Fri, 9 Sep 2016 22:07:27 GMT",
"version": "v1"
}
] |
2016-09-13
|
[
[
"Frank",
"Steven A.",
""
]
] |
Many human males produce dysfunctional sperm. Various plants frequently abort pollen. Hybrid matings often produce sterile males. Widespread male sterility is puzzling. Natural selection prunes reproductive failure. Puzzling failure implies something that we do not understand about how organisms are designed. Solving the puzzle reveals the hidden processes of design.
|
1112.4808
|
Christoph Adami
|
Evan D. Dorn and Christoph Adami
|
Robust monomer-distribution biosignatures in evolving digital biota
|
22 pages, 4 figures, 1 table. Supplementary Material available from
CA
|
Astrobiology 11 (2011) 959-968
|
10.1089/ast.2010.0556
| null |
q-bio.BM q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Because organisms synthesize component molecules at rates that reflect those
molecules' adaptive utility, we expect a population of biota to leave a
distinctive chemical signature on their environment that is anomalous given the
local (abiotic) chemistry. We observe the same effect in the distribution of
computer instructions used by an evolving population of digital organisms, and
characterize the robustness of the evolved signature with respect to a number
of different changes in the system's physics. The observed instruction
abundance anomaly has features that are consistent over a large number of
evolutionary trials and alterations in system parameters, which makes it a
candidate for a non-Earth-centric life-diagnostic.
|
[
{
"created": "Tue, 20 Dec 2011 19:45:31 GMT",
"version": "v1"
}
] |
2011-12-21
|
[
[
"Dorn",
"Evan D.",
""
],
[
"Adami",
"Christoph",
""
]
] |
Because organisms synthesize component molecules at rates that reflect those molecules' adaptive utility, we expect a population of biota to leave a distinctive chemical signature on their environment that is anomalous given the local (abiotic) chemistry. We observe the same effect in the distribution of computer instructions used by an evolving population of digital organisms, and characterize the robustness of the evolved signature with respect to a number of different changes in the system's physics. The observed instruction abundance anomaly has features that are consistent over a large number of evolutionary trials and alterations in system parameters, which makes it a candidate for a non-Earth-centric life-diagnostic.
|
1705.06132
|
Eslam Abbas
|
Eslam Abbas
|
Mathematical Analysis of the Probability of Spontaneous Mutations in
HIV-1 Genome and Their Role in the Emergence of Resistance to Anti-Retroviral
Therapy
| null | null |
10.4236/aim.2019.911057
| null |
q-bio.OT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
$\textbf{Background:}$ High mutability of HIV is the driving force of
antiretroviral drug resistance, which represents a medical care challenge.
$\textbf{Method and Model Equation:}$ To detect the mutability of each gene
in the HIV-1 genome; a mathematical analysis of HIV-1 genome is performed,
depending on a linear relation wherein the probability of spontaneous mutations
emergence is directly proportional to the ratio of the gene length to the whole
genome length. \begin{equation*} {P_g}{S_i} =\frac{g}{G} \end{equation*}
$\textbf{Results:}$ $\textbf{tat}$, $\textbf{vpr}$ and $\textbf{vpu}$ are the
least mutant genes in HIV-1 genome. Protease $\textbf{PROT}$ gene is the least
mutant gene component of the polymerases $\textbf{pol}$.
$\textbf{Conclusion:}$ $\textbf{tat}$, $\textbf{vpr}$ and $\textbf{vpu}$ are
the best candidates for HIV-1 recombinant subunit vaccines or as a part of
$\textit{prime and boost}$ vaccine combinations. Also; the protease
inhibitor-based regime represents a high genetic barrier for HIV to overcome.
|
[
{
"created": "Mon, 15 May 2017 14:53:45 GMT",
"version": "v1"
}
] |
2019-11-11
|
[
[
"Abbas",
"Eslam",
""
]
] |
$\textbf{Background:}$ High mutability of HIV is the driving force of antiretroviral drug resistance, which represents a medical care challenge. $\textbf{Method and Model Equation:}$ To detect the mutability of each gene in the HIV-1 genome; a mathematical analysis of HIV-1 genome is performed, depending on a linear relation wherein the probability of spontaneous mutations emergence is directly proportional to the ratio of the gene length to the whole genome length. \begin{equation*} {P_g}{S_i} =\frac{g}{G} \end{equation*} $\textbf{Results:}$ $\textbf{tat}$, $\textbf{vpr}$ and $\textbf{vpu}$ are the least mutant genes in HIV-1 genome. Protease $\textbf{PROT}$ gene is the least mutant gene component of the polymerases $\textbf{pol}$. $\textbf{Conclusion:}$ $\textbf{tat}$, $\textbf{vpr}$ and $\textbf{vpu}$ are the best candidates for HIV-1 recombinant subunit vaccines or as a part of $\textit{prime and boost}$ vaccine combinations. Also; the protease inhibitor-based regime represents a high genetic barrier for HIV to overcome.
|
1101.2175
|
Tao Jia
|
Tao Jia and Rahul V. Kulkarni
|
Intrinsic noise in stochastic models of gene expression with molecular
memory and bursting
|
Accepted by Physical Review Letters
|
Phys. Rev. Lett. 106, 058102, 2011
|
10.1103/PhysRevLett.106.058102
| null |
q-bio.MN cond-mat.soft cond-mat.stat-mech
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Regulation of intrinsic noise in gene expression is essential for many
cellular functions. Correspondingly, there is considerable interest in
understanding how different molecular mechanisms of gene expression impact
variations in protein levels across a population of cells. In this work, we
analyze a stochastic model of bursty gene expression which considers general
waiting-time distributions governing arrival and decay of proteins. By mapping
the system to models analyzed in queueing theory, we derive analytical
expressions for the noise in steady-state protein distributions. The derived
results extend previous work by including the effects of arbitrary probability
distributions representing the effects of molecular memory and bursting. The
analytical expressions obtained provide insight into the role of
transcriptional, post-transcriptional and post-translational mechanisms in
controlling the noise in gene expression.
|
[
{
"created": "Tue, 11 Jan 2011 18:19:27 GMT",
"version": "v1"
}
] |
2011-03-02
|
[
[
"Jia",
"Tao",
""
],
[
"Kulkarni",
"Rahul V.",
""
]
] |
Regulation of intrinsic noise in gene expression is essential for many cellular functions. Correspondingly, there is considerable interest in understanding how different molecular mechanisms of gene expression impact variations in protein levels across a population of cells. In this work, we analyze a stochastic model of bursty gene expression which considers general waiting-time distributions governing arrival and decay of proteins. By mapping the system to models analyzed in queueing theory, we derive analytical expressions for the noise in steady-state protein distributions. The derived results extend previous work by including the effects of arbitrary probability distributions representing the effects of molecular memory and bursting. The analytical expressions obtained provide insight into the role of transcriptional, post-transcriptional and post-translational mechanisms in controlling the noise in gene expression.
|
1710.08542
|
Yen Ting Lin
|
Yen Ting Lin, Peter G. Hufton, Esther J. Lee, Davit A. Potoyan
|
A stochastic and dynamical view of pluripotency in mouse embryonic stem
cells
|
11 pages, 7 figures
| null |
10.1371/journal.pcbi.1006000
| null |
q-bio.MN cond-mat.stat-mech physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pluripotent embryonic stem cells are of paramount importance for biomedical
research thanks to their innate ability for self-renewal and differentiation
into all major cell lines. The fateful decision to exit or remain in the
pluripotent state is regulated by a complex genetic regulatory network. Latest
advances in transcriptomics have made it possible to infer basic topologies of
pluripotency governing networks. The inferred network topologies, however, only
encode boolean information while remaining silent about the roles of dynamics
and molecular noise in gene expression. These features are widely considered
essential for functional decision making. Herein we developed a framework for
extending the boolean level networks into models accounting for individual
genetic switches and promoter architecture which allows mechanistic
interrogation of the roles of molecular noise, external signaling, and network
topology. We demonstrate the pluripotent state of the network to be a broad
attractor which is robust to variations of gene expression. Dynamics of exiting
the pluripotent state, on the other hand, is significantly influenced by the
molecular noise originating from genetic switching events which makes cells
more responsive to extracellular signals. Lastly we show that steady state
probability landscape can be significantly remodeled by global gene switching
rates alone which can be taken as a proxy for how global epigenetic
modifications exert control over stability of pluripotent states.
|
[
{
"created": "Mon, 23 Oct 2017 22:56:44 GMT",
"version": "v1"
}
] |
2018-07-04
|
[
[
"Lin",
"Yen Ting",
""
],
[
"Hufton",
"Peter G.",
""
],
[
"Lee",
"Esther J.",
""
],
[
"Potoyan",
"Davit A.",
""
]
] |
Pluripotent embryonic stem cells are of paramount importance for biomedical research thanks to their innate ability for self-renewal and differentiation into all major cell lines. The fateful decision to exit or remain in the pluripotent state is regulated by a complex genetic regulatory network. Latest advances in transcriptomics have made it possible to infer basic topologies of pluripotency governing networks. The inferred network topologies, however, only encode boolean information while remaining silent about the roles of dynamics and molecular noise in gene expression. These features are widely considered essential for functional decision making. Herein we developed a framework for extending the boolean level networks into models accounting for individual genetic switches and promoter architecture which allows mechanistic interrogation of the roles of molecular noise, external signaling, and network topology. We demonstrate the pluripotent state of the network to be a broad attractor which is robust to variations of gene expression. Dynamics of exiting the pluripotent state, on the other hand, is significantly influenced by the molecular noise originating from genetic switching events which makes cells more responsive to extracellular signals. Lastly we show that steady state probability landscape can be significantly remodeled by global gene switching rates alone which can be taken as a proxy for how global epigenetic modifications exert control over stability of pluripotent states.
|
q-bio/0703042
|
Jeroen van Zon
|
Jeroen S. van Zon, David K. Lubensky, Pim R.H. Altena and Pieter Rein
ten Wolde
|
An allosteric model for circadian KaiC phosphorylation - Supporting
Information
|
Supporting Information for q-bio.MN/0703009
| null | null | null |
q-bio.MN q-bio.CB
| null |
In this Supporting Information, we provide background information on our
model of the in vitro Kai system and the calculations that we have performed.
We will closely follow the outline of the main text.
|
[
{
"created": "Mon, 19 Mar 2007 14:25:39 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"van Zon",
"Jeroen S.",
""
],
[
"Lubensky",
"David K.",
""
],
[
"Altena",
"Pim R. H.",
""
],
[
"Wolde",
"Pieter Rein ten",
""
]
] |
In this Supporting Information, we provide background information on our model of the in vitro Kai system and the calculations that we have performed. We will closely follow the outline of the main text.
|
1910.03577
|
Jin Ming
|
Suprateek Kundu, Jin Ming, and Jennifer Stevens
|
Dynamic Brain Functional Networks Guided By Anatomical Knowledge
|
45 pages, 5 figures
| null | null | null |
q-bio.NC stat.AP stat.ME
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, the potential of dynamic brain networks as neuroimaging
biomarkers for mental illnesses is being increasingly recognized. However,
there are several unmet challenges in developing such biomarkers, including the
need for methods to model rapidly changing network states. In one of the first
such efforts, we develop a novel approach for computing dynamic brain
functional connectivity (FC) that is guided by brain structural connectivity
(SC) computed from diffusion tensor imaging (DTI) data. The proposed approach
involving dynamic Gaussian graphical models decomposes the time course into
non-overlapping state phases determined by change points, each having a
distinct network. We develop an optimization algorithm to implement the method
such that the estimation of both the change points and the state-phase specific
networks are fully data driven and unsupervised, and guided by SC information.
The approach is scalable to large dimensions and extensive simulations
illustrate its clear advantages over existing methods in terms of network
estimation accuracy and detecting dynamic network changes. An application of
the method to a posttraumatic stress disorder (PTSD) study reveals important
dynamic resting state connections in regions of the brain previously implicated
in PTSD. We also illustrate that the dynamic networks computed under the
proposed method are able to better predict psychological resilience among
trauma exposed individuals compared to existing dynamic and stationary
connectivity approaches, which highlights its potential as a neuroimaging
biomarker.
|
[
{
"created": "Mon, 7 Oct 2019 21:48:27 GMT",
"version": "v1"
}
] |
2019-10-10
|
[
[
"Kundu",
"Suprateek",
""
],
[
"Ming",
"Jin",
""
],
[
"Stevens",
"Jennifer",
""
]
] |
Recently, the potential of dynamic brain networks as neuroimaging biomarkers for mental illnesses is being increasingly recognized. However, there are several unmet challenges in developing such biomarkers, including the need for methods to model rapidly changing network states. In one of the first such efforts, we develop a novel approach for computing dynamic brain functional connectivity (FC) that is guided by brain structural connectivity (SC) computed from diffusion tensor imaging (DTI) data. The proposed approach involving dynamic Gaussian graphical models decomposes the time course into non-overlapping state phases determined by change points, each having a distinct network. We develop an optimization algorithm to implement the method such that the estimation of both the change points and the state-phase specific networks are fully data driven and unsupervised, and guided by SC information. The approach is scalable to large dimensions and extensive simulations illustrate its clear advantages over existing methods in terms of network estimation accuracy and detecting dynamic network changes. An application of the method to a posttraumatic stress disorder (PTSD) study reveals important dynamic resting state connections in regions of the brain previously implicated in PTSD. We also illustrate that the dynamic networks computed under the proposed method are able to better predict psychological resilience among trauma exposed individuals compared to existing dynamic and stationary connectivity approaches, which highlights its potential as a neuroimaging biomarker.
|
2110.03796
|
Pedro Mendes
|
Abhishekh Gupta and Pedro Mendes
|
ShinyCOPASI: a web-based exploratory interface for COPASI models
| null | null | null | null |
q-bio.MN
|
http://creativecommons.org/licenses/by/4.0/
|
COPASI is a popular application for simulation and analysis of biochemical
networks and their dynamics. While this software is widely used, it works as a
standalone application and until now it was not possible for users to interact
with its models through the web. We built ShinyCOPASI, a web-based application
that allows COPASI models to be explored through a web browser. ShinyCOPASI was
written in R with the CoRC package, which provides a high-level R API for
COPASI, and the Shiny package to expose it as a web application. The web view
provided by ShinyCOPASI follows a similar interface to the standalone COPASI
and allows users to explore the details of a model, as well as running a subset
of the tasks available in COPASI from within a browser. A generic version
allows users to load model files from their computer, while another one
pre-loads a specific model from the server and may be useful to provide web
access to published models. The application is available at:
http://shiny.copasi.org/; and the source code is at:
https://github.com/copasi/shinyCOPASI.
|
[
{
"created": "Thu, 7 Oct 2021 21:16:03 GMT",
"version": "v1"
}
] |
2021-10-11
|
[
[
"Gupta",
"Abhishekh",
""
],
[
"Mendes",
"Pedro",
""
]
] |
COPASI is a popular application for simulation and analysis of biochemical networks and their dynamics. While this software is widely used, it works as a standalone application and until now it was not possible for users to interact with its models through the web. We built ShinyCOPASI, a web-based application that allows COPASI models to be explored through a web browser. ShinyCOPASI was written in R with the CoRC package, which provides a high-level R API for COPASI, and the Shiny package to expose it as a web application. The web view provided by ShinyCOPASI follows a similar interface to the standalone COPASI and allows users to explore the details of a model, as well as running a subset of the tasks available in COPASI from within a browser. A generic version allows users to load model files from their computer, while another one pre-loads a specific model from the server and may be useful to provide web access to published models. The application is available at: http://shiny.copasi.org/; and the source code is at: https://github.com/copasi/shinyCOPASI.
|
1301.4567
|
Marcin Zag\'orski
|
M. Zagorski, A. Krzywicki, O.C. Martin
|
Edge usage, motifs and regulatory logic for cell cycling genetic
networks
|
9 pages, 9 figures, to be published in Phys. Rev. E
|
Phys. Rev. E 87, 012727 (2013)
|
10.1103/PhysRevE.87.012727
| null |
q-bio.MN cond-mat.stat-mech
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The cell cycle is a tightly controlled process, yet its underlying genetic
network shows marked differences across species. Which of the associated
structural features follow solely from the ability to impose the appropriate
gene expression patterns? We tackle this question in silico by examining the
ensemble of all regulatory networks which satisfy the constraint of producing a
given sequence of gene expressions. We focus on three cell cycle profiles
coming from baker's yeast, fission yeast and mammals. First, we show that the
networks in each of the ensembles use just a few interactions that are
repeatedly reused as building blocks. Second, we find an enrichment in network
motifs that is similar in the two yeast cell cycle systems investigated. These
motifs do not have autonomous functions, but nevertheless they reveal a
regulatory logic for cell cycling based on a feed-forward cascade of activating
interactions.
|
[
{
"created": "Sat, 19 Jan 2013 14:51:52 GMT",
"version": "v1"
}
] |
2015-06-12
|
[
[
"Zagorski",
"M.",
""
],
[
"Krzywicki",
"A.",
""
],
[
"Martin",
"O. C.",
""
]
] |
The cell cycle is a tightly controlled process, yet its underlying genetic network shows marked differences across species. Which of the associated structural features follow solely from the ability to impose the appropriate gene expression patterns? We tackle this question in silico by examining the ensemble of all regulatory networks which satisfy the constraint of producing a given sequence of gene expressions. We focus on three cell cycle profiles coming from baker's yeast, fission yeast and mammals. First, we show that the networks in each of the ensembles use just a few interactions that are repeatedly reused as building blocks. Second, we find an enrichment in network motifs that is similar in the two yeast cell cycle systems investigated. These motifs do not have autonomous functions, but nevertheless they reveal a regulatory logic for cell cycling based on a feed-forward cascade of activating interactions.
|
2001.10035
|
Darius Vasco K\"oster
|
Lewis Mosby, Marco Polin, Darius V. K\"oster
|
A Python based automated tracking routine for myosin II filaments
|
14 pages, 5 figures
| null |
10.1088/1361-6463/ab87bf
| null |
q-bio.SC physics.bio-ph q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The study of motor protein dynamics within cytoskeletal networks is of high
interest to physicists and biologists to understand how the dynamics and
properties of individual motors lead to cooperative effects and control of
overall network behaviour. Here, we report a method to detect and track
muscular myosin II filaments within an actin network tethered to supported
lipid bilayers. Based on the characteristic shape of myosin II filaments, this
automated tracking routine allowed us to follow the position and orientation of
myosin II filaments over time, and to reliably classify their dynamics into
segments of diffusive and processive motion based on the analysis of
displacements and angular changes between time steps. This automated, high
throughput method will allow scientists to efficiently analyse motor dynamics
in different conditions, and will grant access to more detailed information
than provided by common tracking methods, without any need for time consuming
manual tracking or generation of kymographs.
|
[
{
"created": "Mon, 27 Jan 2020 19:43:58 GMT",
"version": "v1"
}
] |
2020-06-24
|
[
[
"Mosby",
"Lewis",
""
],
[
"Polin",
"Marco",
""
],
[
"Köster",
"Darius V.",
""
]
] |
The study of motor protein dynamics within cytoskeletal networks is of high interest to physicists and biologists to understand how the dynamics and properties of individual motors lead to cooperative effects and control of overall network behaviour. Here, we report a method to detect and track muscular myosin II filaments within an actin network tethered to supported lipid bilayers. Based on the characteristic shape of myosin II filaments, this automated tracking routine allowed us to follow the position and orientation of myosin II filaments over time, and to reliably classify their dynamics into segments of diffusive and processive motion based on the analysis of displacements and angular changes between time steps. This automated, high throughput method will allow scientists to efficiently analyse motor dynamics in different conditions, and will grant access to more detailed information than provided by common tracking methods, without any need for time consuming manual tracking or generation of kymographs.
|
1601.03334
|
Lior Pachter
|
Audrey Fu and Lior Pachter
|
Estimating intrinsic and extrinsic noise from single-cell gene
expression measurements
| null | null | null | null |
q-bio.QM stat.ME
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Gene expression is stochastic and displays variation ("noise") both within
and between cells. Intracellular (intrinsic) variance can be distinguished from
extracellular (extrinsic) variance by applying the law of total variance to
data from two-reporter assays that probe expression of identical gene pairs in
single-cells. We examine established formulas for the estimation of intrinsic
and extrinsic noise and provide interpretations of them in terms of a
hierarchical model. This allows us to derive corrections that minimize the mean
squared error, an objective that may be important when sample sizes are small.
The statistical framework also highlights the need for quantile normalization,
and provides justification for the use of the sample correlation between the
two reporter expression levels to estimate the percent contribution of
extrinsic noise to the total noise. Finally, we provide a geometric
interpretation of these results that clarifies the current interpretation.
|
[
{
"created": "Wed, 13 Jan 2016 18:08:51 GMT",
"version": "v1"
}
] |
2016-01-14
|
[
[
"Fu",
"Audrey",
""
],
[
"Pachter",
"Lior",
""
]
] |
Gene expression is stochastic and displays variation ("noise") both within and between cells. Intracellular (intrinsic) variance can be distinguished from extracellular (extrinsic) variance by applying the law of total variance to data from two-reporter assays that probe expression of identical gene pairs in single-cells. We examine established formulas for the estimation of intrinsic and extrinsic noise and provide interpretations of them in terms of a hierarchical model. This allows us to derive corrections that minimize the mean squared error, an objective that may be important when sample sizes are small. The statistical framework also highlights the need for quantile normalization, and provides justification for the use of the sample correlation between the two reporter expression levels to estimate the percent contribution of extrinsic noise to the total noise. Finally, we provide a geometric interpretation of these results that clarifies the current interpretation.
|
1202.1007
|
Michael B\"orsch
|
Eva Hammann, Andrea Zappe, Stefanie Keis, Stefan Ernst, Doreen
Matthies, Thomas Meier, Gregory M. Cook, Michael Boersch
|
Step size of the rotary proton motor in single FoF1-ATP synthase from a
thermoalkaliphilic bacterium by DCO-ALEX FRET
|
14 pages, 7 figures
| null |
10.1117/12.907242
| null |
q-bio.BM q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Thermophilic enzymes can operate at higher temperatures but show reduced
activities at room temperature. They are in general more stable during
preparation and, accordingly, are considered to be more rigid in structure.
Crystallization is often easier compared to proteins from bacteria growing at
ambient temperatures, especially for membrane proteins. The ATP-producing
enzyme FoF1-ATP synthase from thermoalkaliphilic Caldalkalibacillus thermarum
strain TA2.A1 is driven by a Fo motor consisting of a ring of 13 c-subunits. We
applied a single-molecule F\"orster resonance energy transfer (FRET) approach
using duty cycle-optimized alternating laser excitation (DCO-ALEX) to monitor
the expected 13-stepped rotary Fo motor at work. New FRET transition histograms
were developed to identify the smaller step sizes compared to the 10-stepped Fo
motor of the Escherichia coli enzyme. Dwell time analysis revealed the
temperature and the LDAO dependence of the Fo motor activity at the
single-molecule level. Back-and-forth stepping of the Fo motor occurs fast,
indicating a high flexibility in the membrane part of this thermophilic enzyme.
|
[
{
"created": "Sun, 5 Feb 2012 21:50:45 GMT",
"version": "v1"
}
] |
2015-06-04
|
[
[
"Hammann",
"Eva",
""
],
[
"Zappe",
"Andrea",
""
],
[
"Keis",
"Stefanie",
""
],
[
"Ernst",
"Stefan",
""
],
[
"Matthies",
"Doreen",
""
],
[
"Meier",
"Thomas",
""
],
[
"Cook",
"Gregory M.",
""
],
[
"Boersch",
"Michael",
""
]
] |
Thermophilic enzymes can operate at higher temperatures but show reduced activities at room temperature. They are in general more stable during preparation and, accordingly, are considered to be more rigid in structure. Crystallization is often easier compared to proteins from bacteria growing at ambient temperatures, especially for membrane proteins. The ATP-producing enzyme FoF1-ATP synthase from thermoalkaliphilic Caldalkalibacillus thermarum strain TA2.A1 is driven by a Fo motor consisting of a ring of 13 c-subunits. We applied a single-molecule F\"orster resonance energy transfer (FRET) approach using duty cycle-optimized alternating laser excitation (DCO-ALEX) to monitor the expected 13-stepped rotary Fo motor at work. New FRET transition histograms were developed to identify the smaller step sizes compared to the 10-stepped Fo motor of the Escherichia coli enzyme. Dwell time analysis revealed the temperature and the LDAO dependence of the Fo motor activity at the single-molecule level. Back-and-forth stepping of the Fo motor occurs fast, indicating a high flexibility in the membrane part of this thermophilic enzyme.
|
0901.2378
|
Georgy Karev
|
Georgiy P. Karev
|
Replicator equations and the principle of minimal production of
information
|
27 pages; submitted to Bulletin of Mathematical Biology
| null | null | null |
q-bio.QM q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many complex systems in mathematical biology and other areas can be described
by the replicator equation. We show that solutions of a wide class of
replicator equations minimize the production of information under
time-dependent constraints, which, in turn, can be computed explicitly at
every instant due to the system dynamics. Therefore, the Kullback principle of
minimum discrimination information, as well as the maximum entropy principle,
for systems governed by the replicator equations can be derived from the system
dynamics rather than postulated. Applications to the Malthusian inhomogeneous
models, global demography, and the Eigen quasispecies equation are given.
|
[
{
"created": "Fri, 16 Jan 2009 01:10:22 GMT",
"version": "v1"
}
] |
2009-01-19
|
[
[
"Karev",
"Georgiy P.",
""
]
] |
Many complex systems in mathematical biology and other areas can be described by the replicator equation. We show that solutions of a wide class of replicator equations minimize the production of information under time-dependent constraints, which, in turn, can be computed explicitly at every instant due to the system dynamics. Therefore, the Kullback principle of minimum discrimination information, as well as the maximum entropy principle, for systems governed by the replicator equations can be derived from the system dynamics rather than postulated. Applications to the Malthusian inhomogeneous models, global demography, and the Eigen quasispecies equation are given.
|
1803.01056
|
Thierry Mora
|
Yuval Elhanati, Zachary Sethna, Curtis G. Callan Jr., Thierry Mora,
Aleksandra M. Walczak
|
Predicting the spectrum of TCR repertoire sharing with a data-driven
model of recombination
| null |
Immunol Rev. 2018;284:167-179
|
10.1111/imr.12665
| null |
q-bio.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite the extreme diversity of T cell repertoires, many identical T-cell
receptor (TCR) sequences are found in a large number of individual mice and
humans. These widely-shared sequences, often referred to as `public', have been
suggested to be over-represented due to their potential immune functionality or
their ease of generation by V(D)J recombination. Here we show that even for
large cohorts the observed degree of sharing of TCR sequences between
individuals is well predicted by a model accounting for the known
quantitative statistical biases in the generation process, together with a
simple model of thymic selection. Whether a sequence is shared by many
individuals is predicted to depend on the number of queried individuals and the
sampling depth, as well as on the sequence itself, in agreement with the data.
We introduce the degree of publicness conditional on the queried cohort size
and the size of the sampled repertoires. Based on these observations we propose
a public/private sequence classifier, `PUBLIC' (Public Universal Binary
Likelihood Inference Classifier), based on the generation probability, which
performs very well even for small cohort sizes.
|
[
{
"created": "Fri, 2 Mar 2018 21:53:03 GMT",
"version": "v1"
}
] |
2019-01-24
|
[
[
"Elhanati",
"Yuval",
""
],
[
"Sethna",
"Zachary",
""
],
[
"Callan",
"Curtis G.",
"Jr."
],
[
"Mora",
"Thierry",
""
],
[
"Walczak",
"Aleksandra M.",
""
]
] |
Despite the extreme diversity of T cell repertoires, many identical T-cell receptor (TCR) sequences are found in a large number of individual mice and humans. These widely-shared sequences, often referred to as `public', have been suggested to be over-represented due to their potential immune functionality or their ease of generation by V(D)J recombination. Here we show that even for large cohorts the observed degree of sharing of TCR sequences between individuals is well predicted by a model accounting for the known quantitative statistical biases in the generation process, together with a simple model of thymic selection. Whether a sequence is shared by many individuals is predicted to depend on the number of queried individuals and the sampling depth, as well as on the sequence itself, in agreement with the data. We introduce the degree of publicness conditional on the queried cohort size and the size of the sampled repertoires. Based on these observations we propose a public/private sequence classifier, `PUBLIC' (Public Universal Binary Likelihood Inference Classifier), based on the generation probability, which performs very well even for small cohort sizes.
|
0912.1277
|
Kyung Myriam Kroll
|
K. Myriam Kroll, Gerard T. Barkema, Enrico Carlon
|
Linear model for fast background subtraction in oligonucleotide
microarrays
|
21 pages, 5 figures
|
Algorithms for Molecular Biology 2009, 4:15
|
10.1186/1748-7188-4-15
| null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One important preprocessing step in the analysis of microarray data is
background subtraction. In high-density oligonucleotide arrays this is
recognized as a crucial step for the global performance of the data analysis
from raw intensities to expression values.
We propose here an algorithm for background estimation based on a model in
which the cost function is quadratic in a set of fitting parameters such that
minimization can be performed through linear algebra. The model incorporates
two effects: 1) Correlated intensities between neighboring features in the chip
and 2) sequence-dependent affinities for non-specific hybridization fitted by
an extended nearest-neighbor model.
The algorithm has been tested on 360 GeneChips from publicly available data
of recent expression experiments. The algorithm is fast and accurate. Strong
correlations between the fitted values for different experiments as well as
between the free-energy parameters and their counterparts in aqueous solution
indicate that the model captures a significant part of the underlying physical
chemistry.
|
[
{
"created": "Mon, 7 Dec 2009 15:53:13 GMT",
"version": "v1"
}
] |
2009-12-08
|
[
[
"Kroll",
"K. Myriam",
""
],
[
"Barkema",
"Gerard T.",
""
],
[
"Carlon",
"Enrico",
""
]
] |
One important preprocessing step in the analysis of microarray data is background subtraction. In high-density oligonucleotide arrays this is recognized as a crucial step for the global performance of the data analysis from raw intensities to expression values. We propose here an algorithm for background estimation based on a model in which the cost function is quadratic in a set of fitting parameters such that minimization can be performed through linear algebra. The model incorporates two effects: 1) Correlated intensities between neighboring features in the chip and 2) sequence-dependent affinities for non-specific hybridization fitted by an extended nearest-neighbor model. The algorithm has been tested on 360 GeneChips from publicly available data of recent expression experiments. The algorithm is fast and accurate. Strong correlations between the fitted values for different experiments as well as between the free-energy parameters and their counterparts in aqueous solution indicate that the model captures a significant part of the underlying physical chemistry.
|
2302.06281
|
Philipp Kellmeyer
|
Sjors Ligthart, Marcello Ienca, Gerben Meynen, Fruzsina Molnar-Gabor,
Roberto Andorno, Christoph Bublitz, Paul Catley, Lisa Claydon, Thomas
Douglas, Nita Farahany, Joseph J. Fins, Sara Goering, Pim Haselager, Fabrice
Jotterand, Andrea Lavazza, Allan McCay, Abel Wajnerman Paz, Stephen Rainey,
Jesper Ryberg, Philipp Kellmeyer
|
Minding rights: Mapping ethical and legal foundations of 'neurorights'
| null | null | null | null |
q-bio.NC cs.CY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The rise of neurotechnologies, especially in combination with AI-based
methods for brain data analytics, has given rise to concerns around the
protection of mental privacy, mental integrity and cognitive liberty - often
framed as 'neurorights' in ethical, legal and policy discussions. Several
states are now looking at including 'neurorights' into their constitutional
legal frameworks and international institutions and organizations, such as
UNESCO and the Council of Europe, are taking an active interest in developing
international policy and governance guidelines on this issue. However, in many
discussions of 'neurorights' the philosophical assumptions, ethical frames of
reference and legal interpretation are either not made explicit or are in
conflict with each other. The aim of this multidisciplinary work is to provide
conceptual, ethical and legal foundations for a common minimalist conceptual
understanding of mental privacy, mental integrity and cognitive liberty, in
order to facilitate scholarly, legal and policy discussions.
|
[
{
"created": "Mon, 13 Feb 2023 11:36:23 GMT",
"version": "v1"
},
{
"created": "Mon, 20 Mar 2023 19:52:39 GMT",
"version": "v2"
}
] |
2023-03-22
|
[
[
"Ligthart",
"Sjors",
""
],
[
"Ienca",
"Marcello",
""
],
[
"Meynen",
"Gerben",
""
],
[
"Molnar-Gabor",
"Fruzsina",
""
],
[
"Andorno",
"Roberto",
""
],
[
"Bublitz",
"Christoph",
""
],
[
"Catley",
"Paul",
""
],
[
"Claydon",
"Lisa",
""
],
[
"Douglas",
"Thomas",
""
],
[
"Farahany",
"Nita",
""
],
[
"Fins",
"Joseph J.",
""
],
[
"Goering",
"Sara",
""
],
[
"Haselager",
"Pim",
""
],
[
"Jotterand",
"Fabrice",
""
],
[
"Lavazza",
"Andrea",
""
],
[
"McCay",
"Allan",
""
],
[
"Paz",
"Abel Wajnerman",
""
],
[
"Rainey",
"Stephen",
""
],
[
"Ryberg",
"Jesper",
""
],
[
"Kellmeyer",
"Philipp",
""
]
] |
The rise of neurotechnologies, especially in combination with AI-based methods for brain data analytics, has given rise to concerns around the protection of mental privacy, mental integrity and cognitive liberty - often framed as 'neurorights' in ethical, legal and policy discussions. Several states are now looking at including 'neurorights' into their constitutional legal frameworks and international institutions and organizations, such as UNESCO and the Council of Europe, are taking an active interest in developing international policy and governance guidelines on this issue. However, in many discussions of 'neurorights' the philosophical assumptions, ethical frames of reference and legal interpretation are either not made explicit or are in conflict with each other. The aim of this multidisciplinary work is to provide conceptual, ethical and legal foundations for a common minimalist conceptual understanding of mental privacy, mental integrity and cognitive liberty, in order to facilitate scholarly, legal and policy discussions.
|
1812.00052
|
Sylvestre Aureliano Carvalho
|
Sylvestre Aureliano Carvalho and Marcelo Lobato Martins
|
Community structures in allelopathic interaction networks: an
eco-evolutionary approach
| null |
Phys. Rev. E 102, 042305 (2020)
|
10.1103/PhysRevE.102.042305
| null |
q-bio.PE physics.bio-ph physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nowadays, evidence is mounting that the race of living organisms for
adaptation to the chemicals synthesized by their neighbours may drive community
structures. Particularly, some bacterial infections and plant invasions
disruptive of the native community rely on the release of allelochemicals that
inhibit or kill sensitive strains or individuals from their own or other
species. In this report, an eco-evolutionary model for community assembly
through resource competition, allelopathic interactions, and evolutionary
branching is presented and studied by numerical analysis. Our major findings
are that stable communities with increasing biodiversity can emerge at weak
allelopathic suppression, but stronger allelophaty is negatively correlated
with community diversity. In the former regime the allelopathic interaction
networks exhibit Gaussian degree distributions, while in the later one the
network degrees are Weibull distributed.
|
[
{
"created": "Fri, 30 Nov 2018 20:48:29 GMT",
"version": "v1"
}
] |
2020-10-14
|
[
[
"Carvalho",
"Sylvestre Aureliano",
""
],
[
"Martins",
"Marcelo Lobato",
""
]
] |
Nowadays, evidence is mounting that the race of living organisms for adaptation to the chemicals synthesized by their neighbours may drive community structures. Particularly, some bacterial infections and plant invasions disruptive of the native community rely on the release of allelochemicals that inhibit or kill sensitive strains or individuals from their own or other species. In this report, an eco-evolutionary model for community assembly through resource competition, allelopathic interactions, and evolutionary branching is presented and studied by numerical analysis. Our major findings are that stable communities with increasing biodiversity can emerge at weak allelopathic suppression, but stronger allelopathy is negatively correlated with community diversity. In the former regime the allelopathic interaction networks exhibit Gaussian degree distributions, while in the latter one the network degrees are Weibull distributed.
|
2310.04730
|
Juan Li
|
Juan Li and Claudia Bank
|
Dominance and multi-locus interaction
|
26 pages, 3 figures, 2 tables
| null | null | null |
q-bio.PE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Dominance is usually considered a constant value that describes the relative
difference in fitness or phenotype between heterozygotes and the average of
homozygotes at a focal polymorphic locus. However, the observed dominance can
vary with the genetic background of the focal locus. Here, alleles at other
loci modify the observed phenotype through position effects or dominance
modifiers that are sometimes associated with pathogen resistance, lineage, sex,
or mating type. Theoretical models have illustrated how variable dominance
appears in the context of multi-locus interaction (epistasis). Here, we review
empirical evidence for variable dominance and how the observed patterns may be
captured by proposed epistatic models. We highlight how integrating epistasis
and dominance is crucial for comprehensively understanding adaptation and
speciation.
|
[
{
"created": "Sat, 7 Oct 2023 08:10:11 GMT",
"version": "v1"
},
{
"created": "Sun, 3 Dec 2023 00:04:57 GMT",
"version": "v2"
}
] |
2023-12-05
|
[
[
"Li",
"Juan",
""
],
[
"Bank",
"Claudia",
""
]
] |
Dominance is usually considered a constant value that describes the relative difference in fitness or phenotype between heterozygotes and the average of homozygotes at a focal polymorphic locus. However, the observed dominance can vary with the genetic background of the focal locus. Here, alleles at other loci modify the observed phenotype through position effects or dominance modifiers that are sometimes associated with pathogen resistance, lineage, sex, or mating type. Theoretical models have illustrated how variable dominance appears in the context of multi-locus interaction (epistasis). Here, we review empirical evidence for variable dominance and how the observed patterns may be captured by proposed epistatic models. We highlight how integrating epistasis and dominance is crucial for comprehensively understanding adaptation and speciation.
|
2203.06128
|
Ashish B. George
|
Ashish B. George, Tong Wang, Sergei Maslov
|
Functional universality in slow-growing microbial communities arises
from thermodynamic constraints
| null | null | null | null |
q-bio.PE cond-mat.stat-mech nlin.AO physics.bio-ph q-bio.MN
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The dynamics of microbial communities is incredibly complex, determined by
competition for metabolic substrates and cross-feeding of byproducts. Species
in the community grow by harvesting energy from chemical reactions that
transform substrates to products. In many anoxic environments, these reactions
are close to thermodynamic equilibrium and growth is slow. To understand the
community structure in these energy-limited environments, we developed a
microbial community consumer-resource model incorporating energetic and
thermodynamic constraints on an interconnected metabolic network. The central
ingredient of the model is product inhibition, meaning that microbial growth
may be limited not only by depletion of metabolic substrates but also by
accumulation of products. We demonstrate that these additional constraints on
microbial growth cause a convergence in the structure and function of the
community metabolic network -- independent of species composition and
biochemical details -- providing a possible explanation for convergence of
community function despite taxonomic variation observed in many natural and
industrial environments. Furthermore, we discovered that the structure of
the community metabolic network is governed by the thermodynamic principle of
maximum heat dissipation. Overall, the work demonstrates how universal
thermodynamic principles may constrain community metabolism and explain
observed functional convergence in microbial communities.
|
[
{
"created": "Fri, 11 Mar 2022 17:56:10 GMT",
"version": "v1"
},
{
"created": "Fri, 18 Mar 2022 19:08:43 GMT",
"version": "v2"
}
] |
2022-03-22
|
[
[
"George",
"Ashish B.",
""
],
[
"Wang",
"Tong",
""
],
[
"Maslov",
"Sergei",
""
]
] |
The dynamics of microbial communities is incredibly complex, determined by competition for metabolic substrates and cross-feeding of byproducts. Species in the community grow by harvesting energy from chemical reactions that transform substrates to products. In many anoxic environments, these reactions are close to thermodynamic equilibrium and growth is slow. To understand the community structure in these energy-limited environments, we developed a microbial community consumer-resource model incorporating energetic and thermodynamic constraints on an interconnected metabolic network. The central ingredient of the model is product inhibition, meaning that microbial growth may be limited not only by depletion of metabolic substrates but also by accumulation of products. We demonstrate that these additional constraints on microbial growth cause a convergence in the structure and function of the community metabolic network -- independent of species composition and biochemical details -- providing a possible explanation for convergence of community function despite taxonomic variation observed in many natural and industrial environments. Furthermore, we discovered that the structure of the community metabolic network is governed by the thermodynamic principle of maximum heat dissipation. Overall, the work demonstrates how universal thermodynamic principles may constrain community metabolism and explain observed functional convergence in microbial communities.
|
2001.03560
|
Justin Kinney
|
Ammar Tareen, Justin B. Kinney
|
Biophysical models of cis-regulation as interpretable neural networks
|
Presented at the 14th conference on Machine Learning in Computational
Biology (MLCB 2019), Vancouver, Canada. Revised to add a link to code and to
correct a typo in the King-Altman diagrams shown in Figure 3
| null | null | null |
q-bio.MN cs.LG physics.bio-ph q-bio.QM stat.ML
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The adoption of deep learning techniques in genomics has been hindered by the
difficulty of mechanistically interpreting the models that these techniques
produce. In recent years, a variety of post-hoc attribution methods have been
proposed for addressing this neural network interpretability problem in the
context of gene regulation. Here we describe a complementary way of approaching
this problem. Our strategy is based on the observation that two large classes
of biophysical models of cis-regulatory mechanisms can be expressed as deep
neural networks in which nodes and weights have explicit physiochemical
interpretations. We also demonstrate how such biophysical networks can be
rapidly inferred, using modern deep learning frameworks, from the data produced
by certain types of massively parallel reporter assays (MPRAs). These results
suggest a scalable strategy for using MPRAs to systematically characterize the
biophysical basis of gene regulation in a wide range of biological contexts.
They also highlight gene regulation as a promising venue for the development of
scientifically interpretable approaches to deep learning.
|
[
{
"created": "Mon, 30 Dec 2019 14:45:58 GMT",
"version": "v1"
},
{
"created": "Fri, 7 Feb 2020 22:07:08 GMT",
"version": "v2"
}
] |
2020-02-11
|
[
[
"Tareen",
"Ammar",
""
],
[
"Kinney",
"Justin B.",
""
]
] |
The adoption of deep learning techniques in genomics has been hindered by the difficulty of mechanistically interpreting the models that these techniques produce. In recent years, a variety of post-hoc attribution methods have been proposed for addressing this neural network interpretability problem in the context of gene regulation. Here we describe a complementary way of approaching this problem. Our strategy is based on the observation that two large classes of biophysical models of cis-regulatory mechanisms can be expressed as deep neural networks in which nodes and weights have explicit physiochemical interpretations. We also demonstrate how such biophysical networks can be rapidly inferred, using modern deep learning frameworks, from the data produced by certain types of massively parallel reporter assays (MPRAs). These results suggest a scalable strategy for using MPRAs to systematically characterize the biophysical basis of gene regulation in a wide range of biological contexts. They also highlight gene regulation as a promising venue for the development of scientifically interpretable approaches to deep learning.
|
q-bio/0509019
|
Guido Tiana
|
R. A. Broglia, D. Provasi, F. Vasile, G. Ottolina, R.Longhi and G.
Tiana
|
A folding inhibitor of the HIV-1 Protease
| null | null | null | null |
q-bio.BM
| null |
Since the HIV-1 Protease (HIV-1-PR) is an essential enzyme in the viral life
cycle, its inhibition can control AIDS. The folding of single domain proteins,
like each of the monomers forming the HIV-1-PR homodimer, is controlled by
local elementary structures (LES, folding units stabilized by strongly
interacting, highly conserved, as a rule hydrophobic, amino acids). These LES
have evolved over myriads of generations to recognize and strongly attract each
other, so as to make the protein fold fast and be stable in its native
conformation. Consequently, peptides displaying a sequence identical to those
segments of the monomers associated with LES are expected to act as competitive
inhibitors and thus destabilize the native structure of the enzyme. These
inhibitors are unlikely to lead to escape mutants as they bind to the protease
monomers through highly conserved amino acids which play an essential role in
the folding process. The properties of one of the most promising inhibitors of
the folding of the HIV-1-PR monomers found among these peptides are demonstrated
with the help of spectrophotometric assays and CD spectroscopy.
|
[
{
"created": "Thu, 15 Sep 2005 08:10:04 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Broglia",
"R. A.",
""
],
[
"Provasi",
"D.",
""
],
[
"Vasile",
"F.",
""
],
[
"Ottolina",
"G.",
""
],
[
"Longhi",
"R.",
""
],
[
"Tiana",
"G.",
""
]
] |
Since the HIV-1 Protease (HIV-1-PR) is an essential enzyme in the viral life cycle, its inhibition can control AIDS. The folding of single domain proteins, like each of the monomers forming the HIV-1-PR homodimer, is controlled by local elementary structures (LES, folding units stabilized by strongly interacting, highly conserved, as a rule hydrophobic, amino acids). These LES have evolved over myriads of generations to recognize and strongly attract each other, so as to make the protein fold fast and be stable in its native conformation. Consequently, peptides displaying a sequence identical to those segments of the monomers associated with LES are expected to act as competitive inhibitors and thus destabilize the native structure of the enzyme. These inhibitors are unlikely to lead to escape mutants as they bind to the protease monomers through highly conserved amino acids which play an essential role in the folding process. The properties of one of the most promising inhibitors of the folding of the HIV-1-PR monomers found among these peptides are demonstrated with the help of spectrophotometric assays and CD spectroscopy.
|
2006.13012
|
Robin Thompson
|
Robin N Thompson, T Deirdre Hollingsworth, Valerie Isham, Daniel
Arribas-Bel, Ben Ashby, Tom Britton, Peter Challoner, Lauren H K Chappell,
Hannah Clapham, Nik J Cunniffe, A Philip Dawid, Christl A Donnelly, Rosalind
Eggo, Sebastian Funk, Nigel Gilbert, Julia R Gog, Paul Glendinning, William S
Hart, Hans Heesterbeek, Thomas House, Matt Keeling, Istvan Z Kiss, Mirjam
Kretzschmar, Alun L Lloyd, Emma S McBryde, James M McCaw, Joel C Miller,
Trevelyan J McKinley, Martina Morris, Philip D ONeill, Carl A B Pearson, Kris
V Parag, Lorenzo Pellis, Juliet R C Pulliam, Joshua V Ross, Michael J
Tildesley, Gianpaolo Scalia Tomba, Bernard W Silverman, Claudio J Struchiner,
Pieter Trapman, Cerian R Webb, Denis Mollison, Olivier Restif
|
Key Questions for Modelling COVID-19 Exit Strategies
| null |
Proc. Roy. Soc. B, 2020
|
10.1098/rspb.2020.1405
| null |
q-bio.OT q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Combinations of intense non-pharmaceutical interventions ('lockdowns') were
introduced in countries worldwide to reduce SARS-CoV-2 transmission. Many
governments have begun to implement lockdown exit strategies that allow
restrictions to be relaxed while attempting to control the risk of a surge in
cases. Mathematical modelling has played a central role in guiding
interventions, but the challenge of designing optimal exit strategies in the
face of ongoing transmission is unprecedented. Here, we report discussions from
the Isaac Newton Institute 'Models for an exit strategy' workshop (11-15 May
2020). A diverse community of modellers who are providing evidence to
governments worldwide were asked to identify the main questions that, if
answered, will allow for more accurate predictions of the effects of different
exit strategies. Based on these questions, we propose a roadmap to facilitate
the development of reliable models to guide exit strategies. The roadmap
requires a global collaborative effort from the scientific community and
policy-makers, and is made up of three parts: i) improve estimation of key
epidemiological parameters; ii) understand sources of heterogeneity in
populations; iii) focus on requirements for data collection, particularly in
Low-to-Middle-Income countries. This will provide important information for
planning exit strategies that balance socio-economic benefits with public
health.
|
[
{
"created": "Sun, 21 Jun 2020 17:06:13 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Jun 2020 15:09:13 GMT",
"version": "v2"
},
{
"created": "Thu, 25 Jun 2020 14:09:15 GMT",
"version": "v3"
},
{
"created": "Tue, 21 Jul 2020 13:08:34 GMT",
"version": "v4"
}
] |
2020-08-17
|
[
[
"Thompson",
"Robin N",
""
],
[
"Hollingsworth",
"T Deirdre",
""
],
[
"Isham",
"Valerie",
""
],
[
"Arribas-Bel",
"Daniel",
""
],
[
"Ashby",
"Ben",
""
],
[
"Britton",
"Tom",
""
],
[
"Challoner",
"Peter",
""
],
[
"Chappell",
"Lauren H K",
""
],
[
"Clapham",
"Hannah",
""
],
[
"Cunniffe",
"Nik J",
""
],
[
"Dawid",
"A Philip",
""
],
[
"Donnelly",
"Christl A",
""
],
[
"Eggo",
"Rosalind",
""
],
[
"Funk",
"Sebastian",
""
],
[
"Gilbert",
"Nigel",
""
],
[
"Gog",
"Julia R",
""
],
[
"Glendinning",
"Paul",
""
],
[
"Hart",
"William S",
""
],
[
"Heesterbeek",
"Hans",
""
],
[
"House",
"Thomas",
""
],
[
"Keeling",
"Matt",
""
],
[
"Kiss",
"Istvan Z",
""
],
[
"Kretzschmar",
"Mirjam",
""
],
[
"Lloyd",
"Alun L",
""
],
[
"McBryde",
"Emma S",
""
],
[
"McCaw",
"James M",
""
],
[
"Miller",
"Joel C",
""
],
[
"McKinley",
"Trevelyan J",
""
],
[
"Morris",
"Martina",
""
],
[
"ONeill",
"Philip D",
""
],
[
"Pearson",
"Carl A B",
""
],
[
"Parag",
"Kris V",
""
],
[
"Pellis",
"Lorenzo",
""
],
[
"Pulliam",
"Juliet R C",
""
],
[
"Ross",
"Joshua V",
""
],
[
"Tildesley",
"Michael J",
""
],
[
"Tomba",
"Gianpaolo Scalia",
""
],
[
"Silverman",
"Bernard W",
""
],
[
"Struchiner",
"Claudio J",
""
],
[
"Trapman",
"Pieter",
""
],
[
"Webb",
"Cerian R",
""
],
[
"Mollison",
"Denis",
""
],
[
"Restif",
"Olivier",
""
]
] |
Combinations of intense non-pharmaceutical interventions ('lockdowns') were introduced in countries worldwide to reduce SARS-CoV-2 transmission. Many governments have begun to implement lockdown exit strategies that allow restrictions to be relaxed while attempting to control the risk of a surge in cases. Mathematical modelling has played a central role in guiding interventions, but the challenge of designing optimal exit strategies in the face of ongoing transmission is unprecedented. Here, we report discussions from the Isaac Newton Institute 'Models for an exit strategy' workshop (11-15 May 2020). A diverse community of modellers who are providing evidence to governments worldwide were asked to identify the main questions that, if answered, will allow for more accurate predictions of the effects of different exit strategies. Based on these questions, we propose a roadmap to facilitate the development of reliable models to guide exit strategies. The roadmap requires a global collaborative effort from the scientific community and policy-makers, and is made up of three parts: i) improve estimation of key epidemiological parameters; ii) understand sources of heterogeneity in populations; iii) focus on requirements for data collection, particularly in Low-to-Middle-Income countries. This will provide important information for planning exit strategies that balance socio-economic benefits with public health.
|
1611.07272
|
Ines Samengo
|
Maria da Fonseca and Ines Samengo
|
Derivation of human chromatic discrimination ability from an
information-theoretical notion of distance in color space
|
23 pages, 10 figures
|
Neural Computation doi:10.1162/NECO_a_00903 pp 1-18 (2016)
|
10.1162/NECO_a_00903
| null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The accuracy with which humans can detect small chromatic differences varies
throughout color space. For example, we are far more precise when
discriminating two similar orange stimuli than two similar green stimuli. In
order for two colors to be perceived as different, the neurons representing
chromatic information must respond differently, and the difference must be
larger than the trial-to-trial variability of the response to each separate
color. Photoreceptors constitute the first stage in the processing of color
information; many more stages are required before humans can consciously report
whether two stimuli are perceived as chromatically distinguishable or not.
Therefore, although photoreceptor absorption curves are expected to influence
the accuracy of conscious discriminability, there is no reason to believe that
they should suffice to explain it. Here we develop information-theoretical
tools based on the Fisher metric that demonstrate that photoreceptor absorption
properties explain ~87% of the variance of human color discrimination ability,
as tested by previous behavioral experiments. In the context of this theory,
the bottleneck in chromatic information processing is determined by
photoreceptor absorption characteristics. Subsequent encoding stages modify
only marginally the chromatic discriminability at the photoreceptor level.
|
[
{
"created": "Tue, 22 Nov 2016 12:27:04 GMT",
"version": "v1"
}
] |
2016-11-23
|
[
[
"da Fonseca",
"Maria",
""
],
[
"Samengo",
"Ines",
""
]
] |
The accuracy with which humans can detect small chromatic differences varies throughout color space. For example, we are far more precise when discriminating two similar orange stimuli than two similar green stimuli. In order for two colors to be perceived as different, the neurons representing chromatic information must respond differently, and the difference must be larger than the trial-to-trial variability of the response to each separate color. Photoreceptors constitute the first stage in the processing of color information; many more stages are required before humans can consciously report whether two stimuli are perceived as chromatically distinguishable or not. Therefore, although photoreceptor absorption curves are expected to influence the accuracy of conscious discriminability, there is no reason to believe that they should suffice to explain it. Here we develop information-theoretical tools based on the Fisher metric that demonstrate that photoreceptor absorption properties explain ~87% of the variance of human color discrimination ability, as tested by previous behavioral experiments. In the context of this theory, the bottleneck in chromatic information processing is determined by photoreceptor absorption characteristics. Subsequent encoding stages modify only marginally the chromatic discriminability at the photoreceptor level.
|